High Performance Distributed Big File Cloud Storage
Issue Date
2016-05-01
Authors
Shakelli, Anusha
Sengupta, Sam; Adviser
White, Joshua; Reviewer
Keywords
cloud storage, metadata complexity, distributed data cloud storage
Abstract
Cloud storage services are growing rapidly and have become an important part of the data storage field. People use these services to back up data and to share files through social networks such as Facebook [3] and Zing Me [2]. Users can upload data from a computer, mobile phone, or tablet, and can also download it and share it with others. As a result, the load on a cloud storage system becomes huge. Cloud storage has also become a crucial requirement for many enterprises because of features such as cost savings, performance, security, and flexibility.
Designing an efficient storage engine for cloud-based systems means meeting requirements such as big-file processing, lightweight metadata, deduplication, and high scalability. Here we propose a big-file cloud architecture that addresses these problems: a scalable distributed data cloud storage system that supports files up to several terabytes in size.
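To make the big-file processing requirement concrete, the following is a minimal sketch assuming fixed-size chunking of an upload stream; the 64 MB chunk size, the function name split_into_chunks, and the use of SHA-256 content hashes are illustrative assumptions, not details taken from this work.

    import hashlib

    CHUNK_SIZE = 64 * 1024 * 1024  # assumed fixed chunk size (64 MB), illustrative


    def split_into_chunks(path, chunk_size=CHUNK_SIZE):
        # Stream a big file as fixed-size chunks so memory use stays flat
        # even for terabyte-scale files; the SHA-256 of each chunk can
        # serve as a content key for deduplication across users.
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield hashlib.sha256(chunk).hexdigest(), chunk

Streaming the file this way keeps memory use constant regardless of file size, and the per-chunk hash gives a natural key for the deduplication discussed below.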
In cloud storage, the system load is usually heavy, and data deduplication is needed to reduce the storage space wasted by storing the same static data for different users. To address these problems, a common method in cloud storage systems is to divide a big file into small blocks, store the blocks on disk, and manage them through a metadata system [1], [6], [19], [20]. Current cloud storage services have complex metadata systems: the space complexity of the metadata is O(n) in the number of blocks, which is not scalable for big files. This research proposes a new big-file cloud storage architecture together with a solution that reduces the space complexity of the metadata.
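The following sketch illustrates the metadata complexity argument under one plausible design, not necessarily the one developed in this work: if every chunk of a file has the same fixed size and chunks are stored under consecutive IDs, the per-file metadata can shrink from a list of n chunk IDs (O(n)) to a first chunk ID plus a file size (O(1)), with each chunk's ID derived arithmetically. The class names and the CHUNK_SIZE constant are assumptions for illustration.

    from dataclasses import dataclass, field
    from typing import List

    CHUNK_SIZE = 64 * 1024 * 1024  # assumed fixed chunk size; must match upload time


    @dataclass
    class FileMetaOn:
        # Conventional design: one stored ID per chunk, so metadata is O(n).
        chunk_ids: List[int] = field(default_factory=list)


    @dataclass
    class FileMetaO1:
        # Fixed-size chunks under consecutive IDs: metadata is O(1) per file.
        first_chunk_id: int
        file_size: int

        def num_chunks(self) -> int:
            # Ceiling division over the fixed chunk size.
            return (self.file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

        def chunk_id(self, index: int) -> int:
            # The index-th chunk's ID is computed, never stored.
            if not 0 <= index < self.num_chunks():
                raise IndexError("chunk index out of range")
            return self.first_chunk_id + index

Under these assumptions, a 1 TB file split into 64 MB chunks requires 16,384 stored chunk IDs in the conventional design, while the O(1) design stores only two integers regardless of file size.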