
    Secure data storage and retrieval in cloud computing

    Nowadays, cloud computing has been widely recognised as one of the most influential information technologies because of its unprecedented advantages. Despite its widely recognised social and economic benefits, cloud computing customers lose direct control of their data and rely completely on the cloud to manage their data and computation. This raises significant security and privacy concerns and is one of the major barriers to the adoption of the public cloud by many organisations and individuals. Therefore, practical security approaches are needed to address these security risks and enable the wide adoption of cloud computing.

    A secure privacy preserving deduplication scheme for cloud computing

    © 2019 Elsevier B.V. Data deduplication is a key technique for improving storage efficiency in cloud computing. By pointing redundant files to a single copy, cloud service providers greatly reduce their storage space as well as data transfer costs. Although the traditional deduplication approach has been widely adopted, it carries a high risk of losing data confidentiality because of the data storage models in cloud computing. To address this issue in cloud storage, we first propose a TEE (trusted execution environment) based secure deduplication scheme. In our scheme, each cloud user is assigned a privilege set; deduplication can be performed if and only if the cloud users have the correct privilege. Moreover, our scheme augments convergent encryption with users' privileges and relies on the TEE to provide secure key management, which improves the ability of the cryptosystem to resist chosen-plaintext and chosen-ciphertext attacks. A security analysis indicates that our scheme is secure enough to support data deduplication and to protect the confidentiality of sensitive data. Furthermore, we implement a prototype of our scheme and evaluate its performance; the experiments show that the overhead of our scheme is practical in realistic environments.
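    For context, the sketch below illustrates plain convergent encryption, the primitive this scheme builds on and augments with user privileges and TEE-protected key management. The function name, the fixed CTR nonce, and the dedup tag construction are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of convergent encryption (illustrative only): the key is
# derived from the content itself, so identical plaintexts yield identical
# ciphertexts and the server can deduplicate them without seeing the data.
# The paper's scheme additionally binds user privileges and keeps keys in a TEE.
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, str]:
    key = hashlib.sha256(plaintext).digest()            # content-derived key
    # A fixed nonce is tolerable here only because each key encrypts one message.
    enc = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()
    ciphertext = enc.update(plaintext) + enc.finalize()
    dedup_tag = hashlib.sha256(ciphertext).hexdigest()  # tag the server indexes for dedup
    return ciphertext, dedup_tag
```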

    Flexible Yet Secure De-Duplication Service for Enterprise Data on Cloud Storage

    Cloud storage services offer virtually unlimited storage capacity and flexible access for storing and sharing large-scale content. This convenience has attracted both individual and enterprise users to outsource data services to a cloud provider. Surveys show that 56% of cloud storage application usage is for data backup, and up to 68% of backed-up data are user assets. Enterprise tenants need to protect data privacy before uploading to the cloud and expect reasonable performance while reducing operation costs; in cloud storage, capacity and I/O matter as much as system performance, bandwidth, and data protection. Thus, enterprise tenants demand secure and economical data storage with flexible access to their cloud data. In this paper, we propose a secure de-duplication solution for enterprise tenants to leverage the benefits of cloud storage while reducing operation cost and protecting privacy. First, the solution uses a proxy to provide flexible group access control that supports secure de-duplication within a group; second, the solution supports scalable clustering of proxies to support large-scale data access; third, the solution can be integrated with cloud storage seamlessly. We implemented and tested our solution by integrating it with Dropbox. Secure de-duplication within a group is performed at low data transfer latency and small storage overhead compared to de-duplication on plaintext.
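    To illustrate the first point, the following is a hypothetical sketch of the proxy-side check that confines deduplication to a group: the proxy keeps a per-group index of content tags and only uploads a file the first time that tag is seen in the group. Class, method, and parameter names (including the `store.put` backend call) are assumptions, not the paper's API.

```python
import hashlib

class GroupDedupProxy:
    """Illustrative proxy that deduplicates uploads within a single group."""

    def __init__(self, store):
        self.store = store                            # cloud storage backend (assumed interface)
        self.index: dict[tuple[str, str], str] = {}   # (group_id, tag) -> stored reference

    def upload(self, group_id: str, ciphertext: bytes) -> str:
        tag = hashlib.sha256(ciphertext).hexdigest()
        key = (group_id, tag)
        if key in self.index:
            return self.index[key]          # duplicate within this group: reuse stored copy
        ref = self.store.put(ciphertext)    # first occurrence: upload to the cloud
        self.index[key] = ref
        return ref
```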

    A data preparation approach for cloud storage based on containerized parallel patterns

    In this paper, we present the design, implementation, and evaluation of an efficient data preparation and retrieval approach for cloud storage. The approach includes a deduplication subsystem that indexes the hash of each content item to identify duplicated data. As a consequence, avoiding duplicated content reduces reprocessing time during uploads and other costs related to outsourced data management tasks. Our proposed data preparation scheme enables organizations to add properties such as security, reliability, and cost-efficiency to their contents before sending them to the cloud. It also creates recovery schemes for organizations to share preprocessed contents with partners and end-users. The approach also includes an engine that encapsulates preprocessing applications into virtual containers (VCs) to create parallel patterns that improve the efficiency of the data preparation and retrieval process. In a case study, real repositories of satellite images and organizational files were prepared for migration to the cloud using processes such as compression, encryption, encoding for fault tolerance, and access control. The experimental evaluation revealed the feasibility of using a data preparation approach for organizations to mitigate risks that could still arise in the cloud. It also revealed the efficiency of the deduplication process in reducing data preparation tasks and the efficacy of parallel patterns in improving the end-user service experience. This research was supported by the "Fondo Sectorial de Investigación para la Educación", SEP-CONACyT Mexico, through projects 281565 and 285276.
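    As a rough sketch of the deduplication subsystem described above, indexing the hash of each content item lets the pipeline skip re-preparing and re-uploading items it has already processed. The function name, the in-memory index, and the `prepare` callback are illustrative assumptions; the actual subsystem runs inside containerized parallel patterns.

```python
import hashlib
from pathlib import Path
from typing import Callable

# content hash -> location of the already-prepared (compressed/encrypted/encoded) item
prepared_index: dict[str, Path] = {}

def prepare_if_new(path: Path, prepare: Callable[[Path], Path]) -> Path:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in prepared_index:
        return prepared_index[digest]   # duplicate content: reuse previous output
    output = prepare(path)              # run the preparation pipeline only once per content
    prepared_index[digest] = output
    return output
```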

    A federated content distribution system to build health data synchronization services

    In organizational environments, such as hospitals, data have to be processed, preserved, and shared with other organizations in a cost-efficient manner. Moreover, organizations have to meet mandatory non-functional requirements imposed by the laws, protocols, and norms of each country. In this context, this paper presents a Federated Content Distribution System to build infrastructure-agnostic health data synchronization services. In this federation, each hospital manages local and federated services based on a pub/sub model. The local services manage users and contents (i.e., medical imagery) inside the hospital, whereas federated services allow different hospitals to cooperate by sharing resources and data. Data preparation schemes were implemented to add non-functional requirements to data. Moreover, data published in the content distribution system are automatically synchronized to all users subscribed to the catalog where the content was published. This work has been partially supported by the grant "CABAHLA-CM: Convergencia Big data-Hpc: de Los sensores a las Aplicaciones" (Ref: S2018/TCS-4423) of the Madrid Regional Government; the Spanish Ministry of Science and Innovation project "New Data Intensive Computing Methods for High-End and Edge Computing Platforms (DECIDE)", Ref. PID2019-107858GB-I00; and by project 41756 "Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud" of FORDECYT-PRONACES.
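    A toy sketch of the pub/sub synchronization idea follows: content published to a catalog is delivered to every subscriber of that catalog. The class and method names are assumptions for illustration; the real system federates these services across hospitals rather than running in a single process.

```python
from collections import defaultdict
from typing import Callable

class CatalogBus:
    """Illustrative pub/sub bus: publishing to a catalog notifies all its subscribers."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[bytes], None]]] = defaultdict(list)

    def subscribe(self, catalog_id: str, deliver: Callable[[bytes], None]) -> None:
        self._subs[catalog_id].append(deliver)

    def publish(self, catalog_id: str, content: bytes) -> None:
        for deliver in self._subs[catalog_id]:   # synchronize to all subscribed users
            deliver(content)
```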

    DDEAS: Distributed Deduplication System with Efficient Access in Cloud Data Storage

    Cloud storage service is one of the vital functions of cloud computing that helps cloud users outsource massive volumes of data without upgrading their devices. However, cloud data storage offered by Cloud Service Providers (CSPs) faces data redundancy problems. The data de-duplication technique aims to eliminate redundant data segments and keep a single instance of a data set, even if the same data set is owned by any number of users. Since data blocks are distributed among multiple individual servers, the user needs to download each block of the file before reconstructing it, which reduces system efficiency. We propose a server-level data recovery module in the cloud storage system to improve file access efficiency and reduce network bandwidth utilization time. In the proposed method, erasure coding is used to store blocks in distributed cloud storage, and MD5 (Message Digest 5) is used for data integrity. Executing the recovery algorithm helps the user fetch the file directly without downloading each block from the cloud servers. The proposed scheme improves the time efficiency of the system and enables quick access to the stored data, thus consuming less network bandwidth.
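    A minimal sketch follows, assuming the simplest single-parity (XOR) erasure code and MD5 block digests for integrity, to illustrate how a file could be split into blocks for distributed storage and one lost block recovered. The coding parameters and function names are illustrative assumptions; the paper's recovery module is not reproduced here.

```python
import hashlib
from functools import reduce

def encode(data: bytes, k: int = 4) -> tuple[list[bytes], list[str]]:
    """Split data into k blocks, add one XOR parity block, and MD5 each block."""
    size = -(-len(data) // k)                                   # ceil(len(data) / k)
    blocks = [data[i*size:(i+1)*size].ljust(size, b"\x00") for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
    blocks.append(parity)
    digests = [hashlib.md5(b).hexdigest() for b in blocks]      # per-block integrity check
    return blocks, digests

def recover(surviving_blocks: list[bytes]) -> bytes:
    """XOR of the k surviving blocks (data + parity) rebuilds the single missing block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), surviving_blocks)
```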

    A New Scheme for Removing Duplicate Files from Smart Mobile Devices

    The continuous development of information technology and mobile communications, together with the capabilities of smart devices, has made these devices widely used in daily life. Mobile applications connected to the internet are simple, easy to use anytime and anywhere, and enable communication between relatives and friends in different parts of the world. Social networking applications cause these devices to receive many duplicate files daily, which leads to drawbacks such as inefficient use of storage, lower CPU and RAM performance, and increased battery consumption. In this paper, we present a scheme to remove duplicate files, focusing on image files as a common case in social apps. Our work overcomes the above-mentioned issues by using a hash function and Huffman coding to build a unique code for each image. Our experiments improve performance from 1046770 ns and 1995808 ns to 950000 ns and 1981154 ns on Galaxy and HUAWEI devices, respectively. On the storage side, the proposed scheme improves the available space from 1.9 GB and 1.24 GB to 2 GB and 1.54 GB, respectively.
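    A simplified sketch in the spirit of this scheme is shown below: fingerprint each image with a hash and delete files whose fingerprint has already been seen. The Huffman-coding step used to shorten each image's code is omitted, and the function, folder, and pattern arguments are illustrative assumptions.

```python
import hashlib
from pathlib import Path

def remove_duplicate_images(folder: Path, pattern: str = "*.jpg") -> int:
    """Delete images whose content hash was already seen; return how many were removed."""
    seen: set[str] = set()
    removed = 0
    for image in sorted(folder.glob(pattern)):
        code = hashlib.sha256(image.read_bytes()).hexdigest()  # per-image unique code
        if code in seen:
            image.unlink()      # duplicate content: remove the file
            removed += 1
        else:
            seen.add(code)
    return removed
```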