
    Efficient integrity verification of replicated data in cloud

    Cloud computing is an emerging model in which computing infrastructure resources are provided as a service over the Internet. Data owners can outsource their data by remotely storing them in the cloud and enjoy on-demand, high-quality services from a shared pool of configurable computing resources. By using these data storage services, the data owners can relieve the burden of local data storage and maintenance. However, since data owners and the cloud servers are not in the same trusted domain, the outsourced data may be at risk, as the cloud server may no longer be fully trusted. Therefore, data integrity is of critical importance in such a scenario. The cloud should let the owners or a trusted third party check the integrity of their data storage without demanding a local copy of the data. Owners often replicate their data on the cloud servers across multiple data centers to provide a higher level of scalability, availability, and durability. When the data owners ask the Cloud Service Provider (CSP) to replicate data, they are charged a higher storage fee by the CSP. Therefore, the data owners need to be strongly convinced that the CSP is storing all the data copies agreed on in the service level contract, and that data updates have been correctly executed on all the remotely stored copies. In this thesis, a Dynamic Multi-Replica Provable Data Possession scheme (DMR-PDP) is proposed that prevents the CSP from cheating, for example by maintaining fewer copies than paid for and/or tampering with data. In addition, the scheme is extended to support a basic file versioning system where, for privacy reasons, only the difference between the original file and the updated file is propagated rather than the sequence of update operations. DMR-PDP also supports efficient dynamic operations such as block modification, insertion, and deletion on replicas over the cloud servers. --Abstract, page iii
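    As a rough illustration of the challenge-response idea behind such schemes, the sketch below spot-checks randomly chosen blocks on every replica. It is not the DMR-PDP construction itself (which uses homomorphic tags so the owner need not keep per-block tags); all names and parameters here are illustrative. Replicas are masked into distinct ciphertexts so a cheating CSP cannot answer all challenges from a single stored copy.

```python
import hmac, hashlib, os, random

BLOCK = 32  # toy block size so one SHA-256 output can mask a whole block

def mask(key: bytes, replica_id: int, index: int, block: bytes) -> bytes:
    """Derive a per-replica, per-block keystream and XOR it onto the block,
    so each replica is a distinct ciphertext the CSP cannot deduplicate."""
    stream = hashlib.sha256(key + bytes([replica_id]) +
                            index.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(block, stream))

def make_tag(key: bytes, replica_id: int, index: int, block: bytes) -> bytes:
    msg = bytes([replica_id]) + index.to_bytes(8, "big") + block
    return hmac.new(key, msg, hashlib.sha256).digest()

# Owner: split the file, derive distinct replicas, keep only key and tags.
key = os.urandom(32)
data = os.urandom(10 * BLOCK)
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
replicas = {r: [mask(key, r, i, b) for i, b in enumerate(blocks)]
            for r in range(3)}
tags = {(r, i): make_tag(key, r, i, b)
        for r, blks in replicas.items() for i, b in enumerate(blks)}

# Owner: challenge random block indices on every replica; a CSP keeping
# fewer copies than paid for cannot answer for the copies it discarded.
challenge = {r: random.sample(range(len(blocks)), 3) for r in range(3)}

# CSP: returns the challenged replica blocks (honest server simulated here).
response = {(r, i): replicas[r][i] for r, idxs in challenge.items() for i in idxs}

# Owner: recompute tags; a missing or tampered replica block fails the check.
ok = all(hmac.compare_digest(tags[(r, i)], make_tag(key, r, i, blk))
         for (r, i), blk in response.items())
print("possession verified:", ok)
```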

    Recognition and security guidance of data integrity in cloud storage

    Cloud computing has been envisioned as the de-facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices and the rapid rate at which data is generated, it proves costly for enterprises or individual users to frequently update their hardware. Outsourcing data to cloud storage helps such firms by reducing the costs of storage, maintenance, and personnel. It can also assure reliable storage of important data by keeping multiple copies, thereby reducing the chance of losing data to hardware failures. The study deals with the problem of implementing a protocol for obtaining a proof of data possession in the cloud, sometimes referred to as a Proof of Retrievability (POR). The problem is to obtain and verify a proof that the data stored by a user at a remote data store in the cloud (called a cloud storage archive, or simply an archive) has not been modified by the archive, thereby assuring the integrity of the data. Such verification systems prevent the cloud storage archive from misrepresenting or modifying the stored data without the consent of the data owner, by means of frequent checks on the archive.
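    One common building block behind such integrity checks is a Merkle tree: the owner keeps only the root hash, and the archive answers each challenge with a block plus its authentication path. The sketch below is a minimal, hypothetical illustration of that check, not the POR protocol studied here (real POR schemes add sentinels or erasure coding on top).

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """Return all tree levels, from leaf hashes up to the root."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]          # duplicate last node on odd levels
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """Authentication path for leaf idx: (sibling hash, sibling-is-left) pairs."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        sib = idx ^ 1
        path.append((lvl[sib], sib < idx))
        idx //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

blocks = [bytes([i]) * 64 for i in range(8)]    # toy 8-block file
levels = build_tree(blocks)
root = levels[-1][0]                            # the owner stores only this digest
proof = prove(levels, 5)                        # archive's answer to a challenge on block 5
print(verify(root, blocks[5], proof))           # True; False if the block was altered
```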

    Secure and Privacy-Preserving Cloud-Assisted Computing

    Smart devices such as smartphones, wearables, and smart appliances collect significant amounts of data and transmit them over the network, forming the Internet of Things (IoT). Many applications in our daily lives (e.g., health, smart grid, traffic monitoring) involve IoT devices that often have low computational capabilities. Subsequently, powerful cloud servers are employed to process the data collected from these devices. Nevertheless, security and privacy concerns arise in cloud-assisted computing settings. Collected data can be sensitive, and it is essential to protect their confidentiality. Additionally, outsourcing computations to untrusted cloud servers creates the need to ensure that servers perform the computations as requested and that any misbehavior can be detected, safeguarding security. Cryptographic primitives and protocols are the foundation for designing secure and privacy-preserving solutions that address these challenges. This thesis focuses on providing privacy and security guarantees when outsourcing heavy computations on sensitive data to untrusted cloud servers. More concretely, this work: (a) provides solutions for outsourcing the secure computation of the sum and the product functions in the multi-server, multi-client setting, protecting the sensitive data of the data owners, even against potentially untrusted cloud servers; (b) provides integrity guarantees for the proposed protocols, by enabling anyone to verify the correctness of the computed function values; more precisely, the employed servers or the clients (depending on the proposed solution) provide specific values which serve as proofs that the computed results are correct; (c) designs decentralized settings, where multiple cloud servers are employed to perform the requested computations, as opposed to relying on a single server that might fail or lose connection; (d) suggests ways to protect individual privacy and provide integrity; more precisely, we propose a verifiable differentially private solution that provides verifiability and avoids any leakage of information regardless of whether an individual's sensitive data participates in the computation
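    For point (a), a standard way to outsource a sum in the multi-server, multi-client setting is additive secret sharing; the sketch below illustrates only that idea under assumed parameters (modulus, server count), omitting the verifiability layer that point (b) adds.

```python
import secrets

P = 2**61 - 1          # prime modulus (assumed parameter)
N_SERVERS = 3

def share(x: int):
    """Split x into N_SERVERS additive shares; each share alone is uniform."""
    parts = [secrets.randbelow(P) for _ in range(N_SERVERS - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

inputs = [17, 25, 8]                       # three clients' private values

# Each client sends one share to each server; servers only ever see shares.
server_state = [0] * N_SERVERS
for x in inputs:
    for s, part in enumerate(share(x)):
        server_state[s] = (server_state[s] + part) % P

# Each server publishes its aggregate; the sum is reconstructed without
# any single server learning an individual client's input.
total = sum(server_state) % P
print(total == sum(inputs) % P)            # True
```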

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols which allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not been considered for dynamic RDC schemes in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring small, constant client storage. The main challenge to overcome is reducing the client-server communication during updates in an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC schemes are useful to detect server misbehavior, but have no provisions to recover damaged data. In practice, they should therefore be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, repair it by retrieving data from healthy servers, so that the reliability level can be maintained. Previously, RDC has been investigated for replication-based and erasure-coding-based distributed storage systems. However, RDC has not been investigated for network-coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network-coding-based distributed storage systems; it ensures data remain intact when faced with data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers. The setting considered so far outsources the storage of the data, but the data owner is still heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed, in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who needs to expend a significant amount of computation and communication; thus, it is very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable Server-side Repair and minimize the load on the client side. Version control systems (VCS) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure all data versions are retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest performance decrease compared to a regular (non-secure) SVN system.
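    To make the network-coding setting of RDC-NC concrete, the toy sketch below (not the RDC-NC scheme itself, which adds integrity tags to defeat corruption, replay, and pollution attacks) shows servers storing random linear combinations of file blocks over a prime field, and a repair step that produces a fresh coded block by recombining coded blocks from healthy servers rather than reconstructing the whole file. All names are illustrative.

```python
import random

P = 2**31 - 1   # prime field for coding coefficients (toy parameter)

def combine(pairs, coeffs):
    """Linear combination of (coeff_vector, data_block) pairs over GF(P)."""
    vec = [0] * len(pairs[0][0])
    blk = [0] * len(pairs[0][1])
    for c, (v, b) in zip(coeffs, pairs):
        vec = [(x + c * y) % P for x, y in zip(vec, v)]
        blk = [(x + c * y) % P for x, y in zip(blk, b)]
    return vec, blk

# A file of 3 blocks, each block a vector of 4 field elements.
file_blocks = [[random.randrange(P) for _ in range(4)] for _ in range(3)]
unit = lambda i: [1 if j == i else 0 for j in range(3)]
originals = [(unit(i), b) for i, b in enumerate(file_blocks)]

# Each of 4 servers stores one random linear combination of the file
# blocks, together with its coefficient vector.
servers = [combine(originals, [random.randrange(1, P) for _ in range(3)])
           for _ in range(4)]

# Repair: a replacement server mixes coded blocks fetched from three
# healthy servers into a fresh coded block of its own.
new_server = combine(servers[:3], [random.randrange(1, P) for _ in range(3)])
print("repaired block coefficients:", new_server[0])
```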

    Novel Proposed Work for Empirical Word Searching in Cloud Environment

    People's lives have become much more convenient as a result of the development of cloud storage. Third-party servers have received a lot of data from many people and businesses for storage. Therefore, it is necessary to ensure that the user's data is protected from prying eyes. In the cloud environment, searchable encryption technology is used to protect user information when retrieving data. The versatility of existing schemes is constrained, however, by the fact that the majority of them only offer single-keyword searches and do not permit file changes. A novel empirical multi-keyword search technique for the cloud environment is offered as a solution to these issues. Additionally, it removes the third party from the transaction between data holder and data user and guarantees integrity. Our system achieves authenticity at the data storage stage by numbering the files, so the user can verify that a complete ciphertext is received. Our technique outperforms previous analogous schemes in terms of security and performance and is resistant to insider keyword-guessing attacks. The server cannot detect whether several queries are looking for the same set of keywords, because our system generates randomized search queries. Both the number of keywords in a search query and the number of keywords in an encrypted document can be hidden. Our searchable encryption method is simultaneously efficient and secure against adaptive chosen-keyword attacks
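    The core lookup in such a system can be pictured as an encrypted inverted index in which keywords are replaced by PRF trapdoors, so the server can match queries without seeing plaintext keywords. The sketch below shows only that basic mechanism, under assumed names; the scheme described above additionally randomizes queries, numbers files for integrity, and hides keyword counts.

```python
import hmac, hashlib

def trapdoor(key: bytes, keyword: str) -> bytes:
    """Deterministic PRF of a keyword; stands in for the plaintext keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

key = b"owner-secret-key-32-bytes-long!!"
docs = {1: {"cloud", "storage"}, 2: {"cloud", "audit"}, 3: {"privacy"}}

# Data holder: build the encrypted index and upload it with the ciphertexts.
index = {}
for doc_id, words in docs.items():
    for w in words:
        index.setdefault(trapdoor(key, w), set()).add(doc_id)

# Data user: derive trapdoors for a multi-keyword query; the server
# intersects posting sets without learning the underlying keywords.
query = ["cloud", "audit"]
tokens = [trapdoor(key, w) for w in query]
result = set.intersection(*(index.get(t, set()) for t in tokens))
print(result)   # {2}
```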

    Remote Data Integrity Checking in Cloud Computing

    Cloud computing is an Internet-based computing model that enables sharing of services. It is very challenging to keep safe all the data that users' many applications require in the cloud. Storing our data in the cloud may not be fully trustworthy. Since the client does not keep a copy of all stored data, he has to depend on the Cloud Service Provider. This work studies the problem of ensuring the integrity and security of data storage in Cloud Computing. This paper proposes an effective and flexible Batch Audit scheme with dynamic data support to reduce computation overheads. To ensure the correctness of users' data, a third party auditor (TPA) is tasked, on behalf of the cloud client, with verifying the integrity of the data stored in the cloud. We consider symmetric encryption for effective utilization of outsourced cloud data under the model; the scheme achieves storage security in multi-cloud data storage. The new scheme further supports secure and efficient dynamic operations on data blocks, including data insertion, update, deletion, and replacement. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks
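    A simplified way to picture a batch audit is a linearly homomorphic tag: one challenge covers many blocks, and the server's aggregated response is checked with a single equation. The sketch below uses a private-key variant for brevity (the TPA setting described above would need publicly verifiable tags); all parameters and names are assumptions, not the paper's construction.

```python
import hmac, hashlib, secrets

P = 2**127 - 1                      # prime modulus for tags (toy parameter)

def prf(key: bytes, i: int) -> int:
    d = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % P

k = secrets.token_bytes(32)         # verifier's PRF key
alpha = secrets.randbelow(P)        # verifier's secret tag multiplier

# Owner: represent file blocks as field elements and tag each one.
blocks = [secrets.randbelow(P) for _ in range(16)]
tags = [(alpha * b + prf(k, i)) % P for i, b in enumerate(blocks)]

# Auditor: one challenge covers several blocks at once (the "batch").
challenge = [(i, secrets.randbelow(P)) for i in (2, 5, 11)]

# Server: holds blocks and tags, aggregates them into two field elements.
mu = sum(nu * blocks[i] for i, nu in challenge) % P
sigma = sum(nu * tags[i] for i, nu in challenge) % P

# Auditor: holding only alpha and the PRF key, one equation checks every
# challenged block simultaneously; any modification breaks it.
expected = (alpha * mu + sum(nu * prf(k, i) for i, nu in challenge)) % P
print(sigma == expected)            # True
```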