
    Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy

    Cloud storage services have become widely accessible and are used by everyone. Nevertheless, the stored data depend on the behavior of the cloud servers, and losses and damages often occur. One solution is to audit the cloud servers regularly in order to check the integrity of the stored data. The Dynamic Provable Data Possession scheme with Public Verifiability and Data Privacy presented at ACISP'15 is a straightforward design of such a solution. However, this scheme is vulnerable to several attacks. In this paper, we carefully recall the definition of this scheme and explain how seriously its security is threatened. Moreover, we propose two new constructions for a Dynamic Provable Data Possession scheme with Public Verifiability and Data Privacy based on the scheme presented at ACISP'15, one using Index Hash Tables and one based on Merkle Hash Trees. We show that the two schemes are secure and privacy-preserving in the random oracle model. Comment: ISPEC 201
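    For intuition, the following is a minimal sketch of the Merkle-Hash-Tree building block that the second construction relies on: a binary hash tree over the file blocks, whose root authenticates any challenged block through a logarithmic-size sibling path. This is illustrative only; the actual scheme additionally layers homomorphic tags, public verifiability, and privacy protections on top of it.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the list of tree levels, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes from the challenged leaf up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(root, block, path):
    node = h(block)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [b"block-%d" % i for i in range(8)]
levels = build_tree(blocks)
root = levels[-1][0]                            # client keeps only the root
proof = auth_path(levels, 5)                    # server proves block 5
assert verify(root, blocks[5], proof)
```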

    State of The Art and Hot Aspects in Cloud Data Storage Security

    Along with the evolution of cloud computing and cloud storage towards maturity, researchers have analyzed an increasing range of cloud computing security aspects, data security being an important topic in this area. In this paper, we examine the state of the art in cloud storage security through an overview of selected peer-reviewed publications. We address the question of defining cloud storage security and its different aspects, and enumerate the main vectors of attack on cloud storage. The reviewed papers present techniques for key management and controlled disclosure of encrypted data in cloud storage, while novel ideas regarding secure operations on encrypted data and methods for protection of data in fully virtualized environments provide a glimpse of the toolbox available for securing cloud storage. Finally, new challenges such as emergent government regulation call for solutions to problems that did not receive enough attention in earlier stages of cloud computing, such as the geographical location of data. The methods presented in the papers selected for this review represent only a small fraction of the wide research effort within cloud storage security. Nevertheless, they serve as an indication of the diversity of problems that are being addressed.

    Keyword-Based Delegable Proofs of Storage

    Cloud users (clients) with limited storage capacity at their end can outsource bulk data to the cloud storage server. A client can later access her data by downloading the required data files. However, a large fraction of the data files a client outsources to the server is archival in nature: the client uses them for backup purposes and accesses them infrequently. An untrusted server can thus delete some of these archival data files in order to save space (and allocate it to other clients) without being detected by the client (data owner). Proofs of storage enable the client to audit the data files uploaded to the server in order to ensure their integrity. In this work, we introduce a type of (selective) proof of storage that we call keyword-based delegable proofs of storage, where the client wants to audit all her data files containing a specific keyword (e.g., "important"). Moreover, the scheme satisfies the notion of public verifiability, where the client can delegate the auditing task to a third-party auditor who audits the set of files corresponding to the keyword on behalf of the client. We formally define the security of a keyword-based delegable proof-of-storage protocol. We construct such a protocol based on an existing proof-of-storage scheme and analyze the security of our protocol. We argue that our techniques can be applied atop any existing publicly verifiable proof-of-storage scheme for static data. Finally, we discuss the efficiency of our construction. Comment: A preliminary version of this work has been published in the International Conference on Information Security Practice and Experience (ISPEC 2018).
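    To make the keyword-based flow concrete, the toy sketch below (an assumed structure, not the paper's construction) keeps a keyword index over the client's files and lets a delegated auditor spot-check every file matching one keyword. Plain MACs stand in for the authenticators here, so this toy auditor would need the client's key; the actual protocol uses publicly verifiable homomorphic authenticators instead.

```python
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)                            # client's tagging key

def tag(file_id: str, idx: int, block: bytes) -> bytes:
    msg = file_id.encode() + idx.to_bytes(4, "big") + block
    return hmac.new(KEY, msg, hashlib.sha256).digest()

# Client side: outsource blocks with tags, and index files by keyword.
store = {"f1": [b"alpha", b"bravo"], "f2": [b"charlie"], "f3": [b"delta"]}
tags = {f: [tag(f, i, b) for i, b in enumerate(bs)] for f, bs in store.items()}
keyword_index = {"important": ["f1", "f3"], "backup": ["f2"]}

def audit(keyword: str) -> bool:
    """Delegated auditor spot-checks every file matching the keyword."""
    for f in keyword_index[keyword]:
        i = random.randrange(len(store[f]))     # random block challenge
        block = store[f][i]                     # server's response
        if not hmac.compare_digest(tag(f, i, block), tags[f][i]):
            return False
    return True

print(audit("important"))                       # True while the files are intact
```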

    Efficient integrity verification of replicated data in cloud

    Cloud computing is an emerging model in which computing infrastructure resources are provided as a service over the Internet. Data owners can outsource their data by remotely storing it in the cloud and enjoy on-demand, high-quality services from a shared pool of configurable computing resources. By using these data storage services, data owners are relieved of the burden of local data storage and maintenance. However, since data owners and cloud servers are not in the same trusted domain, the outsourced data may be at risk, as the cloud server may no longer be fully trusted. Data integrity is therefore of critical importance in such a scenario. The cloud should let the owners, or a trusted third party, check the integrity of their data storage without demanding a local copy of the data. Owners often replicate their data on the cloud servers across multiple data centers to provide a higher level of scalability, availability, and durability. When data owners ask the Cloud Service Provider (CSP) to replicate data, they are charged a higher storage fee by the CSP. Therefore, data owners need to be strongly convinced that the CSP is storing the data copies agreed on in the service-level contract, and that data updates have been correctly executed on all the remotely stored copies. In this thesis, a Dynamic Multi-Replica Provable Data Possession scheme (DMR-PDP) is proposed that prevents the CSP from cheating, for example by maintaining fewer copies than paid for and/or tampering with data. In addition, the scheme is extended to support a basic file-versioning system where, for privacy reasons, only the difference between the original file and the updated file is propagated rather than the operations themselves. DMR-PDP also supports efficient dynamic operations such as block modification, insertion, and deletion on replicas across the cloud servers. --Abstract, page iii
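    A standard way to force the CSP to actually keep every paid-for copy is to make the replicas distinguishable, for example by masking each copy with a PRF keyed by the replica number, so that a server holding one copy cannot answer challenges for the others. The sketch below illustrates only that masking-and-spot-check idea; DMR-PDP itself builds on homomorphic tags and additionally supports the dynamic operations described above.

```python
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)                            # owner's masking key

def mask(replica: int, idx: int, n: int) -> bytes:
    """PRF output tied to (replica number, block index), truncated to n bytes."""
    msg = replica.to_bytes(4, "big") + idx.to_bytes(4, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:n]

def make_replica(blocks, replica):
    return [bytes(a ^ b for a, b in zip(blk, mask(replica, i, len(blk))))
            for i, blk in enumerate(blocks)]

blocks = [b"16-byte-block-%02d" % i for i in range(4)]
replicas = {r: make_replica(blocks, r) for r in range(3)}   # 3 paid-for copies

def check(replica: int, stored) -> bool:
    """Owner challenges one random block of one specific replica."""
    i = random.randrange(len(blocks))
    m = mask(replica, i, len(blocks[i]))
    unmasked = bytes(a ^ b for a, b in zip(stored[i], m))
    return unmasked == blocks[i]

assert all(check(r, replicas[r]) for r in replicas)
```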

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus to reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols which allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations.

    Robustness has not been considered for the dynamic RDCs in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring small, constant client storage. The main challenge that has to be overcome is to reduce the client-server communication during updates under an adversarial setting. A security analysis for R-DPDP is provided.

    Single-server RDCs are useful to detect server misbehavior, but have no provisions to recover damaged data. Thus, in practice, they should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, can repair it by retrieving data from healthy servers, so that the reliability level can be maintained. Previously, RDC has been investigated for replication-based and erasure-coding-based distributed storage systems. However, RDC has not been investigated for network-coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network-coding-based distributed storage systems; it ensures that data remain intact when faced with data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers.

    The setting considered so far outsources the storage of the data, but the data owner is still heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed, in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who needs to expend a significant amount of computation and communication; it is thus very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this new concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable server-side repair and minimize the load on the client side.

    Version control systems (VCS) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure that all data versions are retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest performance decrease compared to a regular (non-secure) SVN system.
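    All of the schemes above share a common challenge-response skeleton, sketched below for intuition only: the client keeps small constant state, challenges a few random blocks, and the server must answer consistently with the client's tags. None of the robustness, coding, or server-side-repair machinery of R-DPDP, RDC-NC, RDC-SR, or RDC-AVCS is reproduced here.

```python
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)                            # the client's only local state
blocks = [("block %d" % i).encode() for i in range(100)]
tags = [hmac.new(KEY, i.to_bytes(4, "big") + b, hashlib.sha256).digest()
        for i, b in enumerate(blocks)]          # uploaded alongside the data

def server_respond(challenge):
    return [(i, blocks[i], tags[i]) for i in challenge]    # honest server

def client_audit(c: int = 10) -> bool:
    challenge = random.sample(range(len(blocks)), c)
    for i, blk, tg in server_respond(challenge):
        expect = hmac.new(KEY, i.to_bytes(4, "big") + blk, hashlib.sha256)
        if not hmac.compare_digest(expect.digest(), tg):
            return False                        # corruption detected
    return True

print(client_audit())                           # True for an honest server
```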

    A Novel Scheme For Preserving Owner Privacy And Verifying Data Integrity Of Shared Data In Public Cloud Storage

    With the emergence of cloud technologies, it is important to verify the integrity of the information saved on public cloud storage systems. When critical, private, and highly sensitive information is saved and shared with many users, it is essential to safeguard the details of the data owner from the auditor; that is, the auditor should not be able to learn any details about the data owner while auditing or reviewing the cloud data. Many schemes have been proposed that safeguard user privacy while incorporating a provable data possession technique. However, the issue with these schemes is that they have a heavy computational cost, increase the load on the systems, and in turn bring down efficiency. To address the above-mentioned problems, we propose a novel technique to uphold the data owner's privacy while the data is audited. The architecture of our proposed scheme is based on identity-based encryption; it therefore avoids the problem of certificate management, and it ensures that the relation between the data owner and the uploaded data is not exposed to the auditor by encrypting the data in the proof-generation phase rather than in the auditing phase. Encryption is also performed at the block level to safeguard the data from an untrusted Cloud Service Provider. In this manner, our scheme provides strong security against both the Cloud Service Provider and the auditor, and hence the privacy and anonymity of the data are preserved. Experimental results show that our mechanism is effective, efficient, and implementable compared to existing systems.
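    The privacy goal here, letting the auditor verify integrity without learning anything about the challenged data, is typically achieved by blinding the proof with fresh randomness during proof generation. The toy sketch below is a generic discrete-log illustration of that blinding idea, not the paper's identity-based construction: the blinded linear response still verifies against public per-block commitments.

```python
import random

# Toy group: p = 2q + 1 with q prime, g of prime order q (demo sizes only).
p, q, g = 467, 233, 4
m = [123, 45, 67]                        # secret data blocks, as numbers
sigma = [pow(g, mi, p) for mi in m]      # public per-block commitments

nu = [random.randrange(1, q) for _ in m] # auditor's challenge coefficients
r = random.randrange(1, q)               # server's fresh blinding randomness
R = pow(g, r, p)                         # commitment to the blinding value
mu = (sum(v * mi for v, mi in zip(nu, m)) + r) % q   # blinded response

# The auditor checks g^mu == R * prod(sigma_i^nu_i): the proof verifies,
# yet mu is uniformly distributed given random r, so the auditor learns
# nothing about the combination of data blocks.
rhs = R
for s, v in zip(sigma, nu):
    rhs = rhs * pow(s, v, p) % p
assert pow(g, mu, p) == rhs
```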

    Recognition and security guidance of data integrity in cloud storage

    Cloud computing has been envisioned as the de-facto solution to the rising storage costs of IT enterprises. With the high cost of data storage devices, as well as the rapid rate at which data is generated, it proves costly for enterprises or individual users to frequently update their hardware. Outsourcing data to cloud storage helps such firms by reducing the costs of storage, maintenance, and personnel. It can also assure reliable storage of important data by keeping multiple copies of the data, thereby reducing the chance of losing data to hardware failures. This study deals with the problem of implementing a protocol for obtaining a proof of data possession in the cloud, sometimes referred to as a Proof of Retrievability (POR). The problem is to obtain and verify a proof that the data a user stores at a remote data storage in the cloud (called cloud storage archives, or simply archives) has not been modified by the archive, thereby assuring the integrity of the data. Such verification systems prevent the cloud storage archives from misrepresenting or modifying the stored data without the consent of the data owner, by means of frequent checks on the storage archives.
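    A classic way to realize such a proof is the sentinel approach: before upload, the client hides a few random "sentinel" blocks at secret positions, so an archive that modifies or discards parts of the file is likely to damage some sentinels and fail a spot-check. A toy sketch follows; real POR schemes also encrypt and erasure-code the file first, which is omitted here.

```python
import os
import random

data = [os.urandom(16) for _ in range(50)]      # (encrypted) file blocks
positions = random.sample(range(60), 10)        # secret sentinel slots
sentinels = {pos: os.urandom(16) for pos in positions}

# Interleave the sentinels with the data; to the archive, all 60 blocks
# look alike, so it cannot tell which ones will be checked.
uploaded, it = [], iter(data)
for i in range(60):
    uploaded.append(sentinels[i] if i in sentinels else next(it))

def audit(server_blocks, n: int = 4) -> bool:
    """Ask for a few secret positions; any mismatch reveals tampering."""
    for pos in random.sample(positions, n):
        if server_blocks[pos] != sentinels[pos]:
            return False
    return True

print(audit(uploaded))                          # True: file is unmodified
```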

    Secure Auditing and Maintaining Block Level Integrity with Reliability of Data in Cloud

    Cloud storage systems are becoming increasingly popular, and as cloud computing matures it needs to provide stronger security through secure auditing. Storing ever larger amounts of data in the cloud requires more space, and duplicated data unnecessarily increase both space usage and cost; to avoid this, deduplication is needed. In this paper, we therefore consider the problem of integrity and secure deduplication of cloud data, aiming to achieve both data integrity and deduplication in the cloud. We propose an algorithm that performs secure auditing, provides block-level deduplication, and maintains the reliability of data in the cloud.
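    For intuition, block-level deduplication with an integrity check can be sketched as a content-addressed store: each block is kept once under its content hash, and an auditor can re-hash a sampled block to confirm the store still holds what the key claims. The sketch below is illustrative only (names and structure are assumptions); the paper's algorithm adds secure auditing and reliability guarantees not modeled here.

```python
import hashlib
import random

store = {}                                   # content hash -> block (deduped)
manifests = {}                               # file name -> ordered hash list

def put_file(name: str, blocks):
    keys = []
    for b in blocks:
        k = hashlib.sha256(b).hexdigest()
        store.setdefault(k, b)               # duplicate blocks stored once
        keys.append(k)
    manifests[name] = keys

put_file("a.bin", [b"common", b"unique-1"])
put_file("b.bin", [b"common", b"unique-2"])  # b"common" is deduplicated
assert len(store) == 3

def audit(name: str) -> bool:
    """Spot-check one random block of a file against its content hash."""
    k = random.choice(manifests[name])
    return hashlib.sha256(store[k]).hexdigest() == k

print(audit("a.bin"))                        # True while the store is intact
```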