
    An extensive research survey on data integrity and deduplication towards privacy in cloud storage

    Owing to the highly distributed nature of the cloud storage system, incorporating a strong degree of security for vulnerable data is a challenging task. Among the various security concerns, data privacy remains one of the unsolved problems in this regard. The prime reason is that existing data privacy approaches do not offer data integrity and secure data deduplication at the same time, both of which are essential for a high degree of resistance against all forms of dynamic threats over cloud and internet systems. Data integrity and data deduplication are thus closely associated phenomena that influence data privacy. This manuscript therefore discusses the explicit research contributions toward data integrity, data privacy, and data deduplication. It also highlights the potential open research issues, followed by a discussion of possible future directions of work towards addressing the existing problems.

    Identity-based remote data integrity checking with perfect data privacy preserving for cloud storage

    Remote data integrity checking (RDIC) enables a data storage server, such as a cloud server, to prove to a verifier that it is honestly storing a data owner's data. A number of RDIC protocols have been proposed in the literature, but almost all of these constructions suffer from complex key management: they rely on an expensive public key infrastructure (PKI), which may hinder the deployment of RDIC in practice. In this paper, we propose a new construction of an identity-based (ID-based) RDIC protocol that uses a key-homomorphic cryptographic primitive to reduce the system complexity and the cost of establishing and managing the public key authentication framework required by PKI-based RDIC schemes. We formalize ID-based RDIC and its security model, including security against a malicious cloud server and zero-knowledge privacy against a third-party verifier. We then provide a concrete construction of an ID-based RDIC scheme that leaks no information about the stored files to the verifier during the RDIC process. The new construction is proven secure against a malicious server in the generic group model and achieves zero-knowledge privacy against the verifier. Extensive security analysis and implementation results demonstrate that the proposed protocol is provably secure and practical for real-world applications.
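    The challenge-response pattern that RDIC protocols share can be illustrated with a short sketch. What follows is a toy spot-check using HMAC tags from the Python standard library, with hypothetical names, not the paper's pairing-based ID-based construction: real schemes aggregate the response so the verifier never sees the blocks (zero-knowledge privacy) and derive keys from identity strings rather than PKI certificates.

```python
import hashlib
import hmac
import secrets

BLOCK_SIZE = 4096  # toy parameter

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def tag_blocks(key: bytes, blocks):
    # One MAC tag per block, bound to the block index to prevent reordering.
    return [hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def make_challenge(n_blocks: int, sample_size: int = 3):
    # Random, unpredictable indices force the server to keep every block.
    return secrets.SystemRandom().sample(range(n_blocks), sample_size)

def prove(blocks, tags, challenge):
    # Server's response: the challenged blocks with their stored tags.
    return [(i, blocks[i], tags[i]) for i in challenge]

def verify(key: bytes, proof):
    # Verifier recomputes each tag; any corrupted block fails the check.
    return all(
        hmac.compare_digest(
            hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest(),
            tag)
        for i, blk, tag in proof)

if __name__ == "__main__":
    key = secrets.token_bytes(32)
    blocks = split_blocks(secrets.token_bytes(5 * BLOCK_SIZE))  # stand-in file
    tags = tag_blocks(key, blocks)
    challenge = make_challenge(len(blocks))
    assert verify(key, prove(blocks, tags, challenge))
```

    Sampling a constant number of random blocks per audit detects large-scale corruption with high probability while keeping each audit cheap, which is the economy the aggregated schemes above preserve without revealing the data.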

    Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy

    Cloud storage services have become accessible to, and used by, everyone. Nevertheless, stored data depend on the behavior of the cloud servers, and losses and damages often occur. One solution is to regularly audit the cloud servers in order to check the integrity of the stored data. The Dynamic Provable Data Possession scheme with Public Verifiability and Data Privacy presented at ACISP'15 is a straightforward design of such a solution. However, this scheme is threatened by several attacks. In this paper, we carefully recall the definition of this scheme and explain how its security is seriously compromised. Moreover, we propose two new constructions for a Dynamic Provable Data Possession scheme with Public Verifiability and Data Privacy based on the ACISP'15 scheme, one using Index Hash Tables and one based on Merkle Hash Trees. We show that the two schemes are secure and privacy-preserving in the random oracle model.
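    A minimal sketch of the Merkle Hash Tree mechanism the second construction builds on (helper names are hypothetical, and the paper's scheme adds signatures and dynamic-update bookkeeping on top): the verifier stores only the root, and the server proves possession of any block with a logarithmic-size audit path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Bottom-up levels; odd-sized levels duplicate their last node.
    # Distinct prefixes domain-separate leaf and internal hashes.
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(b"\x01" + cur[i] + cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels  # levels[-1][0] is the root the client keeps

def audit_path(levels, index):
    # Sibling hashes from leaf to root, each with a "sibling is left" flag.
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify_block(root, leaf, path):
    node = h(b"\x00" + leaf)
    for sibling, sibling_is_left in path:
        node = (h(b"\x01" + sibling + node) if sibling_is_left
                else h(b"\x01" + node + sibling))
    return node == root

if __name__ == "__main__":
    blocks = [b"block-%d" % i for i in range(5)]
    levels = build_tree(blocks)
    root = levels[-1][0]
    assert verify_block(root, blocks[3], audit_path(levels, 3))
```

    Updating a block only requires recomputing the hashes along its root path, which is what makes this structure attractive for the dynamic (update-supporting) variant of provable data possession.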

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols that allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations.

    Robustness has not previously been considered for dynamic RDCs in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring small, constant client storage. The main challenge is to reduce the client-server communication during updates in an adversarial setting. A security analysis for R-DPDP is provided.

    Single-server RDCs are useful to detect server misbehavior, but have no provisions to recover damaged data. In practice, they should therefore be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, repair it by retrieving data from healthy servers, so that the reliability level can be maintained. Previously, RDC has been investigated for replication-based and erasure-coding-based distributed storage systems, but not for network-coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network-coding-based distributed storage systems; it ensures that data remain intact when faced with data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers.

    The setting considered so far outsources the storage of the data, but the data owner remains heavily involved in the data management process, especially during the repair of damaged data. A new paradigm is proposed in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant computation and communication burden on the client, making it very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption, while the client acts as a lightweight coordinator during repair. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable server-side repair and minimize the load on the client side.

    Version control systems (VCSs) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure that all data versions remain retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest performance decrease compared to a regular (non-secure) SVN system.
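    The replication-based audit-and-repair loop described above can be sketched briefly. In this toy version (all names hypothetical) the client precomputes a few challenge/response pairs before outsourcing, so it stays lightweight; in RDC-SR/ERDC-SR the healthy servers perform the repair themselves, which the `repair` step below only approximates by copying between servers at the client's instruction.

```python
import hashlib
import secrets

class ReplicaServer:
    """Toy server holding one full replica of the file."""
    def __init__(self, data: bytes):
        self.data = data

    def respond(self, nonce: bytes) -> bytes:
        # A fresh nonce per audit prevents replaying an old answer.
        return hashlib.sha256(nonce + self.data).digest()

def precompute_challenges(data: bytes, k: int = 16):
    # Done once before outsourcing; the client keeps only these small pairs.
    pairs = []
    for _ in range(k):
        nonce = secrets.token_bytes(16)
        pairs.append((nonce, hashlib.sha256(nonce + data).digest()))
    return pairs

def audit(servers, pairs):
    # Spend one precomputed pair to check every replica with the same nonce.
    nonce, expected = pairs.pop()
    return [server.respond(nonce) == expected for server in servers]

def repair(servers, ok_flags):
    # Client coordinates only: a healthy replica overwrites corrupted ones.
    source = servers[ok_flags.index(True)]
    for server, ok in zip(servers, ok_flags):
        if not ok:
            server.data = source.data

if __name__ == "__main__":
    data = secrets.token_bytes(1024)
    servers = [ReplicaServer(data) for _ in range(3)]
    pairs = precompute_challenges(data)
    servers[1].data = b"corrupted"    # simulate a misbehaving server
    flags = audit(servers, pairs)     # -> [True, False, True]
    repair(servers, flags)
    assert all(audit(servers, pairs))
```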

    Co-Check: Collaborative Outsourced Data Auditing in Multicloud Environment

    With the increasing demand for ubiquitous connectivity, wireless technology has significantly improved our daily lives. Meanwhile, together with cloud computing technology (e.g., cloud storage services and big data processing), new wireless networking technology has become the foundational infrastructure of emerging communication networks. In particular, cloud storage has been widely used for services such as data outsourcing and resource sharing among heterogeneous wireless environments because of its convenience, low cost, and flexibility. However, users/clients lose physical control of their data after outsourcing. Consequently, ensuring the integrity of outsourced data becomes an important security requirement of cloud storage applications. In this paper, we present Co-Check, a collaborative multicloud data integrity auditing scheme based on BLS (Boneh-Lynn-Shacham) signatures and homomorphic tags. Under the proposed scheme, clients can audit their outsourced data in a one-round challenge-response interaction with low performance overhead. Our scheme also supports dynamic data maintenance. The theoretical analysis and experimental results illustrate that our scheme is provably secure and efficient.
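    The key idea behind homomorphic tags is that the server can fold its answer into a constant-size pair, however many blocks are challenged. Co-Check uses BLS signatures for public verifiability; the sketch below (hypothetical names, a toy 256-bit modulus) swaps in a secret-key linearly homomorphic MAC so the same aggregation pattern can be shown with the standard library alone.

```python
import hashlib
import hmac
import secrets

Q = 2**256 - 189  # a 256-bit prime modulus (toy parameter)

def prf(key: bytes, i: int) -> int:
    # Pseudorandom per-index field element derived from the secret key.
    return int.from_bytes(
        hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest(),
        "big") % Q

def keygen():
    return {"alpha": secrets.randbelow(Q), "prf_key": secrets.token_bytes(32)}

def tag(sk, i: int, m: int) -> int:
    # sigma_i = alpha * m_i + f_k(i) (mod Q): linear in the block value.
    return (sk["alpha"] * m + prf(sk["prf_key"], i)) % Q

def prove(blocks, tags, challenge):
    # Server aggregates: one (mu, sigma) pair, regardless of sample size.
    mu = sum(c * blocks[i] for i, c in challenge) % Q
    sigma = sum(c * tags[i] for i, c in challenge) % Q
    return mu, sigma

def verify(sk, challenge, mu, sigma):
    # Linearity: sum c_i*sigma_i = alpha * sum c_i*m_i + sum c_i*f_k(i).
    expected = (sk["alpha"] * mu +
                sum(c * prf(sk["prf_key"], i) for i, c in challenge)) % Q
    return sigma == expected

if __name__ == "__main__":
    sk = keygen()
    blocks = [secrets.randbelow(Q) for _ in range(8)]  # file as field elements
    tags = [tag(sk, i, m) for i, m in enumerate(blocks)]
    challenge = [(i, secrets.randbelow(Q)) for i in (1, 4, 6)]
    assert verify(sk, challenge, *prove(blocks, tags, challenge))
```

    The bandwidth of the response stays constant as the challenge grows, which is what makes the one-round interaction above cheap; the BLS variant achieves the same with public keys, so any third party can verify.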

    A Dynamic Proxy Oriented Approach for Remote Data Integrity Checking along with Secure Deduplication in Cloud

    In cloud computing, users store data on remote servers instead of their computers' hard drives. This leads to several security problems, since the data is out of the user's control. To protect against security attacks and to preserve data integrity in the cloud, Huaqun Wang et al. proposed proxy-oriented remote data integrity checking (RDIC). However, that scheme focuses only on one-way validation, i.e., clients learn whether their files are stored intact in the cloud. It does not address the problem of deduplication, which is essential given the increasing demand for cloud storage. Moreover, since users are untrusted from the server's perspective, there is a need to prove ownership of the files. The proposed work considers the requirement of mutual validation. In this paper, we propose a new construction of identity-based RDIC with secure deduplication. The proposed scheme avoids the burden of complex key management, is flexible in that it allows anyone besides the data owner to verify the contents of the data, and incurs a lower computation cost because token generation is done by the proxy instead of the user.
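    The mutual-validation idea, in which the server also checks the uploader, comes down to a proof of ownership before a deduplication hit is granted. A minimal sketch follows (hypothetical API; the paper additionally ties this to ID-based RDIC through a proxy): the server challenges with a fresh nonce, so knowing only a file's hash is not enough to claim the file.

```python
import hashlib
import secrets

class DedupServer:
    def __init__(self):
        self.store = {}  # file_id -> file bytes

    def has(self, file_id: str) -> bool:
        return file_id in self.store

    def pow_challenge(self) -> bytes:
        # Fresh nonce per claim: a stolen hash alone cannot answer it.
        return secrets.token_bytes(16)

    def pow_verify(self, file_id: str, nonce: bytes, answer: bytes) -> bool:
        expected = hashlib.sha256(nonce + self.store[file_id]).digest()
        return secrets.compare_digest(expected, answer)

def upload(server: DedupServer, data: bytes) -> str:
    file_id = hashlib.sha256(data).hexdigest()
    if not server.has(file_id):
        server.store[file_id] = data  # first uploader sends the full file
        return "stored"
    # Duplicate claimed: prove possession of the whole file, not its digest.
    nonce = server.pow_challenge()
    answer = hashlib.sha256(nonce + data).digest()
    if server.pow_verify(file_id, nonce, answer):
        return "deduplicated"         # ownership proven, nothing re-uploaded
    raise PermissionError("claimed file is not actually held")

if __name__ == "__main__":
    server = DedupServer()
    document = b"quarterly report"
    assert upload(server, document) == "stored"
    assert upload(server, document) == "deduplicated"
```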

    Public cloud data auditing with practical key update and zero knowledge privacy

    Data integrity is extremely important for cloud-based storage services, where cloud users no longer have physical possession of their outsourced files. A number of data auditing mechanisms have been proposed to solve this problem. However, how to update a cloud user's private auditing key (as well as the authenticators associated with that key) without the user re-possessing the data remains an open problem. In this paper, we propose a key-updating and authenticator-evolving mechanism with zero-knowledge privacy of the stored files for secure cloud data auditing, which incorporates zero-knowledge proof systems, proxy re-signatures, and homomorphic linear authenticators. We instantiate our proposal with the state-of-the-art Shacham-Waters auditing scheme. When the cloud user needs to update his key, instead of downloading the entire file and regenerating all the authenticators, the user can simply download and update the authenticators. This approach dramatically reduces the communication and computation costs while maintaining the desired security. We formalize the security model of zero-knowledge data privacy for auditing schemes in the key-updating context and prove the soundness and zero-knowledge privacy of the proposed construction. Finally, we analyze the communication, computation, and storage costs of the improved protocol, which demonstrates the practicality of the proposal.
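    The authenticator-evolving step can be sketched with a toy linearly homomorphic tag whose key enters multiplicatively, so a single re-keying factor converts every authenticator in place. This only illustrates the "update tags without re-possessing the file" idea under hypothetical names; the paper's construction uses proxy re-signatures over Shacham-Waters authenticators with zero-knowledge guarantees.

```python
import hashlib
import hmac
import secrets

Q = 2**255 - 19  # prime modulus (toy parameter)

def prf(key: bytes, i: int) -> int:
    return int.from_bytes(
        hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest(),
        "big") % Q

def tag(alpha: int, prf_key: bytes, i: int, m: int) -> int:
    # sigma_i = alpha * (m_i + f_k(i)) mod Q: the key enters multiplicatively.
    return alpha * ((m + prf(prf_key, i)) % Q) % Q

def rekey_token(old_alpha: int, new_alpha: int) -> int:
    # delta = alpha' / alpha mod Q; handed to the cloud/proxy on key update.
    return new_alpha * pow(old_alpha, -1, Q) % Q

def evolve(sigma: int, delta: int) -> int:
    # Run by the cloud: no file download, no old key, one multiplication.
    return sigma * delta % Q

def verify(alpha: int, prf_key: bytes, i: int, m: int, sigma: int) -> bool:
    return sigma == tag(alpha, prf_key, i, m)

if __name__ == "__main__":
    prf_key = secrets.token_bytes(32)        # unchanged across key updates
    old_alpha = secrets.randbelow(Q - 1) + 1
    new_alpha = secrets.randbelow(Q - 1) + 1
    sigma = tag(old_alpha, prf_key, 7, 123456)   # stored authenticator
    sigma2 = evolve(sigma, rekey_token(old_alpha, new_alpha))
    assert verify(new_alpha, prf_key, 7, 123456, sigma2)
```

    The cloud learns only the ratio of the two keys, never either key; the real scheme proves a comparable property formally in its zero-knowledge privacy model.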

    Efficient Method Based on Blockchain Ensuring Data Integrity Auditing with Deduplication in Cloud

    With the rapid development of cloud storage, more and more cloud clients can store and access their data anytime, from anywhere, and using any device. Data deduplication may be considered an excellent choice for ensuring storage efficiency. Although cloud technology offers many advantages for storage services, it also introduces security challenges, especially with regard to data integrity, which is one of the most critical elements in any system. A data owner should thus enable data integrity auditing mechanisms. Much research has recently been undertaken to deal with these issues. In this paper, we propose a novel blockchain-based method that preserves cloud data integrity checking with data deduplication. In our method, a mediator performs data deduplication on the client side, which reduces the amount of outsourced data and decreases the computation time and the bandwidth used between the enterprise and the cloud service provider. The method supports both private and public auditability, and it ensures the confidentiality of a client's data against auditors during the auditing process.
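    A toy sketch of the two moving parts, under hypothetical names: a mediator that deduplicates on the client side before anything leaves the enterprise, and an append-only hash chain standing in for the blockchain ledger that auditors replay. The paper's actual method layers proper auditing protocols on top of this skeleton.

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only hash chain: a minimal stand-in for the blockchain ledger."""
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "record": "genesis", "ts": 0}]

    def _block_hash(self, block: dict) -> str:
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, record: str):
        self.blocks.append({"prev": self._block_hash(self.blocks[-1]),
                            "record": record,
                            "ts": time.time()})

    def valid(self) -> bool:
        # Tampering with any earlier record breaks every later link.
        return all(cur["prev"] == self._block_hash(prev)
                   for prev, cur in zip(self.blocks, self.blocks[1:]))

def mediator_upload(chain: AuditChain, store: dict, data: bytes) -> str:
    # Client-side deduplication: a duplicate never crosses the network.
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:
        store[digest] = data
        chain.append(f"stored {digest}")
    else:
        chain.append(f"dedup-hit {digest}")
    return digest

if __name__ == "__main__":
    chain, cloud = AuditChain(), {}
    mediator_upload(chain, cloud, b"invoice.pdf contents")
    mediator_upload(chain, cloud, b"invoice.pdf contents")  # dedup hit
    assert len(cloud) == 1 and chain.valid()
    chain.blocks[1]["record"] = "forged"                    # tamper attempt
    assert not chain.valid()
```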

    Fuzzy identity-based data integrity auditing for reliable cloud storage systems

    As a core security issue in reliable cloud storage, data integrity has received much attention. Data auditing protocols enable a verifier to efficiently check the integrity of outsourced data without downloading the data. A key research challenge associated with existing designs of data auditing protocols is the complexity of key management. In this paper, we seek to address the complex key management challenge in cloud data integrity checking by introducing fuzzy identity-based auditing, the first such approach to the best of our knowledge. More specifically, we present the primitive of fuzzy identity-based data auditing, where a user's identity can be viewed as a set of descriptive attributes. We formalize the system model and the security model for this new primitive. We then present a concrete construction of a fuzzy identity-based auditing protocol that utilizes biometrics as the fuzzy identity. The new protocol offers the property of error tolerance: it binds a private key to one identity, and that key can verify the correctness of a response generated under another identity if and only if both identities are sufficiently close. We prove the security of our protocol based on the computational Diffie-Hellman assumption and the discrete logarithm assumption in the selective-ID security model. Finally, we develop a prototype implementation of the protocol that demonstrates the practicality of the proposal.
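    The error-tolerance property has the same flavor as fuzzy identity-based encryption: a secret behind an identity is Shamir-shared across its attributes, so any party whose identity overlaps in at least d attributes can reassemble it, and anyone further away cannot. The sketch below (toy field, hypothetical names; the actual protocol works with bilinear pairings under the CDH and DL assumptions) shows only that threshold mechanism.

```python
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime field (toy parameter)

def attr_x(attr: str) -> int:
    # Map each attribute to a nonzero field point (toy: truncated SHA-256).
    return int.from_bytes(
        hashlib.sha256(attr.encode()).digest()[:8], "big") % P or 1

def share_secret(secret: int, attrs, d: int) -> dict:
    # Degree-(d-1) polynomial: any d shares reconstruct, fewer reveal nothing.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(d - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return {a: f(attr_x(a)) for a in attrs}

def reconstruct(shares: dict) -> int:
    # Lagrange interpolation at x = 0 from (attribute point, share) pairs.
    points = [(attr_x(a), y) for a, y in shares.items()]
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

if __name__ == "__main__":
    identity = {"ridge-a", "ridge-b", "ridge-c", "ridge-d", "ridge-e"}
    other = {"ridge-a", "ridge-b", "ridge-c", "ridge-d", "ridge-x"}
    secret = secrets.randbelow(P)
    shares = share_secret(secret, identity, d=4)
    overlap = identity & other   # 4 shared attributes: close enough
    assert reconstruct({a: shares[a] for a in overlap}) == secret
```

    With a threshold of d = 4, the two biometric readings above differ in one attribute yet still recover the same secret, which is exactly the "sufficiently close identities" behavior the protocol formalizes.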