
    Efficient integrity verification of replicated data in cloud

    Cloud computing is an emerging model in which computing infrastructure resources are provided as a service over the Internet. Data owners can outsource their data by remotely storing it in the cloud and enjoy on-demand, high-quality services from a shared pool of configurable computing resources. By using these data storage services, data owners can relieve the burden of local data storage and maintenance. However, since data owners and cloud servers are not in the same trusted domain, the outsourced data may be at risk, as the cloud server may no longer be fully trusted. Data integrity is therefore of critical importance in such a scenario. The cloud should let the owners, or a trusted third party, check the integrity of their data storage without demanding a local copy of the data. Owners often replicate their data on cloud servers across multiple data centers to provide a higher level of scalability, availability, and durability. When data owners ask the Cloud Service Provider (CSP) to replicate data, they are charged a higher storage fee by the CSP. Therefore, the data owners need to be strongly convinced that the CSP is storing the data copies agreed on in the service level contract, and that data updates have been correctly executed on all the remotely stored copies. In this thesis, a Dynamic Multi-Replica Provable Data Possession (DMR-PDP) scheme is proposed that prevents the CSP from cheating, for example by maintaining fewer copies than paid for and/or tampering with data. In addition, the scheme is extended to support a basic file versioning system in which, for privacy reasons, only the difference between the original file and the updated file is propagated rather than the operations themselves. DMR-PDP also supports efficient dynamic operations such as block modification, insertion, and deletion on replicas across the cloud servers --Abstract, page iii
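    The challenge-response pattern behind such multi-replica auditing can be conveyed with a small sketch. The Python snippet below is only a hypothetical, MAC-based illustration (block_tag, CloudReplica, and the key handling are my own simplifications, not the thesis's DMR-PDP construction, which uses homomorphic tags): each tag binds a block to a specific replica identifier, so a CSP that keeps a single copy instead of the paid-for number cannot answer challenges for every replica.

```python
import hmac, hashlib, secrets

def block_tag(key: bytes, replica_id: int, index: int, block: bytes) -> bytes:
    # Tag binds the block to both its position and its replica, so replicas
    # are not interchangeable when answering a challenge.
    msg = replica_id.to_bytes(4, "big") + index.to_bytes(8, "big") + block
    return hmac.new(key, msg, hashlib.sha256).digest()

class CloudReplica:
    """One CSP replica storing (block, tag) pairs; a cheating CSP that drops
    or alters blocks will fail random spot checks with high probability."""
    def __init__(self, replica_id: int, blocks, tags):
        self.replica_id = replica_id
        self.store = dict(enumerate(zip(blocks, tags)))

    def respond(self, challenged_indices):
        return [(i, *self.store[i]) for i in challenged_indices]

# Owner side: keep only a constant-size secret key, outsource blocks + tags.
key = secrets.token_bytes(32)
blocks = [f"block-{i}".encode() for i in range(1000)]
replicas = [
    CloudReplica(r, blocks, [block_tag(key, r, i, b) for i, b in enumerate(blocks)])
    for r in range(3)  # three paid-for copies
]

# Audit: challenge a random sample of blocks on every replica.
rng = secrets.SystemRandom()
for rep in replicas:
    challenge = rng.sample(range(len(blocks)), k=20)
    ok = all(
        hmac.compare_digest(block_tag(key, rep.replica_id, i, blk), tag)
        for i, blk, tag in rep.respond(challenge)
    )
    print(f"replica {rep.replica_id}: {'intact' if ok else 'corrupted or missing'}")
```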

    Remote Data Integrity Checking in Cloud Computing

    Cloud computing is an Internet-based computing model that enables the sharing of services. Keeping all the data required by users' applications safe in the cloud is very challenging. Storing data in the cloud may not be fully trustworthy; since the client does not keep a copy of all stored data, he has to depend on the Cloud Service Provider. This work studies the problem of ensuring the integrity and security of data storage in cloud computing. The paper proposes an effective and flexible batch audit scheme with dynamic data support to reduce computation overheads. To ensure the correctness of users' data, a third party auditor (TPA) is allowed, on behalf of the cloud client, to verify the integrity of the data stored in the cloud. Symmetric encryption is considered for effective utilization of outsourced cloud data under this model, achieving storage security in multi-cloud data storage. The new scheme further supports secure and efficient dynamic operations on data blocks, including data insertion, update, deletion, and replacement. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failures, malicious data modification attacks, and even server colluding attacks

    Multiple TPA for Extensive Reliability and Security in Cloud Storage

    This paper focuses on the security and integrity of data held on cloud data servers. Data integrity verification is performed by a third party auditor who is authorized to check the integrity of the data periodically on behalf of the client. The way application software and databases are stored has changed: they are now kept in cloud data centers, where security is a concern from the client's point of view. This new development, which allows data to be stored and managed without upfront investment, has brought many security challenges that are not yet thoroughly understood. The third party auditor notifies the customer when data integrity is lost. The proposed system supports not only data integrity verification but also data mobility. Prior work on online data mobility lacks true public auditability. The auditor's task is to monitor data modifications such as insertions and removals. The proposed system supports both public auditability and data mobility. A review of the problems in existing systems revealed the motivation behind this work. A Merkle hash tree is used to improve block-level authentication (see the sketch below), and bilinear aggregate signatures are used to handle the auditing functions together, which enables the TPA to audit multiple clients concurrently. We therefore evaluate the TPA-based multi-user system. Experiments show that the proposed system is efficient and safe. For the proposed work we use multiple TPAs and multiple clients. DOI: 10.17762/ijritcc2321-8169.160412
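    The Merkle-hash-tree block authentication mentioned above can be sketched in isolation as follows; the helper names (build_levels, prove, verify) are mine, and the bilinear aggregate signatures and multi-TPA workflow of the paper are not modelled. A verifier holding only the (signed) root can confirm that a returned block is authentic and in the claimed position.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    # Bottom-up construction; an odd level duplicates its last node.
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                      # levels[-1][0] is the root

def prove(levels, index):
    # Collect the sibling hash at every level, noting whether it sits left.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(block, proof, root):
    node = h(block)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blocks = [f"data-block-{i}".encode() for i in range(9)]
levels = build_levels(blocks)
root = levels[-1][0]                   # in the scheme, the client would sign this root
assert verify(blocks[5], prove(levels, 5), root)
```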

    Co-Check: Collaborative Outsourced Data Auditing in Multicloud Environment

    With the increasing demand for ubiquitous connectivity, wireless technology has significantly improved our daily lives. Meanwhile, together with cloud-computing technology (e.g., cloud storage services and big data processing), new wireless networking technology has become the foundation infrastructure of emerging communication networks. In particular, cloud storage has been widely used for services such as data outsourcing and resource sharing among heterogeneous wireless environments because of its convenience, low cost, and flexibility. However, users/clients lose physical control of their data after outsourcing. Consequently, ensuring the integrity of the outsourced data becomes an important security requirement of cloud storage applications. In this paper, we present Co-Check, a collaborative multicloud data integrity auditing scheme based on BLS (Boneh-Lynn-Shacham) signatures and homomorphic tags. Under the proposed scheme, clients can audit their outsourced data in a one-round challenge-response interaction with low performance overhead. Our scheme also supports dynamic data maintenance. The theoretical analysis and experimental results illustrate that our scheme is provably secure and efficient
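    The way homomorphic tags enable a one-round, constant-size challenge-response can be illustrated with a toy example. The arithmetic below works over a prime field with made-up secrets alpha and beta; it is only a sketch of the aggregation idea, not the BLS-based Co-Check construction, and it is not secure as written.

```python
import secrets, hashlib

P = 2**127 - 1                     # toy prime modulus; real schemes use pairing groups

def h_index(i: int) -> int:
    return int.from_bytes(hashlib.sha256(i.to_bytes(8, "big")).digest(), "big") % P

# Client setup: secret (alpha, beta), one small linear tag per block.
alpha, beta = secrets.randbelow(P), secrets.randbelow(P)
blocks = [int.from_bytes(f"block-{i}".encode(), "big") % P for i in range(100)]
tags = [(alpha * h_index(i) + beta * m) % P for i, m in enumerate(blocks)]
# blocks and tags are outsourced; the client keeps only (alpha, beta).

# One-round audit: random indices with random coefficients.
challenge = [(i, secrets.randbelow(P))
             for i in secrets.SystemRandom().sample(range(100), 10)]

# Server side: one aggregated pair (mu, sigma) answers the whole challenge.
mu    = sum(c * blocks[i] for i, c in challenge) % P
sigma = sum(c * tags[i]   for i, c in challenge) % P

# Client side: constant-size check, no blocks are downloaded.
expected = (alpha * sum(c * h_index(i) for i, c in challenge) + beta * mu) % P
assert sigma == expected
```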

    Group Based Secure Sharing of Cloud Data with Provable Data Freshness

    Cloud computing technology makes it possible to outsource data and to share that data among cloud users. However, data outsourced to the cloud may be subject to integrity problems caused by faults in the underlying hardware or by software errors; human errors may also contribute. Many techniques have been developed to ensure data integrity, most of which involve some form of auditing. Public auditing schemes for the integrity of shared data might disclose confidential information. To overcome this problem, Wang et al. recently proposed a novel approach that supports public auditing without disclosing confidential information. They exploited ring signatures to compute verification metadata on the fly in order to audit the correctness of shared data. Public verifiers do not learn the identity of the signer; that is, a verifier can check the data without knowing who signed it. However, this scheme does not consider the freshness of data, which is very important in cloud services, since obtaining the latest copy of the data is essential to avoid stale data access. Towards this end, in this paper we propose an algorithm for ensuring the freshness of outsourced data when it is retrieved in a multi-user environment. Our empirical results show that the proposed algorithm is efficient. DOI: 10.17762/ijritcc2321-8169.15065
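    The abstract does not detail the freshness mechanism, so the following is only a hypothetical sketch of one common approach, assuming a pre-shared group key and a per-file version counter: every update carries an authenticated version record, and a reader rejects any copy older than the newest version it has already verified, which keeps the cloud from serving stale data undetected.

```python
import hmac, hashlib

GROUP_KEY = b"shared-group-key-32-bytes-long!!"   # assumed pre-shared in the group

def version_record(file_id: str, version: int, data: bytes) -> dict:
    # Writer side: bind file id, version number and content digest with a MAC.
    digest = hashlib.sha256(data).hexdigest()
    msg = f"{file_id}|{version}|{digest}".encode()
    return {"file_id": file_id, "version": version, "digest": digest,
            "mac": hmac.new(GROUP_KEY, msg, hashlib.sha256).hexdigest()}

def check_freshness(record: dict, data: bytes, last_seen_version: int) -> bool:
    # Reader side: the record must be authentic, match the data, and not be
    # older than the newest version this reader has already verified.
    msg = f"{record['file_id']}|{record['version']}|{record['digest']}".encode()
    authentic = hmac.compare_digest(
        record["mac"], hmac.new(GROUP_KEY, msg, hashlib.sha256).hexdigest())
    untampered = hashlib.sha256(data).hexdigest() == record["digest"]
    fresh = record["version"] >= last_seen_version
    return authentic and untampered and fresh

v3 = version_record("report.docx", 3, b"new contents")
assert check_freshness(v3, b"new contents", last_seen_version=3)
v2 = version_record("report.docx", 2, b"old contents")   # replayed stale copy
assert not check_freshness(v2, b"old contents", last_seen_version=3)
```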

    Identity-based remote data integrity checking with perfect data privacy preserving for cloud storage

    Remote data integrity checking (RDIC) enables a data storage server, such as a cloud server, to prove to a verifier that it is actually storing a data owner's data honestly. To date, a number of RDIC protocols have been proposed in the literature, but almost all of these constructions suffer from the issue of complex key management; that is, they rely on the expensive public key infrastructure (PKI), which might hinder the deployment of RDIC in practice. In this paper, we propose a new construction of an identity-based (ID-based) RDIC protocol that uses a key-homomorphic cryptographic primitive to reduce the system complexity and the cost of establishing and managing the public key authentication framework required by PKI-based RDIC schemes. We formalize ID-based RDIC and its security model, including security against a malicious cloud server and zero-knowledge privacy against a third party verifier. We then provide a concrete construction of an ID-based RDIC scheme that leaks no information about the stored files to the verifier during the RDIC process. The new construction is proven secure against the malicious server in the generic group model and achieves zero-knowledge privacy against a verifier. Extensive security analysis and implementation results demonstrate that the proposed protocol is provably secure and practical in real-world applications. This work is supported by the National Natural Science Foundation of China (61501333, 61300213, 61272436, 61472083), the Fok Ying Tung Education Foundation (141065), and the Program for New Century Excellent Talents in Fujian University (JA1406)

    Blockchain & Multi-Agent System: A New Promising Approach for Cloud Data Integrity Auditing with Deduplication

    Data storage has recently become one of the most important services in cloud computing. The cloud provider should ensure two major requirements: data integrity and storage efficiency. The blockchain data structure and efficient data deduplication are possible solutions to address these requirements. Several approaches have been proposed; some of them implement deduplication on the cloud server side, which involves considerable computation to eliminate redundant data and becomes increasingly complex. This paper therefore proposes an efficient, reliable, and secure approach in which the authors use a Multi-Agent System to manage the deduplication technique, reducing data volumes and thereby storage overhead. On the other hand, the loss of physical control over data introduces security challenges such as data loss, data tampering, and data modification. To address these problems, the authors also propose a blockchain as a database for storing the metadata of client files. This database serves as a logging database that supports the data integrity auditing function
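    A minimal sketch of the two ingredients named in the abstract follows, under assumptions of my own: content-hash deduplication so that identical chunks are stored once, and a hash-chained metadata log standing in for the blockchain ledger. The multi-agent coordination itself is not modelled, and the structures here are illustrative, not the paper's implementation.

```python
import hashlib, json, time

chunk_store = {}     # chunk_hash -> chunk bytes (each distinct chunk stored once)
ledger = []          # hash-chained metadata records (tamper-evident audit trail)

def append_record(payload: dict) -> None:
    # Each record commits to the previous one, so rewriting history is detectable.
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"prev": prev_hash, "time": time.time(), **payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)

def store_chunk(client_id: str, chunk: bytes) -> bool:
    """Store a chunk for a client; returns True if new, False if deduplicated."""
    chunk_hash = hashlib.sha256(chunk).hexdigest()
    is_new = chunk_hash not in chunk_store
    if is_new:
        chunk_store[chunk_hash] = chunk
    append_record({"client": client_id, "chunk": chunk_hash, "new": is_new})
    return is_new

def ledger_intact() -> bool:
    prev = "0" * 64
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

store_chunk("alice", b"shared-report-chunk")
store_chunk("bob",   b"shared-report-chunk")   # deduplicated: only metadata is added
assert len(chunk_store) == 1 and ledger_intact()
```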

    Public cloud data auditing with practical key update and zero knowledge privacy

    Data integrity is extremely important for cloud based storage services, where cloud users no longer have physical possession of their outsourced files. A number of data auditing mechanisms have been proposed to solve this problem. However, how to update a cloud user's private auditing key (as well as the authenticators those keys are associated with) without the user's re-possession of the data remains an open problem. In this paper, we propose a key-updating and authenticator-evolving mechanism with zero-knowledge privacy of the stored files for secure cloud data auditing, which incorporates zero knowledge proof systems, proxy re-signatures and homomorphic linear authenticators. We instantiate our proposal with the state-of-the-art Shacham-Waters auditing scheme. When the cloud user needs to update his key, instead of downloading the entire file and re-generating all the authenticators, the user can just download and update the authenticators. This approach dramatically reduces the communication and computation cost while maintaining the desirable security. We formalize the security model of zero knowledge data privacy for auditing schemes in the key-updating context and prove the soundness and zero-knowledge privacy of the proposed construction. Finally, we analyze the complexity of communication, computation and storage costs of the improved protocol which demonstrates the practicality of the proposal
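    The idea of evolving authenticators at the cloud without re-downloading the file can be conveyed with a deliberately simplified toy. This is not the paper's proxy re-signature construction over Shacham-Waters tags and is not secure on its own; it only shows how tags that depend multiplicatively on the user's key can all be refreshed in place with a single re-key token.

```python
import hashlib, secrets

P = 2**255 - 19          # toy prime modulus

def h(i: int, block: bytes) -> int:
    return int.from_bytes(hashlib.sha256(i.to_bytes(8, "big") + block).digest(),
                          "big") % P

blocks = [f"outsourced-block-{i}".encode() for i in range(50)]

# Tags under the old key are stored at the cloud alongside the blocks.
x_old = secrets.randbelow(P - 1) + 1
tags = [(x_old * h(i, b)) % P for i, b in enumerate(blocks)]

# Key update: the user picks a new key and sends only a small re-key token.
x_new = secrets.randbelow(P - 1) + 1
token = (x_new * pow(x_old, -1, P)) % P

# Cloud side: refresh every authenticator in place using only the token;
# the user never re-downloads or re-uploads the file itself.
tags = [(token * t) % P for t in tags]

# Later audit under the new key: spot-check one block's evolved tag.
i = 7
assert tags[i] == (x_new * h(i, blocks[i])) % P
```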