9 research outputs found

    Achieve High Verifiability using Proxy Resignature and TPA in User Revocation within the Cloud

    With cloud storage, users can store their data remotely and enjoy on-demand, high-quality applications and services, relieving them of the burden of local data storage and maintenance. With data storage and sharing services in the cloud, users in a group can easily modify and share data. To ensure that the integrity of shared data can be verified publicly, the users in the group must compute signatures on all blocks of shared data; because of data modifications, different blocks are usually signed by different users. When a user is revoked from the group, the blocks previously signed by that user must be re-signed. The proposed system uses proxy re-signatures: the cloud re-signs the revoked user's blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign those blocks themselves. A public verifier is always able to audit the integrity of the shared data without retrieving all of it from the cloud, even if part of the shared data has been re-signed by the cloud. The system also supports batch auditing, verifying multiple auditing tasks simultaneously, and includes an efficient probabilistic query and audit service based on periodic verification to improve performance. Experimental results show that the system considerably improves the efficiency of user revocation and saves the computation and communication resources of existing users. DOI: 10.17762/ijritcc2321-8169.15062
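The proxy re-signature idea described above can be illustrated with a toy sketch. This is not the paper's actual pairing-based construction; the group parameters, key values, and function names below are invented for illustration. The point is the key property: given only a re-signing key, the cloud converts a revoked user's signature into an existing user's signature without learning either secret key.

```python
import hashlib

# Toy sketch of proxy re-signatures (NOT the paper's pairing-based scheme;
# all parameters here are illustrative and cryptographically insecure).
P = 2**61 - 1        # prime modulus, so the group Z_p^* has order P - 1
ORDER = P - 1

def hash_to_group(block: bytes) -> int:
    """Hash a data block into the multiplicative group mod P."""
    return int.from_bytes(hashlib.sha256(block).digest(), "big") % P

def sign(block: bytes, sk: int) -> int:
    """Illustrative signature: H(block)^sk mod P."""
    return pow(hash_to_group(block), sk, P)

def rekey(sk_revoked: int, sk_existing: int) -> int:
    """Re-signing key rk = sk_existing / sk_revoked mod (P - 1).
    Requires sk_revoked to be invertible modulo P - 1."""
    return sk_existing * pow(sk_revoked, -1, ORDER) % ORDER

def resign(sig: int, rk: int) -> int:
    """The cloud turns the revoked user's signature into the existing
    user's signature without learning either secret key."""
    return pow(sig, rk, P)

sk_alice, sk_bob = 0x1234567, 0x7654321   # toy secret keys (Alice is revoked)
block = b"shared data block #7"

rk = rekey(sk_alice, sk_bob)
sig_bob = resign(sign(block, sk_alice), rk)

# The re-signed block verifies exactly as if Bob had signed it himself,
# so Bob never has to download and re-sign the block.
assert sig_bob == sign(block, sk_bob)
```

Because the re-signing key only relates the two secret keys, the cloud can perform the conversion for every block the revoked user signed, which is what saves existing users the download-and-re-sign work.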

    Improved Third Party Auditing Approach For Shared Data In The Cloud With Efficient Revocation of User

    To publicly verify the integrity of shared data, all users in the group need to compute signatures on the blocks of shared data. Because the data is modified by different users, different blocks are typically signed by different users. Once a user is revoked from the group, for security reasons, the blocks previously signed by the revoked user must be re-signed by an existing user. The straightforward method, in which an existing user downloads the affected portion of shared data and re-signs it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. Using the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign the blocks themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving all of it from the cloud, even if part of the shared data has been re-signed by the cloud. Moreover, our mechanism supports batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism considerably improves the efficiency of user revocation. DOI: 10.17762/ijritcc2321-8169.15073

    A SCALABLE APPROACH TOWARDS MANAGEMENT OF CONSISTENT DATA IN CLOUD SETTING

    A number of recent works have focused on preserving identity privacy from public verifiers during the auditing of shared data integrity. To ensure that the integrity of shared data can be verified publicly, users within the group need to compute signatures on all the blocks in shared data. In our work we put forward Panda, a new public auditing method for the integrity of shared data with efficient user revocation in the cloud. This method is effective and scalable: it is not only able to support a large number of users sharing data, but also able to handle numerous auditing tasks simultaneously through batch auditing, verifying multiple auditing tasks at the same time, and it remains efficient and secure during user revocation. By designing the proxy re-signature scheme with fine properties that traditional proxy re-signatures do not have, our method is always able to check the integrity of shared data without retrieving the entire data from the cloud.
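The batch-auditing idea mentioned above, verifying many auditing tasks with a single aggregate check, can be sketched as follows. This toy uses symmetric HMAC tags rather than the homomorphic authenticators a real public-auditing scheme relies on, and all names and parameters are illustrative:

```python
import hashlib
import hmac
import secrets

def tag(key: bytes, index: int, block: bytes) -> bytes:
    """Per-block authenticator over (index || block)."""
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def batch_audit(key: bytes, stored_tags: dict, challenged: dict) -> bool:
    """Check many challenged blocks with a single aggregate comparison
    instead of one comparison per auditing task."""
    expected, actual = hashlib.sha256(), hashlib.sha256()
    for idx in sorted(challenged):
        expected.update(stored_tags[idx])
        actual.update(tag(key, idx, challenged[idx]))
    return hmac.compare_digest(expected.digest(), actual.digest())

key = secrets.token_bytes(32)
blocks = {i: f"block-{i}".encode() for i in range(6)}
tags = {i: tag(key, i, b) for i, b in blocks.items()}

assert batch_audit(key, tags, {i: blocks[i] for i in (1, 3, 5)})   # honest
assert not batch_audit(key, tags, {1: blocks[1], 3: b"tampered"})  # caught
```

A single corrupted block fails the whole batch, which is the usual trade-off of batch verification: cheaper when everything is valid, with a fallback to individual checks to locate a failure.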

    Shared Data Integrity Using Public Auditing Mechanism

    Although cloud providers promise a safer and more dependable environment to users, the integrity of data in the cloud may still be compromised, owing to hardware/software failures and human errors. To ensure that shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are usually signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a new public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By employing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks themselves.

    Recent trends in applying TPM to cloud computing

    Trusted platform modules (TPM) have become important safeguards against a variety of software-based attacks. By providing a limited set of cryptographic services through a well-defined interface, separated from the software itself, the TPM can serve as a root of trust and as a building block for higher-level security measures. This article surveys the literature for applications of TPM in the cloud-computing environment, with publication dates between 2013 and 2018. It identifies the current trends and objectives of this technology in the cloud, and the types of threats that it mitigates. Toward the end, the main research gaps are pinpointed and discussed. Since integrity measurement is one of the main usages of TPM, special attention is paid to the run-time phases and software layers it is applied to.
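The integrity-measurement usage mentioned above rests on the TPM's extend operation: a platform configuration register (PCR) can only be updated by hashing a new measurement into its current value. A minimal sketch of that semantics (the stage names are invented; a real TPM performs this in hardware):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || measurement).
    The hash chain makes the final value depend on every measurement
    and on their order, so no stage can be rewritten after the fact."""
    return hashlib.sha256(pcr + measurement).digest()

# Measured-boot sketch: each stage is hashed into the PCR before it runs.
pcr = bytes(32)  # PCRs start at all zeros
for stage in (b"bootloader", b"kernel", b"initrd"):
    pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())

# Reordering (or altering) any stage yields a different final PCR value.
alt = bytes(32)
for stage in (b"kernel", b"bootloader", b"initrd"):
    alt = pcr_extend(alt, hashlib.sha256(stage).digest())
assert pcr != alt
```

A verifier that knows the expected measurements can recompute the chain and compare it against a PCR quote signed by the TPM, which is the basis of the remote-attestation uses the survey covers.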

    Data Auditing and Security in Cloud Computing: Issues, Challenges and Future Directions

    Cloud computing is one of the significant developments that utilizes progressive computational power and upgrades data distribution and data storage facilities. With cloud information services, it is essential for information to be saved in the cloud and also distributed across numerous customers. A cloud information repository faces issues of information integrity, data security, and information access by unauthorized users. Hence, an autonomous reviewing and auditing facility is necessary to guarantee that the information is effectively accommodated and used in the cloud. In this paper, a comprehensive survey of state-of-the-art techniques in data auditing and security is presented, challenging problems in information repository auditing and security are discussed, and, finally, directions for future research in data auditing and security are outlined.

    New directions for remote data integrity checking of cloud storage

    Cloud storage services allow data owners to outsource their data, and thus reduce their workload and cost in data storage and management. However, most data owners today are still reluctant to outsource their data to cloud storage providers (CSPs), simply because they do not trust the CSPs and have no confidence that the CSPs will secure their valuable data. This dissertation focuses on Remote Data Checking (RDC), a collection of protocols that allow a client (data owner) to check the integrity of data outsourced at an untrusted server, and thus to audit whether the server fulfills its contractual obligations. Robustness has not been considered for dynamic RDCs in the literature. The R-DPDP scheme designed here is the first RDC scheme that provides robustness and, at the same time, supports dynamic data updates, while requiring small, constant client storage. The main challenge to overcome is reducing the client-server communication during updates under an adversarial setting. A security analysis for R-DPDP is provided. Single-server RDC is useful for detecting server misbehavior, but has no provisions to recover damaged data. Thus, in practice, it should be extended to a distributed setting, in which the data is stored redundantly at multiple servers. The client can use RDC to check each server and, upon detecting a corrupted server, repair it by retrieving data from healthy servers, so that the reliability level can be maintained. Previously, RDC had been investigated for replication-based and erasure-coding-based distributed storage systems, but not for network-coding-based distributed storage systems that rely on untrusted servers. RDC-NC is the first RDC scheme for network-coding-based distributed storage systems; it ensures data remain intact when faced with data corruption, replay, and pollution attacks. Experimental evaluation shows that RDC-NC is inexpensive for both the clients and the servers. The settings considered so far outsource the storage of the data, but the data owner is still heavily involved in the data management process (especially during the repair of damaged data). A new paradigm is proposed in which the data owner fully outsources both the storage and the management of the data. In traditional distributed RDC schemes, the repair phase imposes a significant burden on the client, who must expend a significant amount of computation and communication, making it very difficult to keep the client lightweight. A new self-repairing concept is developed, in which the servers are responsible for repairing the corruption while the client acts as a lightweight coordinator. To realize this concept, two novel RDC schemes, RDC-SR and ERDC-SR, are designed for replication-based distributed storage systems; they enable server-side repair and minimize the load on the client side. Version control systems (VCS) provide the ability to track and control changes made to data over time. The changes are usually stored in a VCS repository which, due to its massive size, is often hosted at an untrusted CSP. RDC can be used to address concerns about the untrusted nature of the VCS server by allowing a data owner to periodically check that the server continues to store the data. The RDC-AVCS scheme designed here relies on RDC to ensure all data versions remain retrievable from the untrusted server over time. The RDC-AVCS prototype built on top of Apache SVN incurs only a modest decrease in performance compared to a regular (non-secure) SVN system.
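The spot-checking idea at the core of RDC, probabilistically auditing an untrusted server while the client keeps only constant state, can be sketched as follows. This is a simplified PDP-style check with symmetric tags; the class names and parameters are illustrative, and real schemes use homomorphic tags so the server can answer without shipping the sampled blocks back:

```python
import hashlib
import hmac
import random
import secrets

def tag(key: bytes, i: int, block: bytes) -> bytes:
    """Per-block authenticator over (index || block)."""
    return hmac.new(key, i.to_bytes(8, "big") + block, hashlib.sha256).digest()

class Server:
    """Untrusted server stores the blocks and their tags."""
    def __init__(self, blocks, tags):
        self.blocks, self.tags = list(blocks), list(tags)

    def respond(self, indices):
        return [(i, self.blocks[i], self.tags[i]) for i in indices]

class Client:
    """After outsourcing, the data owner keeps only a key (constant storage)."""
    def __init__(self, key, n_blocks):
        self.key, self.n = key, n_blocks

    def challenge(self, sample_size):
        return random.sample(range(self.n), sample_size)

    def verify(self, response):
        return all(hmac.compare_digest(t, tag(self.key, i, b))
                   for i, b, t in response)

key = secrets.token_bytes(32)
blocks = [f"block-{i}".encode() for i in range(100)]
server = Server(blocks, [tag(key, i, b) for i, b in enumerate(blocks)])
client = Client(key, len(blocks))

assert client.verify(server.respond(client.challenge(10)))  # honest server

server.blocks[42] = b"corrupted"
# A cheating server is caught whenever the random sample hits a bad block;
# the detection probability grows with the sample size.
assert not client.verify(server.respond([42]))
```

Sampling is what keeps the audit cheap: the client checks a small random subset per epoch, yet over repeated audits a server that has lost any nontrivial fraction of the data is detected with high probability.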