30 research outputs found

    Demonstrating data possession and uncheatable data transfer

    We observe that a certain RSA-based secure hash function is homomorphic. We describe a protocol based on this hash function which prevents 'cheating' in a data transfer transaction, while placing little burden on the trusted third party that oversees the protocol. We also describe a cryptographic protocol based on similar principles, through which a prover can demonstrate possession of an arbitrary set of data known to the verifier. The verifier isn't required to have this data at hand during the protocol execution, but rather only a small hash of it. The protocol is also provably as secure as integer factoring.
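    The homomorphic property the abstract relies on can be sketched as follows; the parameters here are toy values for illustration only, since a real deployment needs a large RSA modulus whose factorization is secret.

```python
# Hypothetical sketch of a multiplicative RSA-based homomorphic hash.
# N and g are demo-sized stand-ins, not parameters from the paper.
N = 3233          # toy RSA modulus (61 * 53); its factorization must stay secret
g = 5             # fixed public base

def h(m: int) -> int:
    """Hash an integer-encoded message as g^m mod N."""
    return pow(g, m, N)

# Homomorphic property: h(a) * h(b) == h(a + b) (mod N), so a verifier
# holding only a small hash can check claims about combined data.
a, b = 123, 456
assert (h(a) * h(b)) % N == h(a + b)
```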

    Formulating a Security Layer of Cloud Data Storage Framework Based on Multi Agent System Architecture

    The tremendous growth of cloud computing environments requires new architectures for security services. In addition, these computing environments are open, and users may connect or disconnect at any time. Cloud data storage, like any other emerging technology, is experiencing growing pains: it is immature, fragmented, and lacks standardization. To verify the correctness, integrity, confidentiality and availability of users' data in the cloud, we propose a security framework. This framework consists of two main layers: an agent layer and a cloud data storage layer. The proposed MAS architecture includes five types of agents: User Interface Agent (UIA), User Agent (UA), DER Agent (DERA), Data Retrieval Agent (DRA) and Data Distribution Preparation Agent (DDPA). The main goal of this paper is to formulate our secure framework and its architecture.
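    The five agent roles above can be sketched as a minimal message-passing layout; the class and method names are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of the two-layer framework's agent roles, assuming a
# simple inbox-based message-passing interface (an assumption of this sketch).
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    inbox: list = field(default_factory=list)

    def receive(self, msg: str) -> None:
        self.inbox.append(msg)

# The five agent types of the proposed MAS architecture.
uia  = Agent("User Interface Agent (UIA)")
ua   = Agent("User Agent (UA)")
dera = Agent("DER Agent (DERA)")
dra  = Agent("Data Retrieval Agent (DRA)")
ddpa = Agent("Data Distribution Preparation Agent (DDPA)")

# A user request enters at the agent layer and is prepared for the
# cloud data storage layer.
uia.receive("store: report.pdf")
ddpa.receive("prepare distribution: report.pdf")
```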

    METHOD TO ACHIEVE SECURITY AND STORAGE SERVICES IN CLOUD COMPUTING

    Cloud storage enables users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks to the correctness of the data in the cloud. To address this problem and achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed-storage integrity auditing mechanism that utilizes homomorphic tokens and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data-error localization, i.e., the identification of misbehaving servers. Because cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failures, malicious data-modification attacks, and even server-colluding attacks.
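    The token idea above can be sketched as a linear spot-check: the owner precomputes a short token over a pseudorandomly weighted subset of blocks and later compares it with the server's aggregated answer. The prime modulus, PRF, and block encoding here are demo stand-ins, not the paper's exact construction.

```python
# Hedged sketch of homomorphic-token auditing over integer-encoded blocks.
import hashlib
import random

P = 2_147_483_647  # demo-sized prime field for the token arithmetic

def prf(key: bytes, i: int) -> int:
    """Pseudorandom coefficient for block i, derived from the audit key."""
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(4, "big")).digest(), "big") % P

blocks = [random.randrange(P) for _ in range(8)]   # the outsourced data blocks
key = b"audit-key"
indices = [1, 3, 6]                                # challenged block positions

# Owner precomputes the token before outsourcing the data.
token = sum(prf(key, i) * blocks[i] for i in indices) % P

# An honest server returns the same linear combination over its stored blocks.
response = sum(prf(key, i) * blocks[i] for i in indices) % P
assert response == token   # audit passes

# A corrupted challenged block makes the response disagree,
# which is what enables error localization.
tampered = list(blocks)
tampered[3] = (tampered[3] + 1) % P
bad = sum(prf(key, i) * tampered[i] for i in indices) % P
assert bad != token
```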

    Studying Security Issues in HPC (Super Computer) Environment

    HPC has evolved from a buzzword into one of the most exciting areas in Information Technology and Computer Science. Organizations increasingly look to HPC to improve operational efficiency, reduce expenditure over time, and increase computational power. Using supercomputers hosted at a particular location and connected to the Internet can reduce the cost of installing computational power by centralising it. A centralised system has advantages and disadvantages compared with a distributed system, but we set those issues aside and focus on HPC systems. HPC can also be used to build web and file servers and to support cloud computing applications. Due to its cluster architecture and high processing speed, we have found that it handles cloud and network workloads far more efficiently than a series of normally configured desktops connected together. In this paper we discuss issues related to the security of data and information in the context of HPC; data and information are vulnerable to security and safety threats. The purpose of this paper is to present practical security issues related to the High Performance Computing environment. Based on our observations of HPC security requirements, we discuss some existing security technologies used in HPC; a survey of the literature shows that the existing techniques are not sufficient. We discuss some of the key issues in this context, and we then work toward an appropriate solution using the Blowfish encryption and decryption algorithm. We hope that with our proposed concepts, HPC applications will perform better and more safely. Finally, we propose a modified Blowfish technique that attaches a random number generator to make the encryption and decryption more appropriate for our own HPC environment.
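    The "attach a random number generator" idea can be sketched as drawing a fresh random session key per message, so equal plaintexts never encrypt alike. Python's standard library has no Blowfish, so a SHA-256 keystream stands in for the cipher below; it is a placeholder for illustration, not the paper's modified-Blowfish scheme.

```python
# Hedged sketch: per-message random session keys wrapped under a master key.
# keystream_xor is a stand-in cipher, NOT Blowfish.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (cipher stand-in)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encrypt(master_key: bytes, plaintext: bytes) -> bytes:
    session_key = os.urandom(16)                      # the attached RNG step
    wrapped = keystream_xor(master_key, session_key)  # protect the session key
    return wrapped + keystream_xor(session_key, plaintext)

def decrypt(master_key: bytes, ciphertext: bytes) -> bytes:
    session_key = keystream_xor(master_key, ciphertext[:16])
    return keystream_xor(session_key, ciphertext[16:])

msg = b"HPC job results"
ct = encrypt(b"cluster-master-key", msg)
assert decrypt(b"cluster-master-key", ct) == msg
```

Because the session key is random, encrypting the same message twice yields different ciphertexts, which is the property the random number generator is meant to add.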

    Efficient integrity verification of replicated data in cloud

    Cloud computing is an emerging model in which computing infrastructure resources are provided as a service over the Internet. Data owners can outsource their data by remotely storing it in the cloud and enjoy on-demand, high-quality services from a shared pool of configurable computing resources. By using these data storage services, data owners are relieved of the burden of local data storage and maintenance. However, since data owners and cloud servers are not in the same trusted domain, the outsourced data may be at risk, as the cloud server may no longer be fully trusted; data integrity is therefore of critical importance in such a scenario. The cloud should let owners, or a trusted third party, check the integrity of their data storage without demanding a local copy of the data. Owners often replicate their data on cloud servers across multiple data centers to provide a higher level of scalability, availability, and durability, and when they ask the Cloud Service Provider (CSP) to replicate data, they are charged a higher storage fee. The data owners therefore need to be strongly convinced that the CSP is storing all the data copies agreed on in the service level contract, and that data updates have been correctly executed on all the remotely stored copies. In this thesis, a Dynamic Multi-Replica Provable Data Possession scheme (DMR-PDP) is proposed that prevents the CSP from cheating, for example by maintaining fewer copies than paid for and/or tampering with data. We also extend the scheme to support a basic file versioning system where, for privacy reasons, only the difference between the original file and the updated file is propagated rather than the operations themselves. DMR-PDP also supports efficient dynamic operations such as block modification, insertion and deletion on replicas over the cloud servers --Abstract, page iii
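    One way to see how a multi-replica scheme can stop a CSP from storing fewer copies than paid for is to make each replica distinguishable: each copy is the data masked with a replica-specific pseudorandom value, so a challenge for replica r cannot be answered from a different copy. The modulus, PRF, and encoding below are illustrative stand-ins, not the thesis's exact construction.

```python
# Hedged sketch of distinguishable replicas for multi-replica possession checks.
import hashlib

Q = 1_000_003  # small prime modulus for the demo arithmetic

def mask(key: bytes, replica: int, i: int) -> int:
    """Replica- and block-specific pseudorandom mask derived from the owner's key."""
    seed = key + bytes([replica]) + i.to_bytes(4, "big")
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % Q

def make_replica(blocks, key, replica):
    return [(b + mask(key, replica, i)) % Q for i, b in enumerate(blocks)]

blocks = [10, 20, 30]
key = b"owner-secret"
r1 = make_replica(blocks, key, 1)
r2 = make_replica(blocks, key, 2)
assert r1 != r2   # replicas are pairwise distinct, so each must be stored

# The owner, knowing the key, strips the mask to recover the original data.
recovered = [(c - mask(key, 1, i)) % Q for i, c in enumerate(r1)]
assert recovered == blocks
```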