Secure Distributed Cloud Storage based on the Blockchain Technology and Smart Contracts
Objectives: This paper addresses the problem of secure data storage and sharing over cloud storage infrastructures. A secure, distributed cloud storage structure incorporating blockchain is proposed that supports confidentiality, integrity, and availability. Methods/Analysis: The proposed structure combines two well-known technologies: the Ethereum blockchain with its smart contracts, and the RSA encryption and authentication scheme. The Ethereum blockchain serves as a data structure that ensures data availability and integrity, while RSA provides confidentiality for sensitive data and source authentication. Findings: As a result, users of the proposed structure can trust it and be certain that they can securely exchange information through a publicly accessible, shared cloud storage. The application can be used either through a user interface (UI) or a command-line interface (CLI). Novelty/Improvement: The novelty of this work is that the proposed system can be used for secure data storage on the cloud as well as for file sharing and authentication verification. DOI: 10.28991/ESJ-2023-07-02-012
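The division of labor described above — the blockchain for integrity and availability, RSA for confidentiality and source authentication — can be illustrated with a toy RSA round trip. This is a sketch with textbook-sized parameters only; real deployments use 2048-bit or larger keys with OAEP/PSS padding, which the abstract does not detail.

```python
# Toy RSA with textbook-sized primes (illustration only; insecure in practice).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

m = 65                     # a "message" block (must be < n)

# Confidentiality: encrypt with the recipient's public key (e, n).
c = pow(m, e, n)
assert pow(c, d, n) == m   # only the holder of d recovers m

# Source authentication: sign with the sender's private key (d, n).
s = pow(m, d, n)
assert pow(s, e, n) == m   # anyone can verify with the public key
```

The same keypair thus serves both roles the paper assigns to RSA: encryption toward a recipient, and signing to authenticate a data source.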
Treasure Island Security framework : A Generic Security Framework for public clouds
In this thesis we introduce a generic security framework for public clouds, called the Treasure Island Security Framework, designed to address cloud computing security issues and specifically key management in untrusted domains. Many cloud structures and services are now available, but as an inevitable concomitant of these new products, security issues are increasing rapidly. Availability, data integrity, lack of trust, and confidentiality are of great importance to cloud computing users; they may grow more skeptical of cloud services when they feel they might lose control over their data or over the structures the cloud provides for them. Because control of data is deferred from customers to cloud providers, with an unknown number of third parties in between, it is almost impossible to apply traditional security methods. We present our security framework, with distributed keys and sequential addressing, in a simple abstract model with a master server and an adequate number of chunk servers. We assume a fixed chunk size for large files and a sequentially distributed file system with four separate keys to encrypt/decrypt a file. After reviewing the process, we analyze the Distributed Key and Sequential Addressing distributed file system and its security risk model. The focus of this thesis is on increasing security in untrusted domains, especially key management in the public cloud. We discuss cryptographic approaches to key management and suggest a novel cryptographic method for a public cloud's key-management system based on forward-secure public-key encryption, which supports a non-interactive publicly verifiable secret sharing scheme through a tree access structure. We believe that the Treasure Island Security Framework can provide a more secure environment in untrusted domains, such as public clouds, in which users can securely reconstruct their secret keys (e.g., lost passphrases).
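The secret-key reconstruction mentioned above rests on secret sharing. As a minimal sketch of that idea — the thesis proposes a non-interactive publicly verifiable scheme over a tree access structure, whereas this is plain Shamir threshold sharing with hypothetical parameters:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

# E.g. the key protecting a lost passphrase, recoverable from any 2 of 3 shares.
shares = split(123456789, k=2, n=3)
assert reconstruct(shares[:2]) == 123456789
```

Fewer than k shares reveal nothing about the secret, which is what lets share-holders in an untrusted domain cooperate to restore a user's key without any one of them holding it.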
Finally, we discuss the advantages and benefits of the proposed framework, with its Distributed Key and Sequential Addressing distributed file system and cryptographic approaches, and how it helps to improve security levels in cloud systems. (M.S. thesis)
Proneo: Large File Transfer using Webworkers and Cloud Services
Cloud computing is a term that encompasses virtualization, distributed computing, networking, and web services, and it requires efficient data transfer between the cloud server and the client. Cloud storage enables users to remotely store and retrieve their data. In previous work, data were stored in the cloud using dynamic data operations with computation, which forced the user to keep a local copy for later updating and for verifying data loss. The objective of our project is to propose a partitioning method combined with web workers for data storage, which avoids the local copy on the user side. Cryptographic techniques provide encryption and decryption of the data and of user authentication information to protect them from unauthorized users or attackers. An MD5-based file encryption scheme for exchanging information or data is included in this model, supporting secure authentication and hiding information from others. The cloud server thus allows users to store their data without worrying about its correctness and integrity.
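Strictly speaking, MD5 is a message-digest algorithm rather than a cipher, so one plausible reading of the scheme above is per-partition MD5 digests that let the client verify retrieved data without keeping a local copy. The partition size and helper names below are illustrative assumptions, not the paper's implementation:

```python
import hashlib

def partition(data: bytes, size: int):
    """Split data into fixed-size partitions (tiny size for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def fingerprints(chunks):
    """One MD5 digest per partition; the client retains only these."""
    return [hashlib.md5(c).hexdigest() for c in chunks]

def verify(chunks, expected):
    """Recompute digests on retrieval and compare against the stored ones."""
    return fingerprints(chunks) == expected

original = partition(b"confidential report contents", size=8)
tags = fingerprints(original)
assert verify(original, tags)                               # intact data passes

tampered = original[:1] + [b"XXXXXXXX"] + original[2:]
assert not verify(tampered, tags)                           # corruption is caught
```

Keeping only the digest list is far cheaper than the full local copy the earlier schemes required, at the cost of trusting MD5's (now weak) collision resistance.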
DOI: 10.17762/ijritcc2321-8169.15052
RFDA: Reliable framework for data administration based on split-merge policy
Emerging technologies in the cloud environment have not only increased its use but also raised some severe issues. These issues can cause considerable harm, not only to data storage but also to the large amounts of data in distributed file structures used for collaborative sharing. Data sharing in the cloud is prone to many flaws and is easily attacked, and conventional cryptographic mechanisms are not robust enough to provide secure authentication. In this paper, we address this issue with our proposed Reliable Framework for Data Administration (RFDA), which uses a split-merge policy developed to enhance data security. RFDA splits data in a unique manner using a 128-bit AES encryption key, and different slots of the encrypted key are placed in different rack servers across different cloud zones. The effectiveness and efficiency of the proposed system are analyzed through a comparative analysis, which shows that the proposed system outperforms existing, conventional security standards.
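The split-merge policy can be sketched as follows: a 128-bit AES key is cut into slots, each slot mapped to a rack server in a different cloud zone, and only a party that can reach every zone merges the key back. The slot count, placement rule, and names here are hypothetical; the abstract does not specify RFDA's exact slotting.

```python
import secrets

key = secrets.token_bytes(16)          # a 128-bit AES key
N_SLOTS = 4
slot_size = len(key) // N_SLOTS        # 4 bytes per slot

# Split: cut the key into equal-sized slots.
slots = [key[i * slot_size:(i + 1) * slot_size] for i in range(N_SLOTS)]

# Hypothetical placement: slot i goes to a rack server in zone i % 2.
placement = {i: (f"zone-{i % 2}", f"rack-server-{i}") for i in range(N_SLOTS)}

# Merge: an authorized party that can reach every zone reassembles the key.
merged = b"".join(slots[i] for i in sorted(placement))
assert merged == key
```

An attacker who compromises a single rack server (or even a whole zone) learns only a fraction of the key, which is the point of dispersing the slots.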
Performance evaluation of a distributed storage service in community network clouds
Community networks are self-organized, decentralized communication networks built and operated by citizens, for citizens. The consolidation of today's cloud technologies now offers community networks the possibility to collectively develop community clouds, building upon user-provided networks and extending toward cloud services. Cloud storage, and in particular secure and reliable cloud storage, could become a key community cloud service enabling end-user applications. In this paper, we evaluate in a real deployment the performance of the Tahoe Least-Authority File System (Tahoe-LAFS), a decentralized storage system with provider-independent security that guarantees privacy to its users. We evaluate how Tahoe-LAFS performs when deployed over distributed community cloud nodes in a real community network such as Guifi.net. Furthermore, we evaluate Tahoe-LAFS on the Microsoft Azure commercial cloud platform to compare and understand the impact of homogeneous network and hardware resources on its performance. We observed that the write operation of Tahoe-LAFS showed similar performance on the community network cloud and the commercial cloud, whereas the read operation achieved better performance on the Azure cloud, where reading from multiple Tahoe-LAFS nodes benefited from the homogeneity of the network and nodes. Our results suggest that Tahoe-LAFS can run on community network clouds with performance suitable for the needed end-user experience. Peer Reviewed. Preprint.
Private search over big data leveraging distributed file system and parallel processing
In this work, we identify the security and privacy problems associated with a particular Big Data application, namely secure keyword-based search over encrypted cloud data, and emphasize the actual challenges and technical difficulties of the Big Data setting. More specifically, we provide definitions from which privacy requirements can be derived. In addition, we adapt an existing privacy-preserving keyword-based search method to the Big Data setting, in which data is not only huge but also changing and accumulating very fast. Our proposal is scalable in the sense that it can leverage distributed file systems and parallel programming techniques, such as the Hadoop Distributed File System (HDFS) and the MapReduce programming model, to work with very large data sets. We also propose a lazy idf-updating method that can efficiently handle the relevancy scores of the documents in a dynamically changing, large data set. We empirically show the efficiency and accuracy of the method through an extensive set of experiments on real data.
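The lazy idf-updating idea — serve slightly stale relevancy scores and recompute idf only once the corpus has grown enough to matter — can be sketched like this. The refresh threshold and class shape are illustrative assumptions, not the paper's exact method:

```python
import math

class LazyIdfIndex:
    """Sketch of lazy idf updating: idf scores are refreshed only when the
    corpus has grown by `growth_threshold` since the last refresh."""

    def __init__(self, growth_threshold=0.1):
        self.df = {}               # term -> document frequency
        self.n_docs = 0
        self.idf = {}              # possibly stale idf table served to queries
        self._docs_at_refresh = 0
        self.growth_threshold = growth_threshold

    def add_document(self, terms):
        self.n_docs += 1
        for t in set(terms):
            self.df[t] = self.df.get(t, 0) + 1
        # Lazy update: refresh only when growth exceeds the threshold.
        grown = self.n_docs - self._docs_at_refresh
        if grown >= self.growth_threshold * max(self._docs_at_refresh, 1):
            self._refresh()

    def _refresh(self):
        self.idf = {t: math.log(self.n_docs / df) for t, df in self.df.items()}
        self._docs_at_refresh = self.n_docs
```

Between refreshes, queries score documents with the stale `idf` table, trading a small accuracy loss for far fewer recomputations on a fast-growing collection.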
Towards Practical Access Control and Usage Control on the Cloud using Trusted Hardware
Cloud-based platforms have become the principal way to store, share, and synchronize files online. For individuals and organizations alike, cloud storage not only provides resource scalability and on-demand access at low cost, but also eliminates the need to provision and maintain complex hardware installations.
Unfortunately, because cloud-based platforms are frequent victims of data breaches and unauthorized disclosures, data protection requires both access control and usage control to manage user authorization and regulate future data use. Encryption can ensure data security against unauthorized parties, but it complicates file sharing, which now requires distributing keys to authorized users and a mechanism that prevents revoked users from accessing or modifying sensitive content. Further, as user data is stored and processed on remote machines, usage control in a distributed setting requires incorporating the local environmental context at policy evaluation, as well as tamper-proof and non-bypassable enforcement. Existing cryptographic solutions either require server-side coordination, offer limited flexibility in data sharing, or incur significant re-encryption overheads on user revocation. This combination of issues is ill-suited to large-scale distributed environments, where there are many users, dynamic changes in user membership and access privileges, and resources shared across organizational domains. Thus, developing a robust security and privacy solution for the cloud requires fine-grained access control that associates large sets of users and resources at variable granularity, scalable administration costs when managing policies and access rights, and cross-domain policy enforcement.
To address the above challenges, this dissertation proposes a practical security solution that relies solely on commodity trusted hardware to ensure confidentiality and integrity throughout the data lifecycle. The aim is to maintain complete user ownership against external hackers and malicious service providers, without losing the scalability or availability benefits of cloud storage. Furthermore, we develop a principled approach that is: (i) portable across storage platforms without requiring any server-side support or modifications, (ii) flexible in allowing users to selectively share their data using fine-grained access control, and (iii) performant in imposing only modest overheads on standard user workloads. Essentially, our system must be client-side and provide end-to-end data protection and secure sharing without significant degradation in performance or user experience.
We introduce NeXUS, a privacy-preserving filesystem that enables cryptographic protection and secure file sharing on existing network-based storage services. NeXUS protects the confidentiality and integrity of file content, as well as file and directory names, while mitigating rollback attacks on the filesystem hierarchy. We also introduce Joplin, a secure access control and usage control system that provides practical attribute-based sharing with decentralized policy administration, including efficient revocation, multi-domain policies, secure user delegation, and mandatory audit logging. Both systems leverage trusted hardware to prevent the leakage of sensitive material such as encryption keys and access control policies; they are completely client-side, easy to install and use, and can be readily deployed across remote storage platforms without requiring any server-side changes or a trusted intermediary. We developed prototypes for NeXUS and Joplin and evaluated their respective overheads in isolation and in a real-world environment. The results show that both prototypes introduce modest overheads on interactive workloads and achieve portability across storage platforms, including Dropbox and AFS. Together, NeXUS and Joplin demonstrate that a client-side solution employing trusted hardware such as Intel SGX can effectively protect remotely stored data on existing file sharing services.
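The attribute-based sharing that Joplin provides can be illustrated with a minimal policy evaluator. The policy grammar and attribute names below are hypothetical stand-ins, not Joplin's actual format:

```python
# A policy is either an attribute string or a nested ("and"|"or", ...) tuple.
def satisfies(policy, attrs):
    """Return True if the user's attribute set satisfies the policy tree."""
    if isinstance(policy, str):
        return policy in attrs
    op, *clauses = policy
    results = [satisfies(c, attrs) for c in clauses]
    return all(results) if op == "and" else any(results)

# Hypothetical policy: research-department members who are PhD students
# or faculty may access the file.
policy = ("and", "dept:research", ("or", "role:phd", "role:faculty"))

assert satisfies(policy, {"dept:research", "role:phd"})
assert not satisfies(policy, {"dept:research"})   # missing a required role
```

In a trusted-hardware design, an evaluator like this would run inside the enclave, so neither the policy nor the decision can be tampered with by the host.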
MAZE: a secure cloud storage service using Moving Target Defense and Secure Shell Protocol (SSH) tunneling
Cloud storage services have emerged as a popular destination for businesses and individuals to securely store documents, due in part to being virtually accessible anywhere, anytime. However, cloud storage systems are static attack targets, enabling attackers to thoroughly study a system without fear that their conclusions about it will be rendered inaccurate. As such, computer security researchers began exploring techniques, known as Moving Target Defense (MTD), to turn distributed systems into moving targets. Whereas traditional defense mechanisms attempt to identify and cover system vulnerabilities, the underlying philosophy of MTD is that it is impossible to build perfectly secure systems; instead, MTD techniques attempt to constantly change the attack surface in order to increase the cost (in time and resources) and difficulty of executing successful attacks in the first place. Current research in MTD, however, is lacking in implementations of MTD techniques on real systems rather than just simulations.
This work presents MAZE, a secure cloud storage system in which the files to be protected (e.g., security keys, account numbers, or passwords) are split into pieces and pseudo-randomly dispersed within a large, continuously changing maze of computers. Hopping from one computer to another within MAZE is only possible by following timely created doors, which are implemented using Secure Shell Protocol (SSH) tunnels. At any computer there can be many open doors, each leading to a different computer. To retrieve a file, the user has to follow a schedule that the MAZE service provides to authorized users only; the schedule informs the client which doors to traverse to retrieve all the pieces of the file. In addition, computers within MAZE have two refresh periods: the first restarts the computer and reloads the system software from a clean copy in order to thwart potentially ongoing attacks, and the second modifies the file pieces so that they become incompatible with the pieces before modification. For attackers to successfully retrieve a file, they must retrieve all file pieces within the second refresh period. We implemented MAZE and performed a series of experiments that demonstrated the potential of an MTD-based cloud storage system to protect against attackers while providing reasonable response times.
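The splitting-and-dispersal step can be sketched as follows. The node names, piece count, and the seeded RNG standing in for the MAZE service's secret schedule are all illustrative assumptions; the real system moves pieces continuously and reaches nodes through SSH tunnels rather than a static list:

```python
import random

NODES = [f"node-{i}" for i in range(8)]   # hypothetical maze computers

def disperse(data: bytes, n_pieces: int, seed: int):
    """Split a file into pieces and pseudo-randomly assign each piece to a
    node; the resulting schedule is what the MAZE service would reveal
    only to authorized clients."""
    rng = random.Random(seed)             # stand-in for the service's secret
    size = -(-len(data) // n_pieces)      # ceiling division
    pieces = [data[i * size:(i + 1) * size] for i in range(n_pieces)]
    return [(rng.choice(NODES), piece) for piece in pieces]

def retrieve(schedule):
    """Follow the schedule door by door and reassemble the file."""
    return b"".join(piece for _, piece in schedule)

schedule = disperse(b"account-number-0042", n_pieces=4, seed=7)
assert retrieve(schedule) == b"account-number-0042"
```

Without the schedule, an attacker faces a search over all nodes for all pieces, and the second refresh period bounds how long any pieces found remain usable.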