342 research outputs found

    Deletion of content in large cloud storage systems

    This thesis discusses the practical implications and challenges of providing secure deletion of data in cloud storage systems. Secure deletion is a desirable functionality to some users, but a requirement for others. The term secure deletion describes the practice of deleting data in such a way that it cannot be reconstructed later, even by forensic means. This work discusses the practice of secure deletion as well as existing methods in use today. When moving from traditional on-site data storage to cloud services, these existing methods are no longer applicable. For this reason, it presents the concept of cryptographic deletion and points out the challenge of implementing it in a practical way. A discussion of related work in the areas of data encryption and cryptographic deletion shows that a research gap exists in applying cryptographic deletion in an efficient, practical way to cloud storage systems. The main contribution of this thesis, the Key-Cascade method, solves this issue by providing an efficient data structure for managing large numbers of encryption keys. Secure deletion is practiced today by individuals and organizations who need to protect the confidentiality of data after it has been deleted. It is mostly achieved by means of physical destruction or overwriting in local hard disks or large storage systems. However, these traditional methods of overwriting data or destroying media are not suited to large, distributed, and shared cloud storage systems. The known concept of cryptographic deletion describes storing encrypted data in an untrusted storage system while keeping the key in a trusted location. Given that the encryption is effective, secure deletion of the data can then be achieved by securely deleting the key. Whether encryption is an acceptable protection mechanism must be decided either by legislature or by the customers themselves.
This depends on whether cryptographic deletion is done to satisfy legal requirements or customer requirements. The main challenge in implementing cryptographic deletion lies in the granularity of the delete operation. Storage encryption providers today either require deleting the master key, which deletes all stored data, or require expensive copy and re-encryption operations. In the literature, a few constructions can be found that provide optimized key management. The contributions of this thesis, found in the Key-Cascade method, expand on those findings and describe data structures and operations for implementing efficient cryptographic deletion in a cloud object store. This thesis discusses the conceptual aspects of the Key-Cascade method as well as its mathematical properties. In order to enable production use of a Key-Cascade implementation, it presents multiple extensions to the concept. These extensions improve performance and usability and also enable frictionless integration into existing applications. With SDOS, the Secure Delete Object Store, a working implementation of the concepts and extensions is given. Its design as an API proxy is unique among existing cryptographic deletion systems and allows integration into existing applications without the need to modify them. The results of performance evaluations conducted with SDOS show that cryptographic deletion is feasible in practice. With MCM, the Micro Content Management system, this thesis also presents a larger demonstrator system for SDOS. MCM provides insight into how SDOS can be integrated into and deployed as part of a cloud data management application.
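The core idea of cryptographic deletion described above can be sketched in a few lines: ciphertext lives in the untrusted store, keys live in a trusted store, and deleting only the key renders the data unrecoverable. The class and method names below are illustrative assumptions, not the thesis's actual SDOS implementation, and the keystream cipher is a toy for demonstration, not secure cryptography.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 counter-mode keystream) for illustration
    # only -- NOT secure cryptography. XOR with the same key decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class CryptoDeleteStore:
    """Untrusted store holds ciphertext; trusted store holds keys."""
    def __init__(self):
        self.untrusted = {}     # object name -> ciphertext
        self.trusted_keys = {}  # object name -> per-object key

    def put(self, name: str, plaintext: bytes) -> None:
        key = secrets.token_bytes(32)
        self.trusted_keys[name] = key
        self.untrusted[name] = keystream_xor(key, plaintext)

    def get(self, name: str) -> bytes:
        return keystream_xor(self.trusted_keys[name], self.untrusted[name])

    def secure_delete(self, name: str) -> None:
        # Deleting only the key makes the object unrecoverable, even
        # though the untrusted store may retain the ciphertext forever.
        del self.trusted_keys[name]
```

Note that `secure_delete` never touches the untrusted store: this is exactly what makes the approach attractive for distributed, replicated cloud storage, where overwriting every copy is impractical.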

    Transparent Personal Data Processing: The Road Ahead

    The European General Data Protection Regulation defines a set of obligations for personal data controllers and processors. Primary obligations include: obtaining explicit consent from the data subject for the processing of personal data, providing full transparency with respect to the processing, and enabling data rectification and erasure (albeit only in certain circumstances). At the core of any transparency architecture is the logging of events in relation to the processing and sharing of personal data. The logs should enable verification that data processors abide by the access and usage control policies that have been associated with the data based on the data subject's consent and the applicable regulations. In this position paper, we: (i) identify the requirements that need to be satisfied by such a transparency architecture, (ii) examine the suitability of existing logging mechanisms in light of said requirements, and (iii) present a number of open challenges and opportunities.
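At the heart of such a transparency architecture is an append-only, tamper-evident log. A minimal sketch of one common mechanism, a hash chain in which each entry commits to its predecessor so later tampering is detectable, might look as follows; the class and field names are illustrative assumptions, not a mechanism from the paper:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Canonical JSON so the same entry always hashes identically.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProcessingLog:
    """Append-only log of processing events, chained by hashes."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {"event": event, "prev": prev}
        entry["hash"] = entry_hash({"event": event, "prev": prev})
        self.entries.append(entry)

    def verify(self) -> bool:
        # Recompute every hash; any modified, reordered, or dropped
        # entry breaks the chain.
        prev = self.GENESIS
        for e in self.entries:
            expected = entry_hash({"event": e["event"], "prev": e["prev"]})
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A hash chain alone only makes tampering evident to someone holding a trusted copy of the head hash; the requirements discussed in the paper (e.g. non-repudiation across mutually distrusting parties) call for additional machinery such as signatures or external anchoring.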

    Managing Access to Service Providers in Federated Identity Environments: A Case Study in a Cloud Storage Service

    © 2015 IEEE. Currently, the diversity of services adhering to identity federations has raised new challenges in the area. Increasingly, service providers need to control access to their resources by users from the federation: even though a user is authenticated by the federation, their access to resources cannot be taken for granted. Each Service Provider (SP) in a federation implements its own access control mechanism. Moreover, SPs might need to allow different access control granularity. For instance, all users from a particular Identity Provider (IdP) may access the resources due to some financial agreement. On the other hand, it might be the case that only specific users, or groups of users, have access to the resources. This paper proposes a solution to this problem through a hierarchical authorization system. Our approach, which can be customized to different SPs, allows the SP administrator to manage which IdPs, or users, have access to the provided resources. In order to demonstrate the feasibility of our approach, we present a case study in the context of a cloud storage solution.
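The hierarchical granularity described above, IdP-wide grants combined with finer user-level rules, can be sketched roughly as follows. The rule names and the precedence order (specific user rules override IdP-wide rules) are assumptions for illustration, not the paper's exact model:

```python
class FederatedAccessPolicy:
    """SP-side policy: grants at IdP granularity, with user-level
    exceptions that take precedence over IdP-wide rules."""
    def __init__(self):
        self.allowed_idps = set()    # IdP names granted wholesale access
        self.allowed_users = set()   # (idp, user) pairs granted individually
        self.denied_users = set()    # (idp, user) pairs denied individually

    def is_allowed(self, idp: str, user: str) -> bool:
        # Most specific rule wins: user-level deny, then user-level
        # allow, then the IdP-wide grant.
        if (idp, user) in self.denied_users:
            return False
        if (idp, user) in self.allowed_users:
            return True
        return idp in self.allowed_idps
```

With this shape, an SP administrator can express both "everyone from this IdP may access" (e.g. the financial-agreement case above) and "only these named users may access", from a single policy object.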

    A Survey on Design and Implementation of Protected Searchable Data in the Cloud

    While cloud computing has exploded in popularity in recent years thanks to the potential efficiency and cost savings of outsourcing the storage and management of data and applications, a number of vulnerabilities that led to multiple attacks have deterred many potential users. As a result, experts in the field argued that new mechanisms are needed in order to create trusted and secure cloud services. Such mechanisms would eradicate the suspicion of users towards cloud computing by providing the necessary security guarantees. Searchable Encryption is among the most promising solutions - one that has the potential to help offer truly secure and privacy-preserving cloud services. We start this paper by surveying the most important searchable encryption schemes and their relevance to cloud computing. In light of this analysis we demonstrate the inefficiencies of the existing schemes and expand our analysis by discussing certain confidentiality and privacy issues. Further, we examine how to integrate such a scheme with a popular cloud platform. Finally, we have chosen - based on the findings of our analysis - an existing scheme and implemented it to review its practical maturity for deployment in real systems. The survey of the field, together with the analysis and the extensive experimental results, provides a comprehensive review of the theoretical and practical aspects of searchable encryption.
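The basic mechanism behind many symmetric searchable encryption schemes can be sketched as a token index: the client replaces each keyword with a keyed pseudorandom token, so the server can match queries against the index without learning the plaintext words. This is a deliberately simplified illustration (it leaks search and access patterns, issues the surveyed schemes address with more care), and all names below are assumptions:

```python
import hashlib
import hmac

def trapdoor(key: bytes, keyword: str) -> str:
    # Deterministic keyed token: only the key holder can derive it,
    # so the server sees tokens but never the keywords themselves.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

class EncryptedIndex:
    """Server-side index mapping keyword tokens to document ids."""
    def __init__(self):
        self.index = {}

    def add(self, token: str, doc_id: str) -> None:
        self.index.setdefault(token, set()).add(doc_id)

    def search(self, token: str) -> set:
        # The server matches tokens blindly; without the client key it
        # cannot invert them back to keywords.
        return self.index.get(token, set())
```

A client would build the index locally with `trapdoor` before uploading, and later issue searches by sending only the trapdoor of the queried keyword.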

    A security concept for distributed data processing systems

    Today, the amount of raw data available is abundant. As only a small part of this data is in a form fit for further processing, much data remains to be analyzed and processed. At the same time, cloud services are ubiquitous and allow even small businesses to perform large distributed data processing tasks without the significant costs required for a suitable computational infrastructure. However, as more and more users transfer their data into the cloud for processing and storage, concerns about data security arise. An extensive review of data security research in today's cloud solutions confirms these concerns to be justified. The existing strategies for securing one's data are not adequate for many use cases. Therefore, this work proposes a holistic security concept for distributed data processing in the cloud. For the purpose of providing security in heterogeneous cloud environments, it statically analyzes a data flow prior to execution and determines the optimal security measures. Without imposing strict requirements on the cloud services involved, it can be deployed in a broad range of scenarios. The concept's generic design can be adopted by existing data processing tools. An exemplary implementation is provided for the mashup tool FlexMash. Requirements such as data confidentiality, integrity, access control, and scalability were evaluated to be met.
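The static analysis step, assigning security measures to a data flow before it executes, can be illustrated with a deliberately simplified rule: sensitive data sent to an untrusted service must be encrypted. The function and parameter names are hypothetical; the actual concept weighs a richer set of requirements and services than this sketch:

```python
def plan_measures(flow, trusted):
    """Statically assign a security measure to each edge of a data flow.

    flow: list of (source, destination, data_is_sensitive) edges
    trusted: dict mapping service name -> whether the service is trusted
    """
    measures = {}
    for src, dst, sensitive in flow:
        # Simplified rule: sensitive data crossing into an untrusted
        # service needs encryption; everything else can flow in plain.
        if sensitive and not trusted.get(dst, False):
            measures[(src, dst)] = "encrypt"
        else:
            measures[(src, dst)] = "plain"
    return measures
```

Because the analysis runs before execution, the chosen measures can be baked into the deployed flow without runtime negotiation with the cloud services involved.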

    Access Control Management for Secure Cloud Storage

    With the widespread success and adoption of cloud-based solutions, we are witnessing an ever-increasing reliance on external providers for storing and managing data. This evolution is greatly facilitated by the availability of solutions - typically based on encryption - ensuring the confidentiality of externally outsourced data against the storing provider itself. Selective application of encryption (i.e., with different keys depending on the authorizations holding on data) provides a convenient approach to access control policy enforcement. Effective realization of such policy-based encryption entails addressing several problems related to key management, access control enforcement, and authorization revocation, while ensuring efficiency of access and deployment with current technology. We present the design and implementation of an approach to realize policy-based encryption for enforcing access control in OpenStack Swift. We also report experimental results evaluating and comparing different implementation choices of our approach.
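The selective-encryption idea above groups resources by the set of users authorized to read them, so possession of a key becomes the enforcement mechanism. A rough sketch of the key-per-ACL principle follows; the class and method names are assumptions, and the actual approach additionally addresses key distribution and authorization revocation:

```python
import secrets

class SelectiveEncryptionKeys:
    """One key per access control list: resources readable by the same
    set of users share a key, so policy is enforced by key possession."""
    def __init__(self):
        self.acl_keys = {}  # frozenset of user names -> symmetric key

    def key_for(self, acl) -> bytes:
        # Resources with identical ACLs reuse the same key.
        acl = frozenset(acl)
        if acl not in self.acl_keys:
            self.acl_keys[acl] = secrets.token_bytes(32)
        return self.acl_keys[acl]

    def keys_for_user(self, user: str) -> dict:
        # The set of keys a given user is entitled to hold.
        return {acl: key for acl, key in self.acl_keys.items() if user in acl}
```

Keeping one key per distinct ACL, rather than one per resource, bounds the number of keys each user must manage by the number of distinct authorization sets rather than the number of stored objects.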

    Swiftmend: Data Synchronization in Open mHealth Applications with Restricted Connectivity

    Open mHealth applications often include mobile devices and cloud services with data replicated between components. These replicas need periodic synchronization to remain consistent. However, there is no guarantee of connectivity to networks that do not bill users by the quantity of data transferred. This thesis proposes Swiftmend, a system whose synchronization minimizes the quantity of I/O used on the network. Swiftmend includes two reconciliation algorithms: Rejuvenation and Regrowth. Regrowth utilizes the efficiency of the Merkle tree data structure, which can sum up the consistency of replicas into compact fingerprints, to reduce I/O, while Rejuvenation simply inspects the entire replica to identify consistency. Regrowth is shown to produce less I/O than Rejuvenation when synchronizing replicas, thanks to these compact fingerprints.
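The fingerprint idea behind such Merkle-tree reconciliation can be sketched with a minimal Merkle root: each replica hashes its items into a tree and exchanges only the root, so equal roots establish consistency without transferring the data itself. This is an illustrative reduction, not Swiftmend's actual implementation:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(items) -> bytes:
    """Compute a Merkle root over an ordered list of byte strings."""
    level = [_h(item) for item in items] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Comparing roots costs a single 32-byte exchange regardless of replica size; on a mismatch, comparing subtree roots level by level localizes the divergent items, which is what keeps the network I/O low compared to inspecting the entire replica.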

    Cloud technology options towards Free Flow of Data

    This whitepaper collects the technology solutions that the projects in the Data Protection, Security and Privacy Cluster propose to address the challenges raised by the working areas of the Free Flow of Data initiative. The document describes the technologies, methodologies, models, and tools researched and developed by the clustered projects, mapped to the ten areas of work of the Free Flow of Data initiative. The aim is to facilitate the identification of the state of the art of technology options towards solving the data security and privacy challenges posed by the Free Flow of Data initiative in Europe. The document gives reference to the Cluster, the individual projects, and the technologies produced by them.

    Decentralizing the Internet of Medical Things: The InterPlanetary Health Layer

    Medical mobile applications have the potential to revolutionize the healthcare industry by providing patients with easy access to their personal health information, enabling them to communicate with healthcare providers remotely, and consequently improving patient outcomes through personalized health information. However, these applications are usually limited by privacy and security issues. A possible solution is to exploit decentralization, distributing privacy concerns directly to users. Solutions enabling this vision are closely linked to Distributed Ledger Technologies, which have the potential to revolutionize the healthcare industry by creating a secure and transparent system for managing patient data without a central authority. The decentralized nature of the technology allows for the creation of an international data layer that is accessible to authorized parties while preserving patient privacy. This thesis envisions the InterPlanetary Health Layer along with its implementation attempt, called Halo Network, and an Internet of Medical Things application, called Balance, as a use case. Throughout the thesis, we explore the benefits and limitations of using the technology, analyze potential use cases, and look out for future directions.