
    Secure Storage with Replication and Transparent Deduplication

    We seek to answer the following question: to what extent can we deduplicate replicated storage? To answer it, we design ReDup, a secure storage system that provides users with strong integrity, reliability, and transparency guarantees about data outsourced to cloud storage providers. Users store multiple replicas of their data at different storage servers, and the data at each storage server is deduplicated across users. Remote data integrity mechanisms check the integrity of replicas. We consider a strong adversarial model in which storage servers may collude with each other and with dishonest users of the system. A cloud storage provider (CSP) could store fewer replicas than agreed upon by contract, unbeknownst to honest users. ReDup defends against such adversaries by making replica generation time-consuming, so that a dishonest CSP cannot generate replicas on the fly when challenged by users. In addition, ReDup employs transparent deduplication: users receive a proof attesting to the deduplication level applied to their files at each replica server, and are thus able to benefit from the storage savings that deduplication provides. The proof is obtained by aggregating individual proofs from replica servers and has a constant size regardless of the number of replica servers. Our solution scales better than the state of the art and is provably secure under standard assumptions.
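
    A minimal sketch of the time-consuming replica-generation idea, assuming a hash-chained masking construction in Python (the function names and parameters are illustrative; the abstract does not specify ReDup's actual encoding):

        import hashlib

        def encode_replica(data: bytes, replica_id: int, rounds: int = 100_000) -> bytes:
            """Mask each block with an iterated hash chain. The chain is
            inherently sequential, so a provider that discarded a replica
            cannot regenerate it on the fly under a challenge deadline."""
            block_size = 32
            mask = hashlib.sha256(replica_id.to_bytes(4, "big")).digest()
            out = bytearray()
            for i in range(0, len(data), block_size):
                block = data[i:i + block_size]
                for _ in range(rounds):  # deliberate sequential work per block
                    mask = hashlib.sha256(mask).digest()
                out += bytes(a ^ b for a, b in zip(block, mask))
            return bytes(out)

    The server is expected to keep the encoded replica itself: answering a challenge from stored blocks is instant, while re-deriving them from the deduplicated original costs the full sequential work.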

    A comprehensive meta-analysis of cryptographic security mechanisms for cloud computing

    Cloud computing offers measurable computational and information resources as a service over the Internet. The major motivation behind the cloud setup is economic: it promises reduced operational and infrastructural expenditure. Turning this into reality requires tackling several impediments, the most profound of which are security, privacy, and reliability. Once user data is revealed to the cloud, it leaves the data owner's sphere of protection, which raises partly new security and privacy concerns. This work focuses on these issues across the various cloud service and deployment models by spotlighting their major challenges. While classical cryptography is an ancient discipline, modern cryptography, developed mostly over the last few decades, is what must be applied to ensure strong security and privacy mechanisms in today's real-world scenarios. We describe technological solutions and the short- and long-term research goals of cloud security, addressing them with both classical and modern cryptographic mechanisms. This work explores new directions in cloud computing security, while highlighting the correct selection of these fundamental technologies from a cryptographic point of view.

    Sigmoid(x): secure distributed network storage

    Secure data storage is a serious problem for computer users today, particularly in enterprise environments. As data requirements grow, traditional approaches based on secured silos are showing their limitations: they represent a single, or at least limited, point of failure, and they require significant and growing maintenance and overhead. Such solutions are totally unsuitable for consumers, who want a 'plug and play' secure solution for their growing datasets, with the ubiquity of access of Facebook or webmail. Network providers can offer centralised solutions, but that returns us to the first problem. Sigmoid(x) takes a completely different approach: a scalable, distributed, secure storage mechanism that shares data storage among the users themselves.
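
    The abstract leaves the mechanism open, but one common way to share storage among the users themselves is consistent hashing of chunk identifiers onto participating peers. A hypothetical Python sketch (node names and parameters are illustrative, not from the paper):

        import hashlib
        from bisect import bisect_right

        class ConsistentRing:
            """Map chunks onto peer nodes so load spreads across users and
            nodes can join or leave with minimal data movement."""
            def __init__(self, nodes: list[str], vnodes: int = 64):
                self._ring = sorted(
                    (self._h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
                )
                self._keys = [k for k, _ in self._ring]

            @staticmethod
            def _h(s: str) -> int:
                return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

            def owner(self, chunk_id: str) -> str:
                i = bisect_right(self._keys, self._h(chunk_id)) % len(self._ring)
                return self._ring[i][1]

        ring = ConsistentRing(["alice", "bob", "carol"])
        print(ring.owner("chunk-42"))  # peer responsible for storing this chunk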

    What if keys are leaked? Towards practical and secure re-encryption in deduplication-based cloud storage

    By storing only a unique copy of duplicate data possessed by different data owners, deduplication can significantly reduce storage cost, and hence is used broadly in public clouds. When combined with confidentiality, however, deduplication becomes problematic: encryption performed by different data owners can differentiate identical data, which then ceases to be deduplicable. Message-Locked Encryption (MLE) is therefore used to derive the same encryption key from identical data, so that the data remains deduplicable even after being encrypted by different owners. As keys may be leaked over time, re-encrypting outsourced data is of paramount importance to ensure continuous confidentiality; this, however, has not been well addressed in the literature. In this paper, we design SEDER, a SEcure client-side Deduplication system enabling Efficient Re-encryption for cloud storage, by (1) leveraging the all-or-nothing transform (AONT), (2) designing a new delegated re-encryption (DRE) scheme, and (3) proposing a new proof-of-ownership scheme for encrypted cloud data (PoWC). Security analysis and experimental evaluation validate the security and efficiency of SEDER, respectively.
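
    A minimal sketch of the MLE building block (convergent encryption) in Python, assuming the third-party cryptography package; SEDER's actual design layers AONT, delegated re-encryption, and PoWC on top of this, which the sketch does not show:

        import hashlib
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def mle_encrypt(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
            """Convergent encryption: the key is derived from the content, so
            different owners encrypting the same file produce the same
            ciphertext, which the cloud can deduplicate without reading it."""
            key = hashlib.sha256(plaintext).digest()  # content-derived key
            nonce = b"\x00" * 16  # fixed nonce: determinism is intended here
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            ciphertext = enc.update(plaintext) + enc.finalize()
            dedup_tag = hashlib.sha256(ciphertext).digest()  # duplicate-detection index
            return ciphertext, key, dedup_tag

    Because the key depends only on the content, a leaked key exposes exactly that file, which is why efficient re-encryption after key leakage is the problem SEDER targets.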

    An Overview of Data Storage in Cloud Computing

    Cloud computing is an evolving paradigm that is making IT utilization easier by the day for consumers. It offers standardized online applications that can be accessed by as many people as an organisation permits, without concern for application maintenance. The Cloud also provides a channel to design and deploy user applications, including their storage space and databases, without regard for the underlying operating system; applications can run without on-premise infrastructure. The Cloud likewise makes massive storage available for both data and databases. Data storage is one of the core activities in Cloud computing: it utilizes infrastructure spread across several geographical locations and relies on the internet, virtualization, encryption, and other technologies to ensure the security of data. This paper presents the state of the art based on a review of the available literature on Cloud storage. It examines present trends in the area of Cloud storage and provides a guide for future research. The objective of this paper is to answer the question of what the current trends and developments in Cloud storage are. The expected result of this review is the identification of trends in Cloud storage that can be beneficial to prospective Cloud researchers, users, and even providers.

    D1.3 - SUPERCLOUD Architecture Implementation

    In this document we describe the implementation of the SUPERCLOUD architecture. The architecture provides an abstraction layer on top of which SUPERCLOUD users can realize SUPERCLOUD services encompassing secure computation workloads, secure and privacy-preserving resilient data storage, and secure networking, spanning the computation, data storage, and network resources of different cloud service providers. The components of the SUPERCLOUD architecture implementation are described. Integration between the different layers of the architecture (computing security, data protection, network security) and with the facilities for security self-management is also highlighted. Finally, we provide download and installation instructions for the released software components, available from our common SUPERCLOUD code repository.

    DATS - data containers for web applications

    Data containers enable users to control access to their data while untrusted applications compute on it. However, they require replicating an application inside each container, compromising functionality, programmability, and performance. We propose DATS, a system for running web applications that retains application usability and efficiency through a mix of hardware-capability-enhanced containers and two new primitives modeled after the popular model-view-controller (MVC) pattern. (1) DATS introduces a templating language to create views that compose data across data containers. (2) DATS uses authenticated storage and confinement to enable untrusted storage services, such as memcached and deduplication, to operate on plain-text data across containers. These two primitives act as robust declassifiers that allow DATS to enforce non-interference across containers, taking large applications out of the trusted computing base (TCB). We showcase eight different web applications, including Gitlab and a Slack-like chat, significantly improve the worst-case overheads caused by application replication, and demonstrate usable performance for common-case usage.
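
    A hypothetical sketch of the authenticated-storage idea in Python: values held by an untrusted key-value service carry a MAC, so the service can cache and deduplicate plain-text data but cannot tamper with it undetected (the dict stands in for memcached; all names are illustrative, not DATS's API):

        import hashlib
        import hmac

        class AuthenticatedStore:
            """Wrap an untrusted key-value backend with per-value HMAC tags."""
            def __init__(self, mac_key: bytes):
                self._key = mac_key
                self._backend: dict[str, bytes] = {}  # stand-in for memcached

            def _tag(self, key: str, value: bytes) -> bytes:
                msg = key.encode() + b"\x00" + value
                return hmac.new(self._key, msg, hashlib.sha256).digest()

            def put(self, key: str, value: bytes) -> None:
                self._backend[key] = self._tag(key, value) + value

            def get(self, key: str) -> bytes:
                blob = self._backend[key]
                tag, value = blob[:32], blob[32:]
                if not hmac.compare_digest(tag, self._tag(key, value)):
                    raise ValueError("untrusted storage returned tampered data")
                return value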

    Towards Exascale Scientific Metadata Management

    Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines capture only a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still needs to be annotated manually, in an ad hoc manner, by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet a ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is notable by its absence. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset so that it permeates metadata-oblivious processes and can be queried through established and standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling, and neuroscience, and then discuss research challenges and possible solutions.
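
    One concrete way to store metadata within each dataset and query it through a standard interface is attaching attributes to self-describing formats such as HDF5. A sketch assuming the h5py and numpy packages (file and attribute names are illustrative, not from the paper):

        import h5py
        import numpy as np

        # Embed provenance metadata in the same file as the data, so even
        # metadata-oblivious tools carry it along when the file moves.
        with h5py.File("run_0042.h5", "w") as f:
            dset = f.create_dataset("temperature", data=np.random.rand(128, 128))
            dset.attrs["units"] = "K"
            dset.attrs["code"] = "plasma-sim"          # producing application
            dset.attrs["input_deck"] = "deck_v3.yaml"  # configuration used

        # Later, query the metadata through the same standard access interface.
        with h5py.File("run_0042.h5", "r") as f:
            print(dict(f["temperature"].attrs))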

    Efficient, Dependable Storage of Human Genome Sequencing Data

    The understanding of the human genome impacts several areas of human life. Data from human genomes is massive because there are millions of samples to be sequenced, and each sequenced human genome may occupy hundreds of gigabytes of storage. Human genomes are critical because they are extremely valuable to research and may provide hints about individuals' health status, identify their donors, or reveal information about donors' relatives. Their size and criticality, plus the amount of data being produced by medical and life-sciences institutions, require systems that scale while being secure, dependable, auditable, and affordable. Current storage infrastructures are too expensive for cost efficiency to be ignored when storing human genomes, and they lack the proper knowledge and mechanisms to protect the privacy of sample donors. This thesis proposes an efficient storage system for human genomes that medical and life-sciences institutions may trust and afford. It enhances traditional storage ecosystems with privacy-aware, data-reduction, and auditability techniques to enable the efficient, dependable use of multi-tenant infrastructures to store human genomes. Contributions from this thesis include (1) a study on the privacy-sensitivity of human genomes; (2) a method to systematically detect the privacy-sensitive portions of genomes; (3) specialised data-reduction algorithms for sequencing data; (4) an independent auditability scheme for secure dispersed storage; and (5) a complete storage pipeline that obtains reasonable privacy protection, security, and dependability guarantees at modest cost (e.g., less than $1/Genome/Year) by integrating the proposed mechanisms with appropriate storage configurations.
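
    As one illustration of data reduction for sequencing data, a widely used technique is coarse binning of FASTQ quality scores (for example, Illumina's 8-level scheme), which shrinks the symbol alphabet so standard compressors perform far better. This Python sketch is generic, not the thesis's specific algorithm:

        # 8-level Phred quality binning: (low, high, representative) per bin.
        BINS = [(0, 1, 0), (2, 9, 6), (10, 19, 15), (20, 24, 22),
                (25, 29, 27), (30, 34, 33), (35, 39, 37), (40, 93, 40)]

        def bin_quality(qual: str, offset: int = 33) -> str:
            """Map each FASTQ quality character to its bin representative."""
            out = []
            for ch in qual:
                q = ord(ch) - offset  # decode Phred score from ASCII
                for lo, hi, rep in BINS:
                    if lo <= q <= hi:
                        out.append(chr(rep + offset))
                        break
            return "".join(out)

        print(bin_quality("IIIIFFFF##"))  # fewer distinct symbols, same length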