
    Authenticated Key-Value Stores with Hardware Enclaves

    Authenticated data storage on an untrusted platform is an important computing paradigm for cloud applications ranging from big-data outsourcing to cryptocurrency and certificate transparency logs. These modern applications increasingly feature update-intensive workloads, which existing authenticated data structures (ADSs) designed around in-place updates handle inefficiently. In this paper, we address this issue and propose a novel authenticated log-structured merge tree (eLSM) based key-value store that leverages Intel SGX enclaves. We present a system design that runs the code of the eLSM store inside an enclave. To circumvent the limited enclave memory (128 MB with the latest Intel CPUs), we propose to place the memory buffer of the eLSM store outside the enclave and to protect the buffer with a new authenticated data structure that digests individual LSM-tree levels. We design protocols to support query authentication for data integrity, completeness (under range queries), and freshness. The proof in our protocol is kept small by including Merkle proofs only at selected levels. We implement eLSM on top of Google LevelDB and Facebook RocksDB with minimal code change and performance interference. We evaluate the performance of eLSM under the YCSB workload benchmark and show a performance advantage of up to 4.5X speedup.
    Comment: eLSM, Enclave, key-value store, ADS, 18 pages
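    The sketch below illustrates the per-level digesting idea at a high level, though it is not the paper's exact construction: each sorted LSM level is summarized by a Merkle root (which the enclave can retain), and a point lookup against the out-of-enclave buffer is checked with a logarithmic Merkle proof. All function and variable names are illustrative.

```python
# Minimal sketch, assuming one sorted LSM level digested by a Merkle tree;
# the trusted side (the enclave in eLSM) keeps only the per-level root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(key: str, value: str) -> bytes:
    return h(b"leaf|" + key.encode() + b"|" + value.encode())

def merkle_root(leaves):
    nodes = leaves[:]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])              # duplicate last node on odd count
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def merkle_proof(leaves, idx):
    proof, nodes = [], leaves[:]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        sib = idx ^ 1
        proof.append((nodes[sib], sib < idx))    # (sibling hash, sibling-is-left)
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
        idx //= 2
    return proof

def verify(root, key, value, proof):
    node = leaf(key, value)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

level = [("k01", "v1"), ("k07", "v2"), ("k42", "v3"), ("k99", "v4")]
leaves = [leaf(k, v) for k, v in level]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)                  # prove ("k42", "v3")
assert verify(root, "k42", "v3", proof)
```

    The proof grows logarithmically with the level size, which is what makes authenticating lookups against an externally stored buffer affordable.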

    Integrity Authentication for SQL Query Evaluation on Outsourced Databases: A Survey

    Spurred by the development of cloud computing, there has been considerable recent interest in the Database-as-a-Service (DaaS) paradigm. Users lacking expertise or computational resources can outsource their data and database management needs to a third-party service provider. Outsourcing, however, raises an important issue of result integrity: how can the client verify, with lightweight overhead, that the query results returned by the service provider are correct (i.e., the same as the results of executing the query locally)? This survey focuses on categorizing and reviewing the current approaches to result integrity of SQL query evaluation in the DaaS model. The survey also outlines some potential future research directions for result integrity verification of outsourced computations.
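    One family of techniques in this literature is probabilistic: the owner plants a few secret "fake" tuples before outsourcing, and any answer that omits an expected fake tuple exposes an incomplete result. The toy sketch below illustrates that idea under our own assumed tuple layout and names; detection here is probabilistic, not guaranteed.

```python
# Toy completeness check via planted fake tuples; all names are illustrative.
real = [(i, f"user{i}") for i in range(100)]
fakes = [(1000 + i, f"fake{i}") for i in range(5)]          # kept secret by the client
outsourced = real + fakes                                   # stored at the provider

def server_range_query(lo, hi, cheat=False):
    rows = [r for r in outsourced if lo <= r[0] <= hi]
    return rows[:-1] if cheat and rows else rows            # a cheater drops a row

def client_verify(lo, hi, result):
    expected = {f for f in fakes if lo <= f[0] <= hi}
    return expected <= set(result)                          # all planted rows present?

print(client_verify(990, 1010, server_range_query(990, 1010)))              # True
print(client_verify(990, 1010, server_range_query(990, 1010, cheat=True)))  # False
```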

    Verifying Search Results Over Web Collections

    Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" s of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature s, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
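    As a toy stand-in for this construction, the sketch below has the trusted crawler publish a digest per keyword over its posting lists (playing the role of the signature s), and the client checks a conjunctive result against those digests. In the actual system, bilinear-map accumulators replace the shipped posting lists so that proof size does not depend on list sizes; all names here are illustrative.

```python
# Toy analogue of the authenticated web crawler; real proofs use accumulators.
import hashlib

def digest(pages) -> str:
    return hashlib.sha256("|".join(sorted(pages)).encode()).hexdigest()

# Crawler side: build an inverted index and publish per-term digests.
index = {
    "merkle":      {"p1", "p3", "p4"},
    "accumulator": {"p3", "p4", "p9"},
}
signed = {term: digest(pages) for term, pages in index.items()}

# Search-engine side: answer a conjunctive query; the lists act as the "proof".
def search(terms):
    result = set.intersection(*(index[t] for t in terms))
    proof = {t: index[t] for t in terms}
    return result, proof

# Client side: verify each list against the crawler's digest, then re-intersect.
def verify(terms, result, proof):
    ok = all(digest(proof[t]) == signed[t] for t in terms)
    return ok and result == set.intersection(*(proof[t] for t in terms))

result, proof = search(["merkle", "accumulator"])
print(result, verify(["merkle", "accumulator"], result, proof))  # {'p3', 'p4'} True
```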

    A Java Data Security Framework (JDSF) and its Case Studies

    We present the design of what we call the Confidentiality, Integrity and Authentication Sub-Frameworks, which are part of a more general Java Data Security Framework (JDSF) designed to support various aspects of data security (confidentiality, origin authentication, integrity, and SQL randomization). The JDSF was originally designed in 2007 for use in two use cases, MARF and HSQLDB, to allow a plug-in-like implementation and verification of various security aspects and their generalization. The JDSF project explores secure data storage issues from the point of view of data security in the two projects. A variety of common security aspects and tasks were considered in order to extract the spectrum of parameters these aspects require for the design of an extensible framework API and its implementation. A particular challenge being tackled is the aggregation of diverse approaches and algorithms into a common set of Java APIs that covers all, or at least the most common, aspects while keeping the framework as simple as possible. As part of the framework, we provide the mentioned sub-frameworks' APIs to allow common algorithm implementations of the confidentiality, integrity, and authentication aspects for MARF's and HSQLDB's databases. At the same time, we perform a detailed overview of the related work and literature on data and database security that we considered as possible input to the design of the JDSF.
    Comment: a 2007 project report; parts appeared in various conferences; includes index
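    Since the JDSF itself is a Java API, the following is only a loose Python analogue of the plug-in idea: small provider interfaces for confidentiality and integrity composed around a simple store. Every class and method name below is hypothetical.

```python
# Hypothetical analogue of plug-in security sub-frameworks; not the JDSF API.
import hashlib, hmac

class IntegrityProvider:
    def __init__(self, key: bytes):
        self.key = key
    def seal(self, data: bytes) -> bytes:
        return hmac.new(self.key, data, hashlib.sha256).digest() + data
    def unseal(self, blob: bytes) -> bytes:
        tag, data = blob[:32], blob[32:]
        if not hmac.compare_digest(tag, hmac.new(self.key, data, hashlib.sha256).digest()):
            raise ValueError("integrity check failed")
        return data

class ConfidentialityProvider:
    """Stand-in XOR stream cipher; a real plug-in would wrap AES."""
    def __init__(self, key: bytes):
        self.key = key
    def _stream(self, n: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self.key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]
    def encrypt(self, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, self._stream(len(data))))
    decrypt = encrypt                      # XOR stream is its own inverse

class SecureStore:
    """Composes whichever providers are plugged in, in a fixed order."""
    def __init__(self, *providers):
        self.providers, self.db = providers, {}
    def put(self, k, v: bytes):
        for p in self.providers:
            v = p.seal(v) if isinstance(p, IntegrityProvider) else p.encrypt(v)
        self.db[k] = v
    def get(self, k) -> bytes:
        v = self.db[k]
        for p in reversed(self.providers):
            v = p.unseal(v) if isinstance(p, IntegrityProvider) else p.decrypt(v)
        return v

store = SecureStore(ConfidentialityProvider(b"k1"), IntegrityProvider(b"k2"))
store.put("row1", b"sensitive record")
print(store.get("row1"))                   # b'sensitive record'
```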

    A Scalable, Trustworthy Infrastructure for Collaborative Container Repositories

    We present a scalable "Trustworthy Container Repository" (TCR) infrastructure for the storage of software container images, such as those used by Docker. Using an authenticated data structure based on index-ordered Merkle trees (IOMTs), TCR aims to provide assurances of 1) Integrity, 2) Availability, and 3) Confidentiality to its users, whose containers are stored in an untrusted environment. Trust within the TCR architecture is rooted in a low-complexity, tamper-resistant trusted module. The use of IOMTs allows such a module to efficiently track a virtually unlimited number of container images, and thus provide the desired assurances for the system's users. Using a simulated version of the proposed system, we demonstrate the scalability of the platform by showing logarithmic time complexity up to 2^25 (32 million) container images. This paper presents both algorithmic and proof-of-concept software implementations of the proposed TCR infrastructure.
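    The detail that lets a small trusted module prove absence as well as presence is the IOMT leaf rule: leaves hold (index, next_index) pairs forming a circular ordered list over the populated indices, so a single authenticated leaf covers an entire index gap. The sketch below shows just that rule; the leaf encoding is our assumption, and in TCR a Merkle path from the leaf to the root (depth about 25 for 2^25 images) would complete the proof.

```python
# Minimal sketch of the IOMT gap rule; the leaf format is an assumption.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# Leaves sorted by container-image index; next_index closes the circle.
leaves = [(2, 5, h(b"image2")), (5, 9, h(b"image5")), (9, 2, h(b"image9"))]

def covers(leaf, idx) -> bool:
    """True if this leaf proves idx is absent (idx falls inside its gap)."""
    a, b, _ = leaf
    return (a < idx < b) or (b <= a and (idx > a or idx < b))   # wrap-around gap

print(covers(leaves[1], 7))   # True: the leaf (5, 9, ...) covers the gap 5..9
print(covers(leaves[2], 1))   # True: the gap wraps from 9 back around to 2
```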

    vChain: Enabling Verifiable Boolean Range Queries over Blockchain Databases

    Blockchains have recently been under the spotlight due to the boom of cryptocurrencies and decentralized applications. There is an increasing demand for querying the data stored in a blockchain database. To ensure query integrity, the user can maintain the entire blockchain database and query the data locally. However, this approach is not economical, if not infeasible, because of the blockchain's huge data size and considerable maintenance costs. In this paper, we take the first step toward investigating the problem of verifiable query processing over blockchain databases. We propose a novel framework, called vChain, that alleviates the storage and computing costs of the user and employs verifiable queries to guarantee the results' integrity. To support verifiable Boolean range queries, we propose an accumulator-based authenticated data structure that enables dynamic aggregation over arbitrary query attributes. Two new indexes are further developed to aggregate intra-block and inter-block data records for efficient query verification. We also propose an inverted prefix tree structure to accelerate the simultaneous processing of a large number of subscription queries. Security analysis and empirical study validate the robustness and practicality of the proposed techniques.
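    The toy sketch below conveys the per-block verification idea, using a hash of the sorted record set as a stand-in for vChain's cryptographic set accumulator: the block header commits to all records, and the server demonstrates completeness by also disclosing the non-matching remainder. In vChain proper, the accumulator lets the server prove the remainder matches nothing without shipping it, which is what keeps proofs small; all names are illustrative.

```python
# Toy per-block query verification; a real accumulator avoids shipping `rest`.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def acc(hashes) -> bytes:                        # stand-in accumulator over a set
    return h(b"".join(sorted(hashes)))

block = [b"temp=18", b"temp=23", b"temp=31", b"temp=27"]
header_acc = acc([h(r) for r in block])          # committed in the block header

def server_query(pred):
    matches = [r for r in block if pred(r)]
    rest = [r for r in block if not pred(r)]     # toy "proof": remaining records
    return matches, rest

def client_verify(matches, rest, pred) -> bool:
    if not all(pred(r) for r in matches) or any(pred(r) for r in rest):
        return False                             # soundness: records classified right
    return acc([h(r) for r in matches + rest]) == header_acc    # completeness

pred = lambda r: int(r.split(b"=")[1]) >= 25     # Boolean range query: temp >= 25
m, rest = server_query(pred)
print(m, client_verify(m, rest, pred))           # [b'temp=31', b'temp=27'] True
```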

    Enabling Strong Database Integrity using Trusted Execution Environments

    Many applications require the immutable and consistent sharing of data across organizational boundaries. Because conventional datastores cannot provide this functionality, blockchains have been proposed as one possible solution. Yet public blockchains are energy-inefficient, hard to scale, and suffer from limited throughput and high latencies, while permissioned blockchains depend on specially designated nodes, potentially leak meta-information, and also suffer from scale and performance bottlenecks. This paper presents CreDB, a datastore that provides blockchain-like guarantees of integrity using trusted execution environments. CreDB employs four novel mechanisms to support a new class of applications. First, it creates a permanent record of every transaction, known as a witness, that clients can then use not only to audit the database but to prove to third parties that desired actions took place. Second, it associates with every object an inseparable and inviolable policy, which not only performs access control but enables the datastore to implement state machines whose behavior is amenable to analysis. Third, timeline inspection allows authorized parties to inspect and reason about the history of changes made to the data. Finally, CreDB provides a protected function evaluation mechanism that allows integrity-protected computation over private data. The paper describes these mechanisms, and the applications they collectively enable, in detail. We have fully implemented a prototype of CreDB on Intel SGX. Evaluation shows that CreDB can serve as a drop-in replacement for other NoSQL stores, such as MongoDB, while providing stronger integrity guarantees.
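    Below is a minimal sketch of the witness idea, assuming a hash chain as the underlying record (CreDB's actual witnesses are produced and signed inside the SGX enclave, so this structure is an assumption): every transaction extends the chain, and the entry handed back to the client later lets a third party check that the action is pinned in an unalterable history.

```python
# Hedged sketch of transaction witnesses as a hash chain; names are ours.
import hashlib, json

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).hexdigest().encode()

class WitnessLog:
    def __init__(self):
        self.head, self.entries = h(b"genesis"), []
    def record(self, op: dict) -> dict:
        body = json.dumps(op, sort_keys=True).encode()
        self.head = h(self.head + body)
        witness = {"op": op, "head": self.head.decode(), "seq": len(self.entries)}
        self.entries.append(body)
        return witness                      # the client keeps this as evidence
    def replay(self) -> bytes:
        """Any auditor can recompute the head from the full log."""
        head = h(b"genesis")
        for body in self.entries:
            head = h(head + body)
        return head

log = WitnessLog()
w = log.record({"put": ["patient42", "dose=5mg"]})
log.record({"put": ["patient42", "dose=7mg"]})
assert log.replay() == log.head             # rewriting history changes the head
print(w["seq"], w["head"][:16])
```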

    Security and Privacy Aspects in MapReduce on Clouds: A Survey

    MapReduce is a programming system for distributed processing of large-scale data in an efficient and fault-tolerant manner on a private, public, or hybrid cloud. MapReduce is used extensively every day around the world as an efficient distributed computation tool for a large class of problems, e.g., search, clustering, log analysis, different types of join operations, matrix multiplication, pattern matching, and analysis of social networks. Security and privacy of data and MapReduce computations are essential concerns when a MapReduce computation is executed in public or hybrid clouds. In order to execute a MapReduce job in public and hybrid clouds, authentication of mappers and reducers, confidentiality of data and computations, integrity of data and computations, and correctness and freshness of the outputs are required. Satisfying these requirements shields the operation from several types of attacks on data and MapReduce computations. In this paper, we investigate and discuss security and privacy challenges and requirements, considering a variety of adversarial capabilities and characteristics, in the scope of MapReduce. We also provide a review of existing security and privacy protocols for MapReduce and discuss their overhead issues.
    Comment: Accepted in Elsevier Computer Science Review
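    As a small illustration of one of these requirements, integrity of intermediate data, the sketch below has mappers tag their outputs with an HMAC that reducers verify before aggregating. The shared-key setup and record format are simplifying assumptions of ours, not a protocol from the survey.

```python
# Hedged sketch: HMAC-protected mapper outputs; key setup is assumed.
import hmac, hashlib

KEY = b"shared-cluster-key"    # assumed provisioned to mappers and reducers

def mapper_emit(k: str, v: int):
    msg = f"{k}:{v}".encode()
    return k, v, hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def reducer_sum(records) -> int:
    total = 0
    for k, v, tag in records:
        msg = f"{k}:{v}".encode()
        if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).hexdigest()):
            raise ValueError(f"tampered record for key {k!r}")
        total += v
    return total

shuffled = [mapper_emit("clicks", 3), mapper_emit("clicks", 7)]
print(reducer_sum(shuffled))                   # 10
tampered = [("clicks", 99, shuffled[0][2])]    # value altered in transit
# reducer_sum(tampered) raises: the HMAC no longer matches the record
```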

    Efficient Authenticated Data Structures for Graph Connectivity and Geometric Search Problems

    Authenticated data structures provide cryptographic proofs that their answers are as accurate as the author intended, even if the data structure is being controlled by a remote untrusted host. We present efficient techniques for authenticating data structures that represent graphs and collections of geometric objects. We introduce the path hash accumulator, a new primitive based on cryptographic hashing for efficiently authenticating various properties of structured data represented as paths, including any decomposable query over sequences of elements. We show how to employ our primitive to authenticate queries about properties of paths in graphs and search queries on multi-catalogs. This allows the design of new, efficient authenticated data structures for fundamental problems on networks, such as path and connectivity queries over graphs, and for complex queries on two-dimensional geometric objects, such as intersection and containment queries.
    Comment: Full version of related paper appearing in CT-RSA 2003
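    A simplified sketch of the flavor of this primitive: hash a sequence bottom-up while folding a decomposable aggregate (here, max) into each node, so one digest authenticates both the sequence and the query answer. The node encoding below is our assumption and differs from the paper's exact construction.

```python
# Hedged sketch of authenticating a decomposable query over a sequence.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build(seq):
    """Return (digest, aggregate) for a non-empty sequence of integers."""
    if len(seq) == 1:
        return h(b"leaf|" + str(seq[0]).encode()), seq[0]
    mid = len(seq) // 2
    (lh, la), (rh, ra) = build(seq[:mid]), build(seq[mid:])
    agg = max(la, ra)              # any decomposable op works (sum, min, ...)
    return h(b"node|" + lh + rh + str(agg).encode()), agg

weights = [4, 9, 2, 7, 5]          # e.g., edge weights along a path in a graph
digest, answer = build(weights)
print(answer)                      # 9: the authenticated maximum on the path
# A verifier holding `digest` recomputes it from claimed subtree digests and
# aggregates, so a wrong answer or an altered sequence changes the hash.
assert build(weights)[0] == digest
```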

    Efficient Query Verification on Outsourced Data: A Game-Theoretic Approach

    To save time and money, businesses and individuals have begun outsourcing their data and computations to cloud computing services. These entities would, however, like to ensure that the queries they request from the cloud services are being computed correctly. In this paper, we use the principles of economics and competition to vastly reduce the complexity of query verification on outsourced data. We consider two cases: first, the scenario where multiple non-colluding data outsourcing services exist, and then the case where only a single outsourcing service exists. Using a game-theoretic model, we show that, given the proper incentive structure, we can effectively deter dishonest behavior on the part of the data outsourcing services with very modest computational and monetary resources. We prove that the incentive for an outsourcing service to cheat can be reduced to zero. Finally, through extensive experimental evaluation, we show that a simple verification method can achieve this reduction.
    Comment: 13 pages, 8 figures, pre-publication
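    The headline claim reduces to a short expected-payoff calculation, sketched below with parameter names of our own choosing: if a query is cross-checked against a second, non-colluding provider with probability q and a caught cheater forfeits a penalty P, then a cheating gain of g yields non-positive expected payoff once P >= g(1 - q)/q.

```python
# Minimal sketch of the incentive argument; parameter names are ours.
def expected_cheating_payoff(g: float, q: float, P: float) -> float:
    """Gain g from skipping work, audit probability q, penalty P if caught."""
    return (1 - q) * g - q * P

g, q = 1.0, 0.05                  # cheap audits: only 5% of queries replicated
P = g * (1 - q) / q               # smallest penalty that removes the incentive
print(P)                                      # 19.0
print(expected_cheating_payoff(g, q, P))      # 0.0: cheating no longer pays
print(expected_cheating_payoff(g, q, 25.0))   # negative for any larger penalty
```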