
    An Oblivious Parallel RAM with $O(\log^2 N)$ Parallel Runtime Blowup

    Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to access memory locations from a server without revealing its access patterns. Oblivious Parallel RAM (OPRAM) is the PRAM counterpart of Oblivious RAM, i.e., it allows $m$ clients that trust each other to simultaneously access data from a server without revealing their access patterns. The best known OPRAM scheme achieves an amortized client-server bandwidth blowup of $O(\log^2 N)$ per lookup, but it does not achieve a perfectly linear access-time speedup in the number of clients. In fact, for each access, the blowup for the slowest client (also known as the parallel runtime blowup) is $O(f(m)\log m\log^2 N)$ with $f(m) = \omega(1)$. This implies that, for most accesses, some clients remain idle while others are accessing data. In this work, we show an OPRAM scheme that has a parallel runtime blowup of $O(\log^2 N)$ while maintaining an $O(\log^2 N)$ client-server bandwidth blowup for each client.
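
    In other words (restating the bounds above, with $m$ clients and $N$ memory blocks), the per-access cost of the slowest client improves from
    \[
        O\!\big(f(m)\,\log m\,\log^2 N\big),\ f(m)=\omega(1)
        \qquad\text{to}\qquad
        O\!\big(\log^2 N\big),
    \]
    while the per-client bandwidth blowup remains $O(\log^2 N)$ in both cases.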

    Perfectly Oblivious (Parallel) RAM Revisited, and Improved Constructions

    Oblivious RAM (ORAM) is a technique for compiling any RAM program to an oblivious counterpart, i.e., one whose access patterns do not leak information about the secret inputs. Similarly, Oblivious Parallel RAM (OPRAM) compiles a {\it parallel} RAM program to an oblivious counterpart. In this paper, we care about ORAM/OPRAM with {\it perfect security}, i.e., the access patterns must be {\it identically distributed} no matter what the program's memory request sequence is. In the past, two types of perfect ORAMs/OPRAMs have been considered: constructions whose performance bounds hold {\it in expectation} (but may occasionally run more slowly); and constructions whose performance bounds hold {\it deterministically} (even though the algorithms themselves are randomized). In this paper, we revisit the performance metrics for perfect ORAM/OPRAM, and show novel constructions that achieve asymptotic improvements for all performance metrics. Our first result is a new perfectly secure OPRAM scheme with $O(\log^3 N/\log \log N)$ {\it expected} overhead. In comparison, prior literature has been stuck at $O(\log^3 N)$ for more than a decade. Next, we show how to construct a perfect ORAM with $O(\log^3 N/\log \log N)$ {\it deterministic} simulation overhead. We further show how to make the scheme parallel, resulting in a perfect OPRAM with $O(\log^4 N/\log \log N)$ {\it deterministic} simulation overhead. For perfect ORAMs/OPRAMs with deterministic performance bounds, our results achieve a {\it subexponential} improvement over the state of the art. Specifically, the best known prior scheme incurs more than $\sqrt{N}$ deterministic simulation overhead (Raskin and Simkin, Asiacrypt'19); moreover, their scheme works only for the sequential setting and is {\it not} amenable to parallelization. Finally, we additionally consider perfect ORAMs/OPRAMs whose performance bounds hold with high probability. For this new performance metric, we show new constructions whose simulation overhead is upper bounded by $O(\log^3 N/\log\log N)$ except with probability negligible in $N$, i.e., we prove high-probability performance bounds that match the expected bounds mentioned earlier.

    Differentially Oblivious Database Joins: Overcoming the Worst-Case Curse of Fully Oblivious Algorithms

    Numerous high-profile works have shown that access patterns to even encrypted databases can leak secret information and sometimes even lead to reconstruction of the entire database. To thwart access pattern leakage, the literature has focused on oblivious algorithms, where obliviousness requires that the access patterns leak nothing about the input data. In this paper, we consider the Join operator, an important database primitive that has been extensively studied and optimized. Unfortunately, any fully oblivious Join algorithm would require always padding the result to the worst-case length, which is quadratic in the data size $N$. In comparison, an insecure baseline incurs only $O(R + N)$ cost, where $R$ is the true result length, and in the common case in practice, $R$ is relatively short. As a typical example, when $R = O(N)$, any fully oblivious algorithm must inherently incur a prohibitive, $N$-fold slowdown relative to the insecure baseline. Indeed, the (non-private) database and algorithms literature invariably focuses on studying the instance-specific rather than worst-case performance of database algorithms. Unfortunately, the stringent notion of full obliviousness precludes the design of efficient algorithms with non-trivial instance-specific performance. To overcome this worst-case performance barrier of full obliviousness and enable algorithms with good instance-specific performance, we consider a relaxed notion of access pattern privacy called $(\epsilon, \delta)$-differential obliviousness (DO), originally proposed in the seminal work of Chan et al. (SODA'19). Rather than insisting that the access patterns leak no information whatsoever, the relaxed DO notion requires that the access patterns satisfy $(\epsilon, \delta)$-differential privacy. We show that by adopting the relaxed DO notion, we can obtain efficient database Join mechanisms whose instance-specific performance approximately matches the insecure baseline, while still offering a meaningful notion of privacy to individual users. Complementing our upper bound results, we also prove new lower bounds on the performance of any DO Join algorithm. Differential obliviousness (DO) is a new notion and remains relatively unexplored territory. Following the pioneering investigations by Chan et al. and others, our work is among the very first to formally explore how DO can help overcome the worst-case performance curse of full obliviousness; moreover, we motivate our work with database applications. Our work shows new evidence for why DO might be a promising notion, and opens up several exciting future directions.
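
    As a toy illustration of the worst-case padding cost described above (not the paper's DO mechanism; the code below is a made-up example), compare an insecure hash join, whose work is roughly $O(N + R)$, with a fully oblivious nested-loop join that always performs $N^2$ comparisons and pads its output to the worst-case length $N^2$:

    from typing import Dict, List, Tuple

    Row = Tuple[int, str]        # (join key, payload)
    DUMMY = (-1, "")             # padding row used to hide the true result length

    def insecure_hash_join(r: List[Row], s: List[Row]) -> List[Tuple[Row, Row]]:
        """Insecure baseline: roughly O(N + R) work, but data-dependent accesses."""
        index: Dict[int, List[Row]] = {}
        for row in r:
            index.setdefault(row[0], []).append(row)
        out = []
        for row in s:
            for match in index.get(row[0], []):   # access pattern depends on the data
                out.append((match, row))
        return out

    def fully_oblivious_join(r: List[Row], s: List[Row]) -> List[Tuple[Row, Row]]:
        """Toy fully oblivious version: always N*N comparisons, output padded to N*N."""
        out = []
        for a in r:                   # the loop structure, and hence the access
            for b in s:               # pattern, is fixed regardless of the inputs
                out.append((a, b) if a[0] == b[0] else (DUMMY, DUMMY))
        return out

    Even when the true result length $R$ is tiny, the padded version produces and touches $\Theta(N^2)$ entries; the $(\epsilon, \delta)$-DO relaxation is what lets the paper's mechanisms approach the insecure baseline's instance-specific cost instead.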

    Circuit OPRAM: Unifying Statistically and Computationally Secure ORAMs and OPRAMs

    An Oblivious Parallel RAM (OPRAM) provides a general method to simulate any Parallel RAM (PRAM) program, such that the resulting memory access patterns leak nothing about secret inputs. OPRAM was originally proposed by Boyle et al. as the natural parallel counterpart of Oblivious RAM (ORAM), which was shown to have broad applications, e.g., in cloud outsourcing, secure processor design, and secure multi-party computation. Since parallelism is common in modern computing architectures such as multi-core processors or cluster computing, OPRAM is naturally as powerful and desirable a building block as its sequential counterpart ORAM. Although earlier works have shown how to construct OPRAM schemes with polylogarithmic simulation overhead, in comparison with the best known sequential ORAM constructions, all existing OPRAM schemes are (poly-)logarithmic factors more expensive. In this paper, we present a new framework in which we construct both statistically secure and computationally secure OPRAM schemes whose asymptotic performance matches the best known ORAM schemes in each setting. Since an OPRAM scheme with simulation overhead $X$ directly implies an ORAM scheme with simulation overhead $X$, our result can be regarded as providing a unifying framework in which we can subsume all known results on statistically and computationally secure ORAMs and OPRAMs alike. Particularly for the case of OPRAMs, we also improve the state-of-the-art scheme by superlogarithmic factors. Achieving the aforementioned results requires us to combine a variety of techniques involving 1) efficient parallel oblivious algorithm design; and 2) designing tight randomized algorithms and proving measure concentration bounds for the rather involved stochastic process induced by the OPRAM algorithm.

    GraphSC: Parallel secure computation made easy

    We propose introducing modern parallel programming paradigms to secure computation, enabling their secure execution on large datasets. To address this challenge, we present GraphSC, a framework that (i) provides a programming paradigm that allows non-cryptography experts to write secure code; (ii) brings parallelism to such secure implementations; and (iii) meets the needs for obliviousness, thereby not leaking any private information. Using GraphSC, developers can efficiently implement an oblivious version of graph-based algorithms (including sophisticated data mining and machine learning algorithms) that execute in parallel with minimal communication overhead. Importantly, our secure version of graph-based algorithms incurs a small logarithmic overhead in comparison with the non-secure parallel version. We build GraphSC and demonstrate, using several algorithms as examples, that secure computation can be brought into the realm of practicality for big data analysis. Our secure matrix factorization implementation can process 1 million ratings in 13 hours, an improvement of multiple orders of magnitude over the only other existing attempt, which requires 3 hours to process 16K ratings.
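
    To give a flavor of the programming model described above, here is a plain, insecure, sequential sketch of a GraphLab-style scatter-gather-apply round (the interface below is hypothetical and is not GraphSC's actual API; GraphSC's contribution is executing such rounds obliviously and in parallel):

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def scatter_gather_apply(vertices: Dict[int, float],
                             edges: List[Tuple[int, int]],
                             rounds: int = 1) -> Dict[int, float]:
        """Repeat: scatter each vertex's value along its outgoing edges,
        gather (sum) the incoming values, then apply the update at each vertex."""
        values = dict(vertices)
        for _ in range(rounds):
            gathered = defaultdict(float)
            for src, dst in edges:            # scatter: each edge carries src's value
                gathered[dst] += values[src]  # gather: aggregate at the destination
            for v in values:                  # apply: replace each vertex's value
                values[v] = gathered[v]
        return values

    # Example: push a unit of value along the path 0 -> 1 -> 2 for two rounds.
    print(scatter_gather_apply({0: 1.0, 1: 0.0, 2: 0.0}, [(0, 1), (1, 2)], rounds=2))

    The sketch only illustrates the abstraction that developers program against; it says nothing about how the secure, oblivious execution is achieved.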

    Oblivious Hashing Revisited, and Applications to Asymptotically Efficient ORAM and OPRAM

    Oblivious RAM (ORAM) is a powerful cryptographic building block that allows a program to provably hide its access patterns to sensitive data. Since the original proposal of ORAM by Goldreich and Ostrovsky, numerous improvements have been made. To date, the best asymptotic overhead achievable for general block sizes is $O(\log^2 N/\log \log N)$, due to an elegant scheme by Kushilevitz et al., which in turn relies on the oblivious Cuckoo hashing scheme by Goodrich and Mitzenmacher. In this paper, we make the following contributions: we first revisit the prior $O(\log^2 N/\log \log N)$-overhead ORAM result. We demonstrate that this prior result is somewhat incomplete, due to the subtle incompleteness of a core building block, namely, Goodrich and Mitzenmacher's oblivious Cuckoo hashing scheme. Even though we do show how to patch the prior result such that we can fully realize Goodrich and Mitzenmacher's elegant blueprint for oblivious Cuckoo hashing, it is clear that the extreme complexity of oblivious Cuckoo hashing has made understanding, implementation, and proofs difficult. We show that there is a conceptually simple $O(\log^2 N/\log \log N)$-overhead ORAM that dispenses with oblivious Cuckoo hashing entirely. We show that such a conceptually simple scheme lends itself to further extensions. Specifically, we obtain the first $O(\log^2 N/\log \log N)$ Oblivious Parallel RAM (OPRAM) scheme, thus not only matching the performance of the best known sequential ORAM, but also achieving super-logarithmic improvements in comparison with known OPRAM schemes.
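
    For background on why Cuckoo hashing is a natural candidate building block here, the sketch below shows a plain, non-oblivious Cuckoo lookup (not the oblivious scheme discussed above): every lookup reads exactly two table slots, determined by two hash functions of the key, and both slots are read whether or not the key is present.

    import hashlib
    from typing import List, Optional, Tuple

    def _h(key: int, seed: int, size: int) -> int:
        """Map a key to a slot; the seed selects one of the two hash functions."""
        digest = hashlib.sha256(f"{seed}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % size

    def cuckoo_lookup(key: int,
                      table0: List[Optional[Tuple[int, str]]],
                      table1: List[Optional[Tuple[int, str]]]) -> Optional[str]:
        """Probe both candidate slots unconditionally: exactly two reads per lookup."""
        slot0 = table0[_h(key, 0, len(table0))]   # always read the slot in table 0
        slot1 = table1[_h(key, 1, len(table1))]   # always read the slot in table 1
        for slot in (slot0, slot1):
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

    The difficulty the paper is concerned with lies elsewhere: building such tables obliviously, i.e., hiding the access patterns of the construction and rebuild procedures, which is where the complexity of oblivious Cuckoo hashing arises.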

    Efficient Data-Oblivious Computation

    The rapid increase in the amount of data stored by cloud servers has resulted in growing privacy concerns for users. First, although keeping data encrypted at all times is an attractive approach to privacy, encryption may preclude mining and learning useful patterns from data. Second, companies are unable to distribute proprietary programs to other parties without risking the loss of their private code when those programs are reverse engineered. A challenge underlying both of those problems is that how data is accessed, even when that data is encrypted, can leak secret information. Oblivious RAM is a well-studied cryptographic primitive that can be used to solve the underlying challenge of hiding data-access patterns. In this dissertation, we improve Oblivious RAMs and oblivious algorithms asymptotically. We then show how to apply our novel oblivious algorithms to build systems that enable privacy-preserving computation on encrypted data and program obfuscation. Specifically, the first part of this dissertation shows two efficient Oblivious RAM algorithms: 1) the first algorithm achieves sub-logarithmic bandwidth blowup while only incurring an inexpensive XOR computation for performing Private Information Retrieval operations, and 2) the second algorithm is the first perfectly secure Oblivious Parallel RAM with $O(\log^3 N)$ bandwidth blowup, $O((\log m + \log \log N)\log N)$ depth blowup, and $O(1)$ space blowup when the PRAM has $m$ CPUs and stores $N$ blocks of data. The second part of this dissertation describes two systems, HOP and GraphSC, that address the problem of computing on private data and the distribution of proprietary programs. HOP is a system that achieves simulation-secure obfuscation of RAM programs assuming secure hardware. It is the first prototype implementation of a provably secure virtual black-box (VBB) obfuscation scheme in any model under any assumptions. GraphSC is a system that allows cloud servers to run a class of data-mining and machine-learning algorithms over users' data without learning anything about that data. GraphSC brings efficient, parallel secure computation to programmers by allowing them to express computation tasks using the GraphLab abstraction. It is backed by the first non-trivial parallel oblivious algorithms that outperform generic Oblivious RAMs.
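
    To see why PIR built from XORs is computationally cheap, here is the classic two-server XOR-based PIR, shown only for illustration (it is not necessarily the scheme used in the first algorithm above): the client sends a random index set to one server and the same set with the desired index flipped to the other, and XORs the two replies.

    import os
    import secrets
    from functools import reduce
    from typing import Callable, List, Set

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def server_answer(db: List[bytes], indices: Set[int]) -> bytes:
        """Each (non-colluding) server XORs together the requested blocks."""
        zero = bytes(len(db[0]))
        return reduce(xor_bytes, (db[j] for j in indices), zero)

    def pir_read(db_size: int, i: int,
                 server0: Callable[[Set[int]], bytes],
                 server1: Callable[[Set[int]], bytes]) -> bytes:
        """Two-server XOR PIR: each server on its own sees a uniformly random set."""
        s0 = {j for j in range(db_size) if secrets.randbits(1)}
        s1 = s0 ^ {i}                       # flip membership of the wanted index i
        return xor_bytes(server0(s0), server1(s1))

    # Toy usage: an 8-block database of 4-byte blocks; retrieve block 5 privately.
    db = [os.urandom(4) for _ in range(8)]
    answer = pir_read(8, 5, lambda s: server_answer(db, s), lambda s: server_answer(db, s))
    assert answer == db[5]

    Blocks requested by both servers cancel under XOR, so only block 5 survives, while neither server alone learns which index was wanted.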

    More is Less: Perfectly Secure Oblivious Algorithms in the Multi-Server Setting

    The problem of Oblivious RAM (ORAM) has traditionally been studied in a single-server setting, but more recently the multi-server setting has also been considered. Yet it is still unclear whether the multi-server setting has any inherent advantages, e.g., whether the multi-server setting can be used to achieve stronger security goals or provably better efficiency than is possible in the single-server case. In this work, we construct a perfectly secure 3-server ORAM scheme that outperforms the best known single-server scheme by a logarithmic factor. In the process, we also show, for the first time, that there exist specific algorithms for which multiple servers can overcome known lower bounds in the single-server setting.

    On the Depth of Oblivious Parallel RAM

    Oblivious Parallel RAM (OPRAM), first proposed by Boyle, Chung, and Pass, is the natural parallel extension of Oblivious RAM (ORAM). OPRAM provides a powerful cryptographic building block for hiding the access patterns of programs to sensitive data, while preserving the parallelism inherent in the original program. All prior OPRAM schemes adopt a single metric of "simulation overhead" that characterizes the blowup in parallel runtime, assuming that the oblivious simulation is constrained to using the same number of CPUs as the original PRAM. In this paper, we ask whether oblivious simulation of PRAM programs can be further sped up if the OPRAM is allowed to have more CPUs than the original PRAM. We thus initiate a study to understand the true depth of OPRAM schemes (i.e., when the OPRAM may have access to an unbounded number of CPUs). On the upper bound front, we construct a new OPRAM scheme that gains a logarithmic factor in depth without incurring extra blowup in total work in comparison with the state-of-the-art OPRAM scheme. On the lower bound side, we demonstrate fundamental limits on the depth of any OPRAM scheme, even when the OPRAM is allowed to have an unbounded number of CPUs and to blow up total work arbitrarily. We further show that our upper bound result is optimal in depth for a reasonably large parameter regime that is of particular interest in practice.

    Cache-Oblivious and Data-Oblivious Sorting and Applications

    Although external-memory sorting has been a classical algorithms abstraction and has been heavily studied in the literature, perhaps somewhat surprisingly, when data-obliviousness is a requirement, even very rudimentary questions remain open. Prior to our work, it was not even known how to construct a comparison-based, external-memory oblivious sorting algorithm that is optimal in IO-cost. We make a significant step forward in our understanding of external-memory, oblivious sorting algorithms. Not only do we construct a comparison-based, external-memory oblivious sorting algorithm that is optimal in IO-cost, but our algorithm is also cache-agnostic, in that it need not know the storage hierarchy's internal parameters such as the cache and cache-line sizes. Our result immediately implies a cache-agnostic ORAM construction whose asymptotic IO-cost matches the best known cache-aware scheme. Last but not least, we propose and adopt a new and stronger security notion for external-memory, oblivious algorithms and argue that this new notion is desirable for resisting possible cache-timing attacks. Thus our work also lays a foundation for the study of oblivious algorithms in the cache-agnostic model.
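
    To make the notion of data-obliviousness concrete, here is a standard bitonic sorting network in Python (an illustrative sketch only; it is not the cache-agnostic algorithm constructed in the paper): the sequence of positions compared below depends only on the input length, never on the input values, so the pattern of memory accesses reveals nothing about the data.

    def bitonic_sort(a: list) -> list:
        """Sort a in place with a bitonic sorting network; len(a) must be a power of two."""
        def compare_swap(i: int, j: int, ascending: bool):
            # Which pair (i, j) gets compared is fixed by len(a) alone; only the
            # comparison outcome depends on the data. (A truly oblivious
            # implementation would also perform the swap branchlessly.)
            if (a[i] > a[j]) == ascending:
                a[i], a[j] = a[j], a[i]

        def merge(lo: int, n: int, ascending: bool):
            if n > 1:
                half = n // 2
                for i in range(lo, lo + half):
                    compare_swap(i, i + half, ascending)
                merge(lo, half, ascending)
                merge(lo + half, half, ascending)

        def sort(lo: int, n: int, ascending: bool):
            if n > 1:
                half = n // 2
                sort(lo, half, True)            # sort first half ascending
                sort(lo + half, half, False)    # sort second half descending
                merge(lo, n, ascending)         # merge the resulting bitonic sequence

        sort(0, len(a), True)
        return a

    print(bitonic_sort([7, 3, 6, 1, 8, 2, 5, 4]))   # -> [1, 2, 3, 4, 5, 6, 7, 8]

    A bitonic network by itself is not IO-optimal; the paper's contribution is achieving this kind of obliviousness together with optimal IO-cost, without knowledge of the cache parameters.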