
    What Storage Access Privacy is Achievable with Small Overhead?

    Oblivious RAM (ORAM) and private information retrieval (PIR) are classic cryptographic primitives used to hide the access pattern to data whose storage has been outsourced to an untrusted server. Unfortunately, both primitives require considerable overhead compared to plaintext access. For large-scale storage infrastructure with highly frequent access requests, the degradation in response time and the exorbitant increase in resource costs incurred by either ORAM or PIR prevent their usage. Ideally, a privacy-preserving storage protocol with small overhead would be implemented for these heavily trafficked storage systems without negatively impacting performance or costs. In this work, we study the problem of the best storage access privacy that is achievable with only small overhead over plaintext access. To answer this question, we consider differentially private access, which is a generalization of the oblivious access security notion considered by ORAM and PIR. Quite surprisingly, we present strong evidence that constant-overhead storage schemes may only be achieved with privacy budgets of $\epsilon = \Omega(\log n)$. We present asymptotically optimal constructions for differentially private variants of both ORAM and PIR with privacy budgets $\epsilon = \Theta(\log n)$ and only $O(1)$ overhead. In addition, we consider a more complex storage primitive called key-value storage, in which data is indexed by keys from a large universe (as opposed to consecutive integers in ORAM and PIR). We present a differentially private key-value storage scheme with $\epsilon = \Theta(\log n)$ and $O(\log\log n)$ overhead. This construction uses a new oblivious, two-choice hashing scheme that may be of independent interest. Comment: To appear at PODS'1
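
    The oblivious two-choice hashing scheme mentioned in the last sentence builds on the classic (non-oblivious) two-choice paradigm, in which every key is offered two candidate buckets and is stored in the currently lighter one. The minimal Python sketch below shows only that non-oblivious baseline; the class name and the way the two bucket choices are derived are illustrative assumptions, not the paper's construction.

        import hashlib

        class TwoChoiceTable:
            """Non-oblivious two-choice hashing: every key has two candidate
            buckets, and insertion places it in the lighter of the two."""

            def __init__(self, num_buckets):
                self.buckets = [[] for _ in range(num_buckets)]

            def _candidates(self, key):
                # Derive two bucket choices from the key (illustrative only).
                digest = hashlib.sha256(key.encode()).digest()
                b1 = int.from_bytes(digest[:8], "big") % len(self.buckets)
                b2 = int.from_bytes(digest[8:16], "big") % len(self.buckets)
                return b1, b2

            def insert(self, key, value):
                b1, b2 = self._candidates(key)
                target = b1 if len(self.buckets[b1]) <= len(self.buckets[b2]) else b2
                self.buckets[target].append((key, value))

            def lookup(self, key):
                b1, b2 = self._candidates(key)
                for b in (b1, b2):
                    for k, v in self.buckets[b]:
                        if k == key:
                            return v
                return None

    The two-choice rule keeps the maximum bucket load small with high probability; the paper's contribution is an oblivious variant of this idea, which the sketch above does not attempt to capture.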

    Exploring Differential Obliviousness

    In a recent paper, Chan et al. [SODA '19] proposed a relaxation of the notion of (full) memory obliviousness, which was introduced by Goldreich and Ostrovsky [J. ACM '96] and extensively researched by cryptographers. The new notion, differential obliviousness, requires that any two neighboring inputs exhibit similar memory access patterns, where the similarity requirement is that of differential privacy. Chan et al. demonstrated that differential obliviousness allows achieving improved efficiency for several algorithmic tasks, including sorting, merging of sorted lists, and range query data structures. In this work, we continue the exploration of differential obliviousness, focusing on algorithms that do not necessarily examine all their input. This choice is motivated by the fact that the existence of logarithmic overhead ORAM protocols implies that differential obliviousness can yield at most a logarithmic improvement in efficiency for computations that need to examine all their input. In particular, we explore property testing, where we show that differential obliviousness yields an almost linear improvement in overhead in the dense graph model, and at most a quadratic improvement in the bounded degree model. We also explore tasks where a non-oblivious algorithm would need to access different portions of the input depending on the input itself, and we show that such behavior can be maintained under differential obliviousness, but not under full obliviousness. Our examples suggest that there would be benefits in further exploring which classes of computational tasks are amenable to differential obliviousness.
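
    For reference, the differential obliviousness requirement described above can be stated as follows (this is a paraphrase of the Chan et al. definition; $\mathrm{AP}(M, x)$ denotes the memory access pattern of algorithm $M$ on input $x$): $M$ is $(\epsilon, \delta)$-differentially oblivious if, for every two neighboring inputs $x, x'$ and every set $S$ of access patterns,

        $\Pr[\mathrm{AP}(M, x) \in S] \le e^{\epsilon} \cdot \Pr[\mathrm{AP}(M, x') \in S] + \delta.$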

    Prochlo: Strong Privacy for Analytics in the Crowd

    The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes Encode, Shuffle, Analyze (ESA), a principled systems architecture for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring. (cont.; see the paper)
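
    As a rough illustration of the Encode, Shuffle, Analyze split described above, the sketch below shows the conceptual data flow only; the function names and record format are illustrative assumptions, not Prochlo's actual interfaces.

        import random
        from collections import Counter

        def encode(user_id, value):
            # Encode stage (client side): emit only the fields needed for the
            # analysis; a real deployment would also fragment and encrypt them.
            return {"value": value}

        def shuffle(encoded_reports):
            # Shuffle stage (trusted intermediary): randomly permute the batch
            # so the analyzer cannot link reports to users by arrival order.
            batch = list(encoded_reports)
            random.shuffle(batch)
            return batch

        def analyze(shuffled_batch):
            # Analyze stage: operate only on the anonymized, shuffled batch.
            return Counter(report["value"] for report in shuffled_batch)

        reports = [encode(uid, val) for uid, val in [(1, "crash"), (2, "ok"), (3, "crash")]]
        print(analyze(shuffle(reports)))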

    Differentially Oblivious Turing Machines

    Oblivious RAM (ORAM) is machinery that protects a RAM program from leaking information about its secret input to an adversary who observes only its access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns for every two "neighboring" logical access sequences satisfy (ε,δ)-differential privacy, is subject to a logarithmic lower bound. In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N · log log N) transitions in total and the physical head-movement sequence satisfies (ε,δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls-and-bins model). As a corollary, we show that there exist natural data structures, such as stacks and queues (supporting online operations), on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally, even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters, where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls-and-bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.
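
    In summary form, the stack/queue separation stated above (on N elements, for constant ε and negligible δ) is:

        $\underbrace{\Omega(\log N)\ \text{per operation}}_{\text{any oblivious implementation, even on a RAM}} \quad \text{vs.} \quad \underbrace{O(\log\log N)\ \text{amortized per operation}}_{\text{differentially oblivious implementation on a Turing machine}}$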

    Differentially Oblivious Database Joins: Overcoming the Worst-Case Curse of Fully Oblivious Algorithms

    Numerous high-profile works have shown that access patterns to even encrypted databases can leak secret information and sometimes even lead to reconstruction of the entire database. To thwart access pattern leakage, the literature has focused on oblivious algorithms, where obliviousness requires that the access patterns leak nothing about the input data. In this paper, we consider the Join operator, an important database primitive that has been extensively studied and optimized. Unfortunately, any fully oblivious Join algorithm would require always padding the result to the worst-case length, which is quadratic in the data size N. In comparison, an insecure baseline incurs only O(R + N) cost, where R is the true result length, and in the common case in practice, R is relatively short. As a typical example, when R = O(N), any fully oblivious algorithm must inherently incur a prohibitive, N-fold slowdown relative to the insecure baseline. Indeed, the (non-private) database and algorithms literature invariably focuses on studying the instance-specific rather than worst-case performance of database algorithms. Unfortunately, the stringent notion of full obliviousness precludes the design of efficient algorithms with non-trivial instance-specific performance. To overcome this worst-case performance barrier of full obliviousness and enable algorithms with good instance-specific performance, we consider a relaxed notion of access pattern privacy called (ε, δ)-differential obliviousness (DO), originally proposed in the seminal work of Chan et al. (SODA '19). Rather than insisting that the access patterns leak no information whatsoever, the relaxed DO notion requires that the access patterns satisfy (ε, δ)-differential privacy. We show that by adopting the relaxed DO notion, we can obtain efficient database Join mechanisms whose instance-specific performance approximately matches the insecure baseline, while still offering a meaningful notion of privacy to individual users. Complementing our upper bound results, we also prove new lower bounds on the performance of any DO Join algorithm. Differential obliviousness (DO) is a new notion and remains relatively unexplored territory. Following the pioneering investigations by Chan et al. and others, our work is among the very first to formally explore how DO can help overcome the worst-case performance curse of full obliviousness; moreover, we motivate our work with database applications. Our work provides new evidence for why DO is a promising notion and opens up several exciting future directions.
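
    For concreteness, the insecure O(R + N) baseline referred to above is essentially a plain hash join, whose cost scales with the true result length R rather than with the quadratic worst case that a fully oblivious algorithm must pad to. The sketch below is that baseline only (the tuple layout is an illustrative assumption); its data-dependent behavior is exactly what full obliviousness forbids.

        from collections import defaultdict

        def hash_join(left_table, right_table):
            """Insecure baseline equi-join on the first column of each row.
            Total work is O(N + R), where N is the total input size and R is
            the true output length, but the access pattern and output length
            both depend on the data."""
            index = defaultdict(list)
            for row in left_table:
                index[row[0]].append(row)
            result = []
            for row in right_table:
                for match in index.get(row[0], []):
                    result.append(match + row[1:])
            return result

        # The output length varies with the join-key distribution, which is what
        # a fully oblivious join would have to hide by padding to worst-case size.
        print(hash_join([(1, "a"), (2, "b")], [(1, "x"), (1, "y"), (3, "z")]))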

    Lower Bounds for Oblivious Near-Neighbor Search

    We prove an $\Omega(d \lg n / (\lg\lg n)^2)$ lower bound on the dynamic cell-probe complexity of statistically oblivious approximate-near-neighbor search (ANN) over the $d$-dimensional Hamming cube. For the natural setting of $d = \Theta(\log n)$, our result implies an $\tilde{\Omega}(\lg^2 n)$ lower bound, which is a quadratic improvement over the highest (non-oblivious) cell-probe lower bound for ANN. This is the first super-logarithmic unconditional lower bound for ANN against general (non-black-box) data structures. We also show that any oblivious static data structure for decomposable search problems (like ANN) can be obliviously dynamized with $O(\log n)$ overhead in update and query time, strengthening a classic result of Bentley and Saxe (Algorithmica, 1980). Comment: 28 page
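
    The Bentley-Saxe transform that the last result strengthens keeps the data in at most O(log n) static structures of doubling sizes and merges them like binary addition on insertion. The minimal (non-oblivious) Python sketch below uses a sorted list as a stand-in static membership structure; the build/query helper names are illustrative assumptions.

        import bisect

        def build_static(items):
            # Stand-in static structure: a sorted list supporting membership queries.
            return sorted(items)

        def query_static(structure, key):
            i = bisect.bisect_left(structure, key)
            return i < len(structure) and structure[i] == key

        class BentleySaxe:
            """Logarithmic method: level i holds either nothing or a static
            structure over exactly 2^i items; inserts cascade like a binary counter."""

            def __init__(self):
                self.levels = []  # levels[i] is None or a static structure of size 2^i

            def insert(self, item):
                carry = [item]
                i = 0
                while True:
                    if i == len(self.levels):
                        self.levels.append(None)
                    if self.levels[i] is None:
                        self.levels[i] = build_static(carry)
                        return
                    carry = list(self.levels[i]) + carry
                    self.levels[i] = None
                    i += 1

            def query(self, key):
                # Decomposable queries combine the per-level answers (here, OR).
                return any(level is not None and query_static(level, key)
                           for level in self.levels)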