42 research outputs found

    Is There an Oblivious RAM Lower Bound?

    An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible. We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, covering even the offline case in which the simulator is given all reads and writes ahead of time, it does assume that the simulator behaves in a “balls and bins” fashion: the simulator may only shuffle data items around and is not allowed to use sophisticated encodings of the data. We prove that for the offline case, showing a lower bound without the above restriction is related to the size of circuits for sorting. Our proof is constructive and uses a bit-slicing approach that manipulates the bit representations of data in the simulation. This implies that without first obtaining superlinear lower bounds on the size of such circuits, which are currently unknown, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.
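
    To make the obliviousness notion and the overhead metric concrete, here is a minimal Python sketch (our own illustration, not taken from the paper) of the trivial linear-scan ORAM: every logical access touches all n physical cells, so the observed access pattern is independent of the input, at the cost of overhead n per operation. The class and method names are ours.

        # Trivial "linear scan" ORAM: obviously oblivious, overhead n per access.
        class LinearScanORAM:
            def __init__(self, n):
                self.n = n
                self.mem = [0] * n          # server-side memory (contents would be encrypted in practice)
                self.trace = []             # physical locations observed by the server

            def access(self, op, index, value=None):
                result = None
                for i in range(self.n):     # touch every cell regardless of `index`
                    self.trace.append(i)
                    if i == index:
                        result = self.mem[i]
                        if op == "write":
                            self.mem[i] = value
                return result

        oram = LinearScanORAM(8)
        oram.access("write", 3, 42)
        assert oram.access("read", 3) == 42
        # The trace depends only on the number of operations performed,
        # not on which indices were requested.
        print(oram.trace[:8])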

    Is there an Oblivious RAM Lower Bound for Online Reads?

    Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a ``balls and bins'' model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond ``balls and bins''? The work of Boyle and Naor (ITCS '16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO '18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
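
    As a toy illustration of the locally-decodable-code ingredient mentioned above (not the construction from the paper), the following Python sketch implements the classic 2-query Hadamard LDC, in which any message bit can be recovered by reading only two codeword positions. The Hadamard code has exponential length, so the paper instead assumes hypothetical LDCs with much better parameters; the function names here are ours.

        import random

        def hadamard_encode(x_bits):
            """Codeword position a stores the inner product <a, x> mod 2."""
            x_int = sum(b << j for j, b in enumerate(x_bits))
            return [bin(a & x_int).count("1") % 2 for a in range(1 << len(x_bits))]

        def local_decode(codeword, k, i):
            """Recover bit i of the message from just two codeword positions."""
            a = random.randrange(1 << k)        # uniformly random first query
            b = a ^ (1 << i)                    # second query differs only in coordinate i
            return codeword[a] ^ codeword[b]    # <a,x> + <a+e_i,x> = <e_i,x> = x_i over GF(2)

        x = [1, 0, 1, 1]
        cw = hadamard_encode(x)
        assert all(local_decode(cw, len(x), i) == x[i] for i in range(len(x)))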

    Lower Bounds for Oblivious Near-Neighbor Search

    We prove an $\Omega(d \lg n / (\lg\lg n)^2)$ lower bound on the dynamic cell-probe complexity of statistically oblivious approximate-near-neighbor search (ANN) over the $d$-dimensional Hamming cube. For the natural setting of $d = \Theta(\log n)$, our result implies an $\tilde{\Omega}(\lg^2 n)$ lower bound, which is a quadratic improvement over the highest (non-oblivious) cell-probe lower bound for ANN. This is the first super-logarithmic unconditional lower bound for ANN against general (non-black-box) data structures. We also show that any oblivious static data structure for decomposable search problems (like ANN) can be obliviously dynamized with $O(\log n)$ overhead in update and query time, strengthening a classic result of Bentley and Saxe (Algorithmica, 1980).
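
    The Bentley-Saxe result cited above is the classic logarithmic method for decomposable search problems. The Python sketch below (our own illustration of the non-oblivious classic, not the oblivious variant constructed in the paper) keeps static structures of sizes 2^0, 2^1, ..., merges them like a binary counter on insertion, and combines per-level answers at query time.

        class DynamicDecomposable:
            def __init__(self, build, query, combine):
                self.build, self.query, self.combine = build, query, combine
                self.levels = []                      # level i holds None or a structure of 2^i items

            def insert(self, item):
                carry = [item]
                i = 0
                while i < len(self.levels) and self.levels[i] is not None:
                    carry += self.levels[i][1]        # absorb the items stored at level i
                    self.levels[i] = None
                    i += 1
                if i == len(self.levels):
                    self.levels.append(None)
                self.levels[i] = (self.build(carry), carry)

            def search(self, q, identity):
                ans = identity
                for lvl in self.levels:
                    if lvl is not None:
                        ans = self.combine(ans, self.query(lvl[0], q))
                return ans

        # Example: nearest neighbour on a line (decomposable: take the min over parts).
        dd = DynamicDecomposable(
            build=lambda items: sorted(items),
            query=lambda s, q: min(abs(x - q) for x in s),
            combine=min,
        )
        for v in [5, 1, 9, 14, 3]:
            dd.insert(v)
        print(dd.search(10, float("inf")))            # distance to the closest stored point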

    Yes, There is an Oblivious RAM Lower Bound!

    An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky [JACM '96], is a (possibly randomized) RAM for which the memory access pattern reveals no information about the operations performed. The main performance metric of an ORAM is the bandwidth overhead, i.e., the multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. In their seminal paper introducing the ORAM, Goldreich and Ostrovsky proved an amortized $\Omega(\lg n)$ bandwidth overhead lower bound for ORAMs with memory size $n$. Their lower bound is very strong in the sense that it applies to the ``offline'' setting, in which the ORAM knows the entire sequence of operations ahead of time. However, as pointed out by Boyle and Naor [ITCS '16] in the paper ``Is there an oblivious RAM lower bound?'', there are two caveats with the lower bound of Goldreich and Ostrovsky: (1) it only applies to ``balls in bins'' algorithms, i.e., algorithms where the ORAM may only shuffle blocks around and not apply any sophisticated encoding of the data, and (2) it only applies to statistically secure constructions. Boyle and Naor showed that removing the ``balls in bins'' assumption would imply superlinear lower bounds for sorting circuits, a long-standing open problem in circuit complexity. As a way of circumventing this barrier, they also proposed the notion of an ``online'' ORAM, which is an ORAM that remains secure even if the operations arrive in an online manner. They argued that most known ORAM constructions work in the online setting as well. Our contribution is an $\Omega(\lg n)$ lower bound on the bandwidth overhead of any online ORAM, even if we require only computational security and allow arbitrary representations of data, thus greatly strengthening the lower bound of Goldreich and Ostrovsky in the online setting. Our lower bound applies to ORAMs with memory size $n$ and any word size $r \geq 1$. The bound therefore asymptotically matches the known upper bounds when $r = \Omega(\lg^2 n)$.
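
    One standard way to make the bandwidth-overhead metric precise (our phrasing of the informal definition above, not a quotation from the paper): if an ORAM answers every sequence of $M$ logical operations using at most $T$ probes to server memory, then its amortized bandwidth overhead is

        $\mathsf{overhead} = T / M$,

    and the theorem above states that $\mathsf{overhead} = \Omega(\lg n)$ for every online ORAM with memory size $n$ and any word size $r \geq 1$.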

    Computationally Data-Independent Memory Hard Functions

    Memory hard functions (MHFs) are an important cryptographic primitive that are used to design egalitarian proofs of work and in the construction of moderately expensive key-derivation functions resistant to brute-force attacks. Broadly speaking, MHFs can be divided into two categories: data-dependent memory hard functions (dMHFs) and data-independent memory hard functions (iMHFs). iMHFs are resistant to certain side-channel attacks, as the memory access pattern induced by the honest evaluation algorithm is independent of the potentially sensitive input, e.g., a password. While dMHFs are potentially vulnerable to side-channel attacks (the induced memory access pattern might leak useful information to a brute-force attacker), they can achieve higher cumulative memory complexity (CMC) than an iMHF. In particular, any iMHF that can be evaluated in N steps on a sequential machine has CMC at most O((N^2 log log N)/log N). By contrast, the dMHF scrypt achieves maximal CMC Ω(N^2) - though the CMC of scrypt would be reduced to just O(N) after a side-channel attack. In this paper, we introduce the notion of computationally data-independent memory hard functions (ciMHFs). Intuitively, we require that the memory access pattern induced by the (randomized) ciMHF evaluation algorithm appears to be independent from the standpoint of a computationally bounded eavesdropping attacker - even if the attacker selects the initial input. We then ask whether it is possible to circumvent the known upper bound for iMHFs and build a ciMHF with CMC Ω(N^2). Surprisingly, we answer the question in the affirmative when the ciMHF evaluation algorithm is executed on a two-tiered memory architecture (RAM/Cache). We introduce the notion of a k-restricted dynamic graph to quantify the continuum between unrestricted dMHFs (k=N) and iMHFs (k=1). For any ε > 0 we show how to construct a k-restricted dynamic graph with k=Ω(N^(1-ε)) that provably achieves maximum cumulative pebbling cost Ω(N^2). We can use k-restricted dynamic graphs to build a ciMHF, provided that the cache is large enough to hold k hash outputs and the dynamic graph satisfies a certain property that we call "amenable to shuffling". In particular, we prove that the induced memory access pattern is indistinguishable to a polynomial-time attacker who can monitor the locations of read/write requests to RAM, but not cache. We also show that when k = o(N^(1/log log N)), any k-restricted graph with constant indegree has cumulative pebbling cost o(N^2). Our results almost completely characterize the spectrum of k-restricted dynamic graphs.
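
    The CMC and pebbling bounds quoted above are phrased in terms of the cumulative cost of a black pebbling of a directed acyclic graph. The short Python sketch below (standard definitions, not the paper's constructions) checks that a pebbling is legal and sums the number of pebbles over all rounds.

        def is_legal_pebbling(parents, pebbling, targets):
            """parents[v] lists the predecessors of v; pebbling is a list of pebble sets, one per round."""
            prev = set()
            for step in pebbling:
                for v in step - prev:                        # newly placed pebbles this round
                    if not set(parents.get(v, [])) <= prev:  # all predecessors must already carry pebbles
                        return False
                prev = step
            return set(targets) <= prev

        def cumulative_cost(pebbling):
            # cumulative pebbling cost = total number of pebbles summed over all rounds
            return sum(len(step) for step in pebbling)

        # Toy DAG: a path 0 -> 1 -> 2 -> 3 (each node depends on the previous one).
        parents = {1: [0], 2: [1], 3: [2]}
        pebbling = [{0}, {0, 1}, {1, 2}, {2, 3}]             # keep only what the next round needs
        assert is_legal_pebbling(parents, pebbling, targets={3})
        print(cumulative_cost(pebbling))                     # 1 + 2 + 2 + 2 = 7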

    What Storage Access Privacy is Achievable with Small Overhead?

    Oblivious RAM (ORAM) and private information retrieval (PIR) are classic cryptographic primitives used to hide the access pattern to data whose storage has been outsourced to an untrusted server. Unfortunately, both primitives require considerable overhead compared to plaintext access. For large-scale storage infrastructure with highly frequent access requests, the degradation in response time and the exorbitant increase in resource costs incurred by either ORAM or PIR prevent their usage. In an ideal scenario, a privacy-preserving storage protocol with small overhead would be implemented for these heavily trafficked storage systems to avoid negatively impacting performance and/or costs. In this work, we study the problem of the best storage access privacy that is achievable with only small overhead over plaintext access. To answer this question, we consider differentially private access, which is a generalization of the oblivious access security notion considered by ORAM and PIR. Quite surprisingly, we present strong evidence that constant-overhead storage schemes may only be achieved with privacy budgets of $\epsilon = \Omega(\log n)$. We present asymptotically optimal constructions for differentially private variants of both ORAM and PIR with privacy budgets $\epsilon = \Theta(\log n)$ and only $O(1)$ overhead. In addition, we consider a more complex storage primitive called key-value storage, in which data is indexed by keys from a large universe (as opposed to consecutive integers in ORAM and PIR). We present a differentially private key-value storage scheme with $\epsilon = \Theta(\log n)$ and $O(\log\log n)$ overhead. This construction uses a new oblivious, two-choice hashing scheme that may be of independent interest.
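
    The key-value construction above relies on a new oblivious two-choice hashing scheme. The Python sketch below (our own, showing only the classic non-oblivious two-choice hashing) illustrates the underlying load-balancing idea: each key has two candidate buckets and is stored in the one that is currently less loaded, which keeps the maximum load O(log log n) with high probability when n keys are hashed into n buckets.

        import hashlib

        def _h(key, salt, m):
            digest = hashlib.sha256(f"{salt}:{key}".encode()).hexdigest()
            return int(digest, 16) % m

        class TwoChoiceTable:
            def __init__(self, m):
                self.m = m
                self.buckets = [[] for _ in range(m)]

            def insert(self, key, value):
                b1, b2 = _h(key, 1, self.m), _h(key, 2, self.m)
                target = b1 if len(self.buckets[b1]) <= len(self.buckets[b2]) else b2
                self.buckets[target].append((key, value))     # place in the less loaded bucket

            def lookup(self, key):
                for b in (_h(key, 1, self.m), _h(key, 2, self.m)):  # probe both candidates
                    for k, v in self.buckets[b]:
                        if k == key:
                            return v
                return None

        t = TwoChoiceTable(16)
        for i in range(32):
            t.insert(f"key{i}", i)
        assert t.lookup("key7") == 7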

    Lower Bounds for Multi-Server Oblivious RAMs

    In this work, we consider the construction of oblivious RAMs (ORAMs) in a setting with multiple servers where the adversary may corrupt a subset of the servers. We present an $\Omega(\log n)$ overhead lower bound for any $k$-server ORAM that limits any PPT adversary to a distinguishing advantage of at most $1/4k$ when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs, even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most $1/128$, as well as to scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues and search trees.

    Random-Index Oblivious RAM

    We study the notion of Random-index ORAM (RORAM), which is a weak form of ORAM where the client is limited to asking for (and possibly modifying) random elements of the $N$-item memory, rather than specific ones. That is, whenever the client issues a request, it gets in return a pair $(r, x_r)$ where $r \in_R [N]$ is a random index and $x_r$ is the content of the $r$-th memory item. Then, the client can also modify the content to some new value $x'_r$. We first argue that the limited functionality of RORAM still suffices for certain applications. These include various applications of sampling (or sub-sampling), and in particular the very-large-scale MPC application in the setting of Benhamouda et al. (TCC 2020). Clearly, RORAM can be implemented using any ORAM scheme (by having the client select the random $r$'s by itself), but the hope is that the limited functionality of RORAM can make it faster and easier to implement than ORAM. Indeed, our main contributions are several RORAM schemes (both of the hierarchical type and the tree type) of lighter complexity than that of ORAM.
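
    Since the abstract notes that a RORAM can always be obtained from any ORAM by letting the client sample the index itself, here is a minimal Python sketch of that generic fallback. The read/write interface of the underlying ORAM is an assumed placeholder (the stub below is not oblivious); the point of the paper is precisely to beat this baseline with lighter dedicated schemes.

        import random

        class RORAMFromORAM:
            def __init__(self, oram, n):
                self.oram, self.n = oram, n      # `oram` is any ORAM over n items (assumed read/write interface)

            def random_read(self):
                r = random.randrange(self.n)     # the client picks r itself, uniformly in [N]
                return r, self.oram.read(r)      # return the pair (r, x_r)

            def random_write(self, r, new_value):
                self.oram.write(r, new_value)    # optionally update x_r to a new value

        class _PlainMemoryStub:
            """Stand-in with the assumed interface; a real deployment would plug in an actual ORAM."""
            def __init__(self, n): self.data = [0] * n
            def read(self, i): return self.data[i]
            def write(self, i, v): self.data[i] = v

        ror = RORAMFromORAM(_PlainMemoryStub(10), 10)
        r, x_r = ror.random_read()
        ror.random_write(r, x_r + 1)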

    Differentially Oblivious Turing Machines

    Oblivious RAM (ORAM) is a machinery that protects any RAM computation from leaking information about its secret input to an adversary who observes only the access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns for every two "neighboring" logical access sequences satisfy (ε,δ)-differential privacy, is subject to a logarithmic lower bound. In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N · log log N) transitions in total and the sequence of physical head movements satisfies (ε,δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls and bins model). As a corollary, we show that there exist natural data structures such as stacks and queues (supporting online operations) on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally, even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters, where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls and bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.
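
    For concreteness, the differential-obliviousness guarantee used above can be written in the standard (ε,δ) form (a generic formulation of differential privacy adapted to access sequences, not quoted from the paper): for every two neighboring logical access sequences y and y' (differing in a single access) and every set S of possible physical head-movement sequences,

        Pr[ Access(y) ∈ S ] ≤ e^ε · Pr[ Access(y') ∈ S ] + δ,

    whereas full obliviousness requires the access-pattern distributions of all pairs of same-length logical sequences to be indistinguishable.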