    Exploring Differential Obliviousness

    In a recent paper, Chan et al. [SODA '19] proposed a relaxation of the notion of (full) memory obliviousness, which was introduced by Goldreich and Ostrovsky [J. ACM '96] and extensively researched by cryptographers. The new notion, differential obliviousness, requires that any two neighboring inputs exhibit similar memory access patterns, where the similarity requirement is that of differential privacy. Chan et al. demonstrated that differential obliviousness allows achieving improved efficiency for several algorithmic tasks, including sorting, merging of sorted lists, and range query data structures. In this work, we continue the exploration of differential obliviousness, focusing on algorithms that do not necessarily examine all their input. This choice is motivated by the fact that the existence of logarithmic-overhead ORAM protocols implies that differential obliviousness can yield at most a logarithmic improvement in efficiency for computations that need to examine all their input. In particular, we explore property testing, where we show that differential obliviousness yields an almost linear improvement in overhead in the dense graph model, and at most a quadratic improvement in the bounded degree model. We also explore tasks where a non-oblivious algorithm would need to explore different portions of the input, where the latter would depend on the input itself, and we show that such behavior can be maintained under differential obliviousness, but not under full obliviousness. Our examples suggest that there would be benefits in further exploring which classes of computational tasks are amenable to differential obliviousness.
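The similarity requirement above is that of differential privacy: for neighboring inputs x and x', the probability of any observed access pattern may differ by at most a factor of e^ε (plus δ). The following is a minimal toy sketch, not taken from the paper: a hypothetical `noisy_access_pattern` routine uses randomized response over two memory buckets so that neighboring inputs induce statistically close access distributions, and the empirical probability ratio is checked against e^ε.

```python
import math
import random
from collections import Counter

def noisy_access_pattern(x, epsilon, rng):
    """Toy epsilon-DP access pattern: touch the input's 'true' bucket
    with probability e^eps / (e^eps + 1), otherwise the other bucket
    (standard randomized response over two buckets)."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    if rng.random() < p_truth:
        return x % 2          # the bucket the input would really touch
    return (x + 1) % 2        # the decoy bucket

def empirical_ratio(x, x_prime, epsilon, trials=200_000, seed=0):
    """Largest ratio of access-pattern probabilities between the two
    neighboring inputs, estimated over both buckets."""
    rng = random.Random(seed)
    cx = Counter(noisy_access_pattern(x, epsilon, rng) for _ in range(trials))
    cxp = Counter(noisy_access_pattern(x_prime, epsilon, rng) for _ in range(trials))
    return max(cx[b] / max(cxp[b], 1) for b in (0, 1))

# Neighboring inputs 5 and 6 map to different buckets; the observed
# ratio should hover near e^epsilon, the DP bound.
ratio = empirical_ratio(5, 6, epsilon=1.0)
print(ratio, math.exp(1.0))
```

Under full obliviousness the two distributions would have to be identical; differential obliviousness relaxes this to the bounded ratio, which is exactly what buys the efficiency gains discussed above.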

    Differentially private queries in crowdsourced databases for net neutrality violations detection

    Lawmakers and regulatory bodies around the world are asserting Network Neutrality as a fundamental property of broadband Internet access. Since neutrality implies a comparison between different users and different ISPs, this opens the question of how to measure net neutrality in a privacy-friendly manner. This work describes a system in which users convey throughput measurements for the different services they use to a crowdsourced database and submit queries testing their measurements against the hypothesis of a neutral network. The use of crowdsourced databases poses potential privacy problems, because users submit data that may disclose information about their own habits. This leaves the door open to information leakage regarding the content of the measurement database. Randomized sampling and suppression of small clusters can provide a good tradeoff between usefulness of the system, in terms of precision and recall of discriminated users, and privacy, in terms of differential privacy.
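The two mitigations named at the end of the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's system: the record fields (`isp`, `service`, `mbps`), the `privatize_reports` helper, and the thresholds are all hypothetical. Reports are first subsampled at random, then any (ISP, service) cluster with too few remaining contributors is suppressed entirely, so small and therefore identifying groups never appear in query results.

```python
import random
from collections import Counter

def privatize_reports(reports, sample_rate=0.5, min_cluster=5, seed=0):
    """Hypothetical sketch of the two mitigations: (1) randomized
    sampling of user reports, (2) suppression of (ISP, service)
    clusters smaller than `min_cluster` contributors."""
    rng = random.Random(seed)
    sampled = [r for r in reports if rng.random() < sample_rate]
    counts = Counter((r["isp"], r["service"]) for r in sampled)
    return [r for r in sampled
            if counts[(r["isp"], r["service"])] >= min_cluster]

# 30 reports for ISP A's video service, but only 3 for ISP B's VoIP
# service: the tiny ISP-B cluster is suppressed before release.
reports = ([{"isp": "A", "service": "video", "mbps": 20}] * 30
           + [{"isp": "B", "service": "voip", "mbps": 1}] * 3)
released = privatize_reports(reports)
assert all(r["isp"] == "A" for r in released)
```

Sampling adds uncertainty about whether any individual contributed at all, while cluster suppression removes the rare measurement groups that would otherwise single users out; together they trade some precision and recall in detecting discriminated users for the privacy guarantee.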