
    Exploring Differential Obliviousness

    In a recent paper, Chan et al. [SODA '19] proposed a relaxation of the notion of (full) memory obliviousness, which was introduced by Goldreich and Ostrovsky [J. ACM '96] and has been extensively researched by cryptographers. The new notion, differential obliviousness, requires that any two neighboring inputs exhibit similar memory access patterns, where the similarity requirement is that of differential privacy. Chan et al. demonstrated that differential obliviousness allows improved efficiency for several algorithmic tasks, including sorting, merging of sorted lists, and range query data structures. In this work, we continue the exploration and mapping of differential obliviousness, focusing on algorithms that do not necessarily examine all their input. This choice is motivated by the fact that the existence of logarithmic-overhead ORAM protocols implies that differential obliviousness can yield at most a logarithmic improvement in efficiency for computations that need to examine all their input. In particular, we explore property testing, where we show that differential obliviousness yields an almost linear improvement in overhead in the dense graph model and an at most quadratic improvement in the bounded-degree model. We also explore tasks where a non-oblivious algorithm would need to examine different portions of the input, depending on the input itself; we show that such behavior can be maintained under differential obliviousness, but not under full obliviousness. Our examples suggest that there would be benefits in further exploring which classes of computational tasks are amenable to differential obliviousness.
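    For concreteness, the differential-privacy-style similarity of access patterns can be stated as follows; the symbols AP (the access-pattern random variable), M, ε, and δ are illustrative and not taken from the abstract:

        \Pr\bigl[\mathsf{AP}(M, x) \in S\bigr] \;\le\; e^{\varepsilon} \cdot \Pr\bigl[\mathsf{AP}(M, x') \in S\bigr] + \delta

    for every pair of neighboring inputs x, x', every set S of possible access patterns, and the algorithm M under consideration.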

    An adaptivity hierarchy theorem for property testing

    Adaptivity is known to play a crucial role in property testing. In particular, there exist properties for which there is an exponential gap between the power of adaptive testing algorithms, wherein each query may be determined by the answers received to prior queries, and their non-adaptive counterparts, in which all queries are independent of answers obtained from previous queries. In this work, we investigate the role of adaptivity in property testing at a finer level. We first quantify the degree of adaptivity of a testing algorithm by considering the number of "rounds of adaptivity" it uses. More precisely, we say that a tester is k-(round) adaptive if it makes queries in k+1 rounds, where the queries in the i-th round may depend on the answers obtained in the previous i-1 rounds. Then, we ask the following question: does the power of testing algorithms grow smoothly with the number of rounds of adaptivity? We provide a positive answer to this question by proving an adaptivity hierarchy theorem for property testing. Specifically, our main result shows that for every n ∈ ℕ and 0 ≤ k ≤ n^{0.99} there exists a property Π_{n,k} of functions for which (1) there exists a k-adaptive tester for Π_{n,k} with query complexity Õ(k), yet (2) any (k-1)-adaptive tester for Π_{n,k} must make Ω(n) queries. In addition, we show that such a qualitative adaptivity hierarchy can be witnessed for testing natural properties of graphs.
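    As a minimal illustration of the round structure defined above (a sketch in Python; the callbacks choose_queries and decide are hypothetical placeholders, not the testers constructed in the paper):

        def k_adaptive_tester(oracle, n, k, choose_queries, decide):
            """A k-(round) adaptive tester issues queries in k+1 rounds;
            each round's queries may depend on all earlier answers."""
            transcript = []  # (query, answer) pairs from all rounds so far
            for rnd in range(k + 1):
                # Queries of round rnd may depend only on the transcript of
                # previous rounds; within a round they are non-adaptive.
                queries = choose_queries(rnd, n, transcript)
                answers = [oracle(q) for q in queries]
                transcript.extend(zip(queries, answers))
            return decide(transcript)  # accept/reject from the full transcript

    A 0-adaptive tester under this interface makes all of its queries in a single round, recovering the usual non-adaptive case.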

    On sample-based testers

    The standard definition of property testing endows the tester with the ability to make arbitrary queries to “elements” of the tested object. In contrast, sample-based testers only obtain independently distributed elements (a.k.a. labeled samples) of the tested object. While sample-based testers were defined by Goldreich, Goldwasser, and Ron (J. ACM 1998), most research in property testing is focused on query-based testers. In this work, we advance the study of sample-based property testers by providing several general positive results as well as by revealing relations between variants of this testing model. In particular:
    • We show that certain types of query-based testers yield sample-based testers of sublinear sample complexity. For example, this holds for a natural class of proximity-oblivious testers.
    • We study the relation between distribution-free sample-based testers and one-sided error sample-based testers w.r.t. the uniform distribution.
    While most of this work ignores the time complexity of testing, one part of it does focus on this aspect. The main result in this part is a sublinear-time sample-based tester for k-Colorability, for any k ≥ 2.
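    The distinction between the two access models can be sketched as follows (illustrative Python; the helper names are ours, not from the paper):

        import random

        def query_access(f):
            # Query model: the tester chooses x and learns f(x).
            return lambda x: f(x)

        def sample_access(f, domain, rng=random):
            # Sample model: each call returns a labeled sample (x, f(x))
            # for a random x; the tester cannot choose which x it sees.
            def draw():
                x = rng.choice(domain)
                return (x, f(x))
            return draw

    A sample-based tester would be given only draw() and a proximity parameter, never direct query access to f.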

    On Proximity-Oblivious Testing


    Analyzing massive datasets with missing entries: models and algorithms

    We initiate a systematic study of computational models for analyzing algorithms on massive datasets with missing or erased entries, and study the relationship of our models to existing algorithmic models for large datasets. We focus on algorithms whose inputs are naturally represented as functions, codewords, or graphs. First, we generalize the property testing model, one of the most widely studied models of sublinear-time algorithms, to account for the presence of adversarially erased function values. We design efficient erasure-resilient property testing algorithms for several fundamental properties of real-valued functions, such as monotonicity, the Lipschitz property, convexity, and linearity. We then investigate the problems of local decoding and local list decoding of codewords containing erasures. We show that, in some cases, these problems are strictly easier than the corresponding problems of decoding codewords containing errors. Moreover, we use this understanding to show a separation between our erasure-resilient property testing model and the (error-)tolerant property testing model. The philosophical message of this separation is that errors occurring in large datasets are, in general, harder to deal with than erasures. Finally, we develop models and notions for reasoning about algorithms that are intended to run on large graphs with missing edges. When running such algorithms, it is desirable to output solutions that are close to the solutions output when no edges are missing. With this motivation, we define average sensitivity, a robustness metric for graph algorithms. We discuss various useful features of our definition and design approximation algorithms with good average sensitivity bounds for several optimization problems on graphs. We also define a model of erasure-resilient sublinear-time graph algorithms and design an efficient algorithm for testing the connectivity of graphs.
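    A minimal sketch of the erasure-resilient model, assuming erased entries are marked by a sentinel value (the naive pair-sampling check below only illustrates the interface and is not an algorithm from this work):

        import random

        ERASED = None  # sentinel for adversarially erased values

        def naive_erasure_resilient_sortedness(values, trials=200, rng=random):
            """Reject only on a sortedness violation witnessed among
            non-erased entries; erased entries carry no information."""
            n = len(values)
            for _ in range(trials):
                i, j = sorted(rng.sample(range(n), 2))
                if values[i] is ERASED or values[j] is ERASED:
                    continue  # skip pairs touching erased points
                if values[i] > values[j]:
                    return False  # violation among non-erased values
            return True  # no violation found; accept

    The acceptance condition reflects the model: a tester must accept whenever the non-erased values can be completed to a function with the property.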