
    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
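
    As a concrete illustration of the randomness connection mentioned above, the following is a minimal sketch of randomized polynomial identity testing via the Schwartz-Zippel lemma (our example, not part of the seminar report): a nonzero polynomial of total degree d vanishes at a uniformly random point drawn from S^n with probability at most d/|S|.

```python
import random

def is_identically_zero(poly, num_vars, degree, trials=20):
    """Randomized polynomial identity test (Schwartz-Zippel).

    `poly` is a black box evaluating the polynomial at an integer point.
    If the polynomial is identically zero this always returns True; if it
    is nonzero, each trial errs with probability at most degree/sample_size.
    """
    sample_size = 100 * degree  # sample set much larger than the degree
    for _ in range(trials):
        point = [random.randrange(sample_size) for _ in range(num_vars)]
        if poly(point) != 0:
            return False  # witness found: definitely nonzero
    return True  # identically zero with high probability

# Example: (x + y)^2 - (x^2 + 2xy + y^2) is identically zero.
p = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)
print(is_identically_zero(p, num_vars=2, degree=2))  # True
```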

    Lower Bounds on Quantum Query Complexity

    Shor's and Grover's famous quantum algorithms for factoring and searching show that quantum computers can solve certain computational problems significantly faster than any classical computer. We discuss here what quantum computers cannot do, and specifically how to prove limits on their computational power. We cover the main known techniques for proving lower bounds, and exemplify and compare the methods.
    Comment: survey, 23 pages
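
    To make the query model concrete, here is a small sketch (our illustration; the oracle class and names are not from the survey) contrasting classical and quantum query counts for search: deciding OR_n classically requires n queries in the worst case, while Grover's algorithm needs only O(sqrt(n)) quantum queries, and the techniques surveyed here prove that this is optimal.

```python
import math

class CountingOracle:
    """Black-box access to a bit string that counts the queries made."""
    def __init__(self, bits):
        self.bits, self.queries = bits, 0

    def query(self, i):
        self.queries += 1
        return self.bits[i]

def classical_or(oracle, n):
    """Evaluate OR_n with classical queries. On the all-zeros input every
    position must be read, so the worst case costs exactly n queries."""
    return any(oracle.query(i) for i in range(n))

n = 1024
oracle = CountingOracle([0] * n)  # adversarial all-zeros input
print(classical_or(oracle, n), oracle.queries)  # False 1024
# Grover's algorithm would decide the same instance with on the order of
# isqrt(n) = 32 quantum queries; the lower-bound methods surveyed above
# show that Omega(sqrt(n)) quantum queries are necessary.
print(math.isqrt(n))  # 32
```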

    Graph Properties in Node-Query Setting: Effect of Breaking Symmetry

    The query complexity of graph properties is well-studied when queries are on the edges. We investigate the same question when queries are on the nodes. In this setting a graph G = (V,E) on n vertices and a property P are given. Black-box access to an unknown subset S of V is provided via queries of the form "Does i belong to S?". We are interested in the minimum number of queries needed in the worst case to determine whether G[S] - the subgraph of G induced on S - satisfies P. Our primary motivation for studying this model comes from the fact that it allows us to initiate a systematic study of breaking symmetry in the context of the query complexity of graph properties. In particular, we focus on hereditary graph properties - properties that are closed under deletion of vertices as well as edges. The famous Evasiveness Conjecture asserts that even with a minimal symmetry assumption on G, namely vertex-transitivity, the query complexity for any hereditary graph property in our setting is the worst possible, i.e., n. We show that in the absence of any symmetry on G it can fall as low as O(n^{1/(d + 1)}), where d denotes the minimum possible degree of a minimal forbidden subgraph for P. In particular, every hereditary property benefits at least quadratically. The main question left open is: can it go exponentially low for some hereditary property? We show that the answer is no for any hereditary property with finitely many forbidden subgraphs, by exhibiting a bound of Omega(n^{1/k}) for a constant k depending only on the property. For general hereditary properties we rule out the possibility of the query complexity falling to a constant by showing an Omega(log(n)*log(log(n))) lower bound. Interestingly, our lower bound proofs rely on the famous Sunflower Lemma of Erdős and Rado.
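
    A minimal sketch of the node-query model described above (class and function names are ours): the algorithm knows G and P in advance, learns S only through membership queries, and pays one unit per query. The naive strategy below spends the worst-possible n queries; the paper's point is that for asymmetric G far fewer can suffice.

```python
from itertools import combinations

class MembershipOracle:
    """Black-box access to the hidden subset S; counts queries made."""
    def __init__(self, hidden_set):
        self.S, self.queries = set(hidden_set), 0

    def query(self, i):  # "Does i belong to S?"
        self.queries += 1
        return i in self.S

def induced_subgraph_is_triangle_free(n, edges, oracle):
    """Learn S with n queries, then test the hereditary property
    'triangle-free' on the induced subgraph G[S]."""
    in_S = [oracle.query(i) for i in range(n)]
    members = [i for i in range(n) if in_S[i]]
    sub = {tuple(sorted(e)) for e in edges if in_S[e[0]] and in_S[e[1]]}
    return not any({(a, b), (a, c), (b, c)} <= sub
                   for a, b, c in combinations(members, 3))

# G = K_4 on vertices 0..3; the hidden set {0, 1, 2} induces a triangle.
edges = list(combinations(range(4), 2))
oracle = MembershipOracle({0, 1, 2})
print(induced_subgraph_is_triangle_free(4, edges, oracle), oracle.queries)
# False 4
```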

    A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers

    We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers. The ε-approximate junta complexity of a function f is the smallest integer r such that f is ε-close to a function that depends only on r variables. A strong composition theorem states that if f has large ε-approximate junta complexity, then g∘f has even larger ε′-approximate junta complexity, even for ε′ ≫ ε. We develop a fairly complete understanding of this behavior, proving that the junta complexity of g∘f is characterized by that of f along with the multivariate noise sensitivity of g. For the important case of symmetric functions g, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity. We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics.
    Comment: 44 pages, 1 figure, FOCS 2023
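
    For intuition about the quantity being composed, here is a brute-force sketch (ours; exponential-time and only for tiny n) of ε-approximate junta complexity: for each candidate set J of r variables, the best J-junta predicts, for every setting of the variables in J, the majority value of f over the remaining variables.

```python
from itertools import combinations, product

def junta_complexity(f, n, eps):
    """Smallest r such that f: {0,1}^n -> {0,1} is eps-close to an
    r-junta. Brute force over all variable subsets; exponential in n."""
    points = list(product([0, 1], repeat=n))
    for r in range(n + 1):
        for J in combinations(range(n), r):
            # Best junta on J: take the majority vote of f over each
            # assignment to the variables in J.
            buckets = {}
            for x in points:
                buckets.setdefault(tuple(x[i] for i in J), []).append(f(x))
            errors = sum(min(vals.count(0), vals.count(1))
                         for vals in buckets.values())
            if errors <= eps * len(points):
                return r
    return n

# Example: f = x0 XOR x1 ignores x2, so it is exactly a 2-junta.
f = lambda x: x[0] ^ x[1]
print(junta_complexity(f, n=3, eps=0.0))  # 2
```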

    The Information Complexity of Hamming Distance

    The Hamming distance function Ham_{n,d} returns 1 on all pairs of inputs x and y that differ in at most d coordinates and returns 0 otherwise. We initiate the study of the information complexity of the Hamming distance function. We give a new optimal lower bound for the information complexity of the Ham_{n,d} function in the small-error regime where the protocol is required to err with probability at most epsilon < d/n. We also give a new conditional lower bound for the information complexity of Ham_{n,d} that is optimal in all regimes. These results imply the first new lower bounds on the communication complexity of the Hamming distance function for the shared-randomness two-way communication model since Pang and El-Gamal (1986). These results also imply new lower bounds in the areas of property testing and parity decision tree complexity.
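
    For concreteness, a direct reference implementation of the function under study (a sketch; in the communication setting Alice holds x and Bob holds y):

```python
def ham(x, y, d):
    """Ham_{n,d}: returns 1 if x and y differ in at most d coordinates,
    and 0 otherwise."""
    assert len(x) == len(y)
    return 1 if sum(a != b for a, b in zip(x, y)) <= d else 0

print(ham([0, 1, 1, 0], [0, 1, 0, 0], d=1))  # 1: one coordinate differs
print(ham([0, 1, 1, 0], [1, 1, 0, 0], d=1))  # 0: two coordinates differ
```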

    Sublinear Estimation of Weighted Matchings in Dynamic Data Streams

    This paper presents an algorithm for estimating the weight of a maximum weighted matching by augmenting any estimation routine for the size of an unweighted matching. The algorithm is implementable in any streaming model, including dynamic graph streams. We also give the first constant-factor estimation for the maximum matching size in a dynamic graph stream for planar graphs (or any graph with bounded arboricity) using Õ(n^{4/5}) space, which also extends to weighted matching. Using previous results by Kapralov, Khanna, and Sudan (2014), we obtain a polylog(n) approximation for general graphs using polylog(n) space in random-order streams. In addition, we give a space lower bound of Ω(n^{1−ε}) for any randomized algorithm estimating the size of a maximum matching up to a 1 + O(ε) factor for adversarial streams.
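
    One standard way to turn an unweighted matching-size estimator into a weighted-matching estimator (a hedged sketch of the generic geometric-bucketing idea; the paper's actual augmentation may differ) is to estimate the matching size among edges above each geometric weight threshold and combine the per-level estimates.

```python
import math

def estimate_weighted_matching(edges, size_estimator, base=2.0):
    """Sketch: for each level i, estimate the maximum matching size
    among edges of weight >= base^i, and sum (base - 1) * base^i times
    that estimate. The sum is within a constant factor (depending on
    `base`) of the maximum weighted matching's weight.

    edges: list of (u, v, w) tuples with weights w >= 1.
    size_estimator: any routine estimating the maximum *unweighted*
    matching size of an edge list, e.g. a streaming estimator.
    """
    if not edges:
        return 0.0
    top = int(math.log(max(w for _, _, w in edges), base))
    return sum((base - 1) * base ** i *
               size_estimator([(u, v) for (u, v, w) in edges
                               if w >= base ** i])
               for i in range(top + 1))

def exact_size(edge_list):
    """Exact maximum matching size by brute force (toy stand-in)."""
    best = 0
    def rec(rem, used, k):
        nonlocal best
        best = max(best, k)
        for j, (u, v) in enumerate(rem):
            if u not in used and v not in used:
                rec(rem[j + 1:], used | {u, v}, k + 1)
    rec(list(edge_list), set(), 0)
    return best

print(estimate_weighted_matching([(0, 1, 4), (2, 3, 1)], exact_size))
# 8.0, vs. true maximum weight 5: within a constant factor
```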