2,255 research outputs found
Fast Algorithms for Parameterized Problems with Relaxed Disjointness Constraints
In parameterized complexity, it is a natural idea to consider different
generalizations of classic problems. Usually, such generalizations are obtained
by introducing a "relaxation" variable, where the original problem corresponds
to setting this variable to a constant value. For instance, the problem of
packing sets of size at most r into a given universe generalizes the Maximum
Matching problem, which is recovered by taking r = 2. Most often, the
complexity of the problem increases with the relaxation variable, but very
recently Abasi et al. have given a surprising example of a problem ---
r-Simple k-Path --- that can be solved by a randomized algorithm with
running time 2^{O(k log(r)/r)}·poly(n). That is, the complexity of the
problem decreases with r. In this paper we pursue further the direction
sketched by Abasi et al. Our main contribution is a derandomization tool that
provides a deterministic counterpart of the main technical result of Abasi et
al.: the 2^{O(k log(r)/r)}·poly(n)-time algorithm for (r,k)-Monomial
Detection, which is the problem of finding a monomial of total degree k and
individual degrees at most r in a polynomial given as an arithmetic circuit.
Our technique works for a large class of circuits, and in particular it can be
used to derandomize the result of Abasi et al. for r-Simple k-Path. On our
way to this result we introduce the notion of representative sets for
multisets, which may be of independent interest. Finally, we give two more
examples of problems that were already studied in the literature, where the
same relaxation phenomenon happens. The first one is a natural relaxation of
the Set Packing problem, where we allow the packed sets to overlap at each
element at most r times. The second one is Degree Bounded Spanning Tree,
where we seek a spanning tree of the graph with small maximum degree.
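The surprising direction of this abstract --- the running-time bound improving as the relaxation grows --- can be previewed numerically. A minimal sketch, assuming the 2^{O(k log(r)/r)} bound quoted above (hidden constant factors omitted; the function name is my own):

```python
import math

def exponent(k: int, r: int) -> float:
    """Exponent in the 2^{O(k*log(r)/r)} running-time bound for
    r-Simple k-Path (hidden constant factors omitted)."""
    return k * math.log2(r) / r

# For fixed path length k, the exponent shrinks as the relaxation r
# grows, so the asymptotic running-time bound improves with r.
k = 100
exponents = {r: exponent(k, r) for r in (4, 16, 64, 256)}
```

For example, exponent(100, 4) = 50 while exponent(100, 256) = 3.125, matching the claim that the complexity decreases with r.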
On Hitting-Set Generators for Polynomials That Vanish Rarely
The problem of constructing hitting-set generators for polynomials of low degree is fundamental in complexity theory and has numerous well-known applications. We study the following question, which is a relaxation of this problem: Is it easier to construct a hitting-set generator for polynomials p: 𝔽^n → 𝔽 of degree d if we are guaranteed that the polynomial vanishes on at most an ε > 0 fraction of its inputs? We will specifically be interested in tiny values of ε ≪ d/|𝔽|. This question was first considered by Goldreich and Wigderson (STOC 2014), who studied a specific setting geared for a particular application, and another specific setting was later studied by the third author (CCC 2017).
In this work our main interest is a systematic study of the relaxed problem, in its general form, and we prove results that significantly improve and extend the two previously-known results. Our contributions are of two types:
- Over fields of size 2 ≤ |𝔽| ≤ poly(n), we show that the seed length of any hitting-set generator for polynomials of degree d ≤ n^{0.49} that vanish on at most ε = |𝔽|^{-t} of their inputs is at least Ω((d/t)·log(n)).
- Over 𝔽₂, we show that there exists a (non-explicit) hitting-set generator for polynomials of degree d ≤ n^{0.99} that vanish on at most ε = |𝔽|^{-t} of their inputs with seed length O((d−t)·log(n)). We also show a polynomial-time computable hitting-set generator with seed length O((d−t)·(2^{d−t}+log(n))).
In addition, we prove that the problem we study is closely related to the following question: "Does there exist a small set S ⊆ 𝔽^n whose degree-d closure is very large?", where the degree-d closure of S is the variety induced by the set of degree-d polynomials that vanish on S.
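To make the "degree-d closure" in the last paragraph concrete: over 𝔽₂, a point lies in the degree-d closure of S exactly when its vector of monomial evaluations lies in the 𝔽₂-span of the evaluation vectors of the points of S. A small brute-force sketch for tiny parameters (function names are my own, not from the paper):

```python
from itertools import combinations, product

def monomials(n, d):
    # All multilinear monomials of degree <= d, as tuples of variable indices.
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def eval_bits(x, mons):
    # Evaluations of every monomial at x, packed into an integer bitmask.
    v = 0
    for j, m in enumerate(mons):
        if all(x[i] for i in m):
            v |= 1 << j
    return v

def reduce_mod(basis, v):
    # Reduce v against an xor-basis (dict: leading bit -> vector) over F_2.
    while v:
        h = v.bit_length() - 1
        if h not in basis:
            return v
        v ^= basis[h]
    return 0

def degree_closure(S, n, d):
    # x is in the degree-d closure of S iff every degree-<=d polynomial
    # vanishing on S also vanishes at x, i.e. iff the evaluation vector
    # of x lies in the F_2-rowspace of the evaluation vectors of S.
    mons = monomials(n, d)
    basis = {}
    for s in S:
        v = reduce_mod(basis, eval_bits(s, mons))
        if v:
            basis[v.bit_length() - 1] = v
    return [x for x in product((0, 1), repeat=n)
            if reduce_mod(basis, eval_bits(x, mons)) == 0]

S = [(0, 0, 0), (1, 1, 0), (1, 0, 1)]
closure = degree_closure(S, n=3, d=1)
```

Here the degree-1 closure of the 3-point set S is its affine span, which has 4 points: a tiny instance of a set whose closure is strictly larger than the set itself.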
Simulation Theorems via Pseudorandom Properties
We generalize the deterministic simulation theorem of Raz and McKenzie
[RM99] to any gadget which satisfies a certain hitting property. We prove that
inner-product and gap-Hamming satisfy this property, and as a corollary we
obtain a deterministic simulation theorem for these gadgets, where the gadget's
input-size is logarithmic in the input-size of the outer function. This answers
an open question posed by G\"{o}\"{o}s, Pitassi and Watson [GPW15]. Our result
also implies the previous results for the Indexing gadget, with better
parameters than were previously known. A preliminary version of the results
obtained in this work appeared in [CKL+17].
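For context, a simulation (lifting) theorem relates the communication complexity of a composed function F(x, y) = f(g(x₁, y₁), …, g(xₙ, yₙ)) to the query complexity of the outer function f. A minimal sketch of this composition with the inner-product gadget (names are my own; the theorem itself is about proving lower bounds for such F, not about computing it):

```python
def inner_product_gadget(x, y):
    # IP_b(x, y) = <x, y> mod 2 on b-bit blocks.
    return sum(a * b for a, b in zip(x, y)) % 2

def lift(f, gadget, xs, ys):
    # Composed function F(x, y) = f(g(x_1, y_1), ..., g(x_n, y_n)),
    # where Alice holds the blocks xs and Bob holds the blocks ys.
    return f([gadget(x, y) for x, y in zip(xs, ys)])

# Outer function: parity of n bits; each gadget acts on blocks of
# size b, which the theorem takes to be logarithmic in n.
parity = lambda bits: sum(bits) % 2
xs = [[1, 0, 1], [1, 1, 0]]
ys = [[1, 1, 1], [0, 1, 1]]
value = lift(parity, inner_product_gadget, xs, ys)
```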
Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics
Quantum computing is powerful because unitary operators describing the
time-evolution of a quantum system have exponential size in terms of the number
of qubits present in the system. We develop a new "Singular value
transformation" algorithm capable of harnessing this exponential advantage,
that can apply polynomial transformations to the singular values of a block of
a unitary, generalizing the optimal Hamiltonian simulation results of Low and
Chuang. The proposed quantum circuits have a very simple structure, often give
rise to optimal algorithms, and have appealing constant factors, while usually
using only a constant number of ancilla qubits. We show that singular value
transformation leads to novel algorithms. We give an efficient solution to a
certain "non-commutative" measurement problem and propose a new method for
singular value estimation. We also show how to exponentially improve the
complexity of implementing fractional queries to unitaries with a gapped
spectrum. Finally, as a quantum machine learning application we show how to
efficiently implement principal component regression. "Singular value
transformation" is conceptually simple and efficient, and leads to a unified
framework of quantum algorithms incorporating a variety of quantum speed-ups.
We illustrate this by showing how it generalizes a number of prominent quantum
algorithms, including: optimal Hamiltonian simulation, implementing the
Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude
amplification, robust oblivious amplitude amplification, fast QMA
amplification, fast quantum OR lemma, certain quantum walk results and several
quantum machine learning algorithms. In order to exploit the strengths of the
presented method it is useful to know its limitations too, therefore we also
prove a lower bound on the efficiency of singular value transformation, which
often gives optimal bounds.
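The kind of polynomial transformation described above can be previewed in the smallest possible case: a 2×2 rotation "block-encodes" a single value c in its corner, and taking powers of the rotation applies Chebyshev polynomials to c. A toy classical sketch (my own function names; the actual result handles arbitrary blocks of unitaries and far more general polynomials):

```python
import math

def rotation(c):
    # 2x2 unitary whose top-left entry block-encodes the value c.
    s = math.sqrt(1 - c * c)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def chebyshev(k, c):
    # Chebyshev polynomial of the first kind: T_k(cos t) = cos(k*t).
    return math.cos(k * math.acos(c))

c = 0.3
U = rotation(c)
Uk = U
for _ in range(4):       # compute U^5
    Uk = matmul(Uk, U)
transformed = Uk[0][0]   # the encoded value, polynomially transformed
```

Since U rotates by θ with cos θ = c, its fifth power rotates by 5θ, so the corner entry equals T₅(c): the block-encoded value has been sent through a degree-5 polynomial using only applications of U.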
Polynomial Identity Testing for Low Degree Polynomials with Optimal Randomness
We give a randomized polynomial time algorithm for polynomial identity testing for the class of n-variate polynomials of degree bounded by d over a field 𝔽, in the blackbox setting.
Our algorithm works for every field 𝔽 with |𝔽| ≥ d+1, and uses only d log n + log(1/ε) + O(d log log n) random bits to achieve success probability 1 − ε for ε > 0. In the low degree regime, that is d ≪ n, it hits the information theoretic lower bound and differs from it only in the lower order terms. The previous best known algorithms (Guruswami-Xing, CCC'14, and Bshouty, ITCS'14) use a number of random bits that is a constant factor away from our bound. Like Bshouty, we use Sidon sets for our algorithm. However, we use a new construction of Sidon sets to achieve the improved bound.
We also collect two simple constructions of hitting sets of information theoretically optimal size against the class of n-variate, degree-d polynomials. Our contribution is that we give new, very simple proofs for both constructions.
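As background for the randomness bounds above: the textbook blackbox test evaluates the polynomial at a random point of a grid S^n with |S| ≥ d+1, and the Schwartz-Zippel lemma bounds the failure probability by d/|S| per trial. A minimal sketch (this baseline spends roughly n·log|S| random bits per trial, which is what constructions like the abstract's improve on; names are my own):

```python
import random

def blackbox_pit(poly, n, sample_set, trials=10):
    # Schwartz-Zippel test: a nonzero n-variate polynomial of total
    # degree d vanishes at a uniform point of sample_set^n with
    # probability at most d/|sample_set|.
    for _ in range(trials):
        point = [random.choice(sample_set) for _ in range(n)]
        if poly(point) != 0:
            return "nonzero"
    return "probably zero"

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
zero_poly = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
nonzero_poly = lambda v: v[0] * v[1] - 1
sample_set = list(range(3))   # |S| = d + 1 = 3 for degree d = 2
verdict = blackbox_pit(zero_poly, n=2, sample_set=sample_set)
```

The identically zero polynomial always yields "probably zero", while a nonzero polynomial is caught with probability at least 1 − (d/|S|) per trial.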
Algebraic Methods in Computational Complexity
Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test
are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest new techniques.
- …