Sums of products of polynomials in few variables : lower bounds and polynomial identity testing
We study the complexity of representing polynomials as a sum of products of
polynomials in few variables. More precisely, we study representations of the
form $P = \sum_{i=1}^{T} \prod_{j=1}^{d} Q_{ij}$ such that each $Q_{ij}$ is
an arbitrary polynomial that depends on at most $s$ variables. We prove the
following results.
1. Over fields of characteristic zero, for every constant $\mu$ such that $0 \le \mu < 1$, we give an explicit family of polynomials $\{P_N\}$, where
$P_N$ is of degree $n$ in $N = n^{O(1)}$ variables, such that any
representation of the above type for $P_N$ with $s = N^{\mu}$ requires $Td \ge n^{\Omega(\sqrt{n})}$. This strengthens a recent result of Kayal and Saha
[KS14a], which showed similar lower bounds for the model of sums of products of
linear forms in few variables. It is known that any asymptotic improvement in
the exponent of the lower bound (even for $s = \sqrt{n}$) would separate VP
and VNP [KS14a].
2. We obtain a deterministic subexponential time blackbox polynomial identity
testing (PIT) algorithm for circuits computed by the above model when $T$ and
the individual degree of each variable in $P$ are at most $\log^{O(1)} N$ and
$s \le N^{\mu}$ for any constant $\mu < 1/2$. We get quasipolynomial running
time when $s \le \log^{O(1)} N$. The PIT algorithm is obtained by combining our
lower bounds with the hardness-randomness tradeoffs developed in [DSY09, KI04].
To the best of our knowledge, this is the first nontrivial PIT algorithm for
this model (even for the case $s = 2$), and the first nontrivial PIT algorithm
obtained from lower bounds for small depth circuits.
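The randomized baseline that such blackbox PIT results derandomize is evaluation at random points, justified by the Schwartz-Zippel lemma: a nonzero polynomial of degree d over a field of size P vanishes at a uniform point with probability at most d/P. A minimal sketch; the field size, trial count, and example polynomials are illustrative, not from the paper:

```python
import random

# Blackbox PIT via the Schwartz-Zippel lemma: a nonzero polynomial of
# degree d over F_P vanishes at a uniform random point with probability
# at most d/P, so repeated random evaluation detects nonzero polynomials
# with high probability.

P = 1_000_003  # a prime, assumed much larger than the degree

def is_zero_poly(blackbox, num_vars, trials=20):
    """Accept iff every random evaluation is zero mod P."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(num_vars)]
        if blackbox(point) % P != 0:
            return False  # a nonzero evaluation certifies non-identity
    return True

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not.
f = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
g = lambda v: v[0]*v[1] - 1
assert is_zero_poly(f, 2)
assert not is_zero_poly(g, 2)
```

Deterministic PIT, as in the result above, replaces the random points by a fixed, explicitly constructed set of evaluation points.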
Weakening Assumptions for Deterministic Subexponential Time Non-Singular Matrix Completion
In (Kabanets, Impagliazzo, 2004) it is shown how to decide the circuit
polynomial identity testing problem (CPIT) in deterministic subexponential
time, assuming hardness of some explicit multilinear polynomial family for
arithmetical circuits. In this paper, a special case of CPIT is considered,
namely low-degree non-singular matrix completion (NSMC). For this subclass of
problems it is shown how to obtain the same deterministic time bound, using a
weaker assumption in terms of determinantal complexity.
Hardness-randomness tradeoffs will also be shown in the converse direction,
in an effort to make progress on Valiant's VP versus VNP problem. To separate
VP and VNP, it is known to be sufficient to prove that the determinantal
complexity of the m-by-m permanent is $m^{\omega(\log m)}$. In this paper it is
shown, for an appropriate notion of explicitness, that the existence of an
explicit multilinear polynomial family $\{G_n\}$ with determinantal complexity
$n^{\omega(\log n)}$ follows from a deterministic $2^{O(n^{1/\sqrt{\log n}})}$-time algorithm for NSMC on matrices $M(x)$ of size $\mathrm{poly}(n)$ in $n$ variables for which $\det(M(x))$ is a multilinear polynomial.
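For context, NSMC reduces to PIT for determinants: a partially symbolic matrix M(x) has a non-singular completion over a sufficiently large field iff det(M(x)) is not identically zero, so the randomized baseline substitutes random values and computes a single numeric determinant. A sketch; the matrix encoding, modulus, and examples are illustrative:

```python
import random

# Randomized NSMC: det(M(x)) is nonzero as a polynomial iff some (in
# fact, almost every) random substitution yields a non-singular matrix.

P = 1_000_003  # prime modulus for the evaluations

def det_mod(M):
    """Determinant over F_P by Gaussian elimination."""
    M = [row[:] for row in M]
    n, d = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % P), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        inv = pow(M[c][c], P - 2, P)  # inverse via Fermat's little theorem
        for r in range(c + 1, n):
            f = M[r][c] * inv % P
            M[r] = [(a - f * b) % P for a, b in zip(M[r], M[c])]
        d = d * M[c][c] % P
    return d % P

def completable(symbolic, num_vars, trials=10):
    """symbolic[i][j] is an int constant or ('x', k), meaning variable x_k."""
    for _ in range(trials):
        vals = [random.randrange(P) for _ in range(num_vars)]
        M = [[e if isinstance(e, int) else vals[e[1]] for e in row]
             for row in symbolic]
        if det_mod(M):
            return True    # this completion is already non-singular
    return False           # det vanished on every sample: likely identically 0

# [[x0, 1], [1, x1]] has det x0*x1 - 1, so it is completable;
# [[x0, x0], [1, 1]] has det x0 - x0 == 0, so it is not.
assert completable([[('x', 0), 1], [1, ('x', 1)]], 2)
assert not completable([[('x', 0), ('x', 0)], [1, 1]], 2)
```

The paper's contribution is removing the randomness in this substitution step under a determinantal-hardness assumption.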
Algebraic Hardness Versus Randomness in Low Characteristic
We show that lower bounds for explicit constant-variate polynomials over fields of characteristic p > 0 are sufficient to derandomize polynomial identity testing over fields of characteristic p. In this setting, existing work on hardness-randomness tradeoffs for polynomial identity testing requires either the characteristic to be sufficiently large or the notion of hardness to be stronger than the standard syntactic notion of hardness used in algebraic complexity. Our results make no restriction on the characteristic of the field and use standard notions of hardness.
We do this by combining the Kabanets-Impagliazzo generator with a white-box procedure to take p-th roots of circuits computing a p-th power over fields of characteristic p. When the number of variables appearing in the circuit is bounded by some constant, this procedure turns out to be efficient, which allows us to bypass difficulties related to factoring circuits in characteristic p.
We also combine the Kabanets-Impagliazzo generator with recent "bootstrapping" results in polynomial identity testing to show that a sufficiently-hard family of explicit constant-variate polynomials yields a near-complete derandomization of polynomial identity testing. This result holds over fields of both zero and positive characteristic and complements a recent work of Guo, Kumar, Saptharishi, and Solomon, who obtained a slightly stronger statement over fields of characteristic zero.
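The root-taking step can be seen already in the simplest case: over the prime field F_p, the Frobenius map gives (a+b)^p = a^p + b^p, and Fermat's little theorem gives a^p = a, so a univariate p-th power has support only on exponents divisible by p, and its p-th root is read off coefficient-wise. A toy prime-field sketch; the paper's procedure operates on circuits, not coefficient lists:

```python
# p-th powers and p-th roots over F_p, univariate case.

p = 5

def poly_pow_mod(coeffs, e):
    """Multiply out coeffs^e over F_p (coeffs[i] = coefficient of x^i)."""
    result = [1]
    for _ in range(e):
        out = [0] * (len(result) + len(coeffs) - 1)
        for i, a in enumerate(result):
            for j, b in enumerate(coeffs):
                out[i + j] = (out[i + j] + a * b) % p
        result = out
    return result

def pth_root(coeffs):
    """Invert Frobenius: x^(p*i) -> x^i, coefficients unchanged over F_p."""
    assert all(c == 0 for i, c in enumerate(coeffs) if i % p != 0)
    return [coeffs[i] for i in range(0, len(coeffs), p)]

g = [2, 0, 3, 1]                     # g(x) = 2 + 3x^2 + x^3 over F_5
assert pth_root(poly_pow_mod(g, p)) == g
```

Over extension fields one must additionally take p-th roots of the coefficients themselves, which is part of what makes the circuit version delicate.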
Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation
We present an efficient proof system for Multipoint Arithmetic Circuit
Evaluation: for every arithmetic circuit $C(x_1,\ldots,x_n)$ of size $s$ and
degree $d$ over a field $\mathbb{F}$, and any inputs $a_1,\ldots,a_K \in \mathbb{F}^n$,
the Prover sends the Verifier the values $C(a_1),\ldots,C(a_K) \in \mathbb{F}$ and a proof of $\tilde{O}(K \cdot d)$ length, and
the Verifier tosses $\mathrm{poly}(\log(dK|\mathbb{F}|/\varepsilon))$ coins and can check the proof in about $\tilde{O}(K(n+d)+s)$ time, with probability of error less than $\varepsilon$.
For small degree $d$, this "Merlin-Arthur" proof system (a.k.a. MA-proof
system) runs in nearly-linear time and has many applications. For example, we
obtain MA-proof systems that run in $c^n$ time (for various $c < 2$) for the
Permanent, Circuit-SAT for all sublinear-depth circuits, counting
Hamiltonian cycles, and infeasibility of $0$-$1$ linear programs. In general,
the value of any polynomial in Valiant's class $\mathsf{VP}$ can be certified
faster than "exhaustive summation" over all $2^n$ possible assignments. These results
strongly refute a Merlin-Arthur Strong ETH and an Arthur-Merlin Strong ETH posed
by Russell Impagliazzo and others.
We also give a three-round (AMA) proof system for quantified Boolean formulas
running in $2^{2n/3+o(n)}$ time, nearly-linear time MA-proof systems for
counting orthogonal vectors in a collection and finding Closest Pairs in the
Hamming metric, and an MA-proof system running in $n^{k/2+O(1)}$ time for
counting $k$-cliques in graphs.
We point to some potential future directions for refuting the
Nondeterministic Strong ETH.
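The engine behind such batch-evaluation proofs is the classic curve trick: to certify many evaluations with one, draw a low-degree curve g(t) through the inputs and have the Prover send the univariate restriction q(t) = C(g(t)); the Verifier compares q at the input indices with the claimed values and spot-checks q against C at one random point. A self-contained toy sketch (the small circuit, modulus, and inputs are illustrative; the actual system also achieves nearly-linear verification time):

```python
import random

P = 10**9 + 7  # prime field

def polymul_lin(poly, root):
    """Multiply poly by (x - root) mod P."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i + 1] = (out[i + 1] + c) % P
        out[i] = (out[i] - root * c) % P
    return out

def interpolate(points):
    """Lagrange interpolation mod P; points = [(x, y), ...] -> coeff list."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = polymul_lin(num, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def poly_eval(coeffs, t):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * t + c) % P
    return acc

def C(x, y):                                 # the "circuit": degree 2
    return (x * x + x * y) % P

inputs = [(3, 5), (7, 2), (4, 9)]            # a_1..a_K, K = 3
claims = [C(x, y) for (x, y) in inputs]      # honest Prover's answers

# Curve g(t) = (gx(t), gy(t)) with g(i) = a_i, degree K-1 = 2 per coordinate.
gx = interpolate([(i, a[0]) for i, a in enumerate(inputs)])
gy = interpolate([(i, a[1]) for i, a in enumerate(inputs)])

# Prover: q(t) = C(g(t)) has degree <= 2*2 = 4, so 5 points determine it.
q = interpolate([(t, C(poly_eval(gx, t), poly_eval(gy, t))) for t in range(5)])

# Verifier: consistency with the claims, plus one random spot-check of C.
assert all(poly_eval(q, i) == claims[i] for i in range(3))
r = random.randrange(P)
assert poly_eval(q, r) == C(poly_eval(gx, r), poly_eval(gy, r))
```

A cheating Prover must send some q* of degree at most 4 that disagrees with the true q, and two distinct degree-4 polynomials agree on at most 4 of the P possible spot-check points.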
Polynomial-Time Pseudodeterministic Construction of Primes
A randomized algorithm for a search problem is *pseudodeterministic* if it
produces a fixed canonical solution to the search problem with high
probability. In their seminal work on the topic, Gat and Goldwasser posed as
their main open problem whether prime numbers can be pseudodeterministically
constructed in polynomial time.
We provide a positive solution to this question in the infinitely-often
regime. In more detail, we give an *unconditional* polynomial-time randomized
algorithm $A$ such that, for infinitely many values of $n$, $A(1^n)$ outputs a
canonical $n$-bit prime $p_n$ with high probability. More generally, we prove
that for every dense property $Q$ of strings that can be decided in polynomial
time, there is an infinitely-often pseudodeterministic polynomial-time
construction of strings satisfying $Q$. This improves upon a
subexponential-time construction of Oliveira and Santhanam.
Our construction uses several new ideas, including a novel bootstrapping
technique for pseudodeterministic constructions, and a quantitative
optimization of the uniform hardness-randomness framework of Chen and Tell,
using a variant of the Shaltiel--Umans generator.
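The definition is easy to see in action: an algorithm is pseudodeterministic if its internal randomness does not change which answer it outputs. The toy sketch below fixes the canonical n-bit prime to be the smallest one and locates it with randomized Miller-Rabin trials, so different runs agree with high probability; unlike the paper's construction, this brute-force scan takes time exponential in n:

```python
import random

# A pseudodeterministic (but exponential-time) prime constructor: the
# output is the canonical smallest n-bit prime, even though the
# primality subroutine is randomized.

def miller_rabin(m, rounds=30):
    if m < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if m % small == 0:
            return m == small
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):          # each round errs with prob <= 1/4
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False             # witness of compositeness found
    return True

def canonical_prime(n):
    """Smallest n-bit prime: canonical despite the randomized subroutine."""
    m = 1 << (n - 1)
    while not miller_rabin(m):
        m += 1
    return m

assert canonical_prime(8) == 131                    # smallest 8-bit prime
assert canonical_prime(8) == canonical_prime(8)     # runs agree
```

The paper's point is achieving this canonical-output behavior in polynomial time (infinitely often), where no deterministic scan is available.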
Deterministic Identity Testing Paradigms for Bounded Top-Fanin Depth-4 Circuits
Polynomial Identity Testing (PIT) is a fundamental computational problem. The famous depth-4 reduction (Agrawal & Vinay, FOCS'08) has made PIT for depth-4 circuits an enticing pursuit. The largely open special cases of sum-product-of-sums-of-univariates (Σ^[k]ΠΣ∧) and sum-product-of-constant-degree-polynomials (Σ^[k]ΠΣΠ^[δ]), for constants k, δ, have been a source of many great ideas in the last two decades. For example: depth-3 ideas (Dvir & Shpilka, STOC'05; Kayal & Saxena, CCC'06; Saxena & Seshadhri, FOCS'10, STOC'11); depth-4 ideas (Beecken, Mittmann & Saxena, ICALP'11; Saha, Saxena & Saptharishi, Comput. Compl. '13; Forbes, FOCS'15; Kumar & Saraf, CCC'16); geometric Sylvester-Gallai ideas (Kayal & Saraf, FOCS'09; Shpilka, STOC'19; Peleg & Shpilka, CCC'20, STOC'21). We solve two of the basic underlying open problems in this work.
We give the first polynomial-time PIT for Σ^[k]ΠΣ∧. Further, we give the first quasipolynomial-time blackbox PIT for both Σ^[k]ΠΣ∧ and Σ^[k]ΠΣΠ^[δ]. No subexponential-time algorithm was known prior to this work (even for k = δ = 3). A key technical ingredient in all three algorithms is how the logarithmic derivative, and its power series, modify the top Π-gate to ∧.
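The logarithmic-derivative identity referred to is dlog(f_1···f_d) = f_1'/f_1 + ... + f_d'/f_d, which converts a top product into a sum at the cost of divisions (handled by power-series truncation in the actual algorithms). Clearing denominators gives a form checkable directly over Z[x]; a sketch with illustrative factors:

```python
# Product rule / logarithmic derivative over Z[x], dense coefficient lists.

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def deriv(a):
    return [i * c for i, c in enumerate(a)][1:] or [0]

def add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

factors = [[1, 1], [2, 0, 1], [-1, 3]]   # (1+x), (2+x^2), (-1+3x)
f = [1]
for g in factors:
    f = mul(f, g)

# Denominator-cleared form of f'/f = sum_i f_i'/f_i:
#   f' == sum_i f_i' * prod_{j != i} f_j.
rhs = [0]
for i, g in enumerate(factors):
    term = deriv(g)
    for j, h in enumerate(factors):
        if j != i:
            term = mul(term, h)
    rhs = add(rhs, term)

assert trim(deriv(f)) == trim(rhs)
```

In the PIT algorithms this identity is applied to the top Π-gate and the resulting sum is expanded as a power series, which is where the ∧ (powering) gate appears.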
Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)
Computational pseudorandomness studies the extent to which a random variable
looks like the uniform distribution according to a class of tests
$\mathcal{F}$. Computational entropy generalizes computational pseudorandomness by
studying the extent to which a random variable looks like a \emph{high entropy}
distribution. There are different formal definitions of computational entropy
with different advantages for different applications. Because of this, it is of
interest to understand when these definitions are equivalent.
We consider three notions of computational entropy which are known to be
equivalent when the test class $\mathcal{F}$ is closed under taking majorities.
This equivalence constitutes (essentially) the so-called \emph{dense model
theorem} of Green and Tao (later made explicit by Tao-Ziegler, Reingold et
al., and Gowers). The dense model theorem plays a key role in Green and Tao's
proof that the primes contain arbitrarily long arithmetic progressions and has
since been connected to a surprisingly wide range of topics in mathematics and
computer science, including cryptography, computational complexity,
combinatorics and machine learning. We show that, in different situations where
$\mathcal{F}$ is \emph{not} closed under majority, this equivalence fails. This in
turn provides examples where the dense model theorem is \emph{false}.
Comment: 19 pages; to appear in ITCS 202
Factors of Low Individual Degree Polynomials
In [Kaltofen, 1989], Kaltofen proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen's work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in [Kopparty/Saraf/Shpilka, CCC'14], and the question of bounding the depth of the circuits computing the factors of a polynomial.
We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if P(x_1, ..., x_n) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor f(x_1, ..., x_n) of P(x_1, ..., x_n) can be computed by a formula of size poly((rn)^r, s) and depth d+5. This partially answers the question above posed in [Kopparty/Saraf/Shpilka, CCC'14], which asked whether this result holds without the exponential dependence on r. Our work generalizes the main factorization theorem from Dvir et al. [Dvir/Shpilka/Yehudayoff, SIAM J. Comp., 2009], who proved it for the special case when the factors are of the form f(x_1, ..., x_n) = x_n - g(x_1, ..., x_{n-1}). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).
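The Dvir-Shpilka-Yehudayoff special case mentioned above has a concrete reading: a factor x_n - g(x_1, ..., x_{n-1}) is exactly a polynomial root of P viewed as univariate in x_n, so a candidate factor of this shape can be certified by the substitution P(x_1, ..., x_{n-1}, g(...)) = 0, checked at random points in Schwartz-Zippel style. A sketch with illustrative polynomials:

```python
import random

# Certifying a linear-in-the-last-variable factor by substitution:
# (z - g(x, y)) divides P(x, y, z) iff P(x, y, g(x, y)) is identically 0.

def g(x, y):                  # candidate root for the last variable
    return x * y + 1

def P(x, y, z):               # P = (z - g(x, y)) * (z + x + y)
    return (z - g(x, y)) * (z + x + y)

MOD = 2**61 - 1               # large prime modulus for random evaluation
for _ in range(10):
    x, y = random.randrange(MOD), random.randrange(MOD)
    assert P(x, y, g(x, y)) % MOD == 0   # g is a root, so (z - g) divides P
```

The general factorization results handle factors with no such special shape, which is where the bounded-individual-degree assumption does its work.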
Separation Between Read-once Oblivious Algebraic Branching Programs (ROABPs) and Multilinear Depth Three Circuits
We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following:
1. There exists an explicit n-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires 2^{Omega(n)} size.
2. Any multilinear depth three circuit computing IMM_{n,d} (the iterated matrix multiplication polynomial formed by multiplying d n x n symbolic matrices) has n^{Omega(d)} size. IMM_{n,d} can be easily computed by a poly(n,d) sized ROABP.
3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n,d) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size n^{Omega(d)}. This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models.
The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field.
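Both measures share a linear-algebra core: split the variables into a prefix and a suffix, form the matrix whose (A, B) entry is the coefficient of the monomial A(prefix)·B(suffix), and lower-bound the model by its rank; for an ROABP reading the prefix before the suffix, Nisan-style arguments give width at least this rank. A toy multilinear sketch with illustrative polynomials:

```python
from fractions import Fraction

# Rank of the prefix/suffix coefficient matrix as a complexity measure.

def rank(mat):
    """Rank over the rationals by Gauss-Jordan elimination."""
    mat = [[Fraction(v) for v in row] for row in mat]
    r = 0
    for col in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][col]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][col]:
                scale = mat[i][col] / mat[r][col]
                mat[i] = [a - scale * b for a, b in zip(mat[i], mat[r])]
        r += 1
    return r

# Split {x1, x2} | {y1, y2}; rows and columns are indexed by the
# monomials (1, x1, x2, x1*x2) and (1, y1, y2, y1*y2) respectively.
def coeff_matrix(f):
    """f is a dict: (x-monomial index, y-monomial index) -> coefficient."""
    M = [[0] * 4 for _ in range(4)]
    for (i, j), c in f.items():
        M[i][j] = c
    return M

inner_product = {(1, 1): 1, (2, 2): 1}                          # x1*y1 + x2*y2
product_of_sums = {(1, 1): 1, (1, 2): 1, (2, 1): 1, (2, 2): 1}  # (x1+x2)(y1+y2)

assert rank(coeff_matrix(inner_product)) == 2      # forces ROABP width 2
assert rank(coeff_matrix(product_of_sums)) == 1    # width 1 suffices
```

The separations above come from polynomials engineered so that this rank is exponentially large for every variable order the ROABP might use, while the other model computes them cheaply.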