
    On the Limits of Depth Reduction at Depth 3 Over Small Finite Fields

    Recently, Gupta et al. [GKKS2013] proved that over $\mathbb{Q}$, any $n^{O(1)}$-variate polynomial of degree $n$ in VP can also be computed by a depth three $\Sigma\Pi\Sigma$ circuit of size $2^{O(\sqrt{n}\log^{3/2}n)}$. Over fixed-size finite fields, Grigoriev and Karpinski proved that any $\Sigma\Pi\Sigma$ circuit that computes $Det_n$ (or $Perm_n$) must have size $2^{\Omega(n)}$ [GK1998]. In this paper, we prove that over fixed-size finite fields, any $\Sigma\Pi\Sigma$ circuit computing the iterated matrix multiplication polynomial of $n$ generic $n \times n$ matrices must have size $2^{\Omega(n\log n)}$. The importance of this result is that over fixed-size fields there is no depth reduction technique that can compute all the $n^{O(1)}$-variate, degree-$n$ polynomials in VP by depth 3 circuits of size $2^{o(n\log n)}$; the result of [GK1998] can only rule out such a possibility for depth 3 circuits of size $2^{o(n)}$. We also give an example of an explicit polynomial $NW_{n,\epsilon}(X)$ in VNP (not known to be in VP) for which any $\Sigma\Pi\Sigma$ circuit computing it (over fixed-size fields) must have size $2^{\Omega(n\log n)}$. The polynomial we consider is constructed from a combinatorial design. An interesting feature of this result is that we obtain the first examples of two polynomials (one in VP and one in VNP) with provably stronger circuit size lower bounds than the Permanent in a reasonably strong model of computation. Next, we prove that any depth 4 $\Sigma\Pi^{[O(\sqrt{n})]}\Sigma\Pi^{[\sqrt{n}]}$ circuit computing $NW_{n,\epsilon}(X)$ (over any field) must have size $2^{\Omega(\sqrt{n}\log n)}$. To the best of our knowledge, $NW_{n,\epsilon}(X)$ is the first example of an explicit polynomial in VNP that requires depth four circuits of size $2^{\Omega(\sqrt{n}\log n)}$ but has no known matching upper bound.
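
    The combinatorial design behind a Nisan-Wigderson-style polynomial rests on a simple fact: two distinct univariate polynomials of degree at most $d$ over a prime field agree on at most $d$ points, so the monomials' supports have small pairwise overlap. The sketch below is an illustrative reconstruction of that idea, not the paper's exact parameterization; the prime p, degree bound d, and variable naming are my assumptions.

```python
from itertools import product

def nw_design_monomials(p, d):
    """Enumerate monomials of a Nisan-Wigderson-style design polynomial (illustrative).

    Variables are x[i][j] for i, j in {0, ..., p-1}.  For every univariate
    polynomial f of degree <= d over F_p, take the monomial prod_i x[i][f(i)].
    Each monomial is represented by its tuple of column indices (f(0), ..., f(p-1)).
    """
    monomials = []
    for coeffs in product(range(p), repeat=d + 1):   # f(z) = c_0 + c_1 z + ... + c_d z^d
        cols = tuple(sum(c * pow(i, e, p) for e, c in enumerate(coeffs)) % p
                     for i in range(p))
        monomials.append(cols)
    return monomials

def max_pairwise_agreement(monomials):
    """Largest number of positions where two distinct monomials pick the same variable."""
    best = 0
    for a in range(len(monomials)):
        for b in range(a + 1, len(monomials)):
            agree = sum(1 for u, v in zip(monomials[a], monomials[b]) if u == v)
            best = max(best, agree)
    return best

if __name__ == "__main__":
    p, d = 5, 1                                # tiny example: degree-1 polynomials over F_5
    mons = nw_design_monomials(p, d)
    print(len(mons), "monomials")              # p^(d+1) = 25 monomials
    print("max pairwise agreement:", max_pairwise_agreement(mons))  # at most d = 1
```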

    Neural computation of arithmetic functions

    A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
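
    As a point of reference for the threshold-gate model used above, here is a small illustrative sketch (my own example, not the paper's construction): a single linear threshold gate, which outputs 1 exactly when a weighted sum of its inputs reaches a threshold, deciding whether one n-bit number is at least another. Note that the power-of-two weights chosen here need n bits; reducing weight precision to O(log n) bits is the kind of refinement the abstract is about.

```python
def threshold_gate(weights, threshold, inputs):
    """Linear threshold gate: outputs 1 iff sum_i w_i * x_i >= threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def geq_gate(n):
    """Weights and threshold so that one gate decides x >= y for n-bit x, y.

    Inputs are the bits of x followed by the bits of y, most significant first.
    The weighted sum equals x - y, so thresholding at 0 decides the comparison.
    """
    weights = [2 ** i for i in range(n - 1, -1, -1)] + \
              [-(2 ** i) for i in range(n - 1, -1, -1)]
    return weights, 0

if __name__ == "__main__":
    n = 4
    weights, t = geq_gate(n)
    for x, y in [(9, 5), (5, 9), (7, 7)]:
        bits = [int(b) for b in format(x, f"0{n}b")] + [int(b) for b in format(y, f"0{n}b")]
        print(x, ">=", y, "->", threshold_gate(weights, t, bits))   # 1, 0, 1
```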

    Fast arithmetic computing with neural networks

    The authors introduce a restricted model of a neuron that is more practical as a model of computation than the classical model of a neuron. The authors define a model of neural networks as a feedforward network of such neurons. Whereas any logic circuit of polynomial size (in n) that computes the product of two n-bit numbers requires unbounded delay, such computations can be done in a neural network with constant delay. The authors improve some known results by showing that the product of two n-bit numbers and sorting of n n-bit numbers can both be computed by a polynomial-size neural network using only four unit delays, independent of n. Moreover, the weights of each threshold element in the neural networks require only O(log n)-bit (instead of n-bit) accuracy.
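
    To make "unit delays" in a layered feedforward threshold network concrete, here is a small standard-style construction (an illustrative example of my own, not taken from the paper): a two-layer network of threshold gates with weights of magnitude 1 that computes the parity of n bits, so the answer is available after two unit delays regardless of n.

```python
def threshold_gate(weights, threshold, inputs):
    """Linear threshold gate: 1 iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def parity_depth2(bits):
    """Parity of n bits with a 2-layer threshold network (n gates, then 1 gate).

    Layer 1: gate k fires iff at least k of the inputs are 1 (k = 1..n).
    Layer 2: alternating +1/-1 weights; if s inputs are 1, the weighted sum of
    layer-1 outputs telescopes to 1 when s is odd and 0 when s is even.
    """
    n = len(bits)
    layer1 = [threshold_gate([1] * n, k, bits) for k in range(1, n + 1)]
    weights = [1 if k % 2 == 1 else -1 for k in range(1, n + 1)]
    return threshold_gate(weights, 1, layer1)

if __name__ == "__main__":
    for x in range(16):                               # check all 4-bit inputs
        bits = [(x >> i) & 1 for i in range(4)]
        assert parity_depth2(bits) == sum(bits) % 2
    print("parity matches on all 4-bit inputs")
```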

    Three Puzzles on Mathematics, Computation, and Games

    In this lecture I will talk about three mathematical puzzles involving mathematics and computation that have preoccupied me over the years. The first puzzle is to understand the amazing success of the simplex algorithm for linear programming. The second puzzle is about errors made when votes are counted during elections. The third puzzle is: are quantum computers possible?
    Comment: ICM 2018 plenary lecture, Rio de Janeiro, 36 pages, 7 figures

    Lower Bounds for Monotone Counting Circuits

    A {+,x}-circuit counts a given multivariate polynomial f if its values on 0-1 inputs are the same as those of f; on other inputs the circuit may output arbitrary values. Such a circuit counts the number of monomials of f evaluated to 1 by a given 0-1 input vector (with multiplicities given by their coefficients). A circuit decides f if it has the same 0-1 roots as f. We first show that some multilinear polynomials can be exponentially easier to count than to compute, and exponentially easier to decide than to count. Then we give general lower bounds on the size of counting circuits.
    Comment: 20 pages
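
    To illustrate the counting semantics defined above, the short sketch below (a toy example of my own, not from the paper) evaluates a multilinear polynomial, given as a list of monomials, on a 0-1 input: the result is exactly the number of monomials the input sets to 1, which is what a {+,x}-circuit that counts the polynomial must output on Boolean inputs.

```python
def count_on_boolean_input(monomials, assignment):
    """Value of a multilinear 0/1-coefficient polynomial on a 0-1 input.

    `monomials` is a list of variable-index tuples; on a 0-1 input the value is
    the number of monomials whose variables are all set to 1 -- exactly what a
    {+,x}-circuit that *counts* the polynomial must output on Boolean inputs.
    """
    return sum(all(assignment[v] for v in mono) for mono in monomials)

if __name__ == "__main__":
    # f(x0, x1, x2) = x0*x1 + x0*x2 + x1*x2  (the elementary symmetric polynomial e_2)
    f = [(0, 1), (0, 2), (1, 2)]
    print(count_on_boolean_input(f, [1, 1, 0]))   # one monomial (x0*x1) is satisfied -> 1
    print(count_on_boolean_input(f, [1, 1, 1]))   # all three monomials are satisfied -> 3
```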

    Faster all-pairs shortest paths via circuit complexity

    We present a new randomized method for computing the min-plus product (a.k.a. tropical product) of two $n \times n$ matrices, yielding a faster algorithm for solving the all-pairs shortest path problem (APSP) in dense $n$-node directed graphs with arbitrary edge weights. On the real RAM, where additions and comparisons of reals are unit cost (but all other operations have typical logarithmic cost), the algorithm runs in time $n^3/2^{\Omega(\log n)^{1/2}}$ and is correct with high probability. On the word RAM, the algorithm runs in $n^3/2^{\Omega(\log n)^{1/2}} + n^{2+o(1)}\log M$ time for edge weights in $([0,M] \cap \mathbb{Z}) \cup \{\infty\}$. Prior algorithms used either $n^3/(\log^c n)$ time for various $c \leq 2$, or $O(M^{\alpha}n^{\beta})$ time for various $\alpha > 0$ and $\beta > 2$. The new algorithm applies a tool from circuit complexity, namely the Razborov-Smolensky polynomials for approximately representing $\mathsf{AC}^0[p]$ circuits, to efficiently reduce a matrix product over the $(\min,+)$ algebra to a relatively small number of rectangular matrix products over $\mathbb{F}_2$, each of which is computable using a particularly efficient method due to Coppersmith. We also give a deterministic version of the algorithm running in $n^3/2^{\log^{\delta} n}$ time for some $\delta > 0$, which utilizes the Yao-Beigel-Tarui translation of $\mathsf{AC}^0[m]$ circuits into "nice" depth-two circuits.
    Comment: 24 pages. Updated version now has a slightly faster running time. To appear in ACM Symposium on Theory of Computing (STOC), 2014
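
    For context, the object being sped up is the min-plus (tropical) matrix product, whose straightforward cubic-time computation is sketched below. This is only a baseline illustration; the paper's actual algorithm reduces the product to Boolean matrix products over $\mathbb{F}_2$ via Razborov-Smolensky polynomials, which is not reproduced here.

```python
import math

def min_plus_product(A, B):
    """Naive O(n^3) min-plus product: C[i][j] = min_k (A[i][k] + B[k][j]).

    Iterating this product on a graph's weighted adjacency matrix yields
    all-pairs shortest path distances; the abstract above is about beating
    this cubic baseline.
    """
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            if aik == math.inf:
                continue
            for j in range(n):
                if aik + B[k][j] < C[i][j]:
                    C[i][j] = aik + B[k][j]
    return C

if __name__ == "__main__":
    INF = math.inf
    # weighted adjacency matrix of a small directed graph (0 on the diagonal)
    D = [[0, 3, INF],
         [INF, 0, 1],
         [2, INF, 0]]
    D2 = min_plus_product(D, D)   # shortest paths using at most 2 edges
    print(D2)                     # e.g. D2[0][2] == 4 via the path 0 -> 1 -> 2
```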