
    How proofs are prepared at Camelot

    We study a design framework for robust, independently verifiable, and workload-balanced distributed algorithms working on a common input. An algorithm based on the framework is essentially a distributed encoding procedure for a Reed--Solomon code, which enables (a) robustness against Byzantine failures with intrinsic error-correction and identification of failed nodes, and (b) independent randomized verification to check the entire computation for correctness, which takes essentially no more resources than each node individually contributes to the computation. The framework builds on recent Merlin--Arthur proofs of batch evaluation of Williams~[{\em Electron.\ Colloq.\ Comput.\ Complexity}, Report TR16-002, January 2016] with the observation that {\em Merlin's magic is not needed} for batch evaluation---mere Knights can prepare the proof, in parallel, and with intrinsic error-correction. The contribution of this paper is to show that in many cases the verifiable batch evaluation framework admits algorithms that match in total resource consumption the best known sequential algorithm for solving the problem. As our main result, we show that the $k$-cliques in an $n$-vertex graph can be counted {\em and} verified in per-node $O(n^{(\omega+\epsilon)k/6})$ time and space on $O(n^{(\omega+\epsilon)k/6})$ compute nodes, for any constant $\epsilon>0$ and positive integer $k$ divisible by $6$, where $2\leq\omega<2.3728639$ is the exponent of matrix multiplication. This matches in total running time the best known sequential algorithm, due to Ne{\v{s}}et{\v{r}}il and Poljak [{\em Comment.~Math.~Univ.~Carolin.}~26 (1985) 415--419], and considerably improves its space usage and parallelizability. Further results include novel algorithms for counting triangles in sparse graphs, computing the chromatic polynomial of a graph, and computing the Tutte polynomial of a graph. Comment: 42 p
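
    A minimal sketch of the verification idea the abstract alludes to, under toy assumptions (a small hard-coded prime field, a cubic example function f, and a batch of ten inputs, none of which come from the paper): the claimed batch values form the proof, a correct proof lies on a low-degree polynomial and is therefore a Reed--Solomon codeword, and a verifier spot-checks it against a single random evaluation.

        import random

        P = 2_147_483_647          # toy prime modulus (assumption, not from the paper)

        def f(x):
            # the function to be evaluated on the whole batch (toy example)
            return (x * x * x + 5 * x + 7) % P

        def lagrange_eval(xs, ys, x):
            # evaluate the unique polynomial through the points (xs[i], ys[i]) at x, mod P
            total = 0
            for i, (xi, yi) in enumerate(zip(xs, ys)):
                num, den = 1, 1
                for j, xj in enumerate(xs):
                    if j != i:
                        num = num * (x - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        # The "Knights" (compute nodes) each contribute one value of the batch;
        # a correct proof is a list of evaluations of a low-degree polynomial,
        # i.e. a Reed--Solomon codeword, which is what gives error correction.
        batch = list(range(10))
        proof = [f(a) for a in batch]

        # Verification: interpolate the claimed values and compare with f at one
        # random point.  Two distinct polynomials of degree < 10 agree on at most
        # 9 points, so a tampered proof escapes detection with probability <= 9/P.
        r = random.randrange(P)
        assert lagrange_eval(batch, proof, r) == f(r)

        tampered = proof.copy()
        tampered[3] = (tampered[3] + 1) % P
        assert lagrange_eval(batch, tampered, r) != f(r)   # caught w.h.p.

    In the framework itself the check is engineered so that, as the abstract states, verification costs essentially no more than what a single node contributes to the computation.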

    Parallel sparse interpolation using small primes

    To interpolate a supersparse polynomial with integer coefficients, two alternative approaches are the Prony-based "big prime" technique, which acts over a single large finite field, and the more recently proposed "small primes" technique, which reduces the unknown sparse polynomial to many low-degree dense polynomials. While the latter technique has not yet reached the same theoretical efficiency as Prony-based methods, it has an obvious potential for parallelization. We present a heuristic "small primes" interpolation algorithm and report on a low-level C implementation using FLINT and MPI. Comment: Accepted to PASCO 201
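
    A toy sketch of the "small primes" reduction described above, under simplifying assumptions (the terms of the hidden polynomial have pairwise distinct coefficients and the chosen primes cause no exponent collisions, so terms can be matched across images by coefficient; a real algorithm has to detect and repair collisions):

        from math import prod

        secret = {0: 3, 1_000_003: 7, 123_456_789: 11}   # exponent -> coefficient (unknown to the algorithm)
        primes = [101, 103, 107, 109, 113]               # product exceeds the largest exponent

        def image_mod_xp_minus_1(p):
            # dense image of the secret polynomial modulo x^p - 1: exponents fold mod p
            img = {}
            for e, c in secret.items():
                img[e % p] = img.get(e % p, 0) + c
            return img

        def crt(residues, moduli):
            # Chinese remainder reconstruction of an exponent from its residues
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x = (x + r * Mi * pow(Mi, -1, m)) % M
            return x

        # Each low-degree dense image can be computed independently (this is the
        # parallelism); here we match terms across images by their coefficients
        # and recover each exponent by Chinese remaindering.
        images = [image_mod_xp_minus_1(p) for p in primes]
        recovered = {}
        for coeff in images[0].values():
            residues = [next(e for e, c in img.items() if c == coeff) for img in images]
            recovered[crt(residues, primes)] = coeff

        assert recovered == secret

    The images have degree below each small prime, which is what makes them cheap to compute and naturally parallel across primes.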

    Multivariate sparse interpolation using randomized Kronecker substitutions

    We present new techniques for reducing a multivariate sparse polynomial to a univariate polynomial. The reduction works similarly to the classical and widely-used Kronecker substitution, except that we choose the degrees randomly based on the number of nonzero terms in the multivariate polynomial, that is, its sparsity. The resulting univariate polynomial often has a significantly lower degree than the Kronecker substitution polynomial, at the expense of a small number of term collisions. As an application, we give a new algorithm for multivariate interpolation which uses these new techniques along with any existing univariate interpolation algorithm. Comment: 21 pages, 2 tables, 1 procedure. Accepted to ISSAC 201
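
    The degree saving can be illustrated with a small sketch (the toy polynomial, the per-variable degree bound D, and the particular random range for the substitution exponents are assumptions for illustration; the paper's randomization is more careful): the classical Kronecker substitution maps x_i to z^{D^{i-1}} and can inflate the univariate degree to roughly D^n, while random substitution exponents chosen relative to the sparsity t keep the degree far smaller at the risk of a few term collisions.

        import random
        from collections import Counter

        # term representation: {(e1, e2, e3): coefficient}  (toy 3-variable input)
        poly = {(5, 0, 900): 2, (0, 17, 3): -4, (250, 1, 1): 9, (999, 999, 999): 1}
        D = 1000          # every variable has degree < D
        t = len(poly)     # the sparsity

        def substitute(poly, s):
            # univariate image under x_i -> z^{s[i]}; colliding terms add up
            out = Counter()
            for exps, c in poly.items():
                out[sum(e * si for e, si in zip(exps, s))] += c
            return out

        # Classical Kronecker substitution: the degree can reach D**3 - 1.
        kron = substitute(poly, [1, D, D * D])

        # Randomized variant: small random exponents chosen relative to the
        # sparsity t, so the image stays low-degree; collisions are rare and can
        # be handled by repeating with fresh random choices.
        rand_s = [1] + [random.randint(1, 4 * t) for _ in range(2)]
        rand_image = substitute(poly, rand_s)

        print("classical degree:", max(kron), "randomized degree:", max(rand_image))

    Any univariate sparse interpolation algorithm can then be run on the much lower-degree image, which is the application mentioned in the abstract.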

    Computational linear algebra over finite fields

    We present algorithms for the efficient solution of linear algebra problems over finite fields.

    Sampling algebraic sets in local intrinsic coordinates

    Numerical data structures for positive dimensional solution sets of polynomial systems are sets of generic points cut out by random planes of complementary dimension. We may represent the linear spaces defined by those planes either by explicit linear equations or in parametric form. These descriptions are respectively called extrinsic and intrinsic representations. While intrinsic representations lower the cost of the linear algebra operations, we observe worse condition numbers. In this paper we describe the local adaptation of intrinsic coordinates to improve the numerical conditioning of sampling algebraic sets. Local intrinsic coordinates also lead to better stepsize control. We illustrate our results with Maple experiments and computations with PHCpack on some benchmark polynomial systems. Comment: 13 pages, 2 figures, 2 algorithms, 2 table
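
    The extrinsic/intrinsic distinction can be made concrete with a small numpy sketch (the dimensions and the random data are assumptions for illustration; this is not PHCpack or Maple code): the same random slicing plane is written once extrinsically, as d linear equations in n unknowns, and once intrinsically, in parametric form with only n - d parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 6, 2        # ambient dimension n; a d-dimensional solution set is
                           # sliced by a random plane cut out by d linear equations

        # Extrinsic representation: the plane as d equations  A x = b.
        A = rng.standard_normal((d, n))
        b = rng.standard_normal(d)

        # Intrinsic representation: the same plane as  x = x0 + N t  with n - d
        # parameters t, where the columns of N span the null space of A.
        x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # one particular solution
        N = np.linalg.svd(A)[2][d:].T                # orthonormal basis of ker(A)

        # Every parameter value t gives a point on the plane: A (x0 + N t) = b.
        t = rng.standard_normal(n - d)
        x = x0 + N @ t
        print(np.allclose(A @ x, b))                 # True

    Working with the n - d parameters instead of all n coordinates is what lowers the cost of the linear algebra; the local adaptation described in the abstract amounts to re-choosing such a parametrization locally so that the conditioning does not degrade.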

    A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication

    This paper has two contributions. First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints. This technique uses "garbage alignment," i.e., aligning computations in coded computing that are not a part of the desired output. Generalized PolyDot codes bridge between Polynomial codes and MatDot codes, trading off between recovery threshold and communication costs. Second, we demonstrate that Generalized PolyDot can be used for training large Deep Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires us to address three additional challenges: (i) prohibitively large overhead of coding the weight matrices in each layer of the DNN at each iteration; (ii) nonlinear operations during training, which are incompatible with linear coding; and (iii) not assuming presence of an error-free master node, requiring us to architect a fully decentralized implementation without any "single point of failure." We allow all primary DNN training steps, namely, matrix multiplication, nonlinear activation, Hadamard product, and update steps as well as the encoding/decoding to be error-prone. We consider the case of mini-batch size $B=1$, as well as $B>1$, leveraging coded matrix-vector products and matrix-matrix products respectively. The problem of DNN training under soft-errors also motivates an interesting, probabilistic error model under which a real number $(P,Q)$ MDS code is shown to correct $P-Q-1$ errors with probability $1$, as compared to $\lfloor \frac{P-Q}{2} \rfloor$ for the more conventional, adversarial error model. We also demonstrate that our proposed strategy can provide unbounded gains in error tolerance over a competing replication strategy and a preliminary MDS-code-based strategy for both these error models. Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (Submission Date: Jan 12 2018); Currently under review at the IEEE Transactions on Information Theor
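
    A toy numpy sketch of the basic coded matrix-vector idea that Generalized PolyDot builds on (the split into two row blocks, the four workers, and the evaluation points are assumptions for illustration, not the paper's construction): the row blocks of A are packed into a degree-1 matrix polynomial, each worker multiplies one evaluation of that polynomial by x, and any two of the four worker products suffice to recover A x.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((4, 3))
        x = rng.standard_normal(3)

        A0, A1 = A[:2], A[2:]            # two row blocks of A
        points = [1.0, 2.0, 3.0, 4.0]    # P = 4 workers, one evaluation point each

        # Encoding: worker i stores A(z_i) = A0 + z_i * A1 and returns A(z_i) @ x.
        worker_results = {z: (A0 + z * A1) @ x for z in points}

        # Decoding: any two surviving results determine the degree-1 polynomial
        # A(z) @ x = A0 @ x + z * (A1 @ x), hence both halves of A @ x.
        (z1, y1), (z2, y2) = list(worker_results.items())[1:3]   # pretend the rest failed
        A1x = (y2 - y1) / (z2 - z1)
        A0x = y1 - z1 * A1x

        print(np.allclose(np.concatenate([A0x, A1x]), A @ x))    # True

    Here any 2 of the 4 results suffice, so up to two missing results (stragglers or identified failures) are tolerated; handling silently corrupted results rather than erasures is where the adversarial and probabilistic error models discussed in the abstract come apart.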