    Nearly Optimal Sparse Polynomial Multiplication

    In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC 02) have given a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC 15) devised an algorithm running in time proportional to the "structural sparsity" of the product, i.e. the set supp(F)+supp(G). The latter algorithm is particularly efficient when there are not "too many cancellations" of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem. (Comment: Accepted to IEEE Transactions on Information Theory.)
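
    To make the role of cancellations concrete, here is a minimal sketch in Python (not the paper's nearly optimal algorithm; the dictionary representation is an assumption of this sketch): a naive multiplication costing |supp(F)|·|supp(G)| operations, which shows how cancellations can make the true support of the product smaller than the structural sparsity supp(F)+supp(G).

    # Illustrative sketch only: naive sparse multiplication in
    # O(|supp(F)| * |supp(G)|) time, not the nearly optimal
    # output-sensitive algorithm of the paper.
    def sparse_mul(f, g):
        """Multiply sparse polynomials given as {exponent: coefficient} dicts."""
        h = {}
        for ef, cf in f.items():
            for eg, cg in g.items():
                e = ef + eg
                h[e] = h.get(e, 0) + cf * cg
        # Cancellations can make the true support smaller than the
        # structural sparsity supp(F)+supp(G): drop zero coefficients.
        return {e: c for e, c in h.items() if c != 0}

    # (x^5 + x) * (x^5 - x) = x^10 - x^2: the structural sparsity is
    # {2, 6, 10}, but the coefficient of x^6 cancels.
    print(sparse_mul({5: 1, 1: 1}, {5: 1, 1: -1}))  # {10: 1, 2: -1}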

    Fast Exact Bayesian Inference for Sparse Signals in the Normal Sequence Model

    We consider exact algorithms for Bayesian inference with model selection priors (including spike-and-slab priors) in the sparse normal sequence model. Because the best existing exact algorithm becomes numerically unstable for sample sizes over n=500, much attention has turned to alternative approaches such as approximate algorithms (Gibbs sampling, variational Bayes, etc.), shrinkage priors (e.g. the Horseshoe prior and the Spike-and-Slab LASSO), and empirical Bayesian methods. However, by introducing algorithmic ideas from online sequential prediction, we show that exact calculations are feasible for much larger sample sizes: for general model selection priors we reach n=25000, and for certain spike-and-slab priors we can easily reach n=100000. We further prove a de Finetti-like result for finite sample sizes that characterizes exactly which model selection priors can be expressed as spike-and-slab priors. The computational speed and numerical accuracy of the proposed methods are demonstrated in experiments on simulated data, on a differential gene expression data set, and in a comparison of multiple hyper-parameter settings in the beta-binomial prior. In our experimental evaluation we compute guaranteed bounds on the numerical accuracy of all new algorithms, which shows that the proposed methods are numerically reliable, whereas an alternative based on long division is not.
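
    For intuition about the simplest case, here is a minimal sketch in Python (not the paper's algorithms, which handle general model selection priors): when the spike-and-slab inclusion probability is a fixed constant, the posterior factorizes over coordinates, and exact posterior inclusion probabilities follow from a per-coordinate Bayes factor. The hyper-parameter names alpha and tau2 are assumptions of this sketch.

    import math

    def inclusion_probs(x, alpha=0.1, tau2=4.0):
        """Exact P(theta_i != 0 | x_i) in the normal sequence model
        x_i ~ N(theta_i, 1), prior theta_i ~ (1-alpha)*delta_0 + alpha*N(0, tau2)."""
        out = []
        for xi in x:
            # Log marginal densities under the spike (theta_i = 0)
            # and the slab (theta_i ~ N(0, tau2), so x_i ~ N(0, 1 + tau2)).
            log_spike = -0.5 * (xi * xi + math.log(2 * math.pi))
            log_slab = -0.5 * (xi * xi / (1 + tau2) + math.log(2 * math.pi * (1 + tau2)))
            # Combine in the log domain for numerical stability.
            a = math.log(alpha) + log_slab
            b = math.log(1 - alpha) + log_spike
            m = max(a, b)
            out.append(math.exp(a - m) / (math.exp(a - m) + math.exp(b - m)))
        return out

    print(inclusion_probs([0.1, 2.5, 5.0]))  # small, moderate, near-one probabilities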

    Interactive certificate for the verification of Wiedemann's Krylov sequence: application to the certification of the determinant, the minimal and the characteristic polynomials of sparse matrices

    Certificates for a linear algebra computation are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. Wiedemann's algorithm projects the Krylov sequence obtained by repeatedly multiplying a vector by a matrix to obtain a linearly recurrent sequence. The minimal polynomial of this sequence divides the minimal polynomial of the matrix. For instance, if the n×n input matrix is sparse with n^{1+o(1)} non-zero entries, the computation of the sequence is quadratic in the dimension of the matrix, while the computation of the minimal polynomial is n^{1+o(1)} once that projected Krylov sequence is obtained. In this paper we give algorithms that compute certificates for the Krylov sequence of sparse or structured n×n matrices over an abstract field, whose Monte Carlo verification complexity can be made essentially linear. As an application, this gives certificates for the determinant and the minimal and characteristic polynomials of sparse or structured matrices at the same cost.
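
    For context, here is a Python sketch of the computation being certified, not of the certificate protocol itself: over a small prime field, project the Krylov sequence b, Ab, A^2 b, ... against a vector u, then recover the minimal polynomial of the resulting scalar sequence with the Berlekamp-Massey algorithm. The modulus p and the example matrix are arbitrary choices for illustration.

    p = 101  # a small prime; the field is an illustrative choice

    def krylov_sequence(A, u, b, length):
        """s_i = u . (A^i b) mod p, for i = 0 .. length-1 (A given as rows)."""
        s, v = [], b[:]
        for _ in range(length):
            s.append(sum(ui * vi for ui, vi in zip(u, v)) % p)
            v = [sum(a * vi for a, vi in zip(row, v)) % p for row in A]
        return s

    def berlekamp_massey(S):
        """Shortest recurrence [c_1..c_L] with s_n = c_1*s_{n-1} + ... + c_L*s_{n-L}
        (mod p); the minimal polynomial is x^L - c_1*x^{L-1} - ... - c_L."""
        n = len(S)
        C, B = [0] * n, [0] * n
        C[0] = B[0] = 1
        L, m, b = 0, 0, 1
        for i in range(n):
            m += 1
            d = S[i] % p
            for j in range(1, L + 1):
                d = (d + C[j] * S[i - j]) % p
            if d == 0:
                continue
            T = C[:]
            coef = d * pow(b, p - 2, p) % p  # field inverse via Fermat
            for j in range(m, n):
                C[j] = (C[j] - coef * B[j - m]) % p
            if 2 * L > i:
                continue
            L, B, b, m = i + 1 - L, T, d, 0
        return [(-c) % p for c in C[1:L + 1]]

    A = [[0, 1], [1, 1]]  # its minimal polynomial is x^2 - x - 1
    s = krylov_sequence(A, [1, 0], [0, 1], 8)  # 0, 1, 1, 2, 3, 5, 8, 13: enough terms (>= 2*deg)
    print(berlekamp_massey(s))  # [1, 1], i.e. x^2 - x - 1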