
    Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations

    Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of fast matrix-vector multiplication as products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms. This generic formulation can automatically learn an efficient algorithm for many important transforms; for example, it recovers the O(N log N) Cooley-Tukey FFT algorithm to machine precision, for dimensions N up to 1024. Furthermore, our method can be incorporated as a lightweight replacement of generic matrices in machine learning pipelines to learn efficient and compressible transformations. On a standard task of compressing a single hidden-layer network, our method exceeds the classification accuracy of unconstrained matrices on CIFAR-10 by 3.9 points (the first time a structured approach has done so), with 4X faster inference speed and 40X fewer parameters.
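    The recovered FFT factorization is easy to sketch. Below is a minimal NumPy illustration (not the authors' code; butterfly_fft is a hypothetical name) of the Cooley-Tukey recursion written as the sparse butterfly structure such a parameterization can learn: an even/odd permutation followed by a 2x2-block butterfly combine at each level.

```python
import numpy as np

def butterfly_fft(x):
    """Illustrative sketch: radix-2 Cooley-Tukey FFT expressed as the
    recursive sparse factorization a butterfly parameterization learns.
    Split into even/odd halves (a permutation), recurse, then combine
    with the butterfly matrix [[I, D], [I, -D]], D holding the twiddles."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = butterfly_fft(x[0::2])                    # permutation step
    odd = butterfly_fft(x[1::2])
    D = np.exp(-2j * np.pi * np.arange(n // 2) / n)  # diagonal twiddle factors
    return np.concatenate([even + D * odd, even - D * odd])  # butterfly combine

x = np.random.randn(1024)
assert np.allclose(butterfly_fft(x), np.fft.fft(x))  # machine precision at N = 1024
```

    Each level applies one sparse factor with O(N) nonzeros, so the length-N transform is a product of about log2(N) sparse matrices, giving the O(N log N) cost.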

    Scaling Analysis of Affinity Propagation

    We analyze and exploit some scaling properties of the Affinity Propagation (AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe that a divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity from O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d = 2. In fact, for d larger than 2 the relative loss in precision scales like N^((2-d)/(h+1)d). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent one (for s > s*) of the underlying hidden cluster structure. At this precise point a self-similarity property holds which can be exploited by the hierarchical strategy to actually locate its position. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given dataset.

    Comment: 28 pages, 14 figures, Inria research report
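    As an illustration of the h = 1 case, here is a hedged sketch using scikit-learn's AffinityPropagation as a stand-in AP solver (hierarchical_ap and the chunk sizes are assumptions, not the paper's code):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation  # stand-in AP solver

def hierarchical_ap(X, n_chunks, seed=0):
    """Illustrative depth-1 hierarchical AP: run AP on each chunk, then
    run AP once more on the collected exemplars.  With n_chunks ~ sqrt(N),
    each of the sqrt(N) per-chunk runs costs O(N), so the total drops
    from O(N^2) to O(N^(3/2)), the h = 1 case of O(N^((h+2)/(h+1)))."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    exemplars = []
    for chunk in np.array_split(idx, n_chunks):
        ap = AffinityPropagation(random_state=seed).fit(X[chunk])
        exemplars.extend(chunk[ap.cluster_centers_indices_])
    exemplars = np.asarray(exemplars)
    # Second level: cluster the exemplars themselves.
    top = AffinityPropagation(random_state=seed).fit(X[exemplars])
    return X[exemplars[top.cluster_centers_indices_]]

X = np.random.default_rng(1).normal(size=(400, 2))
centers = hierarchical_ap(X, n_chunks=20)  # 20 = sqrt(400) chunks
```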

    A Lex-BFS-based recognition algorithm for Robinsonian matrices

    Robinsonian matrices arise in the classical seriation problem and play an important role in many applications where unsorted similarity (or dissimilarity) information must be reordered. We present a new polynomial-time algorithm to recognize Robinsonian matrices, based on a new characterization of Robinsonian matrices in terms of straight enumerations of unit interval graphs. The algorithm is simple and is based essentially on lexicographic breadth-first search (Lex-BFS), using a divide-and-conquer strategy. When applied to a nonnegative symmetric n × n matrix A with m nonzero entries, given as a weighted adjacency list, it runs in O(d(n + m)) time, where d is the depth of the recursion tree, which is at most the number of distinct nonzero entries of A.
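    For intuition, the Robinson property itself (as opposed to recognizing it up to permutation) is straightforward to check; the naive O(n^3) sketch below is a hypothetical helper, not the paper's Lex-BFS algorithm:

```python
import numpy as np

def is_robinson(A):
    """Naive check of the Robinson property for a similarity matrix:
    entries never increase when moving away from the diagonal, i.e.
    A[i,k] <= min(A[i,j], A[j,k]) for all i < j < k.  A matrix is
    Robinsonian when some simultaneous row/column permutation makes it
    Robinson; finding that permutation is the hard part the Lex-BFS
    algorithm solves in O(d(n + m)) time."""
    n = len(A)
    return all(A[i][k] <= min(A[i][j], A[j][k])
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

# A 3 x 3 similarity matrix already in Robinson form.
A = np.array([[3, 2, 1],
              [2, 3, 2],
              [1, 2, 3]])
assert is_robinson(A)
```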

    Parallel Selected Inversion for Space-Time Gaussian Markov Random Fields

    Performing Bayesian inference on large spatio-temporal models requires extracting inverse elements of large sparse precision matrices for marginal variances. Although direct matrix factorizations can be used for the inversion, such methods fail to scale well for distributed problems when run on large computing clusters. On the contrary, Krylov subspace methods for the selected inversion have been gaining traction. We propose a parallel hybrid approach based on domain decomposition, which extends the Rao-Blackwellized Monte Carlo estimator for distributed precision matrices. Our approach exploits the strength of Krylov subspace methods as global solvers and the efficiency of direct factorizations as base-case solvers to compute the marginal variances using a divide-and-conquer strategy. By introducing subdomain overlaps, one can achieve greater accuracy at an increased computational effort with little to no additional communication. We demonstrate the speed improvements on both simulated models and a massive US daily temperature dataset.

    Comment: 17 pages, 7 figures
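    A dense, single-process sketch of the Rao-Blackwellized Monte Carlo idea with a block decomposition (names and the dense solves are illustrative assumptions; the paper pairs Krylov global solvers with per-subdomain direct factorizations and adds overlaps):

```python
import numpy as np

def rbmc_marginal_variances(Q, blocks, n_samples=500, seed=0):
    """Illustrative sketch of a Rao-Blackwellized Monte Carlo estimator
    for diag(Q^{-1}).  For each index block I with complement J,
        Var(x_I) = (Q_II)^{-1} + Var((Q_II)^{-1} Q_IJ x_J),
    so the first term is computed exactly with a direct factorization
    (the "base case solver") and only the coupling term is estimated
    from joint samples x ~ N(0, Q^{-1}).  Dense linear algebra stands
    in for the sparse, parallel solvers used in the paper."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    # Joint samples x ~ N(0, Q^{-1}) via a Cholesky factor of Q.
    L = np.linalg.cholesky(Q)
    X = np.linalg.solve(L.T, rng.standard_normal((n, n_samples)))
    var = np.empty(n)
    for I in blocks:
        J = np.setdiff1d(np.arange(n), I)
        Qii_inv = np.linalg.inv(Q[np.ix_(I, I)])
        M = Qii_inv @ Q[np.ix_(I, J)] @ X[J]        # conditional-mean samples
        var[I] = np.diag(Qii_inv) + M.var(axis=1)   # exact term + MC term
    return var

# Tiny demo: tridiagonal precision of an AR(1)-like field, 6 subdomains.
n = 60
Q = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.9), 1)
     + np.diag(np.full(n - 1, -0.9), -1))
blocks = np.array_split(np.arange(n), 6)
approx = rbmc_marginal_variances(Q, blocks)
exact = np.diag(np.linalg.inv(Q))
```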