Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations
Fast linear transforms are ubiquitous in machine learning, including the
discrete Fourier transform, discrete cosine transform, and other structured
transformations such as convolutions. All of these transforms can be
represented by dense matrix-vector multiplication, yet each has a specialized
and highly efficient (subquadratic) algorithm. We ask to what extent
hand-crafting these algorithms and implementations is necessary, what
structural priors they encode, and how much knowledge is required to
automatically learn a fast algorithm for a provided structured transform.
Motivated by a characterization of fast matrix-vector multiplication as
products of sparse matrices, we introduce a parameterization of
divide-and-conquer methods that is capable of representing a large class of
transforms. This generic formulation can automatically learn an efficient
algorithm for many important transforms; for example, it recovers the
Cooley-Tukey FFT algorithm to machine precision. Furthermore, our method can be incorporated as a lightweight
replacement of generic matrices in machine learning pipelines to learn
efficient and compressible transformations. On a standard task of compressing a
single hidden-layer network, our method exceeds the classification accuracy of
unconstrained matrices on CIFAR-10 by 3.9 points---the first time a structured
approach has done so---with 4X faster inference speed and 40X fewer parameters.
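The sparse-product characterization mentioned above can be made concrete for the FFT: the Cooley-Tukey recursion writes the n x n DFT matrix as a product of O(log n) sparse "butterfly" factors with at most two nonzeros per row. A minimal NumPy sketch (this is an illustration of the classical factorization, not the paper's learned parameterization; the function names are ours):

```python
import numpy as np

def dft_matrix(n):
    """Dense n x n DFT matrix."""
    w = np.exp(-2j * np.pi / n)
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    return w ** (j * k)

def butterfly_factors(n):
    """Sparse factors of the Cooley-Tukey factorization
    F_n = B_n (I_2 kron F_{n/2}) P_n, expanded recursively, so the
    product of the returned matrices equals dft_matrix(n).
    Requires n to be a power of two."""
    if n == 1:
        return [np.ones((1, 1), dtype=complex)]
    m = n // 2
    # Even-odd permutation P_n: even-indexed inputs first, then odd.
    P = np.zeros((n, n))
    P[np.arange(m), np.arange(0, n, 2)] = 1
    P[np.arange(m, n), np.arange(1, n, 2)] = 1
    # Butterfly factor B_n = [[I, D], [I, -D]] with twiddle factors D.
    D = np.diag(np.exp(-2j * np.pi * np.arange(m) / n))
    I = np.eye(m)
    B = np.block([[I, D], [I, -D]])
    # Middle factor: block diagonal with two copies of F_{n/2},
    # obtained by tensoring each recursive factor with I_2.
    middle = [np.kron(np.eye(2), f) for f in butterfly_factors(m)]
    return [B] + middle + [P]
```

Multiplying the factors back together recovers the dense DFT matrix to machine precision, while each factor stays 2-sparse per row, which is the structural prior the learned parameterization exploits.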
Scaling Analysis of Affinity Propagation
We analyze and exploit some scaling properties of the Affinity Propagation
(AP) clustering algorithm proposed by Frey and Dueck (2007). First we observe
that a divide-and-conquer strategy, used hierarchically on a large data set,
reduces the quadratic complexity of AP to a subquadratic one that depends on the
data-set size and the depth of the hierarchical strategy. For a
data-set embedded in a d-dimensional space, we show that this is obtained
without notably damaging the precision except in dimension d = 2. In fact, for
d larger than 2 the relative loss in precision follows a power-law scaling in
the data-set size. Finally, under some conditions we observe that there is a
critical value of the penalty coefficient, a free parameter used to fix the number
of clusters, which separates a fragmentation phase from a
coalescent one of the underlying hidden cluster structure. At
this precise point holds a self-similarity property which can be exploited by
the hierarchical strategy to actually locate its position. From this
observation, a strategy based on AP can be defined to find out how many
clusters are present in a given dataset. Comment: 28 pages, 14 figures, Inria research report
A Lex-BFS-based recognition algorithm for Robinsonian matrices
Robinsonian matrices arise in the classical seriation problem and play an important role
in many applications where unsorted similarity (or dissimilarity) information must be re-
ordered. We present a new polynomial time algorithm to recognize Robinsonian matrices
based on a new characterization of Robinsonian matrices in terms of straight enumerations
of unit interval graphs. The algorithm is simple and is based essentially on lexicographic
breadth-first search (Lex-BFS), using a divide-and-conquer strategy. When applied to a non-
negative symmetric n × n matrix with m nonzero entries and given as a weighted adjacency
list, it runs in O(d(n + m)) time, where d is the depth of the recursion tree, which is at most
the number of distinct nonzero entries of the matrix.
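The Lex-BFS subroutine at the heart of the algorithm can be sketched via partition refinement: repeatedly take a vertex from the leftmost slice of an ordered partition, then split every remaining slice into neighbours (moved to the front) and non-neighbours. This simple list-based version is quadratic in the worst case, unlike the linear-time implementations the paper relies on; the function name and adjacency-dict input are ours:

```python
def lex_bfs(adj):
    """Lexicographic breadth-first search via partition refinement.
    adj maps each vertex to the set of its neighbours; returns a
    Lex-BFS ordering of the vertices."""
    slices = [list(adj)]  # ordered partition, leftmost slice first
    order = []
    while slices:
        v = slices[0].pop(0)          # next vertex: head of leftmost slice
        if not slices[0]:
            slices.pop(0)
        order.append(v)
        refined = []
        for s in slices:
            inside = [u for u in s if u in adj[v]]      # neighbours of v
            outside = [u for u in s if u not in adj[v]]
            # Neighbours of v are promoted ahead of the rest of their slice.
            if inside:
                refined.append(inside)
            if outside:
                refined.append(outside)
        slices = refined
    return order
```

On a unit interval graph a Lex-BFS ordering is a starting point for the straight enumerations the recognition algorithm builds on; for example, on a path graph the sweep visits the vertices in path order.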
Parallel Selected Inversion for Space-Time Gaussian Markov Random Fields
Performing a Bayesian inference on large spatio-temporal models requires
extracting inverse elements of large sparse precision matrices for marginal
variances. Although direct matrix factorizations can be used for the inversion,
such methods fail to scale well for distributed problems when run on large
computing clusters. On the contrary, Krylov subspace methods for the selected
inversion have been gaining traction. We propose a parallel hybrid approach
based on domain decomposition, which extends the Rao-Blackwellized Monte Carlo
estimator for distributed precision matrices. Our approach exploits the
strength of Krylov subspace methods as global solvers and efficiency of direct
factorizations as base case solvers to compute the marginal variances using a
divide-and-conquer strategy. By introducing subdomain overlaps, one can achieve
a greater accuracy at an increased computational effort with little to no
additional communication. We demonstrate the speed improvements on both
simulated models and a massive US daily temperature dataset. Comment: 17 pages, 7 figures
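The Rao-Blackwellized Monte Carlo estimator underlying the approach rests on the law of total variance: for a Gaussian with precision Q, Var(x_i) = 1/Q_ii + Var(E[x_i | x_-i]), so marginal variances can be estimated from samples via their conditional means. A single-process NumPy sketch on a toy tridiagonal precision matrix (no domain decomposition, Krylov solvers, or parallelism, which are the paper's contributions; the matrix and sample count are illustrative assumptions):

```python
import numpy as np

def rbmc_marginal_variances(Q, n_samples=4000, seed=0):
    """Rao-Blackwellized Monte Carlo estimate of diag(Q^{-1}) for a
    symmetric positive definite precision matrix Q."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    L = np.linalg.cholesky(Q)
    # Exact samples x ~ N(0, Q^{-1}): solve L^T x = z with z ~ N(0, I).
    Z = rng.standard_normal((n, n_samples))
    X = np.linalg.solve(L.T, Z)
    d = np.diag(Q)
    # Conditional mean of x_i given the rest: -(sum_{j!=i} Q_ij x_j) / Q_ii.
    M = -(Q @ X - d[:, None] * X) / d[:, None]
    # RBMC estimator: exact conditional variance + sample variance of
    # the conditional means.
    return 1.0 / d + M.var(axis=1)

# Toy 1D space(-time) GMRF: tridiagonal, diagonally dominant precision.
n = 30
Q = 2.2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```

Because the 1/Q_ii term is computed exactly and only the smaller conditional-mean term is sampled, the estimator has far lower variance than a plain Monte Carlo estimate of diag(Q^{-1}); the paper extends this idea to distributed precision matrices with overlapping subdomains.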