
    Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations

    Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of fast matrix-vector multiplication as products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms. This generic formulation can automatically learn an efficient algorithm for many important transforms; for example, it recovers the O(N log N) Cooley-Tukey FFT algorithm to machine precision, for dimensions N up to 1024. Furthermore, our method can be incorporated as a lightweight replacement of generic matrices in machine learning pipelines to learn efficient and compressible transformations. On a standard task of compressing a single hidden-layer network, our method exceeds the classification accuracy of unconstrained matrices on CIFAR-10 by 3.9 points (the first time a structured approach has done so) with 4x faster inference speed and 40x fewer parameters.
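
    The structural prior behind the FFT result is the radix-2 Cooley-Tukey factorization of the DFT into sparse butterfly factors, which is easy to write down explicitly. The NumPy sketch below is our own illustration, not the paper's learned parameterization, and the helper names (butterfly_factor, dft_from_butterflies) are invented; it assembles the N-point DFT matrix as a product of log2(N) butterfly factors and a bit-reversal permutation, then checks it against the dense DFT matrix to machine precision.

```python
import numpy as np

def butterfly_factor(m):
    """B_m = [[I, D], [I, -D]] with D = diag(w^0, ..., w^(m/2-1)), w = exp(-2*pi*i/m)."""
    h = m // 2
    D = np.diag(np.exp(-2j * np.pi * np.arange(h) / m))
    I = np.eye(h)
    return np.block([[I, D], [I, -D]])

def bit_reversal_perm(n):
    """Permutation matrix that reorders indices by reversing their bits."""
    bits = int(np.log2(n))
    perm = [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1
    return P

def dft_from_butterflies(n):
    """Assemble the DFT matrix as a product of log2(n) sparse butterfly factors
    times a bit-reversal permutation (radix-2 Cooley-Tukey), using dense
    matrices only for this sanity check."""
    bits = int(np.log2(n))
    F = bit_reversal_perm(n).astype(complex)
    for s in range(1, bits + 1):               # stages with block sizes 2, 4, ..., n
        m = 2 ** s
        stage = np.kron(np.eye(n // m), butterfly_factor(m))
        F = stage @ F
    return F

if __name__ == "__main__":
    n = 1024
    F_ref = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    F_bf = dft_from_butterflies(n)
    print(np.max(np.abs(F_bf - F_ref)))        # ~1e-12: recovered to machine precision
```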

    Reducing Memory Requirements for the IPU using Butterfly Factorizations

    High Performance Computing (HPC) has benefited from many improvements over the last decades, especially in terms of hardware platforms that provide more processing power while keeping power consumption at a reasonable level. The Intelligence Processing Unit (IPU) is a new type of massively parallel processor, designed to speed up parallel computations with a huge number of processing cores and on-chip memory components connected by high-speed fabrics. IPUs mainly target machine learning applications; however, due to the architectural differences between GPUs and IPUs, especially the significantly smaller memory capacity of an IPU, methods for reducing model size by sparsification have to be considered. Butterfly factorizations are well-known replacements for fully-connected and convolutional layers. In this paper, we examine how butterfly structures can be implemented on an IPU and study their behavior and performance compared to a GPU. Experimental results indicate that these methods can provide a 98.5% compression ratio to decrease the immense need for memory, and that the IPU implementation achieves 1.3x and 1.6x performance improvements for butterfly and pixelated butterfly, respectively. We also reach a 1.62x training-time speedup on a real-world dataset such as CIFAR-10.
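
    The abstract treats butterfly factorizations as drop-in replacements for dense layers. As a purely illustrative sketch (not the paper's IPU/Poplar implementation; the class name ButterflyLinear and its parameterization are our own assumptions), the NumPy code below applies a product of log2(n) butterfly factors, each built from learnable 2x2 blocks, so an n x n layer stores 2·n·log2(n) parameters instead of n².

```python
import numpy as np

class ButterflyLinear:
    """Minimal sketch of a butterfly replacement for a dense n x n layer."""

    def __init__(self, n, seed=0):
        assert n & (n - 1) == 0, "n must be a power of two"
        self.n = n
        self.stages = int(np.log2(n))
        rng = np.random.default_rng(seed)
        # one (n/2, 2, 2) array of learnable 2x2 "twiddle" blocks per stage
        self.weights = rng.standard_normal((self.stages, n // 2, 2, 2)) / np.sqrt(2)

    def __call__(self, x):
        """Apply the product of butterfly factors to a batch x of shape (batch, n)."""
        batch = x.shape[0]
        for s in range(self.stages):
            stride = 2 ** s
            # pair index i with index i + stride inside each block of size 2*stride
            y = x.reshape(batch, self.n // (2 * stride), 2, stride)
            top, bot = y[:, :, 0, :], y[:, :, 1, :]
            w = self.weights[s].reshape(self.n // (2 * stride), stride, 2, 2)
            new_top = w[..., 0, 0] * top + w[..., 0, 1] * bot
            new_bot = w[..., 1, 0] * top + w[..., 1, 1] * bot
            x = np.stack([new_top, new_bot], axis=2).reshape(batch, self.n)
        return x

layer = ButterflyLinear(1024)
out = layer(np.random.default_rng(1).standard_normal((8, 1024)))
print(out.shape, layer.weights.size, 1024 * 1024)  # (8, 1024), 20480 params vs 1048576
```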

    Flexible Multi-layer Sparse Approximations of Matrices and Applications

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail, and then demonstrated experimentally on various problems including dictionary learning for image denoising and the approximation of large matrices arising in inverse problems.
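
    The core idea, approximating a matrix by a product of a few sparse factors, can be conveyed with a much cruder method than the paper's. The sketch below (our own simplification, not the paper's PALM-based hierarchical algorithm; all function names are invented, and recovery quality is not guaranteed) runs alternating projected gradient descent on the factors, hard-thresholding each one to a fixed sparsity level after every update.

```python
import numpy as np

def prod(mats, n):
    """Dense product of a list of n x n matrices (identity for an empty list)."""
    out = np.eye(n)
    for M in mats:
        out = out @ M
    return out

def hard_threshold(M, k):
    """Projection onto k-sparse matrices: keep the k largest-magnitude entries."""
    if k >= M.size:
        return M
    thresh = np.partition(np.abs(M).ravel(), -k)[-k]
    return M * (np.abs(M) >= thresh)

def multilayer_sparse_approx(A, num_factors=3, sparsity=0.1, iters=300, seed=0):
    """Naive sketch: alternating projected gradient descent on
    ||A - S_1 ... S_J||_F^2 with a sparsity projection after each step."""
    n = A.shape[0]
    k = max(1, int(sparsity * n * n))              # nonzeros allowed per factor
    rng = np.random.default_rng(seed)
    S = [np.eye(n) + 0.01 * rng.standard_normal((n, n)) for _ in range(num_factors)]
    for _ in range(iters):
        for j in range(num_factors):
            L, R = prod(S[:j], n), prod(S[j + 1:], n)
            grad = L.T @ (L @ S[j] @ R - A) @ R.T
            step = 1.0 / ((np.linalg.norm(L, 2) * np.linalg.norm(R, 2)) ** 2 + 1e-12)
            S[j] = hard_threshold(S[j] - step * grad, k)
    return S

# Toy usage: plant a product of sparse factors and try to re-approximate it.
rng = np.random.default_rng(1)
n = 32
planted = [hard_threshold(rng.standard_normal((n, n)), 4 * n) for _ in range(3)]
A = prod(planted, n)
S = multilayer_sparse_approx(A, num_factors=3, sparsity=4 / n, iters=300)
print(np.linalg.norm(A - prod(S, n)) / np.linalg.norm(A))   # relative approximation error
```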

    Efficient Identification of Butterfly Sparse Matrix Factorizations

    Fast transforms correspond to factorizations of the form Z = X^(1) ... X^(J), where each factor X^(ℓ) is sparse and possibly structured. This paper investigates essential uniqueness of such factorizations, i.e., uniqueness up to unavoidable scaling ambiguities. Our main contribution is to prove that any N × N matrix having the so-called butterfly structure admits an essentially unique factorization into J butterfly factors (where N = 2^J), and that the factors can be recovered by a hierarchical factorization method, which consists in recursively factorizing the considered matrix into two factors. This hierarchical identifiability property relies on a simple identifiability condition in the two-layer and fixed-support setting. This approach contrasts with existing ones that fit the product of butterfly factors to a given matrix via gradient descent. The proposed method can be applied in particular to retrieve the factorization of the Hadamard or the discrete Fourier transform matrices of size N = 2^J. Computing such factorizations costs O(N^2), which is of the order of dense matrix-vector multiplication, while the obtained factorizations enable fast O(N log N) matrix-vector multiplications and have the potential to be applied to compress deep neural networks.
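
    The Hadamard case mentioned in the abstract is easy to check numerically. The sketch below does not implement the paper's hierarchical identification algorithm (the function names are ours); it only builds the J butterfly factors of the Walsh-Hadamard matrix H_N, verifies that their product equals H_N, and applies them one factor at a time, which is how the factorization yields an O(N log N) transform when the factors are stored sparsely.

```python
import numpy as np
from functools import reduce

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

def hadamard_butterfly_factors(J):
    """Butterfly factors X^(l) = I_{2^(l-1)} ⊗ H2 ⊗ I_{2^(J-l)}, l = 1..J.
    Each N x N factor (N = 2^J) has exactly two nonzeros per row and column,
    arranged in the block-diagonal butterfly support."""
    return [np.kron(np.kron(np.eye(2 ** (l - 1)), H2), np.eye(2 ** (J - l)))
            for l in range(1, J + 1)]

def factored_matvec(factors, x):
    """Apply Z = X^(1) ... X^(J) to x one factor at a time; dense matrices are
    used here only for brevity."""
    for X in reversed(factors):   # the rightmost factor acts on x first
        x = X @ x
    return x

J = 8
N = 2 ** J
factors = hadamard_butterfly_factors(J)
Z = reduce(np.matmul, factors)
H = reduce(np.kron, [H2] * J)                     # the Walsh-Hadamard matrix H_N
x = np.random.default_rng(0).standard_normal(N)
print(np.max(np.abs(Z - H)))                      # 0.0: the butterfly product is H_N
print(np.max(np.abs(Z @ x - factored_matvec(factors, x))))  # dense and factored matvecs agree
```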

    Learning computationally efficient dictionaries and their implementation as fast transforms

    Dictionary learning is a branch of signal processing and machine learning that aims at finding a frame (called a dictionary) in which some training data admits a sparse representation. The sparser the representation, the better the dictionary. The resulting dictionary is in general a dense matrix, and its manipulation can be computationally costly both at the learning stage and later in the usage of this dictionary, for tasks such as sparse coding. Dictionary learning is thus limited to relatively small-scale problems. In this paper, inspired by standard fast transforms, we consider a general dictionary structure that allows cheaper manipulation, and propose an algorithm to learn such dictionaries (and their fast implementation) over training data. The approach is demonstrated experimentally with the factorization of the Hadamard matrix and with synthetic dictionary learning experiments.
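
    The "cheaper manipulation" argument is a simple operation count: a dense n x n dictionary costs about n² multiplications per product with a vector, while a dictionary stored as a few sparse factors costs roughly the total number of nonzeros across the factors. The SciPy sketch below illustrates this accounting only; it is not the paper's learning algorithm, and the sizes and densities are arbitrary assumptions.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n, J = 4096, 4
density = 4.0 / n                                  # about 4 nonzeros per row in each factor
factors = [sparse.random(n, n, density=density, format="csr", random_state=s)
           for s in range(J)]

x = rng.standard_normal(n)
y = x
for S in reversed(factors):                        # apply D = S_1 ... S_J without forming D
    y = S @ y

dense_cost = n * n                                 # multiplications for a dense matvec
fast_cost = sum(S.nnz for S in factors)            # multiplications for the factored matvec
print(f"dense: {dense_cost:,} mults  factored: {fast_cost:,} mults "
      f"(~{dense_cost / fast_cost:.0f}x cheaper)")
```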

