190 research outputs found

    A Constrained EM Algorithm for Independent Component Analysis

    We introduce a novel way of performing independent component analysis using a constrained version of the expectation-maximization (EM) algorithm. The source distributions are modeled as D one-dimensional mixtures of Gaussians. The observed data are modeled as linear mixtures of the sources with additive, isotropic noise. This generative model is fit to the data using constrained EM. A simpler “soft-switching” approach is also introduced, which uses only one parameter to decide on the sub- or super-Gaussian nature of the sources. We explain how our approach relates to independent factor analysis.
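    A minimal sketch of the generative model described in this abstract, assuming illustrative values for the dimensions, mixture parameters, and noise level (D, K, N, sigma, and all random draws below are our choices, not the paper's): each of the D sources is sampled from its own one-dimensional mixture of Gaussians, and the observations are a linear mixture of the sources plus additive, isotropic noise. The constrained EM fit itself is the paper's contribution and is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes (assumptions, not from the paper):
    D, N = 3, 1000   # number of sources, number of samples
    K = 2            # Gaussian components per one-dimensional source mixture

    # Each source s_d is drawn from its own 1-D mixture of Gaussians.
    weights = rng.dirichlet(np.ones(K), size=D)   # mixing proportions per source
    means = rng.normal(0.0, 2.0, size=(D, K))     # component means
    stds = rng.uniform(0.5, 1.5, size=(D, K))     # component standard deviations

    S = np.empty((D, N))
    for d in range(D):
        comp = rng.choice(K, size=N, p=weights[d])        # component per sample
        S[d] = rng.normal(means[d, comp], stds[d, comp])  # sample that component

    # Observed data: linear mixture of the sources plus additive, isotropic noise.
    A = rng.normal(size=(D, D))                           # mixing matrix
    sigma = 0.1                                           # noise level (assumed)
    X = A @ S + sigma * rng.standard_normal((D, N))

    Constrained EM would then estimate the mixing matrix, the noise level, and the per-source mixture parameters from X alone.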

    Learning incoherent dictionaries for sparse approximation using iterative projections and rotations

    This work was supported by the Queen Mary University of London School Studentship; the EU FET-Open project FP7-ICT-225913-SMALL (Sparse Models, Algorithms and Learning for Large-scale data); and a Leadership Fellowship from the UK Engineering and Physical Sciences Research Council (EPSRC).

    Fast Dictionary Learning for Sparse Representations of Speech Signals

    © 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Published version: IEEE Journal of Selected Topics in Signal Processing 5(5): 1025-1031, Sep 2011. DOI: 10.1109/JSTSP.2011.2157892

    Online Embedding Compression for Text Classification using Low Rank Matrix Factorization

    Deep learning models have become the state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems imposes significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low-rank matrix factorization during training to compress the word embedding layer, which represents the size bottleneck for most NLP models. Our models are trained, compressed, and then further re-trained on the downstream task to recover accuracy while maintaining the reduced size. Empirically, we show that the proposed method can achieve 90% compression with minimal impact on accuracy for sentence classification tasks, and that it outperforms alternative methods such as fixed-point quantization and offline word embedding compression. We also analyze the inference time and storage space for our method through FLOP calculations, showing that we can compress DNN models by a configurable ratio and recover the accuracy loss without introducing additional latency compared to fixed-point quantization. Finally, we introduce a novel learning rate schedule, the Cyclically Annealed Learning Rate (CALR), which we empirically demonstrate to outperform other popular adaptive learning rate algorithms on a sentence classification benchmark.
    Comment: Accepted at the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019).
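    The core idea, replacing a V x d embedding matrix with a rank-r factorization A (V x r) times B (r x d), can be sketched as below. The class and parameter names are ours, and the paper's actual training/re-training pipeline and CALR schedule are not reproduced; this is a minimal PyTorch sketch under those assumptions.

    import torch
    import torch.nn as nn

    class FactorizedEmbedding(nn.Module):
        # Word embedding stored as two low-rank factors, E ~ A @ B,
        # cutting parameters from V*d to V*r + r*d (hypothetical names).
        def __init__(self, vocab_size: int, embed_dim: int, rank: int):
            super().__init__()
            self.A = nn.Embedding(vocab_size, rank)          # V x r lookup factor
            self.B = nn.Linear(rank, embed_dim, bias=False)  # r x d projection

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # Same result as indexing rows of the full product A @ B.
            return self.B(self.A(token_ids))

    # Illustrative sizes: V=50k, d=300, r=16 shrinks the embedding parameters
    # by roughly 95%.
    V, d, r = 50_000, 300, 16
    emb = FactorizedEmbedding(V, d, r)
    vecs = emb(torch.tensor([1, 42, 7]))                     # shape (3, 300)
    full_params, fact_params = V * d, V * r + r * d
    print(f"compression: {1 - fact_params / full_params:.1%}")  # -> 94.6%

    In the abstract's pipeline, the factors would then be further re-trained on the downstream task to recover the accuracy lost in compression.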