
    A Comprehensive Approach for Sparse Principal Component Analysis using Regularized Singular Value Decomposition

    Principal component analysis (PCA) has been a widely used tool in statistics and data analysis for many years. A good PCA result should be both interpretable and accurate, yet neither is easily achieved in “big data” scenarios with large numbers of original variables. This motivated sparse PCA, in which each principal component (PC) is a linear combination of only a limited number of original variables, yielding good interpretability. Moreover, theoretical results show that when the underlying model is sparse, PCs obtained via sparse PCA, unlike those from traditional PCA, are consistent estimators. These properties have made sparse PCA an active research topic in recent years. In this dissertation, we develop a comprehensive and systematic framework for sparse PCA using an SVD-based approach. Specifically, we propose the formulation and algorithm and establish its consistency and convergence; we further show convergence to global optima using a limited number of trials, a breakthrough in the sparse PCA area. In addition, to guarantee orthogonality or uncorrelatedness when multiple PCs are extracted, we develop a method for sparse PCA with an orthogonality constraint, propose its algorithm, and prove its convergence. To handle missing values in the design matrix, which are common in practice, we develop a method for sparse PCA with missing values, propose its algorithm, and prove its convergence. Moreover, to select the tuning parameters in these formulations, we design an entry-wise cross-validation method built on sparse PCA with missing values. Together, these contributions make our results practically useful and theoretically complete. Simulation studies and real-world data analyses show that our method is competitive with existing approaches in complete-data cases and performs well in missing-data cases, for which ours is currently the only practical method.
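    The SVD-based alternating scheme the abstract refers to can be illustrated compactly. Below is a minimal NumPy sketch of rank-one sparse PCA in the regularized-SVD spirit, alternating a unit-norm score update with a soft-thresholded (hence sparse) loading update. The function names, the soft-threshold penalty, and the toy data are illustrative assumptions, not the dissertation's actual algorithm or its tuning-parameter method.

```python
import numpy as np

def soft_threshold(z, lam):
    """Entry-wise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_pca_rsvd(X, lam=0.5, n_iter=100, tol=1e-8):
    """Rank-one sparse PCA via a regularized-SVD-style alternation:
    X ~= u v^T with unit-norm score u and soft-thresholded loading v."""
    # Initialize from the ordinary leading singular pair.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0], s[0] * Vt[0]
    for _ in range(n_iter):
        v_new = soft_threshold(X.T @ u, lam)    # sparse loading update
        if not np.any(v_new):
            break                               # lam too large: all loadings zeroed
        u_new = X @ v_new
        u_new /= np.linalg.norm(u_new)          # unit-norm score update
        converged = np.linalg.norm(u_new - u) < tol
        u, v = u_new, v_new
        if converged:
            break
    return u, v / np.linalg.norm(v)

# Toy usage: a 100 x 20 matrix whose leading component involves 5 variables.
rng = np.random.default_rng(0)
B = np.zeros(20); B[:5] = 1.0
X = rng.standard_normal((100, 1)) @ B[None, :] + 0.1 * rng.standard_normal((100, 20))
u, v = sparse_pca_rsvd(X, lam=0.5)
print(np.nonzero(np.abs(v) > 1e-6)[0])  # should select roughly the first five variables
```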

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of approaches, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that this variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
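    The unifying update the abstract describes is compact enough to state directly: the weight change is proportional to the input times a nonlinearity applied to the neuron's output. A minimal single-neuron sketch follows; the cubic default nonlinearity, learning rate, and Oja-style normalization are illustrative assumptions here, since the paper's point is precisely that many choices of the nonlinearity lead to similar receptive fields.

```python
import numpy as np

def nonlinear_hebbian(X, f=lambda y: y**3, eta=1e-3, n_epochs=20, seed=0):
    """Single-neuron nonlinear Hebbian learning: dw ~ x * f(w . x).

    X: (n_samples, n_inputs) array of (ideally whitened) input patterns.
    With f(y) = y**3 on whitened inputs this corresponds to one form of
    independent component analysis; other f's fit the same template.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_inputs = X.shape
    w = rng.standard_normal(n_inputs)
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X[rng.permutation(n_samples)]:
            y = w @ x                   # neuronal activation
            w += eta * x * f(y)         # nonlinear Hebbian update
            w /= np.linalg.norm(w)      # normalization keeps w bounded (Oja-style)
    return w
```

    Applied to whitened natural-image patches, updates of this form tend to converge to localized, oriented receptive fields, which is the kind of a priori prediction the abstract refers to.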

    Probabilistic Modeling Paradigms for Audio Source Separation

    This is the author's final version of the article, first published as E. Vincent, M. G. Jafari, S. A. Abdallah, M. D. Plumbley, M. E. Davies. Probabilistic Modeling Paradigms for Audio Source Separation. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems. Chapter 7, pp. 162-185. IGI Global, 2011. ISBN 978-1-61520-919-4. DOI: 10.4018/978-1-61520-919-4.ch007. Most sound scenes result from the superposition of several sources, which can be separately perceived and analyzed by human listeners. Source separation aims to provide machine listeners with similar skills by extracting the sounds of individual sources from a given scene. Existing separation systems operate either by emulating the human auditory system or by inferring the parameters of probabilistic sound models. In this chapter, the authors focus on the latter approach and provide a joint overview of established and recent models, including independent component analysis, local time-frequency models, and spectral template-based models. They show that most models are instances of one of two general paradigms: linear modeling or variance modeling. They compare the merits of each paradigm and report objective performance figures. They conclude by discussing promising combinations of probabilistic priors and inference algorithms that could form the basis of future state-of-the-art systems.
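    As one concrete instance of the variance-modeling paradigm surveyed in the chapter, the sketch below factorizes a nonnegative spectrogram into spectral templates (NMF with multiplicative KL-divergence updates) and derives Wiener-style soft masks per source. This is just one illustrative model family among those discussed; the function names, update rule, and per-source template grouping are assumptions, not the chapter's reference implementation.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative-update NMF (KL divergence), V ~= W @ H.
    V: (freq, time) nonnegative magnitude/power spectrogram."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps     # spectral templates
    H = rng.random((rank, T)) + eps     # temporal activations
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

def wiener_masks(W, H, groups):
    """Soft masks giving each template group (one group per source) its
    share of every time-frequency bin, as in variance modeling."""
    WH = W @ H + 1e-10
    return [(W[:, g] @ H[g, :]) / WH for g in groups]
```

    Multiplying each mask with the complex STFT of the mixture and inverting yields the separated source estimates; how templates are grouped into sources is itself a modeling choice.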

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.

    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
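    Sparse coding as described here, representing a signal with a few dictionary elements, reduces to an l1-regularized least-squares problem. A minimal ISTA sketch follows, assuming a fixed dictionary D; in full dictionary learning one would alternate this coding step with dictionary updates. The function name and parameters are illustrative.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Sparse coding of signal x on dictionary D (columns = atoms) by
    solving min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

    Larger lam values drive more coefficients to exactly zero, trading reconstruction accuracy for a more compact code.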

    Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions

    Get PDF
    We develop a robust uncertainty principle for finite signals in C^N which states that, for almost all subsets T, W of {0, ..., N-1} such that |T| + |W| ~ (log N)^{-1/2} N, there is no signal f supported on T whose discrete Fourier transform is supported on W. In fact, we can make this uncertainty principle quantitative, in the sense that if f is supported on T, then only a small percentage of the energy (less than half, say) of its Fourier transform is concentrated on W. As an application of this quantitative robust uncertainty principle (QRUP), we consider the problem of decomposing a signal into a sparse superposition of spikes and complex sinusoids. We show that if a generic signal f has a decomposition using spike and frequency locations in T and W respectively, obeying |T| + |W| <= C (log N)^{-1/2} N, then this is the unique sparsest possible decomposition (all other decompositions have more non-zero terms). In addition, if |T| + |W| <= C (log N)^{-1} N, then this sparsest decomposition can be found by solving a convex optimization problem.

    Comment: 25 pages, 9 figures
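    The convex program mentioned at the end, l1 minimization over a combined spike/sinusoid dictionary (basis pursuit), can be sketched directly. The sketch below assumes the unconstrained lasso form with a small penalty as a stand-in for the equality-constrained problem, solved by ISTA with a complex soft-threshold; the function name, penalty, and iteration count are illustrative assumptions.

```python
import numpy as np

def spike_sinusoid_decompose(f, lam=1e-3, n_iter=2000):
    """Approximate sparse spike+sinusoid decomposition of f in C^N by
    minimizing 0.5*||f - A c||^2 + lam*||c||_1 with A = [I, unitary DFT]."""
    N = len(f)
    F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)   # columns = unit-norm sinusoids
    A = np.hstack([np.eye(N, dtype=complex), F])
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant (= 2 here)
    c = np.zeros(2 * N, dtype=complex)
    for _ in range(n_iter):
        z = c - A.conj().T @ (A @ c - f) / L
        mag = np.maximum(np.abs(z), 1e-12)
        c = z * np.maximum(1 - (lam / L) / mag, 0.0)  # complex soft threshold
    return c[:N], c[N:]                               # spike coeffs, frequency coeffs

# Toy usage: two spikes plus one sinusoid; the supports should be
# approximately recovered for such a sparse superposition.
N = 64
n = np.arange(N)
f = np.zeros(N, dtype=complex)
f[5] = 1.0; f[40] = -2.0
f += np.exp(2j * np.pi * 7 * n / N) / np.sqrt(N)
spikes, freqs = spike_sinusoid_decompose(f)
```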