87 research outputs found

    Single-channel source separation using non-negative matrix factorization

    Functional Principal Component Analysis for Discretely Observed Functional Data and Sparse Fisher's Discriminant Analysis with Thresholded Linear Constraints

    We propose a new method to perform functional principal component analysis (FPCA) for discretely observed functional data by solving successive optimization problems. The new framework can be applied to both regularly and irregularly observed data, and to both dense and sparse data. Our method does not require estimates of the individual sample functions or of the covariance function; hence, it can be used to analyze functional data with multidimensional arguments (e.g. random surfaces) and can be applied to many processes and models with complicated or nonsmooth covariance functions. In our method, the smoothness of the eigenfunctions is controlled by imposing roughness penalties directly on the eigenfunctions, which makes tuning the smoothness more efficient and flexible. Efficient algorithms for solving the successive optimization problems are proposed. We provide the existence and characterization of the solutions to the successive optimization problems, and the consistency of our method is also proved. Through simulations, we demonstrate that our method performs well for smooth sample curves, for discontinuous sample curves with nonsmooth covariance, and for sample functions with two-dimensional arguments (random surfaces). We apply our method to classification of retinal pigment epithelial cells in eyes of mice and to longitudinal CD4 count data.

    In the second part of this dissertation, we propose a sparse Fisher's discriminant analysis method with thresholded linear constraints. Various regularized linear discriminant analysis (LDA) methods have been proposed to address the problems of LDA in high-dimensional settings, and asymptotic optimality has been established for some of them when there are only two classes. A difficulty in the asymptotic study of multiclass classification is that, in the two-class case, the classification boundary is a hyperplane and an explicit formula for the classification error exists, whereas with more than two classes the boundary is usually complicated and no explicit formula for the error generally exists. Another difficulty in proving asymptotic consistency and optimality for sparse Fisher's discriminant analysis is that the covariance matrix appears in the constraints of the optimization problems for higher-order components, and a general high-dimensional covariance matrix is not easy to estimate. We therefore propose a sparse Fisher's discriminant analysis method that avoids estimating the covariance matrix, and we provide asymptotic consistency results and the corresponding convergence rates for all components. To prove asymptotic optimality, we derive an asymptotic upper bound on the error of a general linear classification rule in the multiclass case, which we apply to our method to obtain its asymptotic optimality and the corresponding convergence rate. In the special case of two classes, our method achieves convergence rates that are the same as or better than those of the existing method. The proposed method is applied to multivariate functional data with wavelet transformations.
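
    The abstract describes controlling eigenfunction smoothness by imposing roughness penalties directly on the eigenfunctions. As a rough illustration of that general idea (not the dissertation's estimator, which avoids covariance estimation), the sketch below computes penalized principal components for curves observed on a common dense grid by solving a generalized eigenproblem with a second-difference roughness penalty; the matrix X, the penalty weight lam, and the common-grid assumption are all illustrative.

        # Minimal sketch: roughness-penalized FPCA on a shared dense grid.
        # Leading components maximize v' S v / (v' (I + lam*P) v), where S is the
        # pooled sample covariance and P penalizes second differences (roughness).
        import numpy as np
        from scipy.linalg import eigh

        def penalized_fpca(X, n_components=2, lam=1e-2):
            """X: (n_curves, n_grid) observations on a common grid (assumption)."""
            n, p = X.shape
            Xc = X - X.mean(axis=0)               # center pointwise across curves
            S = Xc.T @ Xc / n                     # sample covariance on the grid
            D = np.diff(np.eye(p), n=2, axis=0)   # second-difference operator
            P = D.T @ D                           # roughness penalty matrix
            vals, vecs = eigh(S, np.eye(p) + lam * P)     # generalized eigenproblem
            order = np.argsort(vals)[::-1][:n_components]
            V = vecs[:, order]
            return V / np.linalg.norm(V, axis=0, keepdims=True)   # unit-norm estimates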

    Nonparametric Estimation of Distributional Functionals and Applications.

    Distributional functionals are integrals of functionals of probability densities and include functionals such as information divergence, mutual information, and entropy. Distributional functionals have many applications in the fields of information theory, statistics, signal processing, and machine learning. Many existing nonparametric distributional functional estimators either have unknown convergence rates or are difficult to implement. In this thesis, we consider the problem of nonparametrically estimating functionals of distributions when only a finite population of independent and identically distributed samples is available from each of the unknown, smooth, d-dimensional distributions. We derive mean squared error (MSE) convergence rates for leave-one-out kernel density plug-in estimators and k-nearest neighbor estimators of these functionals. We then extend the theory of optimally weighted ensemble estimation to obtain estimators that achieve the parametric MSE convergence rate when the densities are sufficiently smooth. These estimators are simple to implement and do not require knowledge of the densities' support set, in contrast with many competing estimators. The asymptotic distribution of these estimators is also derived. The utility of these estimators is demonstrated through their application to sunspot image data and to neural data measured from epilepsy patients. Sunspot images are clustered by estimating the divergence between the underlying probability distributions of image pixel patches. The problem of overfitting is also addressed in both applications by performing dimensionality reduction via intrinsic dimension estimation and by benchmarking classification via Bayes error estimation.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133394/1/krmoon_1.pd
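
    The thesis analyzes leave-one-out kernel density plug-in estimators of distributional functionals such as entropy. As a hedged illustration of the plug-in idea only (fixed Gaussian kernel and bandwidth, no bias correction or ensemble weighting as developed in the thesis), a leave-one-out estimate of differential entropy can be sketched as follows; the bandwidth h and the sample layout are assumptions.

        # Minimal sketch: leave-one-out KDE plug-in estimator of differential entropy,
        # H(f) = -E[log f(X)], estimated by -(1/n) * sum_i log f_hat_{-i}(X_i).
        import numpy as np

        def loo_kde_entropy(X, h=0.5):
            """X: (n, d) i.i.d. samples; h: fixed kernel bandwidth (assumption)."""
            n, d = X.shape
            sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
            K = np.exp(-sq / (2 * h ** 2)) / ((2 * np.pi * h ** 2) ** (d / 2))
            np.fill_diagonal(K, 0.0)          # leave-one-out: drop the self term
            f_loo = K.sum(axis=1) / (n - 1)   # density estimate at X_i without X_i
            return -np.mean(np.log(f_loo))

        # True entropy of a 2-D standard Gaussian is (d/2)*log(2*pi*e), about 2.84.
        rng = np.random.default_rng(0)
        print(loo_kde_entropy(rng.standard_normal((2000, 2))))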

    Non-negative Matrix Factorization: Theory and Methods

    Generative-Discriminative Low Rank Decomposition for Medical Imaging Applications

    In this thesis, we propose a method for extracting biomarkers from medical images toward early diagnosis of abnormalities. The surge in demand for biomarkers and the availability of medical images in recent years call for accurate, repeatable, and interpretable approaches to extracting meaningful imaging features. However, extracting such information from medical images is a challenging task because the number of pixels (voxels) in a typical image is on the order of millions, while even a large sample size in a medical imaging dataset does not usually exceed a few hundred. Nevertheless, depending on the nature of an abnormality, only a parsimonious subset of voxels is typically relevant to the disease; therefore, various notions of sparsity are exploited in this thesis to improve the generalization performance of the prediction task. We propose a novel discriminative dimensionality reduction method that yields good classification performance on various datasets without compromising the clinical interpretability of the results. This is achieved by combining the modelling strength of the generative learning framework with the classification performance of the discriminative learning paradigm. Clinical interpretability can be viewed as an additional measure of evaluation and is also helpful in designing methods that account for clinical priors, such as the association of certain brain areas with a particular cognitive task or the connectivity of some brain regions via neural fibres. We formulate our method as a large-scale optimization problem that solves a constrained matrix factorization. Finding an optimal solution of the large-scale matrix factorization renders off-the-shelf solvers computationally prohibitive; therefore, we designed an efficient algorithm based on the proximal method to address the computational bottleneck of the optimization problem. Our formulation is readily extended to different scenarios, such as cases where a large cohort of subjects has uncertain or no class labels (semi-supervised learning) or where each subject has a battery of imaging channels (multi-channel). We show that by using various notions of sparsity as feasible sets of the optimization problem, we can encode different forms of prior knowledge, ranging from brain parcellation to brain connectivity.
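
    The abstract formulates the method as a constrained matrix factorization solved with a proximal algorithm. A generic sketch of that machinery (not the thesis's objective or constraint sets) is an alternating proximal-gradient solver for min 0.5*||X - D C||_F^2 + lam*||C||_1, where the L1 penalty stands in for one possible notion of sparsity and its proximal operator is elementwise soft-thresholding; the rank, lam, and iteration count below are illustrative.

        # Minimal sketch: alternating proximal-gradient steps for a sparsity-penalized factorization.
        import numpy as np

        def soft_threshold(A, t):
            return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)   # prox of t * ||.||_1

        def sparse_factorization(X, rank=10, lam=0.1, n_iter=200):
            rng = np.random.default_rng(0)
            n, p = X.shape
            D = rng.standard_normal((n, rank))
            C = rng.standard_normal((rank, p))
            for _ in range(n_iter):
                L_c = np.linalg.norm(D.T @ D, 2) + 1e-12          # Lipschitz constant of the C-gradient
                C = soft_threshold(C - (D.T @ (D @ C - X)) / L_c, lam / L_c)
                L_d = np.linalg.norm(C @ C.T, 2) + 1e-12          # Lipschitz constant of the D-gradient
                D = D - ((D @ C - X) @ C.T) / L_d                 # plain gradient step on D (no penalty)
            return D, C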

    Discriminant feature pursuit: from statistical learning to informative learning.

    Lin Dahua. Thesis (M.Phil.), Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 233-250). Abstracts in English and Chinese.

    Table of contents:
    Abstract
    Acknowledgement
    Chapter 1: Introduction
        1.1 The Problem We are Facing
        1.2 Generative vs. Discriminative Models
        1.3 Statistical Feature Extraction: Success and Challenge
        1.4 Overview of Our Works
            1.4.1 New Linear Discriminant Methods: Generalized LDA Formulation and Performance-Driven Subspace Learning
            1.4.2 Coupled Learning Models: Coupled Space Learning and Inter-Modality Recognition
            1.4.3 Informative Learning Approaches: Conditional Infomax Learning and Information Channel Model
        1.5 Organization of the Thesis
    Part I: History and Background
    Chapter 2: Statistical Pattern Recognition
        2.1 Patterns and Classifiers
        2.2 Bayes Theory
        2.3 Statistical Modeling
            2.3.1 Maximum Likelihood Estimation
            2.3.2 Gaussian Model
            2.3.3 Expectation-Maximization
            2.3.4 Finite Mixture Model
            2.3.5 A Nonparametric Technique: Parzen Windows
    Chapter 3: Statistical Learning Theory
        3.1 Formulation of Learning Model
            3.1.1 Learning: Functional Estimation Model
            3.1.2 Representative Learning Problems
            3.1.3 Empirical Risk Minimization
        3.2 Consistency and Convergence of Learning
            3.2.1 Concept of Consistency
            3.2.2 The Key Theorem of Learning Theory
            3.2.3 VC Entropy
            3.2.4 Bounds on Convergence
            3.2.5 VC Dimension
    Chapter 4: History of Statistical Feature Extraction
        4.1 Linear Feature Extraction
            4.1.1 Principal Component Analysis (PCA)
            4.1.2 Linear Discriminant Analysis (LDA)
            4.1.3 Other Linear Feature Extraction Methods
            4.1.4 Comparison of Different Methods
        4.2 Enhanced Models
            4.2.1 Stochastic Discrimination and Random Subspace
            4.2.2 Hierarchical Feature Extraction
            4.2.3 Multilinear Analysis and Tensor-based Representation
        4.3 Nonlinear Feature Extraction
            4.3.1 Kernelization
            4.3.2 Dimension Reduction by Manifold Embedding
    Chapter 5: Related Works in Feature Extraction
        5.1 Dimension Reduction
            5.1.1 Feature Selection
            5.1.2 Feature Extraction
        5.2 Kernel Learning
            5.2.1 Basic Concepts of Kernel
            5.2.2 The Reproducing Kernel Map
            5.2.3 The Mercer Kernel Map
            5.2.4 The Empirical Kernel Map
            5.2.5 Kernel Trick and Kernelized Feature Extraction
        5.3 Subspace Analysis
            5.3.1 Basis and Subspace
            5.3.2 Orthogonal Projection
            5.3.3 Orthonormal Basis
            5.3.4 Subspace Decomposition
        5.4 Principal Component Analysis
            5.4.1 PCA Formulation
            5.4.2 Solution to PCA
            5.4.3 Energy Structure of PCA
            5.4.4 Probabilistic Principal Component Analysis
            5.4.5 Kernel Principal Component Analysis
        5.5 Independent Component Analysis
            5.5.1 ICA Formulation
            5.5.2 Measurement of Statistical Independence
        5.6 Linear Discriminant Analysis
            5.6.1 Fisher's Linear Discriminant Analysis
            5.6.2 Improved Algorithms for Small Sample Size Problem
            5.6.3 Kernel Discriminant Analysis
    Part II: Improvement in Linear Discriminant Analysis
    Chapter 6: Generalized LDA
        6.1 Regularized LDA
            6.1.1 Generalized LDA Implementation Procedure
            6.1.2 Optimal Nonsingular Approximation
            6.1.3 Regularized LDA Algorithm
        6.2 A Statistical View: When is LDA Optimal?
            6.2.1 Two-class Gaussian Case
            6.2.2 Multi-class Cases
        6.3 Generalized LDA Formulation
            6.3.1 Mathematical Preparation
            6.3.2 Generalized Formulation
    Chapter 7: Dynamic Feedback Generalized LDA
        7.1 Basic Principle
        7.2 Dynamic Feedback Framework
            7.2.1 Initialization: K-Nearest Construction
            7.2.2 Dynamic Procedure
        7.3 Experiments
            7.3.1 Performance in Training Stage
            7.3.2 Performance on Testing Set
    Chapter 8: Performance-Driven Subspace Learning
        8.1 Motivation and Principle
        8.2 Performance-Based Criteria
            8.2.1 The Verification Problem and Generalized Average Margin
            8.2.2 Performance-Driven Criteria Based on Generalized Average Margin
        8.3 Optimal Subspace Pursuit
            8.3.1 Optimal Threshold
            8.3.2 Optimal Projection Matrix
            8.3.3 Overall Procedure
            8.3.4 Discussion of the Algorithm
        8.4 Optimal Classifier Fusion
        8.5 Experiments
            8.5.1 Performance Measurement
            8.5.2 Experiment Setting
            8.5.3 Experiment Results
            8.5.4 Discussion
    Part III: Coupled Learning of Feature Transforms
    Chapter 9: Coupled Space Learning
        9.1 Introduction
            9.1.1 What is Image Style Transform
            9.1.2 Overview of Our Framework
        9.2 Coupled Space Learning
            9.2.1 Framework of Coupled Modelling
            9.2.2 Correlative Component Analysis
            9.2.3 Coupled Bidirectional Transform
            9.2.4 Procedure of Coupled Space Learning
        9.3 Generalization to Mixture Model
            9.3.1 Coupled Gaussian Mixture Model
            9.3.2 Optimization by EM Algorithm
        9.4 Integrated Framework for Image Style Transform
        9.5 Experiments
            9.5.1 Face Super-resolution
            9.5.2 Portrait Style Transforms
    Chapter 10: Inter-Modality Recognition
        10.1 Introduction to the Inter-Modality Recognition Problem
            10.1.1 What is Inter-Modality Recognition
            10.1.2 Overview of Our Feature Extraction Framework
        10.2 Common Discriminant Feature Extraction
            10.2.1 Formulation of the Learning Problem
            10.2.2 Matrix Form of the Objective
            10.2.3 Solving the Linear Transforms
        10.3 Kernelized Common Discriminant Feature Extraction
        10.4 Multi-Mode Framework
            10.4.1 Multi-Mode Formulation
            10.4.2 Optimization Scheme
        10.5 Experiments
            10.5.1 Experiment Settings
            10.5.2 Experiment Results
    Part IV: A New Perspective: Informative Learning
    Chapter 11: Toward Information Theory
        11.1 Entropy and Mutual Information
            11.1.1 Entropy
            11.1.2 Relative Entropy (Kullback-Leibler Divergence)
        11.2 Mutual Information
            11.2.1 Definition of Mutual Information
            11.2.2 Chain Rules
            11.2.3 Information in Data Processing
        11.3 Differential Entropy
            11.3.1 Differential Entropy of Continuous Random Variables
            11.3.2 Mutual Information of Continuous Random Variables
    Chapter 12: Conditional Infomax Learning
        12.1 An Overview
        12.2 Conditional Informative Feature Extraction
            12.2.1 Problem Formulation and Features
            12.2.2 The Information Maximization Principle
            12.2.3 The Information Decomposition and the Conditional Objective
        12.3 The Efficient Optimization
            12.3.1 Discrete Approximation Based on AEP
            12.3.2 Analysis of Terms and Their Derivatives
            12.3.3 Local Active Region Method
        12.4 Bayesian Feature Fusion with Sparse Prior
        12.5 The Integrated Framework for Feature Learning
        12.6 Experiments
            12.6.1 A Toy Problem
            12.6.2 Face Recognition
    Chapter 13: Channel-based Maximum Effective Information
        13.1 Motivation and Overview
        13.2 Maximizing Effective Information
            13.2.1 Relation between Mutual Information and Classification
            13.2.2 Linear Projection and Metric
            13.2.3 Channel Model and Effective Information
            13.2.4 Parzen Window Approximation
        13.3 Parameter Optimization on Grassmann Manifold
            13.3.1 Grassmann Manifold
            13.3.2 Conjugate Gradient Optimization on Grassmann Manifold
            13.3.3 Computation of Gradient
        13.4 Experiments
            13.4.1 A Toy Problem
            13.4.2 Face Recognition
    Chapter 14: Conclusion