852 research outputs found

    Exact Dimensionality Selection for Bayesian PCA

    We present a Bayesian model selection approach to estimate the intrinsic dimensionality of a high-dimensional dataset. To this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. In this context, we exhibit a closed-form expression of the marginal likelihood, which allows us to infer an optimal number of components. We also propose a heuristic based on the expected shape of the marginal likelihood curve in order to choose the hyperparameters. In non-asymptotic frameworks, we show on simulated data that this exact dimensionality selection approach is competitive with both Bayesian and frequentist state-of-the-art methods.
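    The abstract's closed-form normal-gamma marginal likelihood is not reproduced here, but the general recipe it improves on can be sketched: score each candidate dimensionality with a probabilistic-PCA profile log-likelihood (Tipping–Bishop) plus a BIC penalty, then take the argmax. The function name `select_dim` and the parameter counting are illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_dim(X):
    """Pick an intrinsic dimensionality for X by maximizing a
    BIC-penalized probabilistic-PCA log-likelihood (a generic
    baseline, not the paper's normal-gamma marginal likelihood)."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    # eigvalsh returns ascending eigenvalues; flip to descending
    evals = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]
    best_q, best_score = 1, -np.inf
    for q in range(1, d):
        sigma2 = evals[q:].mean()  # noise variance = mean of discarded eigenvalues
        ll = -0.5 * n * (d * np.log(2 * np.pi)
                         + np.log(evals[:q]).sum()
                         + (d - q) * np.log(sigma2) + d)
        k = d * q - q * (q - 1) / 2 + 1     # rough free-parameter count (loadings + noise)
        score = ll - 0.5 * k * np.log(n)    # BIC penalty
        if score > best_score:
            best_q, best_score = q, score
    return best_q
```

    On data simulated with a clear low-dimensional signal plus small isotropic noise, the penalized score peaks at the generating dimensionality.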

    A D.C. Programming Approach to the Sparse Generalized Eigenvalue Problem

    In this paper, we consider the sparse eigenvalue problem wherein the goal is to obtain a sparse solution to the generalized eigenvalue problem. We achieve this by constraining the cardinality of the solution to the generalized eigenvalue problem and obtain sparse principal component analysis (PCA), sparse canonical correlation analysis (CCA), and sparse Fisher discriminant analysis (FDA) as special cases. Unlike the ℓ1-norm approximation to the cardinality constraint, which previous methods have used in the context of sparse PCA, we propose a tighter approximation that is related to the negative log-likelihood of a Student's t-distribution. The problem is then framed as a d.c. (difference of convex functions) program and is solved as a sequence of convex programs by invoking the majorization-minimization method. The resulting algorithm is proved to exhibit global convergence behavior, i.e., for any random initialization, the sequence (subsequence) of iterates generated by the algorithm converges to a stationary point of the d.c. program. The performance of the algorithm is empirically demonstrated on both sparse PCA (finding few relevant genes that explain as much variance as possible in a high-dimensional gene dataset) and sparse CCA (cross-language document retrieval and vocabulary selection for music retrieval) applications.
    Comment: 40 pages
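    The core idea of a "tighter approximation" to the cardinality constraint can be illustrated numerically. The generic log surrogate below (a common stand-in for the Student's-t-style penalty the abstract describes; not necessarily the paper's exact form) converges to the true cardinality as its smoothing parameter shrinks, whereas the ℓ1 norm depends on the magnitudes of the nonzero entries.

```python
import numpy as np

def l1_norm(x):
    return np.abs(x).sum()

def log_surrogate(x, eps):
    # Smooth approximation to the cardinality ||x||_0: each nonzero
    # entry contributes roughly 1 as eps -> 0. A generic log-based
    # surrogate, hedged stand-in for the paper's Student's-t penalty.
    return np.sum(np.log1p(np.abs(x) / eps)) / np.log1p(1.0 / eps)

x = np.array([2.0, 0.5, 0.0, 0.0, 1.0])   # true cardinality 3
print(l1_norm(x))                 # 3.5 -- scale-dependent
print(log_surrogate(x, 1e-9))     # ~3.0 -- close to the true cardinality
```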

    EEG Based Inference of Spatio-Temporal Brain Dynamics


    Large-Scale Quasi-Bayesian Inference with Spike-and-Slab Priors

    This dissertation studies a general framework using spike-and-slab prior distributions to facilitate the development of high-dimensional Bayesian inference. Our framework allows inference with a general quasi-likelihood function to address scenarios where likelihood-based inference is infeasible or the underlying optimization problems are not the same as the data-generating mechanisms. We show that highly efficient and scalable Markov chain Monte Carlo (MCMC) algorithms can be easily constructed to sample from the resulting quasi-posterior distributions. We study the large-scale behavior of the resulting quasi-posterior distributions as the dimension of the parameter space grows, and we establish several convergence results. In large-scale applications where computational speed is important, variational approximation methods are often used to approximate posterior distributions. We show that the contraction behaviors of the quasi-posterior distributions can be exploited to provide theoretical guarantees for their variational approximations. We illustrate the theory with several examples. Finally, we develop a quasi-likelihood-based algorithm for estimation of Ising/Potts models that incorporates an inbuilt mechanism for parallel computation. We illustrate the usability of the method by analyzing 16 Personality Factors data under the setup of a five-level Potts model. The data analysis recovers known clusters of personality traits and also indicates plausible novel clusters.
    PhD thesis, Statistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163007/1/anwebha_1.pd
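    How a spike-and-slab prior induces variable selection can be seen in the simplest conjugate case: a normal mean with a point-mass spike at zero and a Gaussian slab, where the posterior inclusion probability is available in closed form. This is a toy illustration only; the dissertation's quasi-likelihood MCMC setting is far more general, and the names below are ours.

```python
import numpy as np

def normal_pdf(x, var):
    return np.exp(-0.5 * x * x / var) / np.sqrt(2 * np.pi * var)

def inclusion_prob(y, tau2=1.0, sigma2=1.0, w=0.5):
    """Posterior probability that theta != 0 under the spike-and-slab
    prior  theta ~ w*N(0, tau2) + (1-w)*delta_0  for y_i ~ N(theta, sigma2),
    computed from the marginal likelihood of the sample mean."""
    n, ybar = len(y), np.mean(y)
    m_spike = normal_pdf(ybar, sigma2 / n)        # marginal with theta = 0
    m_slab = normal_pdf(ybar, tau2 + sigma2 / n)  # marginal with the slab
    return w * m_slab / (w * m_slab + (1 - w) * m_spike)

rng = np.random.default_rng(0)
print(inclusion_prob(rng.normal(2.0, 1.0, size=50)))  # near 1: clear signal
print(inclusion_prob(np.zeros(50)))                    # ~0.12: spike favored
```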

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. Indeed, these techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques are also required to derive an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems to provide high-quality images under complex environments such as motion, undesirable lighting conditions, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle ill-posedness, which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of achieving a certain level of development, image deblurring, especially the blind case, is limited in its success by complex application conditions, which make the blur kernel hard to obtain and spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, as well as a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
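    The role of regularization in taming ill-posedness, which the review emphasizes, is easy to demonstrate with the classical non-blind baseline of Wiener deconvolution (a simple member of the Bayesian/variational family surveyed; shown in 1-D for brevity, with names of our choosing).

```python
import numpy as np

def wiener_deblur(y, kernel, lam=1e-3):
    """Non-blind deblurring by Wiener deconvolution. `lam` acts as a
    noise-to-signal regularizer: plain inverse filtering (lam = 0)
    blows up wherever the blur's transfer function is near zero."""
    n = len(y)
    H = np.fft.fft(kernel, n)                 # transfer function of the blur
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Blur a box signal with a circular moving-average kernel, then restore it.
x = np.zeros(64); x[20:30] = 1.0
k = np.ones(5) / 5.0
y = np.real(np.fft.ifft(np.fft.fft(k, 64) * np.fft.fft(x)))  # circular blur
x_hat = wiener_deblur(y, k, lam=1e-6)
print(np.max(np.abs(x_hat - x)))  # small reconstruction error
```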

    Deep-Learning Based Multiple-Model Bayesian Architecture for Spacecraft Fault Estimation

    This thesis presents recent findings regarding the performance of an intelligent architecture designed for spacecraft fault estimation. The approach incorporates a collection of systematically organized autoencoders within a Bayesian framework, enabling early detection and classification of various spacecraft faults such as reaction-wheel damage, sensor faults, and power system degradation. To assess the effectiveness of this architecture, a range of performance metrics is employed. Through extensive numerical simulations and in-lab experimental testing utilizing a dedicated spacecraft testbed, the capabilities and accuracy of the proposed intelligent architecture are analyzed. These evaluations provide valuable insights into the architecture's ability to detect and classify different types of faults in a spacecraft system. The study has successfully implemented an intelligent architecture for detecting and classifying faults in spacecraft. The architecture was analyzed through numerical simulations and experimental tests, demonstrating enhanced early detection capabilities. The incorporation of autoencoders and Bayesian methods proved to be a powerful combination, allowing the architecture to effectively capture and learn from complex spacecraft system dynamics and detect various types of faults. This research presents an advanced and reliable approach to early fault detection and classification in spacecraft systems, highlighting the potential of the intelligent architecture and paving the way for future developments in the field.
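    The detection principle behind autoencoder-based fault monitoring can be sketched with a linear stand-in: learn a low-rank reconstruction of nominal telemetry, then flag samples whose reconstruction error exceeds a threshold. The PCA "autoencoder", threshold rule, and injected fault below are our simplifications, not the thesis's deep architecture.

```python
import numpy as np

def fit_subspace(X_nominal, q=2):
    """Learn a rank-q linear 'autoencoder' (PCA basis) from nominal
    telemetry; a lightweight stand-in for trained deep autoencoders."""
    mu = X_nominal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_nominal - mu, full_matrices=False)
    return mu, Vt[:q]

def recon_error(x, mu, V):
    z = (x - mu) @ V.T          # encode onto the learned subspace
    x_hat = z @ V + mu          # decode back to sensor space
    return np.linalg.norm(x - x_hat)

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
W = rng.normal(size=(2, 6))
X = Z @ W + 0.01 * rng.normal(size=(200, 6))   # nominal data near a 2-D subspace
mu, V = fit_subspace(X, q=2)
threshold = 3 * max(recon_error(xi, mu, V) for xi in X)
fault = X[0] + np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])  # injected sensor fault
print(recon_error(X[0], mu, V) < threshold)   # nominal sample stays under threshold
print(recon_error(fault, mu, V) > threshold)  # fault is flagged
```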