
    Sparse Cholesky covariance parametrization for recovering latent structure in ordered data

    The sparse Cholesky parametrization of the inverse covariance matrix can be interpreted as a Gaussian Bayesian network; its counterpart, the covariance Cholesky factor, has so far received, with few notable exceptions, little attention, despite having a natural interpretation as a hidden-variable model for ordered signal data. To fill this gap, in this paper we focus on arbitrary zero patterns in the Cholesky factor of a covariance matrix. We discuss how these models can also be extended, in analogy with Gaussian Bayesian networks, to data where no apparent ordering is available. For the ordered scenario, we propose a novel estimation method based on matrix loss penalization, as opposed to the existing regression-based approaches. The performance of this sparse model for the Cholesky factor, together with our novel estimator, is assessed in a simulation setting, as well as on spatial and temporal real data where a natural ordering arises among the variables. Based on the empirical results, we give guidelines about which of the analysed methods is more appropriate for each setting. Comment: 24 pages, 12 figures
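The hidden-variable reading of a sparse covariance Cholesky factor can be sketched in a few lines of NumPy. This is a toy illustration with a hypothetical zero pattern, not the paper's penalized estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5

# Lower-triangular Cholesky factor of the covariance with an imposed
# (hypothetical) zero pattern below the diagonal.
L = np.tril(rng.normal(size=(p, p)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 0.5)  # positive diagonal
L[3, 0] = L[4, 0] = L[4, 1] = 0.0              # sparsity pattern

# Latent-variable interpretation for ordered data: x = L z with
# z ~ N(0, I), so each x_i mixes only the latent z_j permitted by
# the zero pattern, and Cov(x) = L @ L.T.
sigma = L @ L.T

# With a positive diagonal, L is recovered exactly as the Cholesky
# factor of the implied covariance (uniqueness of the factorization).
print(np.allclose(np.linalg.cholesky(sigma), L))  # → True
```

The zeros in row i of L say which earlier latent components do not feed variable i, which is the ordered-data structure the paper models.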

    A sparse decomposition of low rank symmetric positive semi-definite matrices

    Suppose that $A \in \mathbb{R}^{N \times N}$ is symmetric positive semidefinite with rank $K \le N$. Our goal is to decompose $A$ into $K$ rank-one matrices $\sum_{k=1}^K g_k g_k^T$, where the modes $\{g_k\}_{k=1}^K$ are required to be as sparse as possible. In contrast to the eigendecomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization, where $A$ is the covariance function, and is intractable to solve in general. In this paper, we partition the indices from $1$ to $N$ into several patches and propose to quantify the sparseness of a vector by the number of patches on which it is nonzero, which we call patch-wise sparseness. Our aim is to find the decomposition which minimizes the total patch-wise sparseness of the decomposed modes. We propose a domain-decomposition-type method, called intrinsic sparse mode decomposition (ISMD), which follows a "local-modes-construction + patching-up" procedure. The key step of the ISMD is to construct local pieces of the intrinsic sparse modes via a joint diagonalization problem. A pivoted Cholesky decomposition is then used to glue these local pieces together. Optimality of the sparse decomposition, consistency under different domain decompositions and robustness to small perturbations are proved under the so-called regular-sparse assumption (see Definition 1.2). We provide simulation results showing the efficiency and robustness of the ISMD, and compare it to other existing methods, e.g., the eigendecomposition, the pivoted Cholesky decomposition and a convex relaxation of sparse principal component analysis [25, 40].
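The pivoted-Cholesky ingredient named in the abstract can be illustrated with a minimal greedy implementation that decomposes a rank-K PSD matrix into K rank-one terms. This is a sketch of the standard algorithm only, not the ISMD's full local-modes + patching-up procedure; the matrix is a hypothetical random example:

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Greedy pivoted Cholesky: A ≈ sum_k g_k g_k^T, stopping at numerical rank."""
    R = A.astype(float).copy()   # residual matrix
    modes = []
    for _ in range(R.shape[0]):
        d = np.diag(R)
        j = int(np.argmax(d))    # pivot on the largest residual diagonal
        if d[j] <= tol:
            break                # residual is numerically zero: rank reached
        g = R[:, j] / np.sqrt(d[j])
        modes.append(g)
        R = R - np.outer(g, g)   # deflate the chosen rank-one term
    return modes

rng = np.random.default_rng(1)
N, K = 8, 3
B = rng.normal(size=(N, K))
A = B @ B.T                      # exact rank-K PSD matrix
modes = pivoted_cholesky(A)
approx = sum(np.outer(g, g) for g in modes)
print(len(modes), np.allclose(approx, A))
```

On an exactly rank-K matrix the loop terminates after K steps with an exact decomposition; the ISMD's contribution is to make the modes patch-wise sparse, which plain pivoted Cholesky does not attempt.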

    Bayesian Semiparametric Multivariate Density Deconvolution

    We consider the problem of multivariate density deconvolution, where the interest lies in estimating the distribution of a vector-valued random variable but precise measurements of the variable of interest are not available, the observations being contaminated with additive measurement errors. The existing sparse literature on the problem assumes the density of the measurement errors to be completely known. We propose robust Bayesian semiparametric multivariate deconvolution approaches for the case where the measurement error density is unknown but replicated proxies are available for each unobserved value of the random vector. Additionally, we allow the variability of the measurement errors to depend on the associated unobserved value of the vector of interest through unknown relationships, which automatically includes the case of multivariate multiplicative measurement errors. Basic properties of finite mixture models, multivariate normal kernels and exchangeable priors are exploited in many novel ways to meet the modeling and computational challenges. Theoretical results showing the flexibility of the proposed methods are provided. We illustrate the efficiency of the proposed methods in recovering the true density of interest through simulation experiments. The methodology is applied to estimate the joint consumption pattern of different dietary components from contaminated 24-hour recalls.
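Why replicated proxies matter can be seen in a small univariate simulation: with replicates W_ij = X_i + U_ij, the error variability is identified from within-subject variation even when the error law is unspecified. This is a toy moment-based sketch, not the paper's Bayesian semiparametric machinery, and all distributions and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 2000, 3                               # subjects, replicates per subject

x = rng.gamma(shape=4.0, scale=1.0, size=n)  # unobserved variable of interest
u = rng.normal(scale=0.8, size=(n, r))       # measurement errors (law "unknown")
w = x[:, None] + u                           # observed replicated proxies

# Within-subject variation identifies the error variance (true value 0.64)
# without any assumption on the error density beyond independence.
err_var = np.mean(np.var(w, axis=1, ddof=1))

# The variance of the latent X follows by subtracting the attenuation
# of the subject means (true value 4.0 for Gamma(4, 1)).
x_var = np.var(np.mean(w, axis=1), ddof=1) - err_var / r
print(round(err_var, 2), round(x_var, 2))
```

The paper goes far beyond these moment identities, estimating the full multivariate densities and letting the error variability depend on the unobserved value, but the replicate structure is what makes the error distribution learnable at all.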

    Compressing Positive Semidefinite Operators with Sparse/Localized Bases

    Given a positive semidefinite (PSD) operator, such as a PSD matrix, an elliptic operator with rough coefficients, a covariance operator of a random field, or the Hamiltonian of a quantum system, we would like to find its best finite-rank approximation of a given rank. One way to achieve this is to project the operator onto the eigenspace corresponding to its smallest or largest eigenvalues, depending on the setting. The eigenfunctions are typically global, i.e., nonzero almost everywhere, but our interest is in finding the sparsest or most localized bases for these subspaces. Sparse/localized basis functions lead to better physical interpretation and preserve some of the sparsity structure of the original operator. Moreover, they enable us to develop more efficient numerical algorithms for these problems. In this thesis, we present two methods for this purpose, namely the sparse operator compression (Sparse OC) and the intrinsic sparse mode decomposition (ISMD). The Sparse OC is a general strategy for constructing finite-rank approximations to PSD operators in which the range space of the approximation is spanned by a set of sparse/localized basis functions. The basis functions are energy-minimizing functions on local patches. When applied to approximate the solution operator of elliptic operators with rough coefficients and various homogeneous boundary conditions, the Sparse OC achieves the optimal convergence rate with nearly optimally localized basis functions. Our localized basis functions can be used as multiscale basis functions to solve elliptic equations with multiscale coefficients and provide the optimal convergence rate $O(h^k)$ for $2k$-th order elliptic problems in the energy norm. From the perspective of operator compression, these localized basis functions provide an efficient and optimal way to approximate the principal eigenspace of the elliptic operators.
    From the perspective of sparse PCA, we can approximate a large class of covariance functions by a rank-$n$ operator with a localized basis and with optimal accuracy. While the Sparse OC works well on the solution operator of elliptic operators, we also propose the ISMD, which works well on low-rank or nearly low-rank PSD operators. Given a rank-$n$ PSD operator, say an $N \times N$ PSD matrix $A$ ($n \le N$), the ISMD decomposes it into $n$ rank-one matrices $\sum_{i=1}^n g_i g_i^T$, where the modes $\{g_i\}_{i=1}^n$ are required to be as sparse as possible. Under the regular-sparse assumption (see Definition 1.3.2), we have proved that the ISMD gives the optimal patch-wise sparse decomposition and is stable to small perturbations of the matrix to be decomposed. We provide several applications in both the physical and data sciences to demonstrate the effectiveness of the proposed strategies.
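The eigenspace-projection baseline the thesis starts from (globally supported eigenfunctions, optimal rank-n error) can be sketched for a discretized covariance operator. The kernel and sizes here are hypothetical illustrations, not taken from the thesis:

```python
import numpy as np

N, n = 50, 5
t = np.linspace(0.0, 1.0, N)

# A PSD matrix discretizing a covariance operator (exponential kernel).
A = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)

# Rank-n compression by projecting onto the top-n eigenspace.
w, V = np.linalg.eigh(A)               # eigenvalues in ascending order
A_n = (V[:, -n:] * w[-n:]) @ V[:, -n:].T

# By the Eckart-Young theorem, the spectral-norm error of this projection
# equals the (n+1)-th largest eigenvalue. The eigenvectors, however, are
# global (nonzero almost everywhere), which is exactly what the
# sparse/localized bases of the Sparse OC and ISMD avoid.
err = np.linalg.norm(A - A_n, 2)
print(np.isclose(err, w[-n - 1]))  # → True
```

The thesis's point is that one can trade this globally optimal basis for localized basis functions on patches while retaining the optimal approximation rate.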

    Biostatistical modeling and analysis of combined fMRI and EEG measurements

    The purpose of brain mapping is to advance the understanding of the relationship between structure and function in the human brain. Several techniques---with different advantages and disadvantages---exist for recording neural activity. Functional magnetic resonance imaging (fMRI) has a high spatial resolution but a low temporal resolution. It also suffers from a low signal-to-noise ratio in event-related experimental designs, which are commonly used to investigate neuronal brain activity. On the other hand, the high temporal resolution of electroencephalography (EEG) recordings allows provoked event-related potentials to be captured. Although 3D maps derived by EEG source reconstruction methods have a low spatial resolution, they provide complementary information about the location of neuronal activity. There is a strong interest in combining data from both modalities to gain a deeper knowledge of brain functioning through advanced statistical modeling. In this thesis, a new Bayesian method is proposed for enhancing fMRI activation detection through the use of EEG-based spatial prior information in stimulus-based experimental paradigms. This method builds upon a newly developed fMRI-only activation detection method. In general, activation detection corresponds to stimulus predictor components having an effect on the fMRI signal trajectory in a voxelwise linear model. We model and analyze stimulus influence by a spatial Bayesian variable selection scheme, and extend existing high-dimensional regression methods by incorporating prior information on binary selection indicators via a latent probit regression. For fMRI-only activation detection, the predictor consists of a spatially varying intercept only. For the EEG-enhanced schemes, an EEG effect is added, which is chosen to be either spatially varying or constant. Spatially varying effects are regularized by different Markov random field priors.
    Statistical inference in the resulting high-dimensional hierarchical models becomes rather challenging, both from a modeling perspective and with regard to numerical issues. In this thesis, inference is based on a Markov chain Monte Carlo (MCMC) approach relying on global updates of the effect maps. Additionally, a faster algorithm based on single-site updates is developed to circumvent the computationally intensive, high-dimensional, sparse Cholesky decompositions. The proposed algorithms are examined in both simulation studies and real-world applications. Performance is evaluated in terms of convergence properties, the ability to produce interpretable results, and the sensitivity and specificity of the corresponding activation classification rules. The main question is whether the use of EEG information can increase the power of fMRI models to detect activated voxels. In summary, the new algorithms show a substantial increase in sensitivity compared to existing fMRI activation detection methods such as classical SPM. Carefully selected EEG prior information additionally increases sensitivity in activation regions that have been distorted by a low signal-to-noise ratio.
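The Cholesky step that the single-site algorithm circumvents can be sketched for a small Gaussian Markov random field prior. This is a dense toy version under assumed details: the 1D random-walk precision, the ridge, and the size are hypothetical, not the thesis's actual model:

```python
import numpy as np
from scipy.linalg import solve_triangular

m = 200
# Precision matrix Q of a 1D Markov random field (random-walk prior),
# made proper (invertible) with a small ridge on the diagonal.
Q = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1) + 0.01 * np.eye(m)

# Global update of an effect map: to draw x ~ N(0, Q^{-1}), factor
# Q = L L^T once and solve L^T x = z with z ~ N(0, I). The Cholesky
# factorization is the expensive step when the map is high-dimensional,
# which motivates the thesis's single-site alternative.
L = np.linalg.cholesky(Q)
rng = np.random.default_rng(4)
z = rng.normal(size=m)
x = solve_triangular(L.T, z, lower=False)
```

Since Cov(x) = L^{-T} L^{-1} = Q^{-1}, each such solve yields one exact joint draw of the whole map; single-site updates avoid the factorization at the cost of updating one coordinate at a time.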