
    Barycentric Subspace Analysis on Manifolds

    This paper investigates the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. We first propose a new and general family of subspaces in manifolds that we call barycentric subspaces. They are implicitly defined as the locus of points which are weighted means of k+1 reference points. As this definition relies on points and not on tangent vectors, it can also be extended to geodesic spaces which are not Riemannian. For instance, in stratified spaces, it naturally allows principal subspaces that span several strata, which is impossible in previous generalizations of PCA. We show that barycentric subspaces locally define a submanifold of dimension k which generalizes geodesic subspaces. Second, we rephrase PCA in Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy of properly embedded linear subspaces of increasing dimension). We show that Euclidean PCA minimizes the Accumulated Unexplained Variance (AUV) over all the subspaces of the flag. Barycentric subspaces are naturally nested, allowing the construction of hierarchically nested subspaces. Optimizing the AUV criterion to optimally approximate data points with flags of affine spans in Riemannian manifolds leads to a particularly appealing generalization of PCA on manifolds called Barycentric Subspace Analysis (BSA). Comment: Annals of Statistics, Institute of Mathematical Statistics, to appear.
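    The defining object in this construction is the weighted mean of reference points (the Fréchet/Karcher mean on a manifold): a barycentric subspace is the locus of such means as the weights attached to the k+1 reference points vary. Below is a minimal sketch, assuming the unit sphere S^2 as the manifold and a standard fixed-point iteration for the weighted mean; the function names, step size, and reference points are illustrative choices, not taken from the paper.

```python
# Minimal sketch: one point of a barycentric locus on the unit sphere S^2,
# computed as a weighted Fréchet mean of k+1 = 3 reference points.
import numpy as np

def log_map(p, q):
    """Riemannian log map on the unit sphere: tangent vector at p pointing to q."""
    cos_t = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - cos_t * p
    return theta * v / np.linalg.norm(v)

def exp_map(p, v):
    """Riemannian exp map on the unit sphere."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def weighted_mean(points, weights, steps=100):
    """Weighted Fréchet mean of reference points on the sphere (fixed-point iteration)."""
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(steps):
        grad = sum(w * log_map(x, q) for w, q in zip(weights, points))
        x = exp_map(x, grad)
    return x

# Three reference points on S^2 and one choice of barycentric weights.
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
w = np.array([0.5, 0.3, 0.2])
print(weighted_mean(refs, w))
```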

    Bootstrap Multigrid for the Laplace-Beltrami Eigenvalue Problem

    This paper introduces bootstrap two-grid and multigrid finite element approximations to the Laplace-Beltrami (surface Laplacian) eigenproblem on a closed surface. The proposed multigrid method is suitable for recovering eigenvalues with large multiplicity, computing interior eigenvalues, and approximating the shifted indefinite eigenproblem. Convergence analysis is carried out for a simplified two-grid algorithm, and numerical experiments are presented to illustrate the basic components and ideas behind the overall bootstrap multigrid approach.
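    To convey the two-grid idea in the simplest possible setting, the sketch below substitutes a 1D finite-difference Dirichlet Laplacian for the surface Laplace-Beltrami operator (an assumption made purely for illustration): the smallest eigenpair is computed exactly on a coarse grid, interpolated to a fine grid, and refined there by a few inverse-iteration steps with a Rayleigh-quotient estimate. This is not the paper's bootstrap multigrid algorithm, only the basic coarse-solve / fine-refine pattern it builds on.

```python
# Two-grid eigenvalue sketch for a 1D Dirichlet Laplacian (stand-in for Laplace-Beltrami).
import numpy as np

def laplacian_1d(n):
    """3-point finite-difference Laplacian on n interior points of (0, 1), h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def prolongate(v_coarse):
    """Linear interpolation from a coarse grid (n points) to a fine grid (2n+1 points)."""
    n = v_coarse.size
    v_fine = np.zeros(2 * n + 1)
    v_fine[1::2] = v_coarse                              # coarse points carry over
    padded = np.concatenate(([0.0], v_coarse, [0.0]))    # zero Dirichlet boundary values
    v_fine[0::2] = 0.5 * (padded[:-1] + padded[1:])      # new points: average of neighbours
    return v_fine

# Coarse solve: smallest eigenpair of the coarse operator.
n_coarse = 31
_, V = np.linalg.eigh(laplacian_1d(n_coarse))
v = prolongate(V[:, 0])                                  # initial guess on the fine grid

# Fine-grid refinement: inverse iteration plus a Rayleigh-quotient eigenvalue estimate.
A_f = laplacian_1d(2 * n_coarse + 1)
for _ in range(3):
    v = np.linalg.solve(A_f, v)
    v /= np.linalg.norm(v)
lam = v @ A_f @ v

print(f"two-grid estimate: {lam:.6f}, continuous eigenvalue pi^2: {np.pi**2:.6f}")
```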

    Reiterative Minimum Mean Square Error Estimator for Direction of Arrival Estimation and Biomedical Functional Brain Imaging

    Two novel approaches are developed for direction-of-arrival (DOA) estimation and functional brain imaging estimation, denoted ReIterative Super-Resolution (RISR) and Source AFFine Image REconstruction (SAFFIRE), respectively. Both recursive approaches are based on a minimum mean-square error (MMSE) framework. The RISR estimator recursively determines an optimal filter bank by updating an estimate of the spatial power distribution at each successive stage. Unlike previous non-parametric covariance-based approaches, which require numerous time snapshots of data, RISR is a parametric approach and can therefore operate on as few as one time snapshot, yielding very high temporal resolution and robustness to the deleterious effects of temporal correlation. RISR has been found to resolve distinct spatial sources several times more finely than the nominal array resolution affords, even under conditions of temporally correlated sources and spatially colored noise. The SAFFIRE algorithm localizes the underlying neural activity in the brain based on the response of a patient to sensory stimuli, such as an auditory tone. The estimator processes electroencephalography (EEG) or magnetoencephalography (MEG) data simulated for sensors outside the patient's head in a recursive manner, converging closer to the true solution at each consecutive stage. The algorithm requires a minimal number of time samples to localize active neural sources, thereby enabling the observation of neural activity as it progresses over time. SAFFIRE has been applied to simulated MEG data and has been shown to achieve unprecedented spatial and temporal resolution. The estimation approach has also demonstrated the capability to precisely isolate the primary and secondary auditory cortex responses, a challenging problem in the brain MEG imaging community.
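    The heart of RISR, as described above, is a filter bank derived from an MMSE criterion and re-derived at each stage from the current spatial power estimate. The abstract does not give the update equations, so the sketch below is only a generic reiterative-MMSE DOA estimator on a uniform linear array with a single snapshot; the angle grid, filter formula, noise loading, and iteration count are assumptions made for illustration.

```python
# Generic reiterative-MMSE DOA sketch: one snapshot, half-wavelength uniform linear array.
import numpy as np

rng = np.random.default_rng(0)
M = 16                                               # number of sensors
angles = np.deg2rad(np.arange(-90, 90, 1.0))         # DOA search grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # steering matrix

# Single snapshot containing two sources (-20 and 5 degrees) plus noise.
true_idx = [70, 95]
x = A[:, true_idx] @ np.array([1.0, 0.7]) \
    + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

p = np.full(angles.size, np.vdot(x, x).real / M)     # initial flat power spectrum
sigma2 = 1e-3                                        # assumed diagonal noise loading
for _ in range(15):
    R = (A * p) @ A.conj().T + sigma2 * np.eye(M)    # model covariance from current powers
    W = np.linalg.solve(R, A) * p                    # MMSE filter bank: w_k = p_k R^{-1} a_k
    p = np.abs(W.conj().T @ x) ** 2                  # updated power estimate per grid angle

peaks = np.argsort(p)[-2:]
print("estimated DOAs (deg):", np.sort(np.rad2deg(angles[peaks])))
```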

    Efficient Clustering on Riemannian Manifolds: A Kernelised Random Projection Approach

    Reformulating computer vision problems over Riemannian manifolds has demonstrated superior performance in various computer vision applications. This is because visual data often forms a special structure lying on a lower-dimensional space embedded in a higher-dimensional space. However, since these manifolds belong to non-Euclidean topological spaces, exploiting their structure is computationally expensive, especially for the clustering analysis of massive amounts of data. To this end, we propose an efficient framework to address the clustering problem on Riemannian manifolds. This framework implements random projections for manifold points via a kernel space, which preserves the geometric structure of the original space while remaining computationally efficient. We introduce three methods that follow our framework. We then validate the framework on several computer vision applications by comparing against popular clustering methods on Riemannian manifolds. Experimental results demonstrate that our framework maintains clustering performance while reducing computational complexity massively, by over two orders of magnitude in some cases.
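    As a concrete illustration of the kernelised random projection idea (not necessarily the paper's exact construction), the sketch below clusters 3x3 SPD matrices: it builds a log-Euclidean RBF kernel between the manifold points, forms each projected coordinate as a random combination of the kernel-mapped points so that only the kernel matrix is ever needed, and then runs ordinary k-means on the projected features. The data generator, kernel choice, and projection dimension are assumptions.

```python
# Kernelised random projection sketch for clustering SPD matrices (illustrative only).
import numpy as np
from scipy.linalg import logm
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sample_spd(base_scale, n, d=3, noise=0.2):
    """Sample SPD matrices clustered around base_scale * I (SPD by construction)."""
    mats = []
    for _ in range(n):
        A = noise * rng.standard_normal((d, d))
        mats.append(base_scale * np.eye(d) + A @ A.T)
    return mats

X = sample_spd(1.0, 30) + sample_spd(5.0, 30)            # two clusters of SPD matrices
logs = np.array([logm(Mi).ravel() for Mi in X]).real     # log-Euclidean embedding

# Log-Euclidean RBF kernel between all pairs of manifold points.
sq_dists = np.sum((logs[:, None, :] - logs[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / (2.0 * np.median(sq_dists)))

# Kernelised random projection: each coordinate is a random combination of the
# kernel-mapped points, so the explicit feature map is never required.
n, d_proj = K.shape[0], 8
Omega = rng.standard_normal((n, d_proj)) / np.sqrt(n)
Z = K @ Omega

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels)
```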

    Inference for eigenvalues and eigenvectors of Gaussian symmetric matrices

    This article presents maximum likelihood estimators (MLEs) and log-likelihood ratio (LLR) tests for the eigenvalues and eigenvectors of Gaussian random symmetric matrices of arbitrary dimension, where the observations are independent repeated samples from one or two populations. These inference problems are relevant in the analysis of diffusion tensor imaging data and polarized cosmic background radiation data, where the observations are, respectively, 3×3 and 2×2 symmetric positive definite matrices. The parameter sets involved in the inference problems for eigenvalues and eigenvectors are subsets of Euclidean space that are either affine subspaces, embedded submanifolds that are invariant under orthogonal transformations, or polyhedral convex cones. We show that for a class of sets that includes the ones considered in this paper, the MLEs of the mean parameter do not depend on the covariance parameters if and only if the covariance structure is orthogonally invariant. Closed-form expressions for the MLEs and the associated LLRs are derived for this covariance structure. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/08-AOS628.
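    To make the orthogonal-invariance statement concrete: under an isotropic (hence orthogonally invariant) Gaussian model on symmetric matrices, the sample mean is the MLE of the mean matrix, and its eigendecomposition yields point estimates of the mean's eigenvalues and eigenvectors. The sketch below illustrates this for simulated 3x3 diffusion-tensor-like data; the noise model and sample size are assumptions, and the paper's closed-form MLEs and LLR tests are not reproduced.

```python
# Eigenvalue/eigenvector estimation sketch under an isotropic Gaussian symmetric-matrix model.
import numpy as np

rng = np.random.default_rng(0)

def sym(A):
    """Symmetrize a square matrix."""
    return 0.5 * (A + A.T)

# Ground truth: a diffusion-tensor-like mean with eigenvalues (3, 1, 0.5).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
mean_true = Q @ np.diag([3.0, 1.0, 0.5]) @ Q.T

# Independent repeated samples: mean plus symmetric Gaussian noise (isotropic model assumed).
samples = [mean_true + 0.1 * sym(rng.standard_normal((3, 3))) for _ in range(200)]

mean_hat = np.mean(samples, axis=0)                  # sample mean = MLE under this model
eigvals_hat, eigvecs_hat = np.linalg.eigh(mean_hat)  # estimated eigenvalues / eigenvectors

print("estimated eigenvalues:", np.round(eigvals_hat[::-1], 3))
```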