
    Manifold interpolation and model reduction

    One approach to parametric and adaptive model reduction is via the interpolation of orthogonal bases, subspaces or positive definite system matrices. In all these cases, the sampled inputs stem from matrix sets that feature a geometric structure and thus form so-called matrix manifolds. This work will be featured as a chapter in the upcoming Handbook on Model Order Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A. Schilders, L.M. Silveira, eds., to appear with DE GRUYTER) and reviews the numerical treatment of the most important matrix manifolds that arise in the context of model reduction. Moreover, the principal approaches to data interpolation and Taylor-like extrapolation on matrix manifolds are outlined and complemented by algorithms in pseudo-code. Comment: 37 pages, 4 figures, featured chapter of the upcoming "Handbook on Model Order Reduction".
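
    As a small illustration of the kind of technique the chapter reviews, the sketch below interpolates between two symmetric positive definite (SPD) system matrices along the affine-invariant geodesic, so the interpolant remains SPD. The matrices A and B and the weight t are hypothetical stand-ins for sampled system data, not an algorithm taken from the chapter.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm, inv

def spd_geodesic(A, B, t):
    """Point at weight t on the affine-invariant geodesic from SPD matrix A to B."""
    A_half = sqrtm(A)
    A_half_inv = inv(A_half)
    # Lift B to the tangent space at A, scale by t, and map back with expm.
    L = logm(A_half_inv @ B @ A_half_inv)
    return (A_half @ expm(t * L) @ A_half).real

# Two hypothetical sampled SPD system matrices.
rng = np.random.default_rng(0)
M, N = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)
B = N @ N.T + 4 * np.eye(4)

C = spd_geodesic(A, B, 0.5)
print(np.linalg.eigvalsh(C))   # the interpolant stays positive definite
```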

    Geometry-Aware Neighborhood Search for Learning Local Models for Image Reconstruction

    Local learning of sparse image models has proven to be very effective for solving inverse problems in many computer vision applications. To learn such models, the data samples are often clustered using the K-means algorithm with the Euclidean distance as a dissimilarity metric. However, the Euclidean distance may not always be a good dissimilarity measure for comparing data samples lying on a manifold. In this paper, we propose two algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, where we take into account the underlying geometry of the data. The first algorithm, called Adaptive Geometry-driven Nearest Neighbor search (AGNN), is an adaptive scheme which can be seen as an out-of-sample extension of the replicator graph clustering method for local model learning. The second method, called Geometry-driven Overlapping Clusters (GOC), is a less complex nonadaptive alternative for training subset selection. The proposed AGNN and GOC methods are evaluated in image super-resolution, deblurring and denoising applications and are shown to outperform spectral clustering, soft clustering, and geodesic distance based subset selection in most settings. Comment: 15 pages, 10 figures and 5 tables
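
    The geometry-aware idea can be illustrated with a toy sketch: build a k-nearest-neighbour graph on the training samples and rank neighbours of a query sample by graph shortest-path (geodesic) distance instead of plain Euclidean distance. The spiral data and the value of k below are illustrative assumptions; this is not the AGNN or GOC algorithm itself, only the underlying notion of respecting the data manifold when selecting a local training subset.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def knn_graph(X, k):
    """Sparse symmetric k-nearest-neighbour graph weighted by Euclidean distance."""
    D = cdist(X, X)
    n = len(X)
    rows, cols, vals = [], [], []
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # skip the point itself
        rows.extend([i] * k)
        cols.extend(nbrs)
        vals.extend(D[i, nbrs])
    G = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return G.maximum(G.T)                  # symmetrise

# Toy manifold data: points on a noisy spiral (stand-in for image patches).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 4 * np.pi, 300))
X = np.c_[t * np.cos(t), t * np.sin(t)] + 0.05 * rng.standard_normal((300, 2))

G = knn_graph(X, k=8)
geo = dijkstra(G, directed=False, indices=[0])[0]   # geodesic distances from sample 0
eu = np.linalg.norm(X - X[0], axis=1)               # Euclidean distances from sample 0

print("10 nearest by geodesic distance:", np.argsort(geo)[:10])
print("10 nearest by Euclidean distance:", np.argsort(eu)[:10])
```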

    Including parameter dependence in the data and covariance for cosmological inference

    The final step of most large-scale structure analyses involves the comparison of power spectra or correlation functions to theoretical models. It is clear that the theoretical models have parameter dependence, but frequently the measurements and the covariance matrix depend upon some of the parameters as well. We show that a very simple interpolation scheme from an unstructured mesh allows for an efficient way to include this parameter dependence self-consistently in the analysis at modest computational expense. We describe two schemes for covariance matrices. The scheme which uses the geometric structure of such matrices performs roughly twice as well as the simplest scheme, though both perform very well. Comment: 17 pages, 4 figures, matches version published in JCA
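
    As a rough illustration of interpolating a parameter-dependent covariance matrix while respecting its positive definite structure, the sketch below blends the matrix logarithms of two sampled covariances (log-Euclidean interpolation). The matrices and the parameter weight are made up, and this is only one simple geometry-aware scheme, not necessarily the one the paper advocates.

```python
import numpy as np
from scipy.linalg import expm, logm

def interp_covariance(C0, C1, w):
    """Log-Euclidean interpolation: expm((1 - w) * logm(C0) + w * logm(C1))."""
    L = (1.0 - w) * logm(C0) + w * logm(C1)
    return expm(L).real

# Covariance matrices sampled at two parameter values (hypothetical).
C0 = np.array([[2.0, 0.3], [0.3, 1.0]])
C1 = np.array([[1.0, -0.2], [-0.2, 3.0]])

C_half = interp_covariance(C0, C1, 0.5)
print(C_half)
print("eigenvalues:", np.linalg.eigvalsh(C_half))   # remains positive definite
```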

    Doctor of Philosophy in Computing

    An important area of medical imaging research is studying anatomical diffeomorphic shape changes and detecting their relationship to disease processes. For example, neurodegenerative disorders change the shape of the brain; thus, identifying differences between healthy control subjects and patients affected by these diseases can help with understanding the disease processes. Previous research has proposed a variety of mathematical approaches for statistical analysis of geometrical brain structure in three-dimensional (3D) medical imaging, including atlas building, brain variability quantification, regression, etc. The critical component in these statistical models is that the geometrical structure is represented by transformations rather than the actual image data. Despite the fact that such statistical models effectively provide a way for analyzing shape variation, none of them have a truly probabilistic interpretation. This dissertation contributes a novel Bayesian framework for statistical shape analysis of generic manifold data and its application to shape variability and brain magnetic resonance imaging (MRI). After carefully defining distributions on manifolds, we build Bayesian models for analyzing the intrinsic variability of manifold data, involving the mean point, principal modes, and parameter estimation. Because there is no closed-form solution for Bayesian inference of these models on manifolds, we develop a Markov Chain Monte Carlo method to sample the hidden variables from the distribution. The main advantages of these Bayesian approaches are that they provide parameter estimation and automatic dimensionality reduction for analyzing generic manifold-valued data, such as diffeomorphisms. Modeling the mean point of a group of images in a Bayesian manner allows for learning the regularity parameter directly from data rather than having to set it manually, which eliminates the effort of cross validation for parameter selection. In population studies, our Bayesian model of principal modes analysis (1) automatically extracts low-dimensional, second-order statistics of manifold data variability and (2) gives a better geometric data fit than nonprobabilistic models. To make this Bayesian framework computationally more efficient for high-dimensional diffeomorphisms, this dissertation presents an algorithm, FLASH (finite-dimensional Lie algebras for shooting), that dramatically speeds up diffeomorphic image registration. Instead of formulating diffeomorphisms in a continuous variational problem, FLASH defines a completely new discrete reparameterization of diffeomorphisms in a low-dimensional bandlimited velocity space, which makes Bayesian inference via sampling on the space of diffeomorphisms computationally feasible. The entire Bayesian framework in this dissertation is used for statistical analysis of shape data and brain MRIs. It has the potential to improve hypothesis testing, classification, and mixture models.
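
    The "mean point" estimation mentioned above can be illustrated in the simplest manifold setting: the sketch below computes the intrinsic (Fréchet/Karcher) mean of points on the unit sphere by repeatedly averaging in the tangent space via the sphere's log map and stepping back with its exp map. The toy data and step size are assumptions for illustration; the dissertation's Bayesian sampler and diffeomorphism setting are far richer than this.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: walk from p along tangent vector v."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return p
    return np.cos(norm) * p + np.sin(norm) * v / norm

def log_map(p, q):
    """Logarithm map on the unit sphere: tangent vector at p pointing toward q."""
    cos_theta = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    u = q - cos_theta * p
    norm = np.linalg.norm(u)
    return np.zeros_like(p) if norm < 1e-12 else theta * u / norm

def karcher_mean(points, iters=50, step=1.0):
    """Gradient descent for the intrinsic mean: average log maps, step with exp map."""
    mu = points[0].copy()
    for _ in range(iters):
        grad = np.mean([log_map(mu, q) for q in points], axis=0)
        mu = exp_map(mu, step * grad)
    return mu

# Toy data: noisy directions clustered around the north pole (hypothetical).
rng = np.random.default_rng(2)
pts = rng.normal([0, 0, 1], 0.1, size=(100, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(karcher_mean(pts))
```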

    Statistical Computing on Non-Linear Spaces for Computational Anatomy

    Computational anatomy is an emerging discipline that aims at analyzing and modeling the individual anatomy of organs and their biological variability across a population. However, understanding and modeling the shape of organs is made difficult by the absence of physical models for comparing different subjects, the complexity of shapes, and the high number of degrees of freedom implied. Moreover, the geometric nature of the anatomical features usually extracted raises the need for statistics on objects like curves, surfaces and deformations that do not belong to standard Euclidean spaces. We explain in this chapter how the Riemannian structure can provide a powerful framework for building generic statistical computing tools. We show that a few computational tools, derived for each Riemannian metric, can be used in practice as the basic atoms to build more complex generic algorithms such as interpolation, filtering and anisotropic diffusion on fields of geometric features. This computational framework is illustrated with the analysis of the shape of the scoliotic spine and the modeling of brain variability from sulcal lines, where the results suggest new anatomical findings.
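
    The "basic atoms" mentioned above can be illustrated by a weighted intrinsic mean written purely in terms of a manifold's exponential and logarithm maps; choosing the weights then yields interpolation or filtering of a field of geometric features. The sketch below instantiates this for 3D rotations using SciPy's Rotation class; it is an assumed, simplified illustration of the idea rather than code from the chapter.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def weighted_frechet_mean(rotations, weights, iters=20):
    """Iteratively average rotations in the tangent space of the current estimate."""
    mu = rotations[0]
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    for _ in range(iters):
        # Log map: express each rotation in the tangent space at mu (rotation vectors).
        tangents = np.array([(mu.inv() * R).as_rotvec() for R in rotations])
        step = (w[:, None] * tangents).sum(axis=0)
        # Exp map: move mu along the averaged tangent vector.
        mu = mu * Rotation.from_rotvec(step)
    return mu

# Geodesic interpolation between two rotations is the special case of two
# samples with weights (1 - t, t).
R0 = Rotation.from_euler("z", 10, degrees=True)
R1 = Rotation.from_euler("z", 70, degrees=True)
Rt = weighted_frechet_mean([R0, R1], [0.5, 0.5])
print(Rt.as_euler("zyx", degrees=True))   # roughly [40, 0, 0]
```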