170 research outputs found

    A Bayesian method with reparameterization for diffusion tensor imaging

    Procrustes analysis for diffusion tensor image processing

    There is an increasing need to develop processing tools for diffusion tensor image data that take into account the non-Euclidean nature of the tensor space. In this paper Procrustes analysis, a non-Euclidean shape analysis tool under similarity transformations (rotation, scaling and translation), is proposed to redefine sample statistics of diffusion tensors. A new anisotropy measure, Procrustes Anisotropy (PA), is defined via the full ordinary Procrustes analysis. Comparisons are made with other anisotropy measures, including Fractional Anisotropy and Geodesic Anisotropy. The partial generalized Procrustes analysis is extended to a weighted generalized Procrustes framework for averaging sample tensors with different fractions of contributions to the mean tensor. Applications of Procrustes methods to diffusion tensor interpolation and smoothing are compared with Euclidean, Log-Euclidean and Riemannian methods.
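The size-and-shape Procrustes distance at the core of this approach can be sketched for a pair of diffusion tensors: decompose each SPD matrix as S = L Lᵀ and find the orthogonal matrix that best aligns one factor onto the other, which has a closed-form SVD solution. This is a minimal illustration of the general idea, not the paper's full weighted generalized Procrustes framework:

```python
import numpy as np

def procrustes_distance(S1, S2):
    """Size-and-shape Procrustes distance between two SPD matrices.

    Decompose each tensor as S = L @ L.T (Cholesky) and minimize
    ||L1 - L2 @ R||_F over orthogonal R, solved in closed form via
    an SVD of L2.T @ L1.
    """
    L1 = np.linalg.cholesky(S1)
    L2 = np.linalg.cholesky(S2)
    # If L2.T @ L1 = U diag(s) Vt, the optimal R is U @ Vt and the
    # squared distance reduces to tr(S1) + tr(S2) - 2 * sum(s).
    _, s, _ = np.linalg.svd(L2.T @ L1)
    d2 = np.trace(S1) + np.trace(S2) - 2.0 * s.sum()
    return np.sqrt(max(d2, 0.0))
```

The distance is zero for identical tensors and, unlike the plain Euclidean metric, respects the curved geometry of the SPD cone.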

    Non-Euclidean statistics for covariance matrices with applications to diffusion tensor imaging

    The statistical analysis of covariance matrix data is considered and, in particular, methodology is discussed which takes into account the non-Euclidean nature of the space of positive semi-definite symmetric matrices. The main motivation for the work is the analysis of diffusion tensors in medical image analysis. The primary focus is on estimation of a mean covariance matrix and, in particular, on the use of Procrustes size-and-shape space. Comparisons are made with other estimation techniques, including using the matrix logarithm, matrix square root and Cholesky decomposition. Applications to diffusion tensor imaging are considered and, in particular, a new measure of fractional anisotropy called Procrustes Anisotropy is discussed.
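One of the matrix-logarithm estimators compared here, the log-Euclidean mean, can be sketched in a few lines: average the SPD matrices in the matrix-log domain and map back with the matrix exponential. For symmetric matrices both maps are cheap via an eigendecomposition; this is an illustrative sketch, not the paper's Procrustes estimator:

```python
import numpy as np

def _sym_logm(S):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def _sym_expm(A):
    # Matrix exponential of a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(tensors):
    """Log-Euclidean mean: average the matrix logs, then exponentiate.
    Unlike the plain Euclidean mean, it avoids determinant 'swelling'."""
    return _sym_expm(np.mean([_sym_logm(S) for S in tensors], axis=0))
```

For example, the log-Euclidean mean of the identity and four times the identity is twice the identity (the geometric mean), whereas the Euclidean mean would be 2.5 times the identity.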

    A Spectral Diffusion Prior for Hyperspectral Image Super-Resolution

    Full text link
    Fusion-based hyperspectral image (HSI) super-resolution aims to produce a high-spatial-resolution HSI by fusing a low-spatial-resolution HSI and a high-spatial-resolution multispectral image. Such an HSI super-resolution process can be modeled as an inverse problem, where prior knowledge is essential for obtaining the desired solution. Motivated by the success of diffusion models, we propose a novel spectral diffusion prior for fusion-based HSI super-resolution. Specifically, we first investigate the spectrum generation problem and design a spectral diffusion model to model the spectral data distribution. Then, in the framework of maximum a posteriori estimation, we keep the transition information between every two neighboring states during the reverse generative process, and thereby embed the knowledge of the trained spectral diffusion model into the fusion problem in the form of a regularization term. Finally, we treat each generation step of the final optimization problem as a subproblem, and employ Adam to solve these subproblems in reverse sequence. Experimental results on both synthetic and real datasets demonstrate the effectiveness of the proposed approach. The code of the proposed approach will be available at https://github.com/liuofficial/SDP
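The per-step optimization structure described above — a data-fidelity term plus a prior-derived regularizer, minimized with Adam — can be sketched on a toy quadratic. The quadratic regularizer below is a placeholder standing in for the actual diffusion-prior term, and the operator `A`, observation `y`, and weight `lam` are all illustrative, not from the paper:

```python
import numpy as np

def adam_minimize(grad, x0, steps=2000, lr=1e-2,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Plain Adam loop, as one might use for each reverse-step subproblem."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy subproblem: data fidelity plus a quadratic surrogate for the
# prior-derived regularizer (in the paper this term comes from the
# trained spectral diffusion model, not a fixed quadratic).
A = np.array([[1.0, 0.5], [0.0, 2.0]])
y = np.array([1.0, 2.0])
x_prior = np.zeros(2)
lam = 0.1
grad = lambda x: 2 * A.T @ (A @ x - y) + 2 * lam * (x - x_prior)
x_hat = adam_minimize(grad, np.zeros(2))
```

For this quadratic the minimizer has the closed form (AᵀA + λI)⁻¹Aᵀy, which the Adam iterate approaches; in the paper the subproblems are solved in reverse order, one per generation step.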

    The Local Structure of Space-Variant Images

    Full text link
    Local image structure is widely used in theories of both machine and biological vision. The form of the differential operators describing this structure for space-invariant images has been well documented (e.g. Koenderink, 1984). Although space-variant coordinates are universally used in mammalian visual systems, the form of the operators in the space-variant domain has received little attention. In this report we derive the form of the most common differential operators and surface characteristics in the space-variant domain and show examples of their use. The operators include the Laplacian, the gradient and the divergence, as well as the fundamental forms of the image treated as a surface. We illustrate the use of these results by deriving the space-variant form of corner detection and image enhancement algorithms. The latter is shown to have interesting properties in the complex log domain, implicitly encoding a variable grid-size integration of the underlying PDE, allowing rapid enhancement of large-scale peripheral features while preserving high spatial frequencies in the fovea. (Office of Naval Research N00014-95-I-0409)
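The complex-log domain referred to above maps image position z = x + iy to w = log(z), so rings around the fixation point become vertical lines and rays become horizontal lines. A minimal sketch of the coordinate map (the `eps` guard against log(0) is an implementation convenience, not part of the theory):

```python
import numpy as np

def to_complex_log(x, y, eps=1e-6):
    """Map Cartesian coordinates to the complex-log (log-polar) domain:
    w = log(z) = log|z| + i*arg(z). Returns (log-radius, angle)."""
    z = x + 1j * y
    w = np.log(np.abs(z) + eps) + 1j * np.angle(z)
    return w.real, w.imag

# Because w = log(z) is conformal, the Cartesian Laplacian picks up a
# factor exp(-2u) (i.e. 1/r^2) when computed on the uniform (u, v)
# grid of the log domain: large-r (peripheral) features are implicitly
# integrated on a coarser grid, matching the variable grid-size
# behavior described in the abstract.
```

For example, the point (1, 0) maps to (0, 0), and (0, 2) maps to (log 2, π/2).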

    Variational Autoencoders with Riemannian Brownian Motion Priors

    Full text link
    Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter. Comment: Published in ICML 202
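The density of Brownian motion on a Riemannian manifold is the heat kernel, whose leading-order behavior is log p(x) ≈ -d(x, μ)²/(2t) up to an intractable normalizing constant — which is why the abstract stresses an inference scheme that avoids that constant. As an illustrative sketch (on the unit sphere with its geodesic distance, not the learned latent metric of the paper):

```python
import numpy as np

def unnormalized_log_prior(x, mu, t):
    """Leading-order heat-kernel approximation to a Brownian motion
    prior on the unit sphere: log p(x) ~ -d(x, mu)^2 / (2t) + const,
    where d is the great-circle (geodesic) distance and t plays the
    role of the diffusion time / variance parameter."""
    cos_d = np.clip(np.dot(x, mu), -1.0, 1.0)  # clip for numerical safety
    d = np.arccos(cos_d)
    return -d * d / (2.0 * t)
```

The log-density is maximal (zero) at the mean and falls off with squared geodesic distance; only ratios or gradients of this quantity are needed for inference, so the unknown normalizer never has to be computed.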