Diffeomorphic Metric Mapping and Probabilistic Atlas Generation of Hybrid Diffusion Imaging based on BFOR Signal Basis
We propose a large deformation diffeomorphic metric mapping algorithm to
align multiple b-value diffusion weighted imaging (mDWI) data, specifically
acquired via hybrid diffusion imaging (HYDI), denoted as LDDMM-HYDI. We then
propose a Bayesian model for estimating the white matter atlas from HYDIs. We
adopt the work given in Hosseinbor et al. (2012) and represent the q-space
diffusion signal with the Bessel Fourier orientation reconstruction (BFOR)
signal basis. The BFOR framework provides a compact representation of mDWI in
q-space and thus reduces the memory requirement. In addition, since the BFOR signal
basis is orthonormal, the squared L2 distance between the q-space signals of
any two mDWI datasets can be computed as the sum of squared differences in the
BFOR expansion coefficients. In this work, we
show that the reorientation of the q-space signal due to spatial
transformation can be easily defined on the BFOR signal basis. We incorporate
the BFOR signal basis into the LDDMM framework and derive the gradient descent
algorithm for LDDMM-HYDI with explicit orientation optimization. Additionally,
we extend the previous Bayesian atlas estimation framework for scalar-valued
images to HYDIs and derive the expectation-maximization algorithm for solving
the HYDI atlas estimation problem. Using real HYDI datasets, we show that the
Bayesian model generates a white matter atlas with anatomical detail.
Moreover, we show that it is important to consider the variation of mDWI
reorientation due to a small change in diffeomorphic transformation in the
LDDMM-HYDI optimization and to incorporate the full information of HYDI for
aligning mDWI.
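Because the BFOR basis is orthonormal, the squared L2 distance between two q-space signals reduces to the sum of squared differences of their expansion coefficients. A minimal numpy sketch of this identity, using a random orthonormal matrix as a stand-in for the actual BFOR basis (which the abstract does not specify):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthonormal basis (stand-in for the BFOR basis):
# columns of Q are orthonormal functions sampled on a q-space grid.
Q, _ = np.linalg.qr(rng.standard_normal((64, 16)))

# Expansion coefficients of two mDWI signals in this basis.
c1 = rng.standard_normal(16)
c2 = rng.standard_normal(16)

# Reconstructed signals on the grid.
f1, f2 = Q @ c1, Q @ c2

# Orthonormality implies the squared L2 distance between the signals
# equals the sum of squared coefficient differences.
d_signal = np.sum((f1 - f2) ** 2)
d_coeff = np.sum((c1 - c2) ** 2)
```

This is why working in coefficient space avoids reconstructing and comparing the full q-space signals.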
Learning Discriminative Stein Kernel for SPD Matrices and Its Applications
Stein kernel has recently shown promising performance on classifying images
represented by symmetric positive definite (SPD) matrices. It evaluates the
similarity between two SPD matrices through their eigenvalues. In this paper,
we argue that directly using the original eigenvalues may be problematic
because: i) Eigenvalue estimation becomes biased when the number of samples is
inadequate, which may lead to unreliable kernel evaluation; ii) More
importantly, eigenvalues only reflect the property of an individual SPD matrix.
They are not necessarily optimal for computing Stein kernel when the goal is to
discriminate different sets of SPD matrices. To address the two issues in one
shot, we propose a discriminative Stein kernel, in which an extra parameter
vector is defined to adjust the eigenvalues of the input SPD matrices. The
optimal parameter values are sought by optimizing a proxy of classification
performance. To show the generality of the proposed method, three different
kernel learning criteria that are commonly used in the literature are employed
respectively as a proxy. A comprehensive experimental study is conducted on a
variety of image classification tasks to compare our proposed discriminative
Stein kernel with the original Stein kernel and other commonly used methods for
evaluating the similarity between SPD matrices. The experimental results
demonstrate that the discriminative Stein kernel attains greater
discrimination and better aligns with classification tasks by adjusting the
eigenvalues, yielding higher classification performance than the
original Stein kernel and other commonly used methods.
Comment: 13 pages
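A sketch of the idea in Python/numpy, assuming the standard Stein (S-)divergence S(X, Y) = log det((X+Y)/2) − ½ log det(XY) and a hypothetical per-eigenvalue power adjustment as the extra parameter vector; the paper's exact parameterization may differ:

```python
import numpy as np

def stein_divergence(X, Y):
    # S(X, Y) = log det((X + Y) / 2) - (1/2) log det(X Y)
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def adjust_eigenvalues(X, alpha):
    # Hypothetical adjustment: raise each eigenvalue lambda_i to alpha_i.
    lam, U = np.linalg.eigh(X)
    return (U * lam ** alpha) @ U.T

def discriminative_stein_kernel(X, Y, alpha, theta=1.0):
    Xa, Ya = adjust_eigenvalues(X, alpha), adjust_eigenvalues(Y, alpha)
    return np.exp(-theta * stein_divergence(Xa, Ya))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); X = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); Y = B @ B.T + 4 * np.eye(4)

alpha = np.ones(4)  # alpha = 1 recovers the original Stein kernel
k = discriminative_stein_kernel(X, Y, alpha)
```

In the proposed method, alpha would be tuned by optimizing a classification-performance proxy rather than fixed by hand.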
Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning, and cross-domain learning.
Comment: International Conference on Machine Learning (ICML)
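The relational (rather than absolute) nature of the Gromov-Wasserstein discrepancy can be illustrated with a toy numpy example: a point cloud and a rotated copy of it have identical pairwise-distance matrices, so the GW cost under a matched coupling vanishes even though pointwise coordinates differ. This sketches the objective for one fixed coupling only, not a GW solver:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 3))

# A rotated copy of X lives in an "incomparable" coordinate frame:
# pointwise comparison changes, but pairwise structure does not.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Y = X @ Q

def pairwise(Z):
    d = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

Cx, Cy = pairwise(X), pairwise(Y)

# Gromov-Wasserstein compares distributions through such distance
# matrices. For the identity coupling T of two uniform measures, the
# GW cost reduces to the entrywise mismatch of Cx and Cy, which
# vanishes under rotation.
n = len(X)
T = np.eye(n) / n
gw_cost = np.sum((Cx[:, None, :, None] - Cy[None, :, None, :]) ** 2
                 * T[:, :, None, None] * T[None, None, :, :])
```

A full GW computation would minimize this cost over all couplings T, which is what makes the comparison independent of the ambient spaces.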
Diffeomorphic Metric Mapping of High Angular Resolution Diffusion Imaging based on Riemannian Structure of Orientation Distribution Functions
In this paper, we propose a novel large deformation diffeomorphic
registration algorithm to align high angular resolution diffusion images
(HARDI) characterized by orientation distribution functions (ODFs). Our
proposed algorithm seeks an optimal diffeomorphism of large deformation between
two ODF fields in a spatial volume domain and, at the same time, locally
reorients an ODF in a manner such that it remains consistent with the
surrounding anatomical structure. To this end, we first review the Riemannian
manifold of ODFs. We then define the reorientation of an ODF when an affine
transformation is applied and subsequently, define the diffeomorphic group
action to be applied on the ODF based on this reorientation. We incorporate the
Riemannian metric of ODFs for quantifying the similarity of two HARDI images
into a variational problem defined under the large deformation diffeomorphic
metric mapping (LDDMM) framework. We finally derive the gradient of the cost
function in both Riemannian spaces of diffeomorphisms and the ODFs, and present
its numerical implementation. Both synthetic and real brain HARDI data are used
to illustrate the performance of our registration algorithm.
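One common Riemannian structure for ODFs, which this sketch assumes, represents each ODF by its square-root density; the square roots lie on the unit sphere of a Hilbert space, and the Riemannian (Fisher-Rao) distance between two ODFs is the arc length between them:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two ODFs sampled on n sphere directions (discrete probability vectors).
n = 100
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()

# Square-root representation: sqrt(p) has unit norm, so it lies on the
# unit sphere in R^n, and the geodesic distance is an arc length.
sp, sq = np.sqrt(p), np.sqrt(q)
dist = np.arccos(np.clip(np.dot(sp, sq), -1.0, 1.0))
```

This closed-form geodesic distance is what such frameworks plug into the LDDMM matching term in place of a Euclidean image difference.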
A variational approach to probing extreme events in turbulent dynamical systems
Extreme events are ubiquitous in a wide range of dynamical systems, including
turbulent fluid flows, nonlinear waves, large scale networks and biological
systems. Here, we propose a variational framework for probing conditions that
trigger intermittent extreme events in high-dimensional nonlinear dynamical
systems. We seek the triggers as the probabilistically feasible solutions of an
appropriately constrained optimization problem, where the function to be
maximized is a system observable exhibiting intermittent extreme bursts. The
constraints are imposed to ensure the physical admissibility of the optimal
solutions, i.e., significant probability for their occurrence under the natural
flow of the dynamical system. We apply the method to a body-forced
incompressible Navier--Stokes equation, known as the Kolmogorov flow. We find
that the intermittent bursts of the energy dissipation are independent of the
external forcing and are instead caused by the spontaneous transfer of energy
from large scales to the mean flow via nonlinear triad interactions. The global
maximizer of the corresponding variational problem identifies the responsible
triad, hence providing a precursor for the occurrence of extreme dissipation
events. Specifically, monitoring the energy transfers within this triad allows
us to develop a data-driven short-term predictor for the intermittent bursts of
energy dissipation. We assess the performance of this predictor through direct
numerical simulations.
Comment: Minor revisions, generalized the constraints in Eq. (2)
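A toy numpy analogue of such a constrained variational problem: maximize a quadratic observable over states of fixed energy, for which the constrained maximizer is the leading eigenvector. This illustrates only the optimization structure (observable to maximize plus admissibility constraint), not the Navier-Stokes computation in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for the variational problem: maximize the observable
# g(x) = x^T A x over states of fixed energy ||x||^2 = 1. (In the paper
# the observable is the energy dissipation of the Kolmogorov flow and
# the constraints encode probabilistic feasibility of the trigger.)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2  # symmetric quadratic observable

# For this toy problem the constrained maximizer is the leading
# eigenvector of A, and g attains the largest eigenvalue there.
w, V = np.linalg.eigh(A)
x_star = V[:, -1]
g_max = x_star @ A @ x_star
```

In the high-dimensional setting, the same structure is attacked with adjoint-based gradient ascent rather than an eigendecomposition.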
Multireference Alignment is Easier with an Aperiodic Translation Distribution
In the multireference alignment model, a signal is observed by the action of
a random circular translation and the addition of Gaussian noise. The goal is
to recover the signal's orbit by accessing multiple independent observations.
Of particular interest is the sample complexity, i.e., the number of
observations/samples needed in terms of the signal-to-noise ratio (the signal
energy divided by the noise variance) in order to drive the mean-square error
(MSE) to zero. Previous work showed that if the translations are drawn from the
uniform distribution, then, in the low SNR regime, the sample complexity of the
problem scales as ω(1/SNR^3). In this work, using a generalization of the
Chapman--Robbins bound for orbits and expansions of the χ^2 divergence at low
SNR, we show that in the same regime the sample complexity for any aperiodic
translation distribution scales as ω(1/SNR^2). This rate is achieved by a
simple spectral algorithm.
We propose two additional algorithms based on non-convex optimization and
expectation-maximization. We also draw a connection between the multireference
alignment problem and the spiked covariance model.
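The observation model and the translation-invariant feature behind a spectral approach can be sketched in numpy: the power spectrum is invariant to circular shifts, so averaging it over observations estimates the signal's power spectrum up to an additive noise bias (n·σ² under this discretization and FFT convention, an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, sigma = 16, 2000, 0.5
x = rng.standard_normal(n)

# Observations: random circular translations of x plus Gaussian noise.
shifts = rng.integers(0, n, size=m)
Y = np.stack([np.roll(x, s) for s in shifts]) + sigma * rng.standard_normal((m, n))

# The power spectrum |FFT|^2 is invariant to circular translation, so
# its average over observations estimates |x_hat|^2 after subtracting
# the additive noise bias n * sigma^2 (one invariant feature used by
# spectral algorithms for this model).
ps_est = np.mean(np.abs(np.fft.fft(Y, axis=1)) ** 2, axis=0) - n * sigma ** 2
ps_true = np.abs(np.fft.fft(x)) ** 2
```

Recovering the signal itself additionally requires phase information, which is where the translation distribution (uniform vs. aperiodic) changes the difficulty of the problem.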