Hybrid-State Free Precession in Nuclear Magnetic Resonance
The dynamics of large spin-1/2 ensembles in the presence of a varying
magnetic field are commonly described by the Bloch equation. Most magnetic
field variations result in unintuitive spin dynamics, which are sensitive to
small deviations in the driving field. Although simplistic field variations can
produce robust dynamics, the captured information content is impoverished.
Here, we identify adiabaticity conditions that span a rich experiment design
space with tractable dynamics. These adiabaticity conditions trap the spin
dynamics in a one-dimensional subspace. Namely, the dynamics is captured by the
absolute value of the magnetization, which is in a transient state, while its
direction adiabatically follows the steady state. We define the hybrid state as
the co-existence of these two states and identify the polar angle as the
effective driving force of the spin dynamics. As an example, we optimize this
drive for robust and efficient quantification of spin relaxation times and
utilize it for magnetic resonance imaging of the human brain.
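The Bloch dynamics this abstract builds on can be sketched numerically. The following is a minimal illustration, not the paper's method: an explicit-Euler integration of the Bloch equation with illustrative (assumed) field, relaxation times, and step size, showing transverse decay and longitudinal recovery toward equilibrium.

```python
import math

def bloch_step(M, B, dt, T1=1.0, T2=0.1, gamma=1.0, M0=1.0):
    """One explicit-Euler step of the Bloch equation:
    dM/dt = gamma * (M x B) - (Mx/T2, My/T2, (Mz - M0)/T1).
    All units are arbitrary; values are illustrative assumptions."""
    Mx, My, Mz = M
    Bx, By, Bz = B
    # precession term: cross product M x B
    cx = My * Bz - Mz * By
    cy = Mz * Bx - Mx * Bz
    cz = Mx * By - My * Bx
    # relaxation: transverse decay (T2), longitudinal recovery (T1)
    dMx = gamma * cx - Mx / T2
    dMy = gamma * cy - My / T2
    dMz = gamma * cz - (Mz - M0) / T1
    return (Mx + dt * dMx, My + dt * dMy, Mz + dt * dMz)

# start fully tipped into the transverse plane, static field along z
M = (1.0, 0.0, 0.0)
for _ in range(10000):
    M = bloch_step(M, (0.0, 0.0, 1.0), dt=0.001)
# after ~10 time units the magnetization has relaxed back toward (0, 0, M0)
```

This is only the raw dynamics; the hybrid-state construction of the paper additionally separates the magnitude (transient) from the direction (adiabatically tracking the steady state).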
Maxwell-compensated design of asymmetric gradient waveforms for tensor-valued diffusion encoding
Purpose: Asymmetric gradient waveforms are attractive for diffusion encoding
due to their superior efficiency; however, the asymmetry may cause a residual
gradient moment at the end of the encoding. Depending on the experiment setup,
this residual moment may cause significant signal bias and image artifacts. The
purpose of this study was to develop an asymmetric gradient waveform design for
tensor-valued diffusion encoding that is not affected by concomitant gradients.
Methods: The Maxwell index was proposed as a scalar invariant that captures the
effect of concomitant gradients and was constrained in the numerical
optimization to 100 (mT/m)ms to yield Maxwell-compensated waveforms. The
efficacy of this design was tested in an oil phantom, and in a healthy human
brain. For reference, waveforms from the literature were included in the analysis.
Simulations were performed to investigate if the design was valid for a wide
range of experiments and if it could predict the signal bias. Results:
Maxwell-compensated waveforms showed no signal bias in oil or in the brain. By
contrast, several waveforms from the literature showed gross signal bias. In the
brain, the bias was large enough to markedly affect both signal and parameter
maps, and the bias could be accurately predicted by theory. Conclusion:
Constraining the Maxwell index in the optimization of asymmetric gradient
waveforms yields efficient tensor-valued encoding with concomitant gradients
that have a negligible effect on the signal. This waveform design is especially
relevant in combination with strong gradients, long encoding times, thick
slices, simultaneous multi-slice acquisition and large/oblique FOVs.
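The residual gradient moment mentioned in this abstract is simply the time integral of the gradient waveform left over at the end of encoding. A minimal sketch (the waveform shapes, amplitudes, and raster time below are illustrative assumptions; the Maxwell index itself involves concomitant-field terms not reproduced here):

```python
def residual_moment(waveform, dt):
    """Zeroth gradient moment remaining at the end of encoding:
    m0 = integral of g(t) dt, approximated by a Riemann sum per axis.
    `waveform` is a list of per-sample tuples (one entry per gradient axis)."""
    n_axes = len(waveform[0])
    return [sum(g[a] for g in waveform) * dt for a in range(n_axes)]

dt = 10e-6  # assumed 10 us gradient raster time
# symmetric bipolar lobe: equal positive and negative areas null the moment
sym = [(80e-3,)] * 50 + [(-80e-3,)] * 50      # amplitudes in T/m
# asymmetric lobe: unequal areas leave a residual moment behind
asym = [(80e-3,)] * 60 + [(-80e-3,)] * 40

print(residual_moment(sym, dt))   # ~[0.0]
print(residual_moment(asym, dt))  # nonzero residual
```

A nonzero residual moment of this kind is what, depending on the experiment setup, translates into the signal bias and image artifacts the study quantifies.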
Barycentric Subspace Analysis on Manifolds
This paper investigates the generalization of Principal Component Analysis
(PCA) to Riemannian manifolds. We first propose a new and general type of
family of subspaces in manifolds that we call barycentric subspaces. They are
implicitly defined as the locus of points which are weighted means of
reference points. As this definition relies on points and not on tangent
vectors, it can also be extended to geodesic spaces which are not Riemannian.
For instance, in stratified spaces, it naturally allows principal subspaces
that span several strata, which is impossible in previous generalizations of
PCA. We show that barycentric subspaces locally define a submanifold of
dimension k which generalizes geodesic subspaces. Second, we rephrase PCA in
Euclidean spaces as an optimization on flags of linear subspaces (a hierarchy
of properly embedded linear subspaces of increasing dimension). We show that
the Euclidean PCA minimizes the Accumulated Unexplained Variances by all the
subspaces of the flag (AUV). Barycentric subspaces are naturally nested,
allowing the construction of hierarchically nested subspaces. Optimizing the
AUV criterion to optimally approximate data points with flags of affine spans
in Riemannian manifolds leads to a particularly appealing generalization of PCA
on manifolds called Barycentric Subspace Analysis (BSA). Comment: Annals of Statistics, Institute of Mathematical Statistics, to appear.
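The defining object here, the locus of weighted means of reference points, can be illustrated on the simplest curved manifold. The sketch below computes a weighted Fréchet mean on the unit 2-sphere by fixed-point iteration in the tangent space; sweeping the weights traces out a barycentric subspace. This is an illustration of the definition, not the paper's algorithm.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def log_map(p, q):
    """Riemannian log at p on the unit sphere: tangent vector toward q."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)
    if theta < 1e-12:
        return (0.0, 0.0, 0.0)
    u = tuple(b - dot * a for a, b in zip(p, q))  # component of q orthogonal to p
    un = math.sqrt(sum(x * x for x in u))
    return tuple(theta * x / un for x in u)

def exp_map(p, v):
    """Riemannian exp at p on the unit sphere: exp_p(v) = cos|v| p + sin|v| v/|v|."""
    n = math.sqrt(sum(x * x for x in v))
    if n < 1e-12:
        return p
    return normalize(tuple(math.cos(n) * a + math.sin(n) * b / n
                           for a, b in zip(p, v)))

def weighted_mean(points, weights, iters=100):
    """Weighted Frechet mean on S^2: iterate p <- exp_p(average of logs)."""
    p = normalize(points[0])
    w = sum(weights)
    for _ in range(iters):
        v = [0.0, 0.0, 0.0]
        for q, wi in zip(points, weights):
            l = log_map(p, q)
            v = [a + wi * b / w for a, b in zip(v, l)]
        p = exp_map(p, tuple(v))
    return p

m = weighted_mean([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [1.0, 1.0])
# equal weights give the geodesic midpoint on the equator
```

Note that, as the abstract emphasizes, this construction uses only points and distances, which is why it extends to geodesic spaces that are not Riemannian.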
Spherical Regression: Learning Viewpoints, Surface Normals and 3D Rotations on n-Spheres
Many computer vision challenges require continuous outputs, but tend to be
solved by discrete classification. The reason is classification's natural
containment within the probability simplex, as defined by the popular softmax
activation function. Regular regression lacks such a closed geometry, leading
to unstable training and convergence to suboptimal local minima. Starting from
this insight we revisit regression in convolutional neural networks. We observe
many continuous output problems in computer vision are naturally contained in
closed geometrical manifolds, like the Euler angles in viewpoint estimation or
the normals in surface normal estimation. A natural framework for posing such
continuous output problems is the n-sphere, a naturally closed geometric
manifold defined in (n+1)-dimensional Euclidean space. By introducing a
spherical exponential mapping on n-spheres at the regression output, we
obtain well-behaved gradients, leading to stable training. We show how our
spherical regression can be utilized for several computer vision challenges,
specifically viewpoint estimation, surface normal estimation and 3D rotation
estimation. For all these problems our experiments demonstrate the benefit of
spherical regression. All paper resources are available at
https://github.com/leoshine/Spherical_Regression. Comment: CVPR 2019 camera ready.
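One simple reading of a spherical exponential mapping, consistent with this abstract's description though not necessarily the paper's exact head design, is an element-wise exponential followed by L2 normalization, which constrains the output to the unit n-sphere (here its positive orthant; signs can be handled by a separate branch):

```python
import math

def spherical_exp(o):
    """Map raw regression outputs onto the unit n-sphere:
    exponentiate element-wise, then L2-normalize.
    A sketch of one plausible spherical activation, not the paper's exact head."""
    e = [math.exp(x) for x in o]
    n = math.sqrt(sum(v * v for v in e))
    return [v / n for v in e]

q = spherical_exp([0.3, -1.2, 2.0, 0.1])
print(sum(v * v for v in q))  # 1.0 (up to floating point)
```

Because the output always lies on the closed sphere, gradients stay bounded, which is the source of the training stability the abstract claims.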