Deep MR Fingerprinting with total-variation and low-rank subspace priors
Deep learning (DL) has recently emerged to address the heavy storage and
computation requirements of the baseline dictionary-matching (DM) for Magnetic
Resonance Fingerprinting (MRF) reconstruction. Fed with non-iterated
back-projected images, the network is unable to fully resolve
spatially correlated corruptions caused by the undersampling artefacts. We
propose an accelerated iterative reconstruction to minimize these artefacts
before feeding into the network. This is done through a convex regularization
that jointly promotes spatio-temporal regularities of the MRF time-series.
Except for training, the rest of the parameter estimation pipeline is
dictionary-free. We validate the proposed approach on synthetic and in-vivo
datasets.
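The low-rank half of such a joint spatio-temporal prior can be illustrated in isolation. The toy sketch below (illustrative only, not the paper's reconstruction; the dimensions and noise level are assumptions) applies singular value thresholding, the proximal operator of the nuclear norm, to a noisy matrix standing in for an MRF time-series:

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: the proximal operator of tau*||X||_*,
    # the nuclear-norm penalty used to promote low-rank time-series.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 "time-series" matrix plus small noise is mapped back to a
# rank-1 object by a single SVT step.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(30))
X = M + 0.01 * rng.standard_normal((20, 30))
Xhat = svt(X, tau=0.5)
print(np.linalg.matrix_rank(Xhat, tol=1e-6))  # prints 1
```

The total-variation half of the prior would contribute an analogous proximal step promoting spatial piecewise regularity.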
Model Selection with Low Complexity Priors
Regularization plays a pivotal role when facing the challenge of solving
ill-posed inverse problems, where the number of observations is smaller than
the ambient dimension of the object to be estimated. A line of recent work has
studied regularization models with various types of low-dimensional structures.
In such settings, the general approach is to solve a regularized optimization
problem, which combines a data fidelity term and some regularization penalty
that promotes the assumed low-dimensional/simple structure. This paper provides
a general framework to capture this low-dimensional structure through what we
coin partly smooth functions relative to a linear manifold. These are convex,
non-negative, closed and finite-valued functions that will promote objects
living on low-dimensional subspaces. This class of regularizers encompasses
many popular examples such as the L1 norm, L1-L2 norm (group sparsity), as well
as several others including the Linfty norm. We also show that the set of
partly smooth functions relative to a linear manifold is closed under addition
and pre-composition by a linear operator, which allows us to cover mixed
regularization, and the so-called analysis-type priors (e.g. total variation,
fused Lasso, finite-valued polyhedral gauges). Our main result presents a
unified sharp analysis of exact and robust recovery of the low-dimensional
subspace model associated to the object to recover from partial measurements.
This analysis is illustrated on a number of special and previously studied
cases, and on an analysis of the performance of Linfty regularization in a
compressed sensing scenario.
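Two of the examples named above, the L1 norm and the L1-L2 group norm, have closed-form proximal operators whose outputs visibly live on low-dimensional subspaces (sparse entries, sparse groups). A minimal sketch with made-up toy vectors:

```python
import numpy as np

def prox_l1(x, tau):
    # Soft-thresholding: proximal operator of tau*||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_group_l1l2(x, groups, tau):
    # Block soft-thresholding: prox of the L1-L2 group-sparsity norm.
    # `groups` is a list of index arrays partitioning x.
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > tau:
            out[g] = (1.0 - tau / nrm) * x[g]
    return out

x = np.array([0.2, -1.5, 3.0, 0.1, -0.3, 0.2])
print(prox_l1(x, 0.5))                  # small entries are zeroed out
groups = [np.arange(0, 3), np.arange(3, 6)]
print(prox_group_l1l2(x, groups, 0.5))  # the weak group vanishes as a whole
```

In both cases the output lies on a low-dimensional subspace (the span of the surviving coordinates or groups), which is exactly the structure that partial smoothness formalizes.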
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.

Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, that is particularly well suited to solve the
corresponding large-scale regularized optimization problem.
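The forward-backward proximal splitting mentioned in (iii) alternates a gradient (forward) step on the data-fidelity term with the proximal (backward) map of the regularizer. A minimal sketch for the L1-regularized case on a synthetic compressed-sensing problem (all sizes and constants are illustrative assumptions):

```python
import numpy as np

def forward_backward(A, y, lam, n_iter=2000):
    # min_x 0.5*||Ax - y||^2 + lam*||x||_1 via forward-backward splitting:
    # a gradient (forward) step on the smooth fidelity, then the proximal
    # (backward) step of the L1 penalty, i.e. soft-thresholding.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)                        # forward
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # backward
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # compressive operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]            # 3-sparse ground truth
y = A @ x_true
x_hat = forward_backward(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))        # indices of large entries
```

On this toy instance the large entries of the iterate land on the true support, illustrating the model (manifold) identification property discussed in the review.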
A Plug-and-Play Approach To Multiparametric Quantitative MRI: Image Reconstruction Using Pre-Trained Deep Denoisers
Current spatiotemporal deep learning approaches to Magnetic Resonance
Fingerprinting (MRF) build artefact-removal models customised to a particular
k-space subsampling pattern which is used for fast (compressed) acquisition.
This may not be useful when the acquisition process is unknown during training
of the deep learning model and/or changes during testing time. This paper
proposes an iterative deep learning plug-and-play reconstruction approach to
MRF which is adaptive to the forward acquisition process. Spatiotemporal image
priors are learned by an image denoiser, i.e., a Convolutional Neural Network
(CNN), trained to remove generic white Gaussian noise (not a particular
subsampling artefact) from data. This CNN denoiser is then used as a
data-driven shrinkage operator within the iterative reconstruction algorithm.
This algorithm with the same denoiser model is then tested on two simulated
acquisition processes with distinct subsampling patterns. The results show
consistent de-aliasing performance against both acquisition schemes and
accurate mapping of tissues' quantitative bio-properties. Software available:
https://github.com/ketanfatania/QMRI-PnP-Recon-POC
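The plug-and-play loop itself is short. In the sketch below, a hand-written moving-average smoother stands in for the pre-trained CNN denoiser (a deliberately crude substitution), and a random matrix stands in for the k-space forward operator; everything here is an illustrative assumption rather than the paper's setup:

```python
import numpy as np

def toy_denoiser(x):
    # Stand-in for the pre-trained CNN: a 3-tap moving-average smoother.
    # The paper's denoiser is a CNN trained on white Gaussian noise.
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def pnp_reconstruct(A, y, n_iter=500):
    # Plug-and-play loop: gradient step on ||Ax - y||^2 for data consistency,
    # then the denoiser applied as a data-driven shrinkage operator.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)
        x = toy_denoiser(x)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 80)) / np.sqrt(60)  # toy compressive operator
x_true = np.convolve(rng.standard_normal(80), np.ones(9) / 9, mode="same")
y = A @ x_true                                   # compressed measurements
x_hat = pnp_reconstruct(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(round(err, 3))
```

Because the denoiser is decoupled from the forward operator, the same loop runs unchanged if `A` is swapped for a different subsampling pattern, which is the adaptivity the abstract describes.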
Nonlinear Equivariant Imaging: Learning Multi-Parametric Tissue Mapping without Ground Truth for Compressive Quantitative MRI
Current state-of-the-art reconstruction of quantitative tissue maps from fast, compressive Magnetic Resonance Fingerprinting (MRF) uses supervised deep learning, with the drawback of requiring high-fidelity ground-truth tissue-map training data, which is limited. This paper proposes NonLinear Equivariant Imaging for MRF (NLEI-MRF), a self-supervised learning approach that eliminates the need for ground truth in deep MRF image reconstruction. NLEI-MRF extends the recent Equivariant Imaging framework to the nonlinear MRF inverse problem. Only compressed-sampled MRF scans are used for training. NLEI-MRF learns tissue mapping using spatiotemporal priors: spatial priors are obtained from the invariance of MRF data to a group of geometric image transformations, while temporal priors are obtained from a nonlinear Bloch response model approximated by a pre-trained neural network. Tested retrospectively on two acquisition settings, NLEI-MRF closely approaches the performance of supervised learning.
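The self-supervised loss behind equivariant imaging can be sketched with linear stand-ins; the real NLEI-MRF setting replaces them with the nonlinear Bloch response model and a CNN, so every operator below is an illustrative assumption:

```python
import numpy as np

def ei_loss(f, A, y, transform):
    # Self-supervised equivariant-imaging loss: no ground-truth image is used.
    x1 = f(y)                        # reconstruct from measurements alone
    mc = np.sum((A @ x1 - y) ** 2)   # measurement-consistency term
    x2 = transform(x1)               # a transformed image is also a valid scene
    x3 = f(A @ x2)                   # re-measure it, then reconstruct again
    eq = np.sum((x3 - x2) ** 2)      # equivariance term
    return mc + eq

# Sanity check: with an invertible A and f = A^{-1}, both terms vanish for
# any transform, so a perfect reconstruction minimises the loss.
rng = np.random.default_rng(3)
A = rng.standard_normal((16, 16))
f = lambda y: np.linalg.solve(A, y)
shift = lambda x: np.roll(x, 1)      # cyclic shift as the geometric group action
y = A @ rng.standard_normal(16)
print(ei_loss(f, A, y, shift) < 1e-8)  # prints True
```

In training, `f` would be the reconstruction network and the loss would be minimised over its weights; the transform group is what supplies spatial prior information in place of ground truth.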
Deep Image Priors for Magnetic Resonance Fingerprinting with pretrained Bloch-consistent denoising autoencoders
The estimation of multi-parametric quantitative maps from Magnetic Resonance
Fingerprinting (MRF) compressed sampled acquisitions, albeit successful,
remains a challenge due to the high undersampling rate and artifacts naturally
occurring during image reconstruction. Whilst state-of-the-art DL methods can
successfully address the task, to fully exploit their capabilities they often
require training on a paired dataset, in an area where ground truth is seldom
available. In this work, we propose a method that combines a deep image prior
(DIP) module with a Bloch-consistency-enforcing autoencoder and, without
ground truth, can tackle the problem, resulting in a method that is faster
than DIP-MRF with equivalent or better accuracy.

Comment: 4 pages, 3 figures, 1 table, presented at ISBI 202
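The deep-image-prior idea, fitting a generator's parameters to the measurements and relying on the generator's inductive bias instead of ground truth, can be caricatured by swapping the untrained CNN for a few low-frequency atoms (a loose illustrative stand-in, not the paper's method):

```python
import numpy as np

# Stand-in generator: a few low-frequency DCT atoms imitate the spectral
# bias of an untrained CNN (smooth images are produced before noisy detail).
n, k = 64, 8
t = np.arange(n)
G = np.stack([np.cos(np.pi * (t + 0.5) * f / n) for f in range(k)], axis=1)

rng = np.random.default_rng(4)
A = rng.standard_normal((24, n)) / np.sqrt(24)  # toy compressive forward model
x_true = G @ rng.standard_normal(k)             # smooth "tissue map"
y = A @ x_true                                  # compressed measurements

# DIP-style fitting: gradient descent on the generator parameters c so that
# the generated image matches the measurements; no ground truth is touched.
M = A @ G
step = 1.0 / np.linalg.norm(M, 2) ** 2
c = np.zeros(k)
for _ in range(2000):
    c -= step * M.T @ (M @ c - y)
x_hat = G @ c
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(err < 1e-6)  # prints True
```

The Bloch-consistent autoencoder in the abstract plays a role analogous to constraining the generator's output: generated fingerprints must stay on the physically admissible signal manifold.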
