MR image reconstruction using deep density priors
Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled
measurements exploit prior information to compensate for missing k-space data.
Deep learning (DL) provides a powerful framework for learning such
information from existing image datasets and then using it for reconstruction.
Leveraging this, recent methods have employed DL to learn mappings from
undersampled to fully sampled images using paired datasets of undersampled and
corresponding fully sampled images, thereby integrating prior knowledge
implicitly. In this article, we propose an alternative approach
that learns the probability distribution of fully sampled MR images using
unsupervised DL, specifically Variational Autoencoders (VAE), and use this as
an explicit prior term in reconstruction, completely decoupling the encoding
operation from the prior. The resulting reconstruction algorithm enjoys a
powerful image prior to compensate for missing k-space data without requiring
paired datasets for training, and without the associated sensitivities, such
as mismatches between the undersampling patterns or coil settings used at
training and test time. We evaluated the proposed method with T1-weighted images from a
publicly available dataset, multi-coil complex images acquired from healthy
volunteers (N=8), and images with white matter lesions. The proposed algorithm,
using the VAE prior, produced visually high-quality reconstructions and
achieved low RMSE values, outperforming most of the alternative methods on the
same dataset. On multi-coil complex data, the algorithm yielded accurate
magnitude and phase reconstruction results. In the experiments on images with
white matter lesions, the method faithfully reconstructed the lesions.
Keywords: Reconstruction, MRI, prior probability, machine learning, deep
learning, unsupervised learning, density estimation
Comment: Published in IEEE TMI. Main text and supplementary material, 19 pages total
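The MAP formulation described in this abstract, balancing data consistency against a learned log-prior, can be illustrated with a short sketch. The toy fully connected VAE, the masked-FFT forward model, and all hyperparameters below are illustrative assumptions, not the authors' implementation; the ELBO stands in for the intractable log p(x).

```python
# Minimal sketch of MAP reconstruction with a VAE prior:
#   x* = argmin_x ||M F x - y||_2^2 - lam * ELBO(x),
# where the ELBO lower-bounds log p(x). Toy architecture and sizes are
# assumptions for illustration only.
import torch
import torch.nn as nn

H = W = 32       # toy image size; real MR slices are larger
LATENT = 64

class ToyVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(H * W, 2 * LATENT)  # outputs mean and log-variance
        self.dec = nn.Linear(LATENT, H * W)

    def elbo(self, x):
        """Single-sample evidence lower bound on log p(x)."""
        mu, logvar = self.enc(x.reshape(-1)).chunk(2)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = self.dec(z).reshape(H, W)
        rec = -((recon - x) ** 2).sum()                          # Gaussian log-likelihood (up to constants)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # KL(q(z|x) || N(0, I))
        return rec - kl

def reconstruct(y, mask, vae, iters=200, lr=1e-2, lam=1e-3):
    """y: undersampled k-space (complex, H x W); mask: 0/1 sampling mask."""
    vae.requires_grad_(False)                                    # the prior is fixed during reconstruction
    x = torch.fft.ifft2(y).real.clone().requires_grad_(True)    # zero-filled init
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        resid = mask * torch.fft.fft2(x) - y                    # data-consistency residual
        loss = resid.abs().pow(2).sum() - lam * vae.elbo(x)     # MAP objective
        loss.backward()
        opt.step()
    return x.detach()
```

In this setup the VAE would first be trained, unsupervised, on fully sampled images; reconstruct(y, mask, vae) then uses it as an explicit prior on new undersampled measurements, with no paired training data involved.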
Analysis of SparseHash: an efficient embedding of set-similarity via sparse projections
Embeddings provide compact representations of signals in order to perform
efficient inference in a wide variety of tasks. In particular, random
projections are common tools to construct Euclidean distance-preserving
embeddings, while hashing techniques are extensively used to embed
set-similarity metrics, such as the Jaccard coefficient. In this letter, we
theoretically prove that a class of random projections based on sparse
matrices, called SparseHash, can preserve the Jaccard coefficient between the
supports of sparse signals, which can be used to estimate set similarities.
Beyond the analysis, we provide an efficient implementation and test its
performance in several numerical experiments on both synthetic and real
datasets.
Comment: 25 pages, 6 figures
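To make the set-similarity setting concrete, here is a minimal sketch of the classical MinHash estimator for the Jaccard coefficient between supports of sparse signals, i.e. the standard hashing baseline the abstract alludes to; it is not SparseHash's sparse-projection construction, for which see the letter. All names and parameters are illustrative.

```python
# MinHash estimate of the Jaccard coefficient between supports of two
# sparse signals: P(min-hashes collide) equals |A & B| / |A | B|.
import numpy as np

def support(x, tol=1e-12):
    """Indices of the numerically nonzero entries of x."""
    return set(np.flatnonzero(np.abs(x) > tol))

def minhash_signature(s, perms):
    """One min-hash per shared random permutation; assumes s is nonempty."""
    return np.array([min(perm[i] for i in s) for perm in perms])

rng = np.random.default_rng(0)
n, n_hashes = 200, 512
perms = [rng.permutation(n) for _ in range(n_hashes)]  # shared across signals

# Two sparse signals with roughly 10% nonzero entries.
x = rng.standard_normal(n) * (rng.random(n) < 0.1)
y = rng.standard_normal(n) * (rng.random(n) < 0.1)
sa, sb = support(x), support(y)

true_j = len(sa & sb) / len(sa | sb)
est_j = np.mean(minhash_signature(sa, perms) == minhash_signature(sb, perms))
print(f"true Jaccard {true_j:.3f}, MinHash estimate {est_j:.3f}")
```

The estimate is the fraction of permutations whose minimum over the union lands in the intersection; SparseHash replaces the permutations with sparse random projections while, per the letter's analysis, still preserving the Jaccard coefficient.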
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1