9,046 research outputs found
Fast Fourier Sparsity Testing
A function is s-sparse if it has at most s non-zero Fourier coefficients.
Motivated by applications to fast sparse Fourier transforms, we study
efficient algorithms for the problem of approximating the l2-distance from a
given function to the closest s-sparse function. While previous works (e.g.,
Gopalan et al., SICOMP 2011) study the problem of distinguishing s-sparse
functions from those that are far from s-sparse under Hamming distance, to
the best of our knowledge no prior work has explicitly focused on the more
general problem of distance estimation in the l2 setting, which is
particularly well-motivated for noisy Fourier spectra. Given the focus on
efficiency, our main result is an algorithm that solves this problem with a
query complexity, for constant accuracy and error parameters, that is only
quadratically worse than the applicable lower bounds.
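The quantity being estimated has a simple exact form: by Parseval, the closest s-sparse function in l2 keeps the s largest-magnitude Fourier coefficients, so the squared distance is the energy of the remaining ones. A brute-force sketch (it reads the whole function, unlike the sublinear-query algorithm above, and the DFT here merely stands in for whichever Fourier basis the paper uses):

```python
import numpy as np

def l2_dist_to_sparse(f, s):
    """Exact l2 distance from f to the closest s-sparse function.

    By Parseval, the closest s-sparse function keeps the s
    largest-magnitude Fourier coefficients; the squared distance
    is the energy of the remaining coefficients.
    """
    n = len(f)
    fhat = np.fft.fft(f) / np.sqrt(n)     # unitary normalization
    mags = np.sort(np.abs(fhat))[::-1]    # magnitudes, descending
    tail = mags[s:]                       # coefficients that must be zeroed
    return np.sqrt(np.sum(tail ** 2))

# A function that is exactly 2-sparse in the Fourier domain:
t = np.arange(8)
f = np.cos(2 * np.pi * t / 8)             # two conjugate frequencies
print(l2_dist_to_sparse(f, 2))            # ~0: f is already 2-sparse
print(l2_dist_to_sparse(f, 1))            # > 0: one coefficient left over
```

The testing problem is hard precisely because an efficient algorithm cannot afford to compute the full spectrum as this sketch does.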
Blur-resolved OCT: full-range interferometric synthetic aperture microscopy through dispersion encoding
We present a computational method for full-range interferometric synthetic
aperture microscopy (ISAM) under dispersion encoding. With this, one can
effectively double the depth range of optical coherence tomography (OCT),
whilst dramatically enhancing the spatial resolution away from the focal plane.
To this end, we propose a model-based iterative reconstruction (MBIR) method,
where ISAM is directly considered in an optimization approach, and we make the
discovery that sparsity promoting regularization effectively recovers the
full-range signal. Within this work, we adopt an optimal nonuniform discrete
fast Fourier transform (NUFFT) implementation of ISAM, which is both fast and
numerically stable throughout iterations. We validate our method with several
complex samples, scanned with a commercial SD-OCT system with no hardware
modification. With this, we both demonstrate full-range ISAM imaging, and
significantly outperform combinations of existing methods.
Comment: 17 pages, 7 figures. The images have been compressed for arXiv -
please follow DOI for full resolution.
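The sparsity-promoting regularization at the heart of such an MBIR scheme is typically minimized with a proximal-gradient (ISTA-type) iteration. A minimal sketch, with a generic random linear forward model standing in for the paper's NUFFT-based ISAM operator:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative soft thresholding for min_x 0.5||Ax - y||^2 + lam*||x||_1.

    A is a generic linear forward model here; the paper's MBIR scheme
    uses a NUFFT-based ISAM operator instead. The sparsity-promoting
    soft-threshold (prox of the l1 norm) is the common ingredient.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                # gradient of the data term
        z = x - step * grad                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true                                  # undersampled measurements
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))           # small reconstruction error
```

The step size is set from the operator norm so the iteration is guaranteed to converge; in the paper's setting the analogous constant comes from the ISAM operator.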
Sparse Bayesian mass-mapping with uncertainties: hypothesis testing of structure
A crucial aspect of mass-mapping, via weak lensing, is quantification of the
uncertainty introduced during the reconstruction process. Properly accounting
for these errors has been largely ignored to date. We present results from a
new method that reconstructs maximum a posteriori (MAP) convergence maps by
formulating an unconstrained Bayesian inference problem with Laplace-type
l1-norm sparsity-promoting priors, which we solve via convex
optimization. Approaching mass-mapping in this manner allows us to exploit
recent developments in probability concentration theory to infer theoretically
conservative uncertainties for our MAP reconstructions, without relying on
assumptions of Gaussianity. For the first time these methods allow us to
perform hypothesis testing of structure, from which it is possible to
distinguish between physical objects and artifacts of the reconstruction. Here
we present this new formalism and demonstrate the method on illustrative
examples, before applying it to two observational datasets of the Abell 520
cluster. In our Bayesian framework it is found that neither Abell 520
dataset can conclusively determine the physicality of individual local massive
substructure at significant confidence. However, in both cases the recovered
MAP estimators are consistent with both sets of data.
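The hypothesis test of structure can be sketched as: delete a candidate feature from the MAP reconstruction and check whether the surrogate's objective leaves an approximate highest-posterior-density credible region. The sketch below uses an identity forward model and a user-supplied threshold offset (`alpha_term`), both illustrative assumptions rather than the paper's lensing operator or its concentration-theory bound:

```python
import numpy as np

def objective(x, y, lam):
    """Negative log-posterior for an identity forward model:
    0.5||x - y||^2 + lam*||x||_1 (a stand-in for the lensing operator)."""
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(x))

def structure_is_physical(x_map, y, lam, region, alpha_term):
    """Approximate HPD hypothesis test: remove the candidate structure;
    if the surrogate's objective exceeds the MAP objective plus the
    credible-region offset, the structure is deemed physical."""
    threshold = objective(x_map, y, lam) + alpha_term
    x_surrogate = x_map.copy()
    x_surrogate[region] = 0.0          # delete the candidate feature
    return bool(objective(x_surrogate, y, lam) > threshold)

# Toy data: one strong peak; the MAP under the l1 prior and identity
# model is simply a soft threshold of y.
y = np.zeros(100)
y[10] = 5.0
lam = 0.1
x_map = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
print(structure_is_physical(x_map, y, lam, [10], alpha_term=1.0))  # peak region
print(structure_is_physical(x_map, y, lam, [50], alpha_term=1.0))  # empty region
```

Removing the genuine peak pushes the objective far outside the credible set, while removing an empty region does not, mirroring how the method distinguishes physical objects from reconstruction artifacts.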
HYDRA: Hybrid Deep Magnetic Resonance Fingerprinting
Purpose: Magnetic resonance fingerprinting (MRF) methods typically rely on
dictionary matching to map the temporal MRF signals to quantitative tissue
parameters. Such approaches suffer from inherent discretization errors, as well
as high computational complexity as the dictionary size grows. To alleviate
these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting
approach, referred to as HYDRA.
Methods: HYDRA involves two stages: a model-based signature restoration phase
and a learning-based parameter restoration phase. Signal restoration is
implemented using low-rank based de-aliasing techniques while parameter
restoration is performed using a deep nonlocal residual convolutional neural
network. The designed network is trained on synthesized MRF data simulated with
the Bloch equations and fast imaging with steady state precession (FISP)
sequences. In test mode, it takes a temporal MRF signal as input and produces
the corresponding tissue parameters.
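For contrast, the conventional dictionary-matching step that HYDRA replaces can be sketched with toy exponential-decay fingerprints (a stand-in for Bloch-simulated FISP atoms); the estimate necessarily snaps to the discretized parameter grid:

```python
import numpy as np

def dictionary_match(signal, dictionary, params):
    """Conventional MRF matching: return the tissue parameter of the
    dictionary atom with the largest normalized correlation."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = d @ (signal / np.linalg.norm(signal))
    return params[np.argmax(scores)]

# Toy "fingerprints": exponential decays parameterized by T (arbitrary units).
t = np.linspace(0, 1, 50)
T_grid = np.linspace(0.1, 1.0, 10)                 # coarse dictionary grid
dictionary = np.exp(-t[None, :] / T_grid[:, None])
signal = np.exp(-t / 0.37)                         # true T lies between grid points
T_hat = dictionary_match(signal, dictionary, T_grid)
print(T_hat)   # a grid entry near 0.37, illustrating the discretization error
```

The matching cost also grows linearly with the dictionary size, which is the second issue the learning-based parameter-restoration stage avoids by regressing continuous-valued parameters directly.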
Results: We validated our approach on both synthetic data and anatomical data
generated from a healthy subject. The results demonstrate that, in contrast to
conventional dictionary-matching based MRF techniques, our approach
significantly improves inference speed by eliminating the time-consuming
dictionary matching operation, and alleviates discretization errors by
outputting continuous-valued parameters. We further avoid the need to store a
large dictionary, thus reducing memory requirements.
Conclusions: Our approach demonstrates advantages in terms of inference
speed, accuracy and storage requirements over competing MRF methods.
Message Passing Algorithms for Compressed Sensing
Compressed sensing aims to undersample certain high-dimensional signals, yet
accurately reconstruct them by exploiting signal characteristics. Accurate
reconstruction is possible when the object to be recovered is sufficiently
sparse in a known basis. Currently, the best known sparsity-undersampling
tradeoff is achieved when reconstructing by convex optimization -- which is
expensive in important large-scale applications. Fast iterative thresholding
algorithms have been intensively studied as alternatives to convex optimization
for large-scale problems. Unfortunately known fast algorithms offer
substantially worse sparsity-undersampling tradeoffs than convex optimization.
We introduce a simple costless modification to iterative thresholding making
the sparsity-undersampling tradeoff of the new algorithms equivalent to that of
the corresponding convex optimization procedures. The new
iterative-thresholding algorithms are inspired by belief propagation in
graphical models. Our empirical measurements of the sparsity-undersampling
tradeoff for the new algorithms agree with theoretical calculations. We show
that a state evolution formalism correctly derives the true
sparsity-undersampling tradeoff. There is a surprising agreement between
earlier calculations based on random convex polytopes and this new, apparently
very different theoretical formalism.
Comment: 6 pages paper + 9 pages supplementary information, 13 eps figures.
Submitted to Proc. Natl. Acad. Sci. USA.
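The "costless modification" is an Onsager correction term added to the residual update of iterative soft thresholding. A minimal sketch (the adaptive threshold, proportional to the empirical residual level, is a common practical choice, not something mandated by the abstract):

```python
import numpy as np

def amp(A, y, tau=2.0, iters=30):
    """Approximate message passing for y = A x with sparse x.

    Identical to iterative soft thresholding except for the Onsager
    correction term added when updating the residual z.
    """
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        sigma = np.sqrt(np.mean(z ** 2))   # effective noise level estimate
        r = x + A.T @ z                    # pseudo-data
        x = np.sign(r) * np.maximum(np.abs(r) - tau * sigma, 0.0)
        # Onsager term: (1/delta) * z * mean of the threshold's derivative
        z = y - A @ x + (z / n) * np.count_nonzero(x)
    return x

rng = np.random.default_rng(1)
n, N, k = 250, 500, 25                     # undersampling delta = n/N = 0.5
A = rng.standard_normal((n, N)) / np.sqrt(n)   # roughly unit-norm columns
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small rel. error
```

Dropping the `(z / n) * np.count_nonzero(x)` term recovers plain iterative soft thresholding, whose sparsity-undersampling tradeoff is substantially worse, which is exactly the gap the paper closes.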