A Variable Density Sampling Scheme for Compressive Fourier Transform Interferometry
Fourier Transform Interferometry (FTI) is an appealing Hyperspectral (HS)
imaging modality for many applications demanding high spectral resolution,
e.g., in fluorescence microscopy. However, the effective resolution of FTI is
limited by the durability of biological elements when exposed to illuminating
light. Overexposed elements are subject to photo-bleaching and become unable to
fluoresce. In this context, the acquisition of biological HS volumes based on
sampling the Optical Path Difference (OPD) axis at Nyquist rate leads to
unpleasant trade-offs between spectral resolution, quality of the HS volume,
and light exposure intensity. We propose two variants of the FTI imager, i.e.,
Coded Illumination-FTI (CI-FTI) and Structured Illumination FTI (SI-FTI), based
on the theory of compressive sensing (CS). These schemes efficiently modulate
light exposure temporally (in CI-FTI) or spatiotemporally (in SI-FTI).
Leveraging a variable density sampling strategy recently introduced in CS, we
provide near-optimal illumination strategies, so that the light exposure
imposed on a biological specimen is minimized while the spectral resolution is
preserved. Our analysis focuses on two criteria: (i) a trade-off between
exposure intensity and the quality of the reconstructed HS volume for a given
spectral resolution; (ii) maximizing HS volume quality for a fixed spectral
resolution and constrained exposure budget. Our contributions can be adapted to
an FTI imager without hardware modifications. The reconstruction of HS volumes
from CS-FTI measurements relies on an ℓ1-norm minimization problem promoting
a spatiospectral sparsity prior. Numerically, we support the proposed methods
on synthetic data and simulated CS measurements (from actual FTI measurements)
under various scenarios. In particular, the biological HS volumes can be
reconstructed with a three-to-ten-fold reduction in the light exposure.
Comment: 45 pages, 11 figures
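The ingredients of the abstract above — variable-density sampling of a Fourier operator followed by ℓ1-norm reconstruction — can be illustrated with a toy sketch. This is not the authors' code: the signal sizes, the 1/(1+f) sampling density, the regularization weight, and the choice of ISTA as the solver are all illustrative assumptions.

```python
# Toy sketch (not the paper's implementation): recover a sparse signal from
# variable-density compressive Fourier samples via l1 minimization (ISTA).
# All sizes, densities, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 48                               # signal length, number of samples

# Sparse ground truth: a few active coefficients
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)

# Variable-density sampling: low frequencies drawn with higher probability
freq = np.minimum(np.arange(n), n - np.arange(n))    # distance from DC
p = 1.0 / (1.0 + freq)                               # heavier weight near DC
p /= p.sum()
rows = rng.choice(n, size=m, replace=False, p=p)

F = np.fft.fft(np.eye(n), norm="ortho")              # unitary DFT matrix
A = F[rows]                                          # subsampled sensing matrix
y = A @ x_true                                       # compressive measurements

def soft(z, thr):
    """Complex soft-thresholding, the prox of the l1 norm."""
    mag = np.abs(z)
    return z * np.maximum(1.0 - thr / np.maximum(mag, 1e-12), 0.0)

# ISTA on 0.5*||y - A x||^2 + lam*||x||_1; step 1 is valid since the rows of
# A are orthonormal (||A||^2 = 1), so the objective decreases monotonically.
lam, step = 0.01, 1.0
obj = lambda v: 0.5 * np.linalg.norm(y - A @ v) ** 2 + lam * np.abs(v).sum()
x = np.zeros(n, dtype=complex)
obj0 = obj(x)
for _ in range(300):
    x = soft(x + step * (A.conj().T @ (y - A @ x)), step * lam)
obj_final = obj(x)
```

In the paper's setting the sampled axis is the OPD axis of the interferometer and the prior is spatiospectral rather than a plain 1-D sparsity prior, but the structure of the problem is the same.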
How to find real-world applications for compressive sensing
The potential of compressive sensing (CS) has spurred great interest in the
research community and is a fast-growing area of research. However, research
translating CS theory into practical hardware and demonstrating clear and
significant benefits with this hardware over current, conventional imaging
techniques has been limited. This article helps researchers find those niche
applications where the CS approach provides substantial gain over conventional
approaches by articulating lessons learned in finding one such application:
sea-skimming missile detection. As a proof of concept, it is demonstrated that
a simplified CS missile detection architecture and algorithm provides results
comparable to the conventional imaging approach while using a smaller focal
plane array (FPA). The primary message is that all of the excitement
surrounding CS is necessary and appropriate for encouraging our creativity,
but we must also take off our "rose-colored glasses" and critically judge our
ideas, methods, and results relative to conventional imaging approaches.
Comment: 10 pages
Compressive Sensing Theory for Optical Systems Described by a Continuous Model
A brief survey of the author and collaborators' work in compressive sensing
applications to continuous imaging models.
Comment: Chapter 3 of "Optical Compressive Imaging", edited by Adrian Stern,
published by Taylor & Francis 201
Distributed and parallel sparse convex optimization for radio interferometry with PURIFY
Next generation radio interferometric telescopes are entering an era of big
data with extremely large data sets. While these telescopes can observe the sky
in higher sensitivity and resolution than before, computational challenges in
image reconstruction need to be overcome to realize the potential of
forthcoming telescopes. New methods in sparse image reconstruction and convex
optimization techniques (cf. compressive sensing) have been shown to produce
fidelity reconstructions of simulations and real observations than traditional
methods. This article presents distributed and parallel algorithms and
implementations to perform sparse image reconstruction, with significant
practical considerations that are important for implementing these algorithms
for Big Data. We benchmark the algorithms presented, showing that they are
considerably faster than their serial equivalents. We then pre-sample gridding
kernels to scale the distributed algorithms to larger data sizes, showing
application times for 1 Gb to 2.4 Tb data sets over 25 to 100 nodes for up to
50 billion visibilities, and find that the run-times for the distributed
algorithms range from 100 milliseconds to 3 minutes per iteration. This work
presents an important step in working towards computationally scalable and
efficient algorithms and implementations that are needed to image observations
of both extended and compact sources from next generation radio interferometers
such as the SKA. The algorithms are implemented in the latest versions of the
SOPT (https://github.com/astro-informatics/sopt) and PURIFY
(https://github.com/astro-informatics/purify) software packages (versions
3.1.0), which have been released alongside this article.
Comment: 25 pages, 5 figures