Sparse Randomized Kaczmarz for Support Recovery of Jointly Sparse Corrupted Multiple Measurement Vectors
While single measurement vector (SMV) models have been widely studied in
signal processing, there is a surging interest in addressing the multiple
measurement vectors (MMV) problem. In the MMV setting, more than one
measurement vector is available and the multiple signals to be recovered share
some commonalities such as a common support. Applications in which MMV is a
naturally occurring phenomenon include online streaming, medical imaging, and
video recovery. This work presents a stochastic iterative algorithm for the
support recovery of jointly sparse corrupted MMV. We present a variant of the
Sparse Randomized Kaczmarz algorithm for corrupted MMV and compare our proposed
method with an existing Kaczmarz type algorithm for MMV problems. We also
showcase the usefulness of our approach in the online (streaming) setting and
provide empirical evidence that suggests the robustness of the proposed method
to the distribution of the corruptions and the number of corruptions occurring.
Comment: 13 pages, 6 figures
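For intuition, here is a hedged sketch of a Kaczmarz-type iteration augmented with a hard-thresholding step that maintains a k-sparse support estimate. This is an illustrative variant, not the paper's exact Sparse Randomized Kaczmarz update; the problem sizes and data are synthetic:

```python
import numpy as np

def sparse_randomized_kaczmarz(A, b, k, iters=5000, seed=0):
    """Randomized Kaczmarz with a hard-thresholding step that keeps only
    the k largest-magnitude entries after each projection, encouraging a
    k-sparse iterate. (Illustrative sketch, not the paper's SRK variant.)"""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()  # sample rows proportional to squared norm
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project onto the hyperplane of the sampled equation a_i . x = b_i
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        # Keep the k largest-magnitude entries as the current support estimate
        keep = np.argsort(np.abs(x))[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```

On a consistent system with a sufficiently sparse solution and incoherent rows, the support estimate typically locks onto the true support after a modest number of iterations.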
Image reconstruction in fluorescence molecular tomography with sparsity-initialized maximum-likelihood expectation maximization
We present a reconstruction method involving maximum-likelihood expectation
maximization (MLEM) to model Poisson noise as applied to fluorescence molecular
tomography (FMT). MLEM is initialized with the output from a sparse
reconstruction-based approach, which performs truncated singular value
decomposition-based preconditioning followed by fast iterative
shrinkage-thresholding algorithm (FISTA) to enforce sparsity. The motivation
for this approach is that sparsity information could be accounted for within
the initialization, while MLEM would accurately model Poisson noise in the FMT
system. Simulation experiments show the proposed method significantly improves
images qualitatively and quantitatively. The method results in over 20 times
faster convergence compared to uniformly initialized MLEM and improves
robustness to noise compared to pure sparse reconstruction. We also
theoretically justify the ability of the proposed approach to reduce noise in
the background region compared to pure sparse reconstruction. Overall, these
results provide strong evidence to model Poisson noise in FMT reconstruction
and for application of the proposed reconstruction framework to FMT imaging.
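The MLEM iteration itself is a standard multiplicative update for the Poisson likelihood. A minimal sketch follows; in the approach described above, the initializer x0 would come from the TSVD-preconditioned FISTA stage, but here it is simply passed in, and all data are synthetic:

```python
import numpy as np

def mlem(A, y, x0, iters=500):
    """Maximum-likelihood expectation maximization for the Poisson model
    y ~ Poisson(A x). The multiplicative update preserves nonnegativity,
    so a nonnegative (e.g., sparse) initializer stays feasible throughout."""
    x = x0.copy()
    sens = A.sum(axis=0)  # sensitivity term A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)   # elementwise y / (A x)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Note that zero entries of x0 remain zero under the multiplicative update, which is exactly why a sparsity-aware initialization can shape the final MLEM estimate.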
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
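The linear mixing model at the heart of this overview writes each pixel spectrum as a combination of endmember signatures weighted by their abundances. A minimal illustration with hypothetical sizes and synthetic data (real unmixing adds constraints, noise handling, and endmember estimation):

```python
import numpy as np

# Hypothetical setup: L spectral bands, p endmembers (sizes are made up).
rng = np.random.default_rng(1)
L, p = 50, 3
E = rng.uniform(0.1, 1.0, size=(L, p))   # endmember signatures as columns
a_true = np.array([0.6, 0.3, 0.1])       # abundances at one pixel, summing to one
y = E @ a_true                           # linear mixing model: y = E a

# Unconstrained least-squares abundance estimate for this pixel
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
```

In the noiseless, well-conditioned case the least-squares estimate recovers the abundances exactly; the constrained and sparse-regression variants surveyed in the paper address the realistic noisy, ill-posed case.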
Quantum-inspired computational imaging
Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has induced an increase of activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.
Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).
(An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods
Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles probing a specimen across different wavelengths, energies and times. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Propagation of uncertainty in atmospheric parameters to hyperspectral unmixing
Atmospheric correction (AC) is an important pre-processing step for airborne hyperspectral imagery. AC requires knowledge of the atmospheric state, expressed through atmospheric condition parameters. Their values are affected by uncertainties that propagate to the application level. This study investigates the propagation of uncertainty from column water vapor (CWV) and aerosol optical depth (AOD) to abundance maps obtained by means of spectral unmixing. Both Fully Constrained Least Squares (FCLS) and FCLS with Total Variation (FCLS-TV) are applied. We use five simulated datasets contaminated by various noise levels. Three datasets cover two spectral scenarios with different endmembers. A univariate and a bivariate analysis are carried out on CWV and AOD. The other two datasets are used to analyze the effect of surface albedo. The analysis identifies trends in performance degradation caused by the gradual shift of parameter values away from their true values. The maximum achievable performance depends upon the spectral characteristics of the datasets, the noise level, and the surface albedo. As expected, under noisy conditions FCLS-TV performs better than FCLS. Our research opens new perspectives for applications where the estimation of reflectance has so far been considered deterministic.
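FCLS estimates abundances under nonnegativity and sum-to-one constraints. A minimal sketch using projected gradient descent with a Euclidean projection onto the probability simplex follows; production FCLS implementations typically use active-set NNLS solvers instead, and all sizes and data below are synthetic:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}
    via the standard sort-based algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fcls(E, y, iters=2000):
    """Fully constrained least squares by projected gradient descent:
    minimize ||E a - y||^2 subject to a on the simplex.
    (A sketch, not a production FCLS solver.)"""
    step = 1.0 / np.linalg.norm(E, 2) ** 2   # 1/L with L the gradient Lipschitz constant
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    for _ in range(iters):
        a = project_simplex(a - step * E.T @ (E @ a - y))
    return a
```

Because the projection enforces both constraints exactly at every step, the returned abundances are always physically interpretable, unlike the unconstrained least-squares estimate.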
Advanced sparse optimization algorithms for interferometric imaging inverse problems in astronomy
In the quest to produce images of the sky at unprecedented resolution with high sensitivity, a new generation of astronomical interferometers has been designed. To meet the sensing capabilities of these instruments, techniques aiming to recover the sought images from incompletely sampled Fourier-domain measurements need to be reinvented. This goes hand in hand with the necessity to calibrate the unknown effects modulating the measurements, which adversely affect the image quality and limit its dynamic range. The contribution of this thesis consists in the development of advanced optimization techniques tailored to address these issues, ranging from radio interferometry (RI) to optical interferometry (OI).
In the context of RI, we propose a novel convex optimization approach for full-polarization imaging relying on sparsity-promoting regularizations. Unlike standard RI imaging algorithms, our method jointly solves for the Stokes images by enforcing the polarization constraint, which imposes a physical dependency between the images. These priors are shown to enhance imaging quality in various numerical studies. The proposed imaging approach also scales to the huge amounts of data expected from the new instruments. To deal with the critical and challenging issue of calibrating direction-dependent effects, we further propose a non-convex optimization technique that unifies the calibration and imaging steps in a global framework, in which we adapt the earlier-developed imaging method for the imaging step. In contrast to existing RI calibration approaches, our method benefits from well-established convergence guarantees even in the non-convex setting considered in this work, and its efficiency is demonstrated through several numerical experiments.
Last but not least, inspired by the performance of these methodologies and drawing ideas from them, we address the image recovery problem in OI, which poses its own set of challenges, primarily due to the partial loss of phase information. To this end, we propose a sparsity-regularized non-convex optimization algorithm that is equipped with convergence guarantees and is adaptable to both monochromatic and hyperspectral OI imaging. We validate it by presenting simulation results.
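Sparsity-promoting regularization of the kind used throughout this thesis is typically minimized with proximal methods. As a generic illustration (not the thesis's polarization-constrained or non-convex solvers), here is FISTA for the l1-regularized least-squares problem; the problem sizes and regularization weight below are made up:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, iters=1000):
    """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    the prototypical sparsity-promoting solver in compressive imaging."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step is what distinguishes FISTA from plain ISTA, improving the objective convergence rate from O(1/k) to O(1/k^2).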
Fast restoration for out-of-focus blurred images of QR code with edge prior information via image sensing.
Out-of-focus blurring of QR codes is very common in mobile Internet systems; it often causes authentication failures through misread information and hence adversely affects system operation. To tackle this difficulty, this work first introduces a piece of edge prior information: the average distance between the center point and the edge of the clear QR code images in the same batch. It is motivated by theoretical analysis and practical observation grounded in CMOS image sensing, optics, blur invariants, and the invariance of the center of the diffuse light spots. After obtaining the edge prior information, and combining the iterative image with the center point of the binary image, the proposed method can accurately estimate the parameter of the out-of-focus blur kernel. The sharp image is then obtained with a Wiener filter, a non-blind image deblurring algorithm, which avoids excessive redundant calculation. Experimental results validate that the proposed method has great practical utility in terms of deblurring quality, robustness, and computational efficiency, making it suitable for barcode application systems, e.g., warehousing, logistics, and automated production.
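Non-blind Wiener deconvolution, the final step described above, has a compact frequency-domain form. The sketch below assumes a known, idealized pillbox (out-of-focus) kernel and a hand-tuned noise-to-signal ratio nsr; both are illustrative assumptions, not the paper's estimated parameters:

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-3):
    """Non-blind Wiener deconvolution in the Fourier domain:
    X = conj(K) * Y / (|K|^2 + nsr), where nsr approximates the
    noise-to-signal power ratio and regularizes spectral zeros of K."""
    K = np.fft.fft2(kernel, s=blurred.shape)   # zero-pad kernel to image size
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

def disk_kernel(radius, size):
    """Idealized out-of-focus (pillbox) blur kernel of the given radius."""
    yy, xx = np.mgrid[:size, :size]
    k = ((yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2).astype(float)
    return k / k.sum()
```

Estimating the pillbox radius is precisely what the edge prior information is used for in the paper; once it is known, the deblurring reduces to the closed-form filter above.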