LOFAR Sparse Image Reconstruction
Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital
phased array interferometer with multiple antennas distributed in Europe. It
provides discrete sets of Fourier components of the sky brightness. Recovering
the original brightness distribution with aperture synthesis forms an inverse
problem that can be solved by various deconvolution and minimization methods.
Aims. Recent papers have established a clear link between the discrete nature
of radio interferometry measurement and the "compressed sensing" (CS) theory,
which supports sparse reconstruction methods to form an image from the measured
visibilities. Empowered by proximal theory, CS offers a sound framework for
efficient global minimization and sparse data representation using fast
algorithms. Combined with instrumental direction-dependent effects (DDE) in the
scope of a real instrument, we developed and validated a new method based on
this framework. Methods. We implemented a sparse reconstruction method in the
standard LOFAR imaging tool and compared the photometric and resolution
performance of this new imager with that of CLEAN-based methods (CLEAN and
MS-CLEAN) with simulated and real LOFAR data. Results. We show that i) sparse
reconstruction performs as well as CLEAN in recovering the flux of point
sources; ii) performs much better on extended objects (the root mean square
error is reduced by a factor of up to 10); and iii) provides a solution with an
effective angular resolution 2-3 times better than the CLEAN images.
Conclusions. Sparse recovery gives correct photometry on high-dynamic-range,
wide-field images and recovers more realistic structures of extended sources
(in both simulated and real LOFAR datasets). This sparse reconstruction method is
compatible with modern interferometric imagers that handle DDE corrections (A-
and W-projections) required for current and future instruments such as LOFAR
and SKA. Comment: Published in A&A, 19 pages, 9 figures
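The proximal machinery behind such sparse imagers can be illustrated with a minimal ISTA (iterative shrinkage-thresholding) sketch. This is not the LOFAR imager's actual code: a masked 2D FFT stands in for the interferometer's visibility sampling operator, the sky is assumed sparse directly in the image domain (point sources), and all names and parameter values are illustrative.

```python
import numpy as np

def ista_sparse_recovery(y, mask, lam=0.05, n_iter=200):
    """Minimal ISTA sketch: recover an image x from masked Fourier
    measurements y = mask * FFT(x), with an L1 sparsity prior in the
    image domain (a point-source-like sky)."""
    n = mask.shape[0]
    x = np.zeros((n, n))
    step = 1.0  # the orthonormalized FFT operator has unit spectral norm
    for _ in range(n_iter):
        # gradient of the data fidelity 0.5 * ||mask * F(x) - y||^2
        resid = mask * np.fft.fft2(x, norm="ortho") - y
        grad = np.real(np.fft.ifft2(mask * resid, norm="ortho"))
        z = x - step * grad
        # soft-thresholding: proximal operator of lam * ||x||_1
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# toy sky: a few point sources, 30% of the Fourier plane sampled
rng = np.random.default_rng(0)
n = 64
sky = np.zeros((n, n))
sky[rng.integers(0, n, 5), rng.integers(0, n, 5)] = 1.0
mask = (rng.random((n, n)) < 0.3).astype(float)
y = mask * np.fft.fft2(sky, norm="ortho")
rec = ista_sparse_recovery(y, mask)
```

A production imager would add acceleration (FISTA), a sparsifying dictionary (wavelets), and the DDE-aware measurement operator discussed above; the structure of the iteration is unchanged.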
ADAM: a general method for using various data types in asteroid reconstruction
We introduce ADAM, the All-Data Asteroid Modelling algorithm. ADAM is simple
and universal since it handles all disk-resolved data types (adaptive optics or
other images, interferometry, and range-Doppler radar data) in a uniform manner
via the 2D Fourier transform, enabling fast convergence in model optimization.
The resolved data can be combined with disk-integrated data (photometry). In
the reconstruction process, the only difference between data types is a few
lines of code defining the particular generalized projection from 3D onto a 2D
image plane. Occultation timings can be included as sparse silhouettes, and
thermal infrared data are efficiently handled with an approximate algorithm
that is sufficient in practice due to the dominance of the high-contrast
(boundary) pixels over the low-contrast (interior) ones. This is of particular
importance to the raw ALMA data that can be directly handled by ADAM without
having to construct the standard image. We study the reliability of the
inversion by using the independent shape supports of function series and
control-point surfaces. When other data are lacking, one can carry out fast
nonconvex lightcurve-only inversion, but any shape models resulting from it
should only be taken as illustrative global-scale ones. Comment: 11 pages, submitted to A&A
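The core idea of comparing a model's generalized 2D projection against the data in the Fourier domain can be sketched in a few lines. This is a toy stand-in, not ADAM's implementation: the silhouette, the uniform uv mask, and the chi-square are all illustrative.

```python
import numpy as np

def silhouette_fourier_fit(model_image, data_ft, uv_mask):
    """Toy version of a Fourier-domain goodness-of-fit: transform the
    model's 2D projection with an FFT and compare it with the observed
    Fourier data only where the uv plane was sampled."""
    model_ft = np.fft.fft2(model_image, norm="ortho")
    resid = (model_ft - data_ft) * uv_mask
    return np.sum(np.abs(resid) ** 2)  # chi-square over sampled frequencies

# toy example: a circular "asteroid" silhouette
n = 64
yy, xx = np.mgrid[:n, :n]
disk = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 100).astype(float)
data_ft = np.fft.fft2(disk, norm="ortho")
uv_mask = np.ones((n, n))

chi_good = silhouette_fourier_fit(disk, data_ft, uv_mask)            # correct model
chi_bad = silhouette_fourier_fit(np.roll(disk, 5, axis=1), data_ft,  # shifted model
                                 uv_mask)
```

Because every data type enters through the same transform, adding a new one only requires defining its projection onto the image plane, which is the uniformity the abstract describes.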
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally. Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing
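The linear mixing model underlying most of these methods can be demonstrated with a minimal per-pixel unmixing sketch: a pixel spectrum y is modelled as a nonnegative combination of endmember signatures, y = E a. The nonnegative least-squares solver and the crude sum-to-one normalization below are illustrative choices, not any specific algorithm from the survey.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, y):
    """Linear mixing model sketch: estimate abundances a >= 0 for a pixel
    spectrum y (L bands) given an endmember matrix E (L x p).
    NNLS enforces nonnegativity; the normalization is a simple,
    illustrative way to impose the sum-to-one constraint."""
    a, _ = nnls(E, y)
    if a.sum() > 0:
        a = a / a.sum()
    return a

# toy scene: two endmembers, one mixed pixel with mild noise
rng = np.random.default_rng(1)
E = np.abs(rng.random((50, 2)))            # two synthetic spectral signatures
a_true = np.array([0.7, 0.3])              # true abundances
y = E @ a_true + 1e-3 * rng.standard_normal(50)
a_est = unmix_pixel(E, y)
```

The hard parts surveyed in the paper, estimating the number of endmembers and the signatures E themselves from the data, sit upstream of this final abundance-estimation step.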
Learned Interferometric Imaging for the SPIDER Instrument
The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance
(SPIDER) is an optical interferometric imaging device that aims to offer an
alternative to the large space telescope designs of today with reduced size,
weight and power consumption. This is achieved through interferometric imaging.
State-of-the-art methods for reconstructing images from interferometric
measurements adopt proximal optimization techniques, which are computationally
expensive and require handcrafted priors. In this work we present two
data-driven approaches for reconstructing images from measurements made by the
SPIDER instrument. These approaches use deep learning to learn prior
information from training data, increasing the reconstruction quality, and
significantly reducing the computation time required to recover images by
orders of magnitude. Reconstruction time is reduced to
milliseconds, opening up the possibility of real-time imaging with SPIDER for
the first time. Furthermore, we show that these methods can also be applied in
domains where training data is scarce, such as astronomical imaging, by
leveraging transfer learning from domains where plenty of training data are
available. Comment: 21 pages, 14 figures
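The speed argument, expensive iterative optimization at inference replaced by a fast learned map fitted offline, can be illustrated with a deliberately simple stand-in: a linear reconstructor fitted by ridge regression on training pairs. The real SPIDER pipelines use deep networks; this sketch only shows the train-once, reconstruct-fast pattern, and the measurement operator and dimensions are invented for the example.

```python
import numpy as np

def learn_linear_reconstructor(Y_train, X_train, reg=1e-3):
    """Fit a linear map R so that R @ y approximates the true image x,
    from training pairs (ridge regression). The training data carries
    the prior information that handcrafted priors would otherwise supply."""
    G = Y_train @ Y_train.T + reg * np.eye(Y_train.shape[0])
    return X_train @ Y_train.T @ np.linalg.inv(G)

# toy setup: images live on a 10-dim subspace (the structure to be learned),
# measurements come from a random compressive operator A
rng = np.random.default_rng(2)
n_pix, n_meas, n_sub, n_train = 100, 40, 10, 500
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
B = rng.standard_normal((n_pix, n_sub))          # basis of the image subspace
X_train = B @ rng.standard_normal((n_sub, n_train))
Y_train = A @ X_train

R = learn_linear_reconstructor(Y_train, X_train)  # offline training
x_test = B @ rng.standard_normal(n_sub)
x_rec = R @ (A @ x_test)                          # inference: one matrix-vector product
```

Even though n_meas < n_pix, the learned map recovers test images because training exposed the low-dimensional structure; a deep network plays the same role for natural or astronomical images, at the cost of a longer training phase.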