Multi-GPU maximum entropy image synthesis for radio astronomy
The maximum entropy method (MEM) is a well-known deconvolution technique in
radio interferometry. This method solves a non-linear optimization problem with
an entropy regularization term. Other heuristics such as CLEAN are faster but
highly user dependent. MEM, by contrast, has several advantages: it is
unsupervised, it has a statistical basis, and under certain conditions it
yields better resolution and image quality. This work presents a
high-performance GPU version of non-gridding MEM, which is tested using real
and simulated data. We propose a single-GPU and a multi-GPU implementation for
single- and multi-spectral data, respectively. We also make use of the
Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which
allow multiple GPUs to be exploited transparently and efficiently. Several
ALMA data sets are used to demonstrate the effectiveness in imaging and to
evaluate GPU performance. The results show speedups of 1000 to 5000 times over
a sequential version, depending on data and image size. This makes it possible
to reconstruct the HD142527 CO(6-5) short-baseline data set in 2.1 minutes,
instead of the 2.5 days taken by a sequential CPU version.
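For reference, entropy-regularized deconvolution of this kind is usually posed
as the following optimization problem; this is the standard textbook form,
with notation (including the default image $M$) assumed here rather than taken
from the paper:

$$ \min_{I \ge 0} \; \chi^2(I) - \lambda\, S(I), \qquad \chi^2(I) = \sum_k \frac{\bigl|V_k^{\mathrm{obs}} - V_k^{\mathrm{model}}(I)\bigr|^2}{\sigma_k^2}, \qquad S(I) = -\sum_i I_i \ln\frac{I_i}{M_i}, $$

where $V_k$ are the measured visibilities with noise levels $\sigma_k$, $I$ is
the sky image, and $\lambda$ balances data fidelity against entropy.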
Adaptive Real Time Imaging Synthesis Telescopes
The digital revolution is transforming astronomy from a data-starved to a
data-submerged science. Instruments such as the Atacama Large Millimeter Array
(ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometre
Array (SKA) will measure their accumulated data in petabytes. The capacity to
produce enormous volumes of data must be matched with the computing power to
process that data and produce meaningful results. In addition to handling huge
data rates, we need adaptive calibration and beamforming to cope with
atmospheric fluctuations and radio frequency interference, and to provide a user
environment which makes the full power of large telescope arrays accessible to
both expert and non-expert users. Delayed calibration and analysis limit the
science which can be done. To make the best use of both telescope and human
resources we must reduce the burden of data reduction.
Our instrumentation comprises a flexible correlator, beamformer, and imager,
with digital signal processing closely coupled with a computing cluster.
This instrumentation will be highly accessible to scientists, engineers, and
students for research and development of real-time processing algorithms, and
will tap into the pool of talented and innovative students and visiting
scientists from engineering, computing, and astronomy backgrounds.
Adaptive real-time imaging will transform radio astronomy by providing
real-time feedback to observers. Calibration of the data is performed in close
to real time using a model of the sky brightness distribution. The derived
calibration parameters are fed back into the imagers and beamformers. The
regions imaged are used to update and improve the a priori model, which becomes
the final calibrated image by the time the observations are complete.
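A minimal sketch of this calibrate-image-update feedback loop, with every
stage reduced to a hypothetical stand-in (none of these functions comes from a
real telescope pipeline), might look like:

    import numpy as np

    def solve_gain(vis, model_vis):
        # Calibration stage: least-squares complex gain relating data to model.
        return np.vdot(model_vis, vis) / np.vdot(model_vis, model_vis)

    def make_image(vis):
        # Imager stage, reduced here to an inverse FFT of gridded visibilities.
        return np.fft.ifft2(vis).real

    def adaptive_loop(vis_stream, model_vis, sky_model, alpha=0.5):
        for vis in vis_stream:              # data arriving in near real time
            g = solve_gain(vis, model_vis)  # calibrate against the current model
            corrected = vis / g             # feed the solution back
            # Blend the new image into the a priori model, refining it over time.
            sky_model = (1 - alpha) * sky_model + alpha * make_image(corrected)
        return sky_model                    # the final calibrated image

    # Toy run: the second batch carries a spurious gain of 2, which the loop removes.
    vis = np.fft.fft2(np.random.default_rng(0).random((8, 8)))
    final = adaptive_loop([vis, 2.0 * vis], vis, np.zeros((8, 8)))
    print(final.shape)  # (8, 8)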
Distributed and parallel sparse convex optimization for radio interferometry with PURIFY
Next generation radio interferometric telescopes are entering an era of big
data with extremely large data sets. While these telescopes can observe the sky
with higher sensitivity and resolution than before, computational challenges in
image reconstruction need to be overcome to realize the potential of
forthcoming telescopes. New methods in sparse image reconstruction and convex
optimization techniques (cf. compressive sensing) have been shown to produce
higher-fidelity reconstructions of simulations and real observations than
traditional methods. This article presents distributed and parallel algorithms and
implementations to perform sparse image reconstruction, addressing practical
considerations that are important when implementing these algorithms
for Big Data. We benchmark the algorithms presented, showing that they are
considerably faster than their serial equivalents. We then pre-sample gridding
kernels to scale the distributed algorithms to larger data sizes, showing
application times for 1 GB to 2.4 TB data sets over 25 to 100 nodes for up to
50 billion visibilities, and find that the run-times for the distributed
algorithms range from 100 milliseconds to 3 minutes per iteration. This work
presents an important step in working towards computationally scalable and
efficient algorithms and implementations that are needed to image observations
of both extended and compact sources from next generation radio interferometers
such as the SKA. The algorithms are implemented in the latest versions of the
SOPT (https://github.com/astro-informatics/sopt) and PURIFY
(https://github.com/astro-informatics/purify) software packages
(version 3.1.0), which have been released alongside this article.
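The sparse reconstruction problems these packages solve are typically of the
constrained ℓ1 form below; this is the generic compressive-sensing
formulation, and the paper's exact variant may differ in detail:

$$ \min_{x \ge 0} \; \|\Psi^{\dagger} x\|_1 \quad \text{subject to} \quad \|y - \Phi x\|_2 \le \epsilon, $$

where $y$ holds the measured visibilities, $\Phi$ is the measurement operator
(Fourier transform plus degridding), $\Psi$ a sparsifying dictionary such as a
wavelet basis, and $\epsilon$ a bound set by the noise level.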
Accelerated deconvolution of radio interferometric images using orthogonal matching pursuit and graphics hardware
Deconvolution of native radio interferometric images constitutes a major computational component of the radio astronomy imaging process. An efficient and robust deconvolution operation is essential for reconstruction of the true sky signal from measured correlator data. Traditionally, radio astronomers have mostly used the CLEAN algorithm and variants thereof. However, the techniques of compressed sensing provide a mathematically rigorous framework within which deconvolution of radio interferometric images can be implemented. We present an accelerated implementation of the orthogonal matching pursuit (OMP) algorithm (a compressed sensing method) that makes use of graphics processing unit (GPU) hardware, and show significant accuracy improvements over standard CLEAN. In particular, we show that OMP correctly identifies more sources than CLEAN, identifying up to 82% of the sources in 100 test images, while CLEAN identifies only up to 61% of the sources. In addition, the residual after source extraction is 2.7 times lower for OMP than for CLEAN. Furthermore, the GPU implementation of OMP performs around 23 times faster than an equivalent implementation on a 4-core CPU.
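For reference, a minimal NumPy sketch of textbook OMP follows; this is the
generic greedy algorithm, not the paper's GPU implementation, and the
dictionary A and sparsity level k are illustrative:

    import numpy as np

    def omp(A, y, k, tol=1e-6):
        # Orthogonal matching pursuit: greedily pick k columns (atoms) of A
        # that best explain y, re-fitting by least squares at every step.
        residual = y.astype(float)
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(k):
            # Atom most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Least-squares re-fit restricted to the selected support.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
            if np.linalg.norm(residual) < tol:
                break
        x[support] = coef
        return x

    # Tiny demo: recover a 3-sparse vector from random Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[5, 30, 77]] = [1.0, -2.0, 0.5]
    x_hat = omp(A, A @ x_true, k=3)
    print(np.nonzero(x_hat)[0])  # expected: [ 5 30 77 ]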
Regularized Maximum Likelihood Image Synthesis and Validation for ALMA Continuum Observations of Protoplanetary Disks
Regularized Maximum Likelihood (RML) techniques are a class of image
synthesis methods that achieve better angular resolution and image fidelity
than traditional methods like CLEAN for sub-mm interferometric observations. To
identify best practices for RML imaging, we used the GPU-accelerated open
source Python package MPoL, a machine learning-based RML approach, to explore
the influence of common RML regularizers (maximum entropy, sparsity, total
variation, and total squared variation) on images reconstructed from real and
synthetic ALMA continuum observations of protoplanetary disks. We tested two
different cross-validation (CV) procedures to characterize their performance
and determine optimal prior strengths, and found that CV over a coarse grid of
regularization strengths easily identifies a range of models with comparably
strong predictive power. To evaluate the performance of RML techniques against
a ground truth image, we used MPoL on a synthetic protoplanetary disk dataset
and found that RML methods successfully resolve structures at fine spatial
scales present in the original simulation. We used ALMA DSHARP observations of
the protoplanetary disk around HD 143006 to compare the performance of MPoL and
CLEAN, finding that RML imaging improved the spatial resolution of the image by
up to a factor of 3 without sacrificing sensitivity. We provide general
recommendations for building an RML workflow for image synthesis of ALMA
protoplanetary disk observations, including effective use of CV. Using these
techniques to improve the imaging resolution of protoplanetary disk
observations will enable new science, including the detection of protoplanets
embedded in disks.
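The regularizers named above have standard forms; as a reference (notation
assumed here, not quoted from the paper), an RML image is the minimizer

$$ \hat{I} = \operatorname*{arg\,min}_{I \ge 0} \Bigl[ L(V_{\mathrm{obs}} \mid I) + \sum_k \lambda_k R_k(I) \Bigr], $$

where $L$ is the negative log-likelihood of the observed visibilities and each
$\lambda_k$ is a prior strength of the kind tuned by cross-validation in the
paper. Common choices include

$$ R_{\mathrm{ent}}(I) = \sum_i I_i \ln\frac{I_i}{M_i}, \qquad R_{\mathrm{sparse}}(I) = \|I\|_1, $$
$$ R_{\mathrm{TV}}(I) = \sum_{i,j} \sqrt{(I_{i+1,j} - I_{i,j})^2 + (I_{i,j+1} - I_{i,j})^2}, \qquad R_{\mathrm{TSV}}(I) = \sum_{i,j} \bigl[(I_{i+1,j} - I_{i,j})^2 + (I_{i,j+1} - I_{i,j})^2\bigr]. $$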
Radio Astronomy Image Reconstruction in the Big Data Era
Next generation radio interferometric telescopes pave the way for the future of radio astronomy, with extremely wide fields of view and precision polarimetry not possible at optical wavelengths, at the cost of computationally demanding image reconstruction. These instruments will be used to map large scale Galactic and extra-galactic structures at higher resolution and fidelity than ever before. However, radio astronomy has entered the era of big data, where the sheer volume of data limits the sensitivity and fidelity that can be achieved in practice. New image reconstruction methods are critical to meet the data requirements needed to obtain new scientific discoveries in radio astronomy. To meet this need, this work takes traditional radio astronomical imaging and introduces state-of-the-art image reconstruction frameworks built on sparse image reconstruction algorithms. The software package PURIFY, developed in this work, uses convex optimization algorithms (i.e. the alternating direction method of multipliers) to solve for the reconstructed image. We design, implement, and apply distributed radio interferometric image reconstruction methods using the message passing interface (MPI), showing that PURIFY scales to big data image reconstruction on computing clusters. We design a distributed wide-field imaging algorithm for non-coplanar arrays, while providing new theoretical insights for wide-field imaging. It is shown that PURIFY’s methods provide higher dynamic range than traditional image reconstruction methods, yielding a more accurate and detailed sky model for real observations. This sets the stage for state-of-the-art image reconstruction methods to be distributed and applied to next generation interferometric telescopes, where they can be used to meet big data challenges and to make new scientific discoveries in radio astronomy and astrophysics.
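For reference, the scaled-form ADMM iteration behind this class of solvers,
for a generic splitting $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, is
(textbook form, not the thesis's specific variant):

$$ x^{t+1} = \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{\rho}{2}\|Ax + Bz^{t} - c + u^{t}\|_2^2, $$
$$ z^{t+1} = \operatorname*{arg\,min}_{z} \; g(z) + \tfrac{\rho}{2}\|Ax^{t+1} + Bz - c + u^{t}\|_2^2, $$
$$ u^{t+1} = u^{t} + Ax^{t+1} + Bz^{t+1} - c, $$

where $u$ is the scaled dual variable and $\rho > 0$ the penalty parameter; in
this imaging setting $f$ typically encodes the sparsity prior and $g$ the
data-fidelity constraint.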
PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm
The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
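MORESANE's sparsity model rests on the isotropic undecimated wavelet transform
(IUWT); a minimal NumPy/SciPy sketch of the standard "à trous" decomposition
it relies on (a generic textbook version, not PyMORESANE's own code) is:

    import numpy as np
    from scipy.ndimage import convolve1d

    B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16  # B3-spline scaling kernel

    def iuwt_decompose(image, scales):
        # 'A trous' IUWT: smooth with an increasingly dilated kernel and keep
        # the per-scale differences as detail (wavelet) coefficients.
        details, current = [], image.astype(float)
        for j in range(scales):
            kernel = np.zeros(4 * 2**j + 1)
            kernel[::2**j] = B3                 # 2**j - 1 zeros between taps
            smooth = convolve1d(current, kernel, axis=0, mode='mirror')
            smooth = convolve1d(smooth, kernel, axis=1, mode='mirror')
            details.append(current - smooth)    # detail plane at scale j
            current = smooth
        return details, current                 # detail planes + coarse residual

    # The transform reconstructs exactly: summing all planes returns the image.
    img = np.random.default_rng(1).random((64, 64))
    details, coarse = iuwt_decompose(img, scales=4)
    assert np.allclose(sum(details) + coarse, img)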