Precision requirements for interferometric gridding in the analysis of a 21 cm power spectrum
Context. Experiments that aim to observe the redshifted 21 cm signal from the epoch of reionisation (EoR) using low-frequency interferometric instruments place stringent requirements on processing accuracy.
Aims. We analyse the accuracy of radio-interferometric gridding of visibilities with the aim of quantifying the power spectrum bias caused by gridding, ultimately to determine the suitability of different imaging algorithms and gridding settings for a 21 cm power spectrum analysis.
Methods. We simulated realistic Low-Frequency Array (LOFAR) data and constructed power spectra with convolutional gridding and w-stacking, w-projection, image-domain gridding, and without w-correction. These were compared against data that were directly Fourier transformed. The influence of oversampling, kernel size, w-quantization, kernel windowing function, and image padding was quantified. The gridding excess power was measured with a foreground subtraction strategy, in which foregrounds were subtracted using Gaussian process regression, as well as with a foreground avoidance strategy.
Results. Constructing a power spectrum with a bias significantly below the expected EoR signals is possible with the methods we tested, but requires a kernel oversampling factor of at least 4000 and, when w-correction is used, at least 500 w-quantization levels. These values are higher than those typically used for imaging, but they are computationally feasible. The kernel size and padding factor parameters are less crucial. Of the tested methods, image-domain gridding shows the highest accuracy with the lowest imaging time.
Conclusions. LOFAR 21 cm power spectrum results are not affected by gridding. Image-domain gridding is overall the most suitable algorithm for 21 cm EoR power spectrum experiments, including future analyses of Square Kilometre Array (SKA) EoR data. Nevertheless, convolutional gridding with tuned parameters results in sufficient accuracy for interferometric 21 cm EoR experiments. This also holds for w-stacking for wide-field imaging. The w-projection algorithm is less suitable because of its kernel oversampling requirements, and a faceting approach is unsuitable because it causes spatial discontinuities.
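The kernel oversampling factor discussed in the results enters where the gridder looks up convolution-kernel values between uv-grid points. Below is a minimal Python sketch of that lookup, using an illustrative Gaussian kernel; the function name, kernel shape, and defaults are our own assumptions, not the paper's implementation.

```python
import numpy as np

def grid_visibilities(uv, vis, n_pix, cell, support=7, oversample=4000):
    """Grid complex visibilities onto a regular uv-plane.

    uv   : (N, 2) baseline coordinates in wavelengths
    vis  : (N,) complex visibilities
    cell : uv-cell size in wavelengths (baselines must fall inside the grid)
    """
    grid = np.zeros((n_pix, n_pix), dtype=complex)
    # Pre-tabulate the kernel at sub-cell resolution: `oversample` controls
    # how finely kernel values are sampled between uv-grid points, which is
    # exactly the parameter the paper finds must be >~ 4000.
    half = (support + 1) * oversample
    taps = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (taps / (oversample * support / 3.0)) ** 2)
    for (u, v), val in zip(uv, vis):
        gu = u / cell + n_pix // 2          # continuous grid coordinates,
        gv = v / cell + n_pix // 2          # image centre at n_pix // 2
        iu, iv = int(np.floor(gu)), int(np.floor(gv))
        du = int(round((gu - iu) * oversample))   # sub-cell offsets pick the
        dv = int(round((gv - iv) * oversample))   # oversampled kernel taps
        for j in range(-support, support + 1):
            kv = kernel[j * oversample - dv + half]
            for i in range(-support, support + 1):
                ku = kernel[i * oversample - du + half]
                grid[iv + j, iu + i] += ku * kv * val
    return grid
```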
Recovering piecewise smooth functions from nonuniform Fourier measurements
In this paper, we consider the problem of reconstructing piecewise smooth
functions to high accuracy from nonuniform samples of their Fourier transform.
We use the framework of nonuniform generalized sampling (NUGS) to do this, and
to ensure high accuracy we employ reconstruction spaces consisting of splines
or (piecewise) polynomials. We analyze the relation between the dimension of
the reconstruction space and the bandwidth of the nonuniform samples, and show
that it is linear for splines and piecewise polynomials of fixed degree, and
quadratic for piecewise polynomials of varying degree.
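As a concrete illustration of the generalized-sampling idea (not the paper's NUGS scheme), the following Python sketch reconstructs a plain polynomial from nonuniform Fourier samples by least squares, using the identity int_{-1}^{1} P_n(x) e^{iwx} dx = 2 i^n j_n(w) for Legendre polynomials; all names are our own.

```python
import numpy as np
from scipy.special import eval_legendre, spherical_jn

def reconstruct_poly(omega, samples, degree):
    """Least-squares fit of Legendre coefficients c_n such that
    sum_n c_n * FT[P_n](omega_k) matches the nonuniform Fourier samples,
    using  int_{-1}^{1} P_n(x) exp(i w x) dx = 2 i^n j_n(w).
    """
    n = np.arange(degree + 1)
    # Design matrix: Fourier transform of each basis polynomial at each omega.
    A = 2.0 * (1j ** n)[None, :] * spherical_jn(n[None, :], np.abs(omega)[:, None])
    A *= np.where(omega[:, None] < 0.0, (-1.0) ** n[None, :], 1.0)  # j_n parity
    c, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return lambda x: sum(ck * eval_legendre(k, x) for k, ck in enumerate(c.real))

# Example: recover f(x) = x^2 from 200 scattered Fourier samples.
rng = np.random.default_rng(0)
w = rng.uniform(-20.0, 20.0, 200)
x = np.linspace(-1.0, 1.0, 2001)
f_hat = np.array([np.sum(x**2 * np.exp(1j * wk * x)) * (x[1] - x[0]) for wk in w])
f = reconstruct_poly(w, f_hat, degree=4)          # f(0.5) is close to 0.25
```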
A Millisecond Interferometric Search for Fast Radio Bursts with the Very Large Array
We report on the first millisecond timescale radio interferometric search for
the new class of transient known as fast radio bursts (FRBs). We used the Very
Large Array (VLA) for a 166-hour, millisecond imaging campaign to detect and
precisely localize an FRB. We observed at 1.4 GHz and produced visibilities
with 5 ms time resolution over 256 MHz of bandwidth. Dedispersed images were
searched for transients with dispersion measures from 0 to 3000 pc/cm3. No
transients were detected in observations of high Galactic latitude fields taken
from September 2013 through October 2014. Observations of a known pulsar show
that images typically had a thermal-noise limited sensitivity of 120 mJy/beam
(8 sigma; Stokes I) in 5 ms and could detect and localize transients over a
wide field of view. Our nondetection limits the FRB rate to less than
7e4/sky/day (95% confidence) above a fluence limit of 1.2 Jy-ms. Assuming a
Euclidean flux distribution, the VLA rate limit is inconsistent with the
published rate of Thornton et al. We recalculate previously published rates
with a homogeneous consideration of the effects of primary beam attenuation,
dispersion, pulse width, and sky brightness. This revises the FRB rate downward
and shows that the VLA observations had a roughly 60% chance of detecting a
typical FRB and that a 95% confidence constraint would require roughly 500
hours of similar VLA observing. Our survey also limits the repetition rate of
an FRB to a factor of 2 below that of any known repeating millisecond radio transient.
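For scale, the dispersive delay such a search must remove follows from the standard cold-plasma delay formula. A short worked example in Python, assuming the band implied by the abstract (1.4 GHz centre, 256 MHz bandwidth):

```python
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Delay (ms) between band bottom and top for a given DM (pc/cm3),
    using the standard cold-plasma constant ~4.149 ms GHz^2 cm^3 / pc."""
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# With the setup above (1.4 GHz centre, 256 MHz bandwidth, so roughly a
# 1.272-1.528 GHz band), a DM of 1000 pc/cm3 smears a burst by ~0.79 s,
# i.e. ~160 of the 5 ms integrations, so images must be dedispersed
# before searching.
print(dispersion_delay_ms(1000.0, 1.272, 1.528))
```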
Radio Astronomy Image Reconstruction in the Big Data Era
Next-generation radio interferometric telescopes pave the way for the future of radio astronomy, with extremely wide fields of view and precision polarimetry not possible at other wavelengths, at the cost of more challenging image reconstruction. These instruments will be used to map large-scale Galactic and extragalactic structures at higher resolution and fidelity than ever before. However, radio astronomy has entered the era of big data, where the sheer data volume limits the expected sensitivity and fidelity of the instruments. New image reconstruction methods are critical to meet the data requirements needed to obtain new scientific discoveries in radio astronomy. To meet this need, this work takes traditional radio astronomical imaging and introduces state-of-the-art frameworks built on sparse image reconstruction algorithms. The software package PURIFY, developed in this work, uses convex optimization algorithms (i.e. the alternating direction method of multipliers) to solve for the reconstructed image. We design, implement, and apply distributed radio interferometric image reconstruction methods for the message passing interface (MPI), showing that PURIFY scales to big-data image reconstruction on computing clusters. We design a distributed wide-field imaging algorithm for non-coplanar arrays, while providing new theoretical insights for wide-field imaging. It is shown that PURIFY's methods provide higher dynamic range than traditional image reconstruction methods, providing a more accurate and detailed sky model for real observations. This sets the stage for state-of-the-art image reconstruction methods to be distributed and applied to next-generation interferometric telescopes, where they can be used to meet big-data challenges and to make new scientific discoveries in radio astronomy and astrophysics.
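As a hedged illustration of the ADMM machinery PURIFY builds on, the following Python sketch solves the simplest (LASSO-type) form of the sparse imaging problem with a positivity constraint; PURIFY's actual measurement operators, priors, and MPI distribution are far more elaborate, and all names here are our own.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=200):
    """ADMM for min_x 0.5||Ax - y||^2 + lam*||x||_1, x real and nonnegative.
    A may be a complex measurement matrix (e.g. a sampled Fourier matrix)."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = (A.conj().T @ A).real + rho * np.eye(n)   # normal-equation matrix
    Aty = (A.conj().T @ y).real
    L = np.linalg.cholesky(AtA)                     # factor once, reuse
    for _ in range(iters):
        # x-update: quadratic subproblem via the cached Cholesky factor.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        # z-update: sparsity (soft-threshold) plus positivity projection.
        z = np.maximum(soft(x + u, lam / rho), 0.0)
        u += x - z                                  # dual ascent
    return z
```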
Interferometric Methods
Future radio telescopes promise great advances in resolution and sensitivity. These
include the Square Kilometre Array (SKA), a two-array instrument in South Africa and
Australia. Similarly, the next-generation Very Large Array (ngVLA) is being designed
for construction in North America. These arrays all promise exceptional advances in
sensitivity, angular resolution, and survey speed. The SKA and ngVLA are both
specified to have sensitivities at the level of $\mu$Jy's. The SKA-Low instrument
will consist of a huge number of dipole antennas in Australia, which pushes the
bounds of current FX correlator technology with its $\mathcal{O}(N^2)$ scaling,
where $N$ is the number of antennas. The design proposals for these instruments
include a dense core of antennas, necessitating advances in imaging methods for
these very dense cores versus more traditionally sparse instruments.
Another ambitious experiment is the Hydrogen Epoch of Reionisation Array (HERA) in
South Africa, which hopes to make the first direct detection of the Epoch of
Reionisation through the red-shifted H{\sc i} signal, a signal that is orders of
magnitude smaller than the thermal-like noise.
In this thesis, these problems are tackled by re-examining the underlying
principles of interferometry. The first working example of a direct imaging
correlator is presented, which allows images to be formed directly from the
voltages from each antenna in a dense array, without the expensive
cross-correlation operation that is typically required. A detailed discussion
is given of how standard steps in interferometric imaging differ in this new
scheme, including calibration. Additionally, the first wide-field direct imaging
correlator is presented, which allows the problems of non-coplanarity to be
dealt with for both sparse and dense arrays in a very efficient manner on modern
GPU compute hardware. These are, to the best of the author's knowledge, the only
working implementations of a direct imaging correlator for generic arrays with
no restrictions on the geometry of the array or the homogeneity of its
constituent receiver elements. These new approaches have been published
in the scientific literature, as discussed in the Declaration.
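To make the core idea concrete, here is a toy Python sketch of a direct imaging correlator for the special case of antennas lying on a regular grid; the thesis implementations handle generic, non-gridded arrays, and the names here are illustrative only.

```python
import numpy as np

def direct_image(voltages, grid_pos, n_side):
    """Accumulate a dirty image directly from antenna voltages.

    voltages : (n_t, n_ant) complex voltage samples for one channel
    grid_pos : (n_ant, 2) integer grid coordinates of each antenna
    """
    acc = np.zeros((n_side, n_side))
    for frame in voltages:
        aperture = np.zeros((n_side, n_side), dtype=complex)
        aperture[grid_pos[:, 0], grid_pos[:, 1]] = frame   # place voltages
        # One 2-D FFT per time sample replaces the O(N^2) pairwise
        # cross-correlation; squaring and accumulating the E-field image
        # yields the dirty image (including autocorrelation terms).
        beam = np.fft.fft2(aperture)
        acc += (beam * beam.conj()).real
    return np.fft.fftshift(acc) / len(voltages)
```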
Moving on from this, the closure-phase bispectrum is presented as a way of uncovering
the cosmological Epoch of Reionisation signal from the H{\sc i} line. This uses the
HERA telescope, which consists of a dense core of parabolic antennas in a highly
redundant layout. A data reduction and processing pipeline for the HERA telescope
is constructed and presented, for use with the bispectrum. Initial results towards
a cosmological limit are reported.
The HERA telescope relies on redundancy in its antenna elements for its calibration
and measurement strategy. The bispectrum, with its unique mathematical properties,
in combination with forward modelling, is shown to be a potent tool for probing
departures from the assumed redundancy. It is shown, through this method, that HERA
suffers significant direction-dependent non-redundancies in the dataset used for
our analysis, which are extremely difficult to calibrate out.
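The property exploited here is that the bispectrum's phase, the closure phase, is immune to antenna-based gain phases. A minimal numerical demonstration of that cancellation, with our own illustrative names:

```python
import numpy as np

def bispectrum(v_ab, v_bc, v_ca):
    """Triple product of visibilities around a closed antenna triangle;
    its phase is the closure phase."""
    return v_ab * v_bc * v_ca

# Antenna-based phase errors cancel: v_ij -> g_i * conj(g_j) * v_ij
# contributes (phi_a - phi_b) + (phi_b - phi_c) + (phi_c - phi_a) = 0.
rng = np.random.default_rng(1)
true = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 3))   # true visibilities
g = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 3))      # per-antenna errors
obs = (true[0] * g[0] * g[1].conj(),
       true[1] * g[1] * g[2].conj(),
       true[2] * g[2] * g[0].conj())
assert np.allclose(bispectrum(*obs), bispectrum(*true))
```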
Finally, the problem of wide-field imaging in next-generation arrays is tackled
through the development and implementation of a new scheme of wide-field
imaging. This uses a new method of parallelising the problem of wide-field
imaging, and is intended for use with the very large datasets that will be
produced by upcoming instruments. Two schemes are introduced: w-towers, and
Improved w-towers. The latter generalises the former in combination with
advances in optimal convolution theory for the radio astronomy ``gridding'' problem.
The theory behind this approach is explored, and a high-performance implementation
is presented for w-towers and Improved w-stacking within Improved w-towers.
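As a hedged sketch of the w-stacking principle that these schemes build on (our own minimal version, not the thesis implementation): per-w-plane dirty images are corrected with the w-dependent phase screen and summed.

```python
import numpy as np

def w_stack(uv_grids, w_values, l, m):
    """Combine per-w-plane gridded visibilities into one wide-field image.

    uv_grids : list of gridded uv-planes, one per w value
    l, m     : direction-cosine grids of the output image
    """
    n_minus_1 = np.sqrt(np.maximum(1.0 - l**2 - m**2, 0.0)) - 1.0
    image = np.zeros(l.shape)
    for grid, w in zip(uv_grids, w_values):
        # Each w-plane is imaged with a plain 2-D FFT, then corrected by
        # the w-dependent phase screen before being accumulated.
        plane = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
        image += (plane * np.exp(2j * np.pi * w * n_minus_1)).real
    return image
```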
Cygnus A super-resolved via convex optimisation from VLA data
We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for
interferometric imaging, which is based on convex optimisation, for the
super-resolution of Cyg A from observations at 8.422 GHz and 6.678 GHz with
the Karl G. Jansky Very Large Array (VLA). The associated average sparsity
and positivity priors enable image reconstruction beyond instrumental
resolution. An adaptive Preconditioned Primal-Dual algorithmic structure is
developed for imaging in the presence of unknown noise levels and calibration
errors. We demonstrate the superior performance of the algorithm with respect
to conventional CLEAN-based methods, reflected in super-resolved images
with high fidelity. The high-resolution features of the recovered images are
validated by referring to maps of Cyg A at higher frequencies, more precisely
17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio
transient in Cyg A, revealed in the recovered images of the investigated data
sets. Our MATLAB code is available online on GitHub.
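To illustrate the primal-dual machinery the abstract refers to, the following Python sketch runs a plain Chambolle-Pock iteration for a sparsity-plus-positivity problem. It uses image-domain sparsity instead of SARA's reweighted average over wavelet bases and omits the preconditioning and adaptivity; all names are our own.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual(A, y, lam=1e-3, iters=500):
    """Chambolle-Pock iteration for min_{x>=0} 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2)              # operator norm; step sizes need
    tau = sigma = 0.99 / L                # tau * sigma * L^2 <= 1
    x = np.zeros(A.shape[1])
    xbar = x.copy()
    d = np.zeros_like(y)
    for _ in range(iters):
        # Dual step: proximal operator of the conjugate of 0.5||. - y||^2.
        d = (d + sigma * (A @ xbar) - sigma * y) / (1.0 + sigma)
        # Primal step: soft-threshold and project onto the positive orthant.
        x_new = np.maximum(soft(x - tau * (A.conj().T @ d).real, tau * lam), 0.0)
        xbar = 2.0 * x_new - x            # over-relaxation / extrapolation
        x = x_new
    return x
```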
3D Detection and Characterisation of ALMA Sources through Deep Learning
We present a Deep-Learning (DL) pipeline developed for the detection and
characterization of astronomical sources within simulated Atacama Large
Millimeter/submillimeter Array (ALMA) data cubes. The pipeline is composed of
six DL models: a Convolutional Autoencoder for source detection within the
spatial domain of the integrated data cubes, a Recurrent Neural Network (RNN)
for denoising and peak detection within the frequency domain, and four Residual
Neural Networks (ResNets) for source characterization. The combination of
spatial and frequency information improves completeness while decreasing
spurious signal detection. To train and test the pipeline, we developed a
simulation algorithm able to generate realistic ALMA observations, i.e. both
sky model and dirty cubes. The algorithm always simulates a central source
surrounded by fainter ones scattered within the cube. Some sources were
spatially superimposed in order to test the pipeline's deblending capabilities.
The detection performance of the pipeline was compared to that of other
methods, and significant improvements were achieved. Source morphologies are
detected with subpixel accuracy, with mean residual errors at the subpixel
(milliarcsecond) level in position and at the mJy/beam level in flux
estimation. Projection angles and flux densities are also recovered close to
their true values for the large majority of sources in the test set. While our
pipeline is fine-tuned for ALMA data, the technique is applicable to other
interferometric observatories, such as SKA, LOFAR, VLBI, and VLTI.
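As a toy illustration of the first pipeline stage only, here is a minimal convolutional autoencoder for 2-D source maps in PyTorch; the published pipeline's architecture, losses, and training procedure are not reproduced here, and this sketch is our own simplification.

```python
import torch
import torch.nn as nn

class SourceDetector(nn.Module):
    """Toy convolutional autoencoder mapping a dirty image to a source
    probability map; input shape (batch, 1, H, W) with H, W multiples of 4."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

# Trained, e.g., with nn.BCELoss() against simulated sky-model masks.
model = SourceDetector()
maps = model(torch.randn(4, 1, 64, 64))   # -> (4, 1, 64, 64) in [0, 1]
```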