
    Precision requirements for interferometric gridding in the analysis of a 21 cm power spectrum

    Context. Experiments that aim to observe the redshifted 21 cm signal from the epoch of reionisation (EoR) using interferometric low-frequency instruments have stringent requirements on the processing accuracy. Aims. We analyse the accuracy of radio interferometric gridding of visibilities with the aim of quantifying the power spectrum bias caused by gridding, ultimately to determine the suitability of different imaging algorithms and gridding settings for a 21 cm power spectrum analysis. Methods. We simulated realistic Low-Frequency Array (LOFAR) data and constructed power spectra with convolutional gridding combined with w stacking, with w projection, with image-domain gridding, and without w correction. These were compared against data that were directly Fourier transformed. The influence of oversampling, kernel size, w-quantization, kernel windowing function, and image padding was quantified. The gridding excess power was measured with a foreground subtraction strategy, in which foregrounds were subtracted using Gaussian process regression, as well as with a foreground avoidance strategy. Results. Constructing a power spectrum with a significantly lower bias than the expected EoR signals is possible with the methods we tested, but requires a kernel oversampling factor of at least 4000 and, when w-correction is used, at least 500 w-quantization levels. These values are higher than those typically used for imaging, but they are computationally feasible. The kernel size and padding factor parameters are less crucial. Of the tested methods, image-domain gridding shows the highest accuracy with the lowest imaging time. Conclusions. LOFAR 21 cm power spectrum results are not affected by gridding. Image-domain gridding is overall the most suitable algorithm for 21 cm EoR power spectrum experiments, including future analyses of Square Kilometre Array (SKA) EoR data. Nevertheless, convolutional gridding with tuned parameters results in sufficient accuracy for interferometric 21 cm EoR experiments. This also holds for w stacking for wide-field imaging. The w-projection algorithm is less suitable because of its kernel oversampling requirements, and a faceting approach is unsuitable because it causes spatial discontinuities.
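    To make the role of the kernel oversampling factor concrete, here is a minimal numpy sketch of convolutional gridding with a tabulated, oversampled kernel. The kernel shape, support, helper names, and all parameter values are illustrative assumptions, not the paper's LOFAR setup; the point is only where the oversampling factor enters.

    ```python
    import numpy as np

    def make_kernel_table(support=7, oversample=4000):
        """Tabulate a separable gridding kernel at `oversample` sub-cell
        offsets per uv cell. A Gaussian window stands in here for the
        usual prolate-spheroidal or Kaiser-Bessel choice."""
        idx = np.arange(-support * oversample // 2, support * oversample // 2 + 1)
        x = idx / oversample                 # offsets in units of uv cells
        sigma = support / 6.0
        return np.exp(-0.5 * (x / sigma) ** 2)

    def grid_visibilities(uv, vis, npix, cell, table, support=7, oversample=4000):
        """Accumulate visibilities onto a regular uv grid via the lookup
        table; uv is (N, 2) in wavelengths, cell is the uv cell size.
        Bounds checks and kernel normalisation are omitted for brevity."""
        grid = np.zeros((npix, npix), dtype=complex)
        half, centre, mid = support // 2, npix // 2, len(table) // 2
        for (u, v), d in zip(uv, vis):
            gu, gv = u / cell + centre, v / cell + centre
            iu, iv = int(round(gu)), int(round(gv))
            # the sub-cell offset is quantised to the oversampled table:
            # a coarse table (small `oversample`) perturbs the effective kernel
            du = int(round((gu - iu) * oversample))
            dv = int(round((gv - iv) * oversample))
            for j in range(-half, half + 1):
                wv = table[j * oversample - dv + mid]
                for i in range(-half, half + 1):
                    wu = table[i * oversample - du + mid]
                    grid[iv + j, iu + i] += wu * wv * d
        return grid
    ```

    Rounding each visibility's sub-cell offset to the nearest tabulated sample is the quantisation step whose power-spectrum imprint the paper measures, which is why the required oversampling factor is so much larger than imaging defaults.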

    Recovering piecewise smooth functions from nonuniform Fourier measurements

    In this paper, we consider the problem of reconstructing piecewise smooth functions to high accuracy from nonuniform samples of their Fourier transform. We use the framework of nonuniform generalized sampling (NUGS) to do this, and to ensure high accuracy we employ reconstruction spaces consisting of splines or (piecewise) polynomials. We analyze the relation between the dimension of the reconstruction space and the bandwidth of the nonuniform samples, and show that it is linear for splines and piecewise polynomials of fixed degree, and quadratic for piecewise polynomials of varying degree.
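    As a rough numerical illustration of the generalized-sampling idea, the following numpy sketch fits a piecewise-polynomial reconstruction space to nonuniform Fourier samples by least squares. The basis, sampling scheme, quadrature, and the assumption of a known jump location are all illustrative simplifications rather than the paper's NUGS construction.

    ```python
    import numpy as np

    def fourier_samples(f_grid, x, omegas):
        """Nonuniform Fourier samples of f, by simple Riemann quadrature."""
        dx = x[1] - x[0]
        return np.array([np.sum(f_grid * np.exp(-2j * np.pi * w * x)) * dx
                         for w in omegas])

    # target: a piecewise smooth function with a jump at x = 0.3
    x = np.linspace(0.0, 1.0, 4001)
    f = np.where(x < 0.3, np.sin(2 * np.pi * x), np.cos(3 * np.pi * x))

    # nonuniform frequencies and the corresponding "measurements"
    rng = np.random.default_rng(0)
    omegas = np.sort(rng.uniform(-40.0, 40.0, 200))
    b = fourier_samples(f, x, omegas)

    # reconstruction space: polynomials of degree < d on each side of the
    # (assumed known) jump location, i.e. 2*d unknown coefficients
    d = 8
    basis = []
    for side in (x < 0.3, x >= 0.3):
        for k in range(d):
            leg = np.polynomial.legendre.Legendre.basis(k)
            basis.append(np.where(side, leg(2 * x - 1), 0.0))

    # least-squares fit of the coefficients to the Fourier data
    A = np.column_stack([fourier_samples(phi, x, omegas) for phi in basis])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    f_rec = np.real(np.column_stack(basis) @ c)
    print("max error away from the jump:",
          np.abs(f_rec - f)[np.abs(x - 0.3) > 0.01].max())
    ```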

    A Millisecond Interferometric Search for Fast Radio Bursts with the Very Large Array

    We report on the first millisecond-timescale radio interferometric search for the new class of transient known as fast radio bursts (FRBs). We used the Very Large Array (VLA) for a 166-hour, millisecond imaging campaign to detect and precisely localize an FRB. We observed at 1.4 GHz and produced visibilities with 5 ms time resolution over 256 MHz of bandwidth. Dedispersed images were searched for transients with dispersion measures from 0 to 3000 pc/cm^3. No transients were detected in observations of high Galactic latitude fields taken from September 2013 through October 2014. Observations of a known pulsar show that images typically had a thermal-noise-limited sensitivity of 120 mJy/beam (8 sigma; Stokes I) in 5 ms and could detect and localize transients over a wide field of view. Our nondetection limits the FRB rate to less than 7x10^4 per sky per day (95% confidence) above a fluence limit of 1.2 Jy ms. Assuming a Euclidean flux distribution, the VLA rate limit is inconsistent with the published rate of Thornton et al. We recalculate previously published rates with a homogeneous consideration of the effects of primary beam attenuation, dispersion, pulse width, and sky brightness. This revises the FRB rate downward and shows that the VLA observations had a roughly 60% chance of detecting a typical FRB, and that a 95% confidence constraint would require roughly 500 hours of similar VLA observing. Our survey also limits the repetition rate of an FRB to a factor of 2 below that of any known repeating millisecond radio transient. Comment: Submitted to ApJ. 13 pages, 9 figures.
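    The core dedispersed-search step can be sketched in a few lines of numpy: shift each frequency channel by the cold-plasma dispersion delay for a trial DM, sum over frequency, and threshold the resulting time series. Array shapes, function names, and the 8-sigma cut are illustrative, not the survey's actual imaging pipeline.

    ```python
    import numpy as np

    K_DM = 4.149e3  # dispersion constant, s MHz^2 / (pc cm^-3)

    def dedisperse(dynspec, freqs_mhz, dt, dm):
        """Remove the cold-plasma dispersion delay relative to the highest
        frequency channel; dynspec is (ntime, nchan), dt in seconds.
        np.roll wraps around, which is acceptable for a sketch."""
        f_ref = freqs_mhz.max()
        out = np.empty_like(dynspec)
        for j, f in enumerate(freqs_mhz):
            delay = K_DM * dm * (f ** -2 - f_ref ** -2)   # seconds
            out[:, j] = np.roll(dynspec[:, j], -int(round(delay / dt)))
        return out

    def search(dynspec, freqs_mhz, dt, dms, threshold=8.0):
        """Return (dm, sample, snr) for every trial DM whose
        frequency-summed time series exceeds `threshold` sigma."""
        hits = []
        for dm in dms:
            ts = dedisperse(dynspec, freqs_mhz, dt, dm).sum(axis=1)
            snr = (ts - ts.mean()) / ts.std()
            i = int(np.argmax(snr))
            if snr[i] > threshold:
                hits.append((dm, i, float(snr[i])))
        return hits
    ```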

    Radio Astronomy Image Reconstruction in the Big Data Era

    Next-generation radio interferometric telescopes pave the way for the future of radio astronomy, with extremely wide fields of view and precision polarimetry not possible at other wavelengths, at the cost of increasingly demanding image reconstruction. These instruments will be used to map large-scale Galactic and extragalactic structures at higher resolution and fidelity than ever before. However, radio astronomy has entered the era of big data, and the sheer volume of data limits the sensitivity and fidelity that these instruments can achieve. New image reconstruction methods are critical to meet the data requirements needed to make new scientific discoveries in radio astronomy. To meet this need, this work builds on traditional radio astronomical imaging and introduces state-of-the-art image reconstruction frameworks built on sparse image reconstruction algorithms. The software package PURIFY, developed in this work, uses convex optimization algorithms (i.e. the alternating direction method of multipliers) to solve for the reconstructed image. We design, implement, and apply distributed radio interferometric image reconstruction methods for the message passing interface (MPI), showing that PURIFY scales to big-data image reconstruction on computing clusters. We design a distributed wide-field imaging algorithm for non-coplanar arrays, while providing new theoretical insights for wide-field imaging. It is shown that PURIFY's methods provide higher dynamic range than traditional image reconstruction methods, yielding a more accurate and detailed sky model for real observations. This sets the stage for state-of-the-art image reconstruction methods to be distributed and applied to next-generation interferometric telescopes, where they can be used to meet big-data challenges and to make new scientific discoveries in radio astronomy and astrophysics.
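    As a hedged sketch of the kind of convex solver described here, the following numpy code implements a plain ADMM iteration for a positivity-constrained LASSO with a dense measurement matrix. PURIFY itself works with measurement operators, wavelet dictionaries, and MPI distribution; this shows only the core splitting, and the function name and parameter values are invented for illustration.

    ```python
    import numpy as np

    def admm_lasso(A, y, lam=0.1, rho=1.0, iters=300):
        """min over real x >= 0 of 0.5*||Ax - y||^2 + lam*||x||_1 via ADMM;
        A may be complex (e.g. a Fourier measurement matrix)."""
        n = A.shape[1]
        AtA = np.real(A.conj().T @ A)   # exact for a real-valued image
        Aty = np.real(A.conj().T @ y)
        # factor once; the x-update is then a cheap pair of triangular solves
        L = np.linalg.cholesky(AtA + rho * np.eye(n))
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        for _ in range(iters):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
            z = np.maximum(x + u - lam / rho, 0.0)  # prox of lam*||.||_1 + positivity
            u = u + x - z
        return z
    ```

    Caching the Cholesky factor reflects the usual ADMM design choice: the expensive linear algebra is done once, and each iteration costs only solves and a soft-threshold, which is what makes the splitting attractive at scale.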

    Cygnus A super-resolved via convex optimisation from VLA data

    We leverage the Sparsity Averaging Reweighted Analysis (SARA) approach for interferometric imaging, which is based on convex optimisation, for the super-resolution of Cyg A from observations at frequencies of 8.422 GHz and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average sparsity and positivity priors enable image reconstruction beyond instrumental resolution. An adaptive Preconditioned Primal-Dual algorithmic structure is developed for imaging in the presence of unknown noise levels and calibration errors. We demonstrate the superior performance of the algorithm with respect to conventional CLEAN-based methods, reflected in super-resolved images with high fidelity. The high-resolution features of the recovered images are validated against maps of Cyg A at higher frequencies, more precisely 17.324 GHz and 14.252 GHz. We also confirm the recent discovery of a radio transient in Cyg A, revealed in the recovered images of the investigated data sets. Our MATLAB code is available online on GitHub. Comment: 14 pages, 7 figures (3 of 7 animated), accepted for publication in MNRAS.
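    For orientation, here is a bare-bones (non-adaptive, non-preconditioned) primal-dual iteration for an l1-minimisation problem with a data-fidelity ball constraint and positivity, in the spirit of the algorithmic structure the abstract describes. The step sizes, regularisation parameter, and the absence of a sparsifying dictionary are illustrative simplifications, not the paper's algorithm.

    ```python
    import numpy as np

    def primal_dual(Phi, y, eps, lam=1e-3, iters=500):
        """min over real x >= 0 of lam*||x||_1 s.t. ||Phi x - y||_2 <= eps,
        via a basic Chambolle-Pock-style primal-dual iteration."""
        m, n = Phi.shape
        L = np.linalg.norm(Phi, 2)            # spectral norm of the operator
        tau = sigma = 0.95 / L                # ensures tau * sigma * L^2 < 1
        x = np.zeros(n)
        xb = x.copy()
        v = np.zeros(m, dtype=complex)
        for _ in range(iters):
            # dual step: prox of the conjugate of the eps-ball indicator,
            # computed via Moreau's identity (a projection onto the ball)
            w = v + sigma * (Phi @ xb)
            r = w / sigma - y
            proj = y + r * min(1.0, eps / max(np.linalg.norm(r), 1e-12))
            v = w - sigma * proj
            # primal step: prox of lam*||.||_1 plus the positivity constraint
            x_new = np.maximum(np.real(x - tau * (Phi.conj().T @ v)) - tau * lam, 0.0)
            xb = 2 * x_new - x                # over-relaxation step
            x = x_new
        return x
    ```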

    3D Detection and Characterisation of ALMA Sources through Deep Learning

    We present a Deep-Learning (DL) pipeline developed for the detection and characterization of astronomical sources within simulated Atacama Large Millimeter/submillimeter Array (ALMA) data cubes. The pipeline is composed of six DL models: a Convolutional Autoencoder for source detection within the spatial domain of the integrated data cubes, a Recurrent Neural Network (RNN) for denoising and peak detection within the frequency domain, and four Residual Neural Networks (ResNets) for source characterization. The combination of spatial and frequency information improves completeness while decreasing spurious signal detection. To train and test the pipeline, we developed a simulation algorithm able to generate realistic ALMA observations, i.e. both sky-model and dirty cubes. The algorithm always simulates a central source surrounded by fainter ones scattered within the cube. Some sources were spatially superimposed in order to test the pipeline's deblending capabilities. The detection performance of the pipeline was compared to that of other methods, and significant improvements were achieved. Source morphologies are detected with subpixel accuracy, with mean residual errors of 10^-3 pixel (0.1 mas) on positions and 10^-1 mJy/beam on flux estimations. Projection angles and flux densities are also recovered to within 10% of the true values for 80% and 73% of all sources in the test set, respectively. While our pipeline is fine-tuned for ALMA data, the technique is applicable to other interferometric observatories, such as SKA, LOFAR, VLBI, and VLTI.
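    As a sketch of the spatial-detection stage only, the following PyTorch snippet defines a small convolutional autoencoder mapping an integrated image to a per-pixel source-probability map. The layer sizes and channel counts are invented for illustration; the paper's actual architecture, and its RNN and ResNet stages, are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
                nn.Sigmoid(),  # per-pixel source probability
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # usage: a frequency-integrated cube as a (batch, 1, H, W) image
    model = ConvAutoencoder()
    prob = model(torch.randn(4, 1, 128, 128))   # output shape (4, 1, 128, 128)
    ```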