
    A GPU based real-time software correlation system for the Murchison Widefield Array prototype

    Modern graphics processing units (GPUs) are inexpensive commodity hardware that offer Tflop/s theoretical computing capacity. GPUs are well suited to many compute-intensive tasks including digital signal processing. We describe the implementation and performance of a GPU-based digital correlator for radio astronomy. The correlator is implemented using the NVIDIA CUDA development environment. We evaluate three design options on two generations of NVIDIA hardware. The different designs utilize the internal registers, shared memory and multiprocessors in different ways. We find that optimal performance is achieved with the design that minimizes global memory reads on recent generations of hardware. The GPU-based correlator outperforms a single-threaded CPU equivalent by a factor of 60 for a 32 antenna array, and runs on commodity PC hardware. The extra compute capability provided by the GPU maximises the correlation capability of a PC while retaining the fast development time associated with using standard hardware, networking and programming languages. In this way, a GPU-based correlation system represents a middle ground in design space between high performance, custom built hardware and pure CPU-based software correlation. The correlator was deployed at the Murchison Widefield Array 32 antenna prototype system, where it ran in real-time for extended periods. We briefly describe the data capture, streaming and correlation system for the prototype array. Comment: 11 pages, to appear in PASP
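
    To make the core operation concrete, here is a minimal single-threaded NumPy sketch of the cross-multiply-and-accumulate stage that a correlator like this performs for every antenna pair and frequency channel. The array shapes and function name are illustrative assumptions; the paper's implementation is a CUDA kernel tuned for register and shared-memory use, not this reference loop.

```python
import numpy as np

def correlate(spectra):
    """Reference cross-correlation: spectra has shape (n_ant, n_chan, n_time);
    returns time-averaged visibilities of shape (n_ant, n_ant, n_chan)."""
    n_ant, n_chan, n_time = spectra.shape
    vis = np.zeros((n_ant, n_ant, n_chan), dtype=complex)
    for t in range(n_time):
        s = spectra[:, :, t]
        # V_ij(chan) = <s_i(chan) * conj(s_j(chan))>, all pairs at once
        vis += np.einsum('ic,jc->ijc', s, s.conj())
    return vis / n_time

# Example: 32 antennas, 256 channels, 100 time samples of noise
rng = np.random.default_rng(0)
data = rng.standard_normal((32, 256, 100)) + 1j * rng.standard_normal((32, 256, 100))
print(correlate(data).shape)  # (32, 32, 256)
```

    The arithmetic here is trivially parallel over baselines and channels, which is why the abstract's finding makes sense: on a GPU the limiting factor becomes memory traffic rather than arithmetic, so the winning design is the one that minimizes global memory reads.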

    The lens and source of the optical Einstein ring gravitational lens ER 0047-2808

    (Abridged) We perform a detailed analysis of the optical gravitational lens ER 0047-2808 imaged with WFPC2 on the Hubble Space Telescope. Using software specifically designed for the analysis of resolved gravitational lens systems, we focus on how the image alone can constrain the mass distribution in the lens galaxy. We find the data are of sufficient quality to strongly constrain the lens model with no a priori assumptions about the source. Using a variety of mass models, we find statistically acceptable results for elliptical isothermal-like models with an Einstein radius of 1.17''. An elliptical power-law model (Sigma \propto R^-beta) for the surface mass density favours a slope slightly steeper than isothermal, with beta = 1.08 +/- 0.03. Other models, including a constant M/L, a pure NFW halo and (surprisingly) an isothermal sphere with external shear, are ruled out by the data. We find the galaxy light profile can only be fit with a Sersic plus point source model. The resulting total M/L_B contained within the images is 4.7 +/- 0.3 h_65. In addition, we find the luminous matter is aligned with the total mass distribution to within a few degrees. The source, reconstructed by the software, is revealed to have two bright regions, with an unresolved component inside the caustic and a resolved component straddling a fold caustic. The angular size of the entire source is approx. 0.1'' and its (unlensed) Lyman-alpha flux is 3 x 10^-17 erg/s/cm^2. Comment: 13 pages, 5 figures. Revised version accepted for publication in MNRAS
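
    As a quick illustration of the preferred mass model, the sketch below evaluates an elliptical power-law surface density Sigma \propto R^-beta with the quoted beta = 1.08; the normalisation, axis ratio and function name are placeholder assumptions, not values from the paper.

```python
import numpy as np

def sigma_power_law(x, y, sigma0=1.0, q=0.8, beta=1.08):
    """Elliptical power-law surface density: Sigma = sigma0 * R^(-beta),
    with elliptical radius R = sqrt(x^2 + (y/q)^2). sigma0 and q are
    illustrative placeholders, not fitted values."""
    R = np.hypot(x, y / q)
    return sigma0 * R**(-beta)

# Density at roughly the 1.17'' Einstein radius quoted in the abstract
print(sigma_power_law(1.17, 0.0))
```

    Note that beta = 1 corresponds to the isothermal profile, so the fitted beta = 1.08 +/- 0.03 is only marginally steeper than isothermal, consistent with the abstract's wording.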

    Direction-Dependent Polarised Primary Beams in Wide-Field Synthesis Imaging

    The process of wide-field synthesis imaging is explored, with the aim of understanding the implications of variable, polarised primary beams for forthcoming Epoch of Reionisation experiments. These experiments seek to detect weak signatures from redshifted 21cm emission in deep residual datasets, after suppression and subtraction of foreground emission. Many subtraction algorithms benefit from low side-lobe levels and low polarisation leakage at the outset, and both of these are intimately linked to how the polarised primary beams are handled. Building on previous contributions from a number of authors, in which direction-dependent corrections are incorporated into visibility gridding kernels, we consider the special characteristics of arrays of fixed dipole antennas operating around 100-200 MHz, looking towards instruments such as the Square Kilometre Array (SKA) and the Hydrogen Epoch of Reionization Array (HERA). We show that integrating snapshots in the image domain can help to produce compact gridding kernels, and also reduce the need to make complicated polarised leakage corrections during gridding. We also investigate an alternative form for the gridding kernel that can suppress variations in the direction-dependent weighting of gridded visibilities by tens of dB, while maintaining compact support. Comment: 15 pages, 4 figures. Accepted for publication in JA
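
    For context, convolutional gridding, the operation whose kernel the paper modifies to absorb direction-dependent, polarised corrections, can be sketched as follows. The grid size, cell scale and Gaussian kernel are illustrative assumptions; practical imagers use anti-aliasing functions such as prolate spheroidals and handle edge cases this sketch ignores.

```python
import numpy as np

def grid_visibilities(uv, vis, kernel, n=256, cell=1.0):
    """Spread each visibility onto a uv grid through a compact kernel.
    uv: (N, 2) baseline coordinates in wavelengths; vis: (N,) complex
    samples; kernel: small odd-sized 2-D array of gridding weights."""
    grid = np.zeros((n, n), dtype=complex)
    half = kernel.shape[0] // 2
    for (u, v), val in zip(uv, vis):
        iu = int(round(u / cell)) + n // 2
        iv = int(round(v / cell)) + n // 2
        # Assumes samples land away from the grid edges
        grid[iv - half:iv + half + 1, iu - half:iu + half + 1] += val * kernel
    return grid

# Illustrative 7x7 Gaussian kernel standing in for an anti-aliasing function
x = np.arange(-3, 4)
g = np.exp(-0.5 * (x / 1.5) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()

uv = np.array([[30.0, -42.0], [5.0, 12.5]])
vis = np.array([1.0 + 1.0j, 0.5 - 0.2j])
print(grid_visibilities(uv, vis, kernel).sum())
```

    The cost of gridding scales with the kernel's support, which is why the paper's result that image-domain snapshot integration keeps the kernels compact matters for throughput.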

    Spectral ageing in the era of big data : Integrated versus resolved models

    This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society. © 2017 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved. Continuous injection models of spectral ageing have long been used to determine the age of radio galaxies from their integrated spectrum; however, many questions about their reliability remain unanswered. With various large-area surveys imminent (e.g. LOw Frequency ARray, MeerKAT, Murchison Widefield Array) and planning for the next generation of radio interferometers well underway (e.g. next generation VLA, Square Kilometre Array), investigations of radio galaxy physics are set to shift away from studies of individual sources to the population as a whole. Determining if and how integrated models of spectral ageing can be applied in the era of big data is therefore crucial. In this paper, I compare classical integrated models of spectral ageing to recent well-resolved studies that use modern analysis techniques on small spatial scales, to determine their robustness and validity as a source selection method. I find that integrated models are unable to recover key parameters and, even when these are known a priori, provide a poor, frequency-dependent description of a source's spectrum. I show a disparity of up to a factor of 6 in age between the integrated and resolved methods but suggest that, even with these inconsistencies, such models still provide a potential method of candidate selection in the search for remnant radio galaxies and in providing a cleaner selection of high-redshift radio galaxies in z-α selected samples. Peer reviewed
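
    The paper's central point, that a sum of differently aged regions need not look like any single aged model, is easy to demonstrate numerically. The sketch below uses an exponentially cut-off power law as a crude stand-in for a proper JP-style ageing model; the injection index and break frequencies are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

freqs = np.logspace(7.5, 10, 50)   # 30 MHz to 10 GHz, illustrative
alpha_inj = 0.6                    # assumed injection index

def region_spectrum(nu, nu_break):
    # Crude stand-in for an aged synchrotron spectrum
    return nu**(-alpha_inj) * np.exp(-nu / nu_break)

# Regions aged by different amounts, i.e. different break frequencies
breaks = np.logspace(8.5, 10, 20)
integrated = sum(region_spectrum(freqs, nb) for nb in breaks)

# Local spectral index of the summed spectrum: it steepens gradually
# with frequency rather than following one single-break model
local_alpha = -np.gradient(np.log(integrated), np.log(freqs))
print(f"alpha ranges from {local_alpha.min():.2f} to {local_alpha.max():.2f}")
```

    The steadily drifting index is the qualitative behaviour behind the abstract's "frequency-dependent description": a single-break fit will land on different parameters depending on the frequency range used.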

    Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs

    The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities. Comment: Version accepted by Comp. Phys. Comm.
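
    The abstract's numbers imply a useful back-of-envelope figure: the data volume per cadence and the compute intensity the pipeline must sustain per input byte. A quick check, using only the quantities quoted above:

```python
rate = 5 * 2**30    # 5 GiB/s in bytes per second
cadence = 8         # seconds per batch
flops = 2.5e12      # required sustained FLOP/s

bytes_per_cadence = rate * cadence
flop_per_byte = flops * cadence / bytes_per_cadence
print(f"{bytes_per_cadence / 2**30:.0f} GiB per 8 s batch, "
      f"~{flop_per_byte:.0f} FLOP per input byte")
# -> 40 GiB per 8 s batch, ~466 FLOP per input byte
```

    Hundreds of floating-point operations per input byte is the high-arithmetic-intensity regime where GPUs comfortably outperform CPUs, which motivates the all-GPU processing elements described in the paper.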

    A VLBA search for binary black holes in active galactic nuclei with double-peaked optical emission line spectra

    We have examined a subset of 11 active galactic nuclei (AGN), drawn from a sample of 87 objects that possess double-peaked optical emission line spectra, as put forward by Wang et al. (2009a), which are detectable in the FIRST survey at radio wavelengths. The double-peaked nature of the optical emission line spectra has been suggested as evidence for the existence of binary black holes in these AGN, although this interpretation is controversial. We make a simple suggestion: direct evidence of binary black holes in these objects could be searched for in the form of dual sources of compact radio emission associated with the AGN. To explore this idea, we have used the Very Long Baseline Array to observe these 11 objects from the Wang et al. (2009a) sample. Of the 11 objects, we detect compact radio emission from two, SDSS J151709+335324 and SDSS J160024+264035. Both objects show single components of compact radio emission. The morphology of SDSS J151709+335324 is consistent with a recent comprehensive multi-wavelength study of this object by Rosario et al. (2010). Assuming that the entire sample consists of binary black holes, we would expect of order one double radio core to be detected, based on radio wavelength detection rates from FIRST and VLBI surveys. We have not detected any double cores, thus this work does not substantially support the idea that AGN with double-peaked optical emission lines contain binary black holes. However, the study of larger samples should be undertaken to provide a more secure statistical result, given the estimated detection rates. Comment: 14 pages, 3 figures. To appear in AJ
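
    The expectation quoted above follows from a simple counting argument, sketched below. If each compact core is detected independently with probability p, a sample of N binaries yields about N*p**2 double detections; the value of p used here is an illustrative assumption, not the survey-derived rate the authors actually used.

```python
# Toy version of the sample-size argument (p is an assumed value)
N = 11    # objects observed with the VLBA
p = 0.3   # illustrative per-core compact-detection probability

expected_doubles = N * p**2
print(f"expected double cores: {expected_doubles:.2f}")  # ~1 for p ~ 0.3
```

    With an expectation of order one, a null result is suggestive but not decisive, which is why the abstract calls for larger samples.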

    DiFX2: A more flexible, efficient, robust and powerful software correlator

    Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth-constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original and, in certain situations, many times faster. Finally, we also present detailed test results validating the correctness of the new code. Comment: 28 pages, 9 figures, accepted for publication in PASP
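
    The multiple-phase-centre capability rests on a standard identity: shifting a visibility's phase centre by a small offset is a per-baseline phase rotation, so a single correlation pass can be re-phased to many centres cheaply. A schematic NumPy illustration follows; it is not DiFX's C++ implementation, and the sign convention and names are assumptions.

```python
import numpy as np

def shift_phase_centre(vis, u, v, dl, dm):
    """Re-phase visibilities to a centre offset by (dl, dm) in direction
    cosines. u, v are baseline coordinates in wavelengths. Sign
    conventions differ between packages; this choice is illustrative."""
    return vis * np.exp(-2j * np.pi * (u * dl + v * dm))

u = np.array([100.0, -250.0])
v = np.array([50.0, 300.0])
vis = np.array([1.0 + 0.0j, 0.5 - 0.5j])
print(shift_phase_centre(vis, u, v, dl=1e-4, dm=-2e-4))
```

    Roughly speaking, because the expensive cross-multiplication happens once, each extra phase centre costs only this rotation plus its own averaging, which is what makes wide-field multi-source correlation practical.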

    Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute-force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits. Comment: 13 pages, 5 figures, accepted for publication in PASA

    When Darwin Met Einstein: Gravitational Lens Inversion with Genetic Algorithms

    Gravitational lensing can magnify a distant source, revealing structural detail that is normally unresolvable. Recovering this detail through an inversion of the influence of gravitational lensing, however, requires optimisation not only of the lens parameters, but also of the surface brightness distribution of the source. This paper outlines a new approach to this inversion, utilising genetic algorithms to reconstruct the source profile. In this initial study, the effects of image degradation due to instrumental and atmospheric effects are neglected and it is assumed that the lens model is accurately known, but the genetic algorithm approach can be incorporated into more general optimisation techniques, allowing the optimisation of both the parameters of a lensing model and the surface brightness of the source. Comment: 9 pages, to appear in PASA
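
    To make the optimisation strategy concrete, here is a minimal genetic algorithm fitting a one-dimensional "source profile" to mock data. The fitness function, operators and parameters are illustrative assumptions standing in for the paper's lensed-image comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)  # mock source

def fitness(profile):
    # In a real inversion this would lens the trial source and compare
    # the predicted image to the data; here we compare profiles directly
    return -np.sum((profile - target) ** 2)

pop = rng.uniform(0.0, 1.0, size=(100, 20))   # initial population
for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]    # keep the fittest 20
    # Crossover: each child mixes pixels from two random parents
    i, j = rng.integers(0, 20, size=(2, 100))
    mask = rng.random((100, 20)) < 0.5
    pop = np.where(mask, parents[i], parents[j])
    # Mutation: small random perturbations maintain diversity
    pop += rng.normal(0.0, 0.02, pop.shape)

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"final residual: {np.sum((best - target) ** 2):.4f}")
```

    The appeal for lens inversion is that a genetic algorithm needs only a forward model and a fitness value, so it can explore the high-dimensional source space without gradients, at the cost of many forward evaluations.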