343 research outputs found
DiFX2: A more flexible, efficient, robust and powerful software correlator
Software correlation, where a correlation algorithm written in a high-level
language such as C++ is run on commodity computer hardware, has become
increasingly attractive for small to medium sized and/or bandwidth constrained
radio interferometers. In particular, many long baseline arrays (which
typically have fewer than 20 elements and are restricted in observing bandwidth
by costly recording hardware and media) have utilized software correlators for
rapid, cost-effective correlator upgrades to allow compatibility with new,
wider bandwidth recording systems and improve correlator flexibility. The DiFX
correlator, made publicly available in 2007, has been a popular choice in such
upgrades and is now used for production correlation by a number of
observatories and research groups worldwide. Here we describe the evolution in
the capabilities of the DiFX correlator over the past three years, including a
number of new capabilities, substantial performance improvements, and a large
amount of supporting infrastructure to ease use of the code. New capabilities
include the ability to correlate a large number of phase centers in a single
correlation pass, the extraction of phase calibration tones, correlation of
disparate but overlapping sub-bands, the production of rapidly sampled
filterbank and kurtosis data at minimal cost, and many more. The latest version
of the code is at least 15% faster than the original, and in certain situations
many times this value. Finally, we also present detailed test results
validating the correctness of the new code.
Comment: 28 pages, 9 figures, accepted for publication in PAS
A VLBA search for binary black holes in active galactic nuclei with double-peaked optical emission line spectra
We have examined a subset of 11 active galactic nuclei (AGN) drawn from a
sample of 87 objects that possess double-peaked optical emission line spectra,
as put forward by Wang et al. (2009a), and that are detectable in the FIRST survey at
radio wavelengths. The double-peaked nature of the optical emission line
spectra has been suggested as evidence for the existence of binary black holes
in these AGN, although this interpretation is controversial. We make a simple
suggestion, that direct evidence of binary black holes in these objects could
be searched for in the form of dual sources of compact radio emission
associated with the AGN. To explore this idea, we have used the Very Long
Baseline Array to observe these 11 objects from the Wang et al. (2009a) sample.
Of the 11 objects, we detect compact radio emission from two, SDSS
J151709+335324 and SDSS J160024+264035. Both objects show single components of
compact radio emission. The morphology of SDSS J151709+335324 is consistent
with a recent comprehensive multi-wavelength study of this object by Rosario et
al. (2010). Assuming that the entire sample consists of binary black holes, we
would expect of order one double radio core to be detected, based on radio
wavelength detection rates from FIRST and VLBI surveys. We have not detected
any double cores; thus, this work does not substantially support the idea that
AGN with double-peaked optical emission lines contain binary black holes.
However, the study of larger samples should be undertaken to provide a more
secure statistical result, given the estimated detection rates.
Comment: 14 pages, 3 figures. To appear in A
Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters
General purpose computing on graphics processing units (GPGPU) is
dramatically changing the landscape of high performance computing in astronomy.
In this paper, we identify and investigate several key decision areas, with a
goal of simplifying the early adoption of GPGPU in astronomy. We consider the
merits of OpenCL as an open standard in order to reduce risks associated with
coding in a native, vendor-specific programming environment, and present a GPU
programming philosophy based on using brute force solutions. We assert that
effective use of new GPU-based supercomputing facilities will require a change
in approach from astronomers. This will likely include improved programming
training, an increased need for software development best-practice through the
use of profiling and related optimisation tools, and a greater reliance on
third-party code libraries. As with any new technology, those willing to take
the risks, and make the investment of time and effort to become early adopters
of GPGPU in astronomy, stand to reap great benefits.
Comment: 13 pages, 5 figures, accepted for publication in PAS
On the Detectability of the Hydrogen 3-cm Fine Structure Line from the EoR
A soft ultraviolet radiation field, 10.2 eV < E <13.6 eV, that permeates
neutral intergalactic gas during the Epoch of Reionization (EoR) excites the 2p
(directly) and 2s (indirectly) states of atomic hydrogen. Because the 2s state
is metastable, the lifetime of atoms in this level is relatively long, which
may cause the 2s state to be overpopulated relative to the 2p state. It has
recently been proposed that for this reason, neutral intergalactic atomic
hydrogen gas may be detected in absorption in its 3-cm fine-structure line
(2s_1/2 -> 2p_3/2) against the Cosmic Microwave Background out to very high
redshifts. In particular, the optical depth in the fine-structure line through
neutral intergalactic gas surrounding bright quasars during the EoR may reach
tau~1e-5. The resulting surface brightness temperature of tens of micro K (in
absorption) may be detectable with existing radio telescopes. Motivated by this
exciting proposal, we perform a detailed analysis of the transfer of Lyman
beta,gamma,delta,... radiation, and re-analyze the detectability of the
fine-structure line in neutral intergalactic gas surrounding high-redshift
quasars. We find that proper radiative transfer modeling causes the
fine-structure absorption signature to be reduced tremendously to tau< 1e-10.
We therefore conclude that neutral intergalactic gas during the EoR cannot
reveal its presence in the 3-cm fine-structure line to existing radio
telescopes.
Comment: 7 pages, 4 figures, MNRAS in press; v2: some typos fixed
Analysing Astronomy Algorithms for GPUs and Beyond
Astronomy depends on ever increasing computing power. Processor clock-rates
have plateaued, and increased performance is now appearing in the form of
additional processor cores on a single chip. This poses significant challenges
to the astronomy software community. Graphics Processing Units (GPUs), now
capable of general-purpose computation, exemplify both the difficult
learning-curve and the significant speedups exhibited by massively-parallel
hardware architectures. We present a generalised approach to tackling this
paradigm shift, based on the analysis of algorithms. We describe a small
collection of foundation algorithms relevant to astronomy and explain how they
may be used to ease the transition to massively-parallel computing
architectures. We demonstrate the effectiveness of our approach by applying it
to four well-known astronomy problems: Hogbom CLEAN, inverse ray-shooting for
gravitational lensing, pulsar dedispersion and volume rendering. Algorithms
with well-defined memory access patterns and high arithmetic intensity stand to
receive the greatest performance boost from massively-parallel architectures,
while those that involve a significant amount of decision-making may struggle
to take advantage of the available processing power.
Comment: 10 pages, 3 figures, accepted for publication in MNRAS
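The arithmetic-intensity argument is easiest to see with the dedispersion example: brute-force incoherent dedispersion applies a per-channel time shift and a sum, a regular, predictable memory access pattern that maps well onto massively-parallel hardware. The following is a minimal NumPy sketch of the technique, not the paper's implementation; the function name and interface are illustrative:

```python
import numpy as np

def dedisperse(dynspec, freqs_mhz, dm, dt_s):
    """Brute-force incoherent dedispersion.

    dynspec: 2D array (n_chan, n_samp) of power vs frequency and time.
    freqs_mhz: channel centre frequencies in MHz.
    dm: trial dispersion measure in pc cm^-3.
    dt_s: sampling interval in seconds.
    Returns the dedispersed time series (sum over channels).
    """
    f_ref = freqs_mhz.max()
    # Cold-plasma dispersion delay of each channel relative to the
    # highest frequency: delay = 4.149e3 s * DM * (f^-2 - f_ref^-2).
    delays_s = 4.148808e3 * dm * (freqs_mhz**-2 - f_ref**-2)
    shifts = np.round(delays_s / dt_s).astype(int)
    out = np.zeros(dynspec.shape[1])
    for chan, shift in enumerate(shifts):
        # Roll each channel back by its dispersion delay and accumulate.
        out += np.roll(dynspec[chan], -shift)
    return out
```

Each output sample depends on a fixed set of input samples and each trial DM is independent, which is why a GPU can evaluate many DM trials concurrently with little decision-making per thread.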
Subtraction of Bright Point Sources from Synthesis Images of the Epoch of Reionization
Bright point sources associated with extragalactic AGN and radio galaxies are
an important foreground for low frequency radio experiments aimed at detecting
the redshifted 21cm emission from neutral hydrogen during the epoch of
reionization. The frequency dependence of the synthesized beam implies that the
sidelobes of these sources will move across the field of view as a function of
observing frequency, hence frustrating line-of-sight foreground subtraction
techniques. We describe a method for subtracting these point sources from dirty
maps produced by an instrument such as the MWA. This technique combines matched
filters with an iterative centroiding scheme to locate and characterize point
sources in the presence of a diffuse background. Simulations show that this
technique can improve the dynamic range of EoR maps by 2-3 orders of magnitude.
Comment: 11 pages, 8 figures, 1 table, submitted to PAS
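As a concrete illustration of the matched-filter-plus-centroiding idea, the sketch below cross-correlates a dirty map with the synthesized beam and refines the brightest peak to sub-pixel accuracy. This is a simplified, single-source stand-in for the iterative scheme described above, not the authors' pipeline; the function and its parabolic refinement are illustrative assumptions:

```python
import numpy as np

def matched_filter_peak(dirty_map, psf):
    """Locate the brightest point source in a dirty map.

    Cross-correlates the map with the (assumed known) synthesized
    beam via FFTs, then refines the peak position with a parabolic
    fit along each axis.
    """
    # Matched filtering: circular correlation with the PSF.
    corr = np.real(np.fft.ifft2(np.fft.fft2(dirty_map) *
                                np.conj(np.fft.fft2(psf))))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(vals):
        # Parabolic interpolation through three samples around the peak.
        a, b, c = vals
        denom = a - 2.0 * b + c
        return 0.0 if denom == 0 else 0.5 * (a - c) / denom

    ny, nx = corr.shape
    dy = refine(corr[[(iy - 1) % ny, iy, (iy + 1) % ny], ix])
    dx = refine(corr[iy, [(ix - 1) % nx, ix, (ix + 1) % nx]])
    return iy + dy, ix + dx
```

In a real pipeline this find-fit-subtract step would be iterated, removing the fitted source response from the map before searching for the next source against the diffuse background.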
Real-Time Adaptive Event Detection in Astronomical Data Streams
A new generation of observational science instruments is dramatically increasing collected data volumes in a range of fields. These instruments include the Square Kilometer Array (SKA), Large Synoptic Survey Telescope (LSST), terrestrial sensor networks, and NASA satellites participating in "decadal survey" missions. Their unprecedented coverage and sensitivity will likely reveal wholly new categories of unexpected and transient events. Commensal methods passively analyze these data streams, recognizing anomalous events of scientific interest and reacting in real time. Here, the authors report on a case example: the Very Long Baseline Array Fast Transients Experiment (V-FASTR), an ongoing commensal experiment at the Very Long Baseline Array (VLBA) that uses online adaptive pattern recognition to search for anomalous fast radio transients. V-FASTR triages a millisecond-resolution stream of data and promotes candidate anomalies for further offline analysis. It tunes detection parameters in real time, injecting synthetic events to continually retrain itself for optimum performance. This self-tuning approach retains sensitivity to weak signals while adapting to changing instrument configurations and noise conditions. The system has operated since July 2011, making it the longest-running real-time commensal radio transient experiment to date.
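The injection-based self-tuning loop can be caricatured in a few lines: feed synthetic events of known strength through the detector and nudge the threshold until the measured recovery fraction sits at a target value. This is a toy stochastic-approximation sketch under assumed Gaussian noise, not V-FASTR's actual algorithm; every name and number is illustrative:

```python
import random

def tune_threshold(noise_rms, inject_snr=8.0, target_recall=0.95,
                   n_trials=2000, step=0.05, threshold=10.0, seed=1):
    """Self-tuning detection threshold via synthetic event injection.

    Repeatedly injects a synthetic pulse of known strength into a
    Gaussian noise realisation and adjusts the threshold until the
    detection probability matches target_recall.
    """
    rng = random.Random(seed)
    for _ in range(n_trials):
        # One synthetic event: known signal plus a noise realisation.
        sample = inject_snr * noise_rms + rng.gauss(0.0, noise_rms)
        detected = sample > threshold * noise_rms
        # Asymmetric updates: loosen after misses, tighten after
        # detections, so the loop settles where the detection
        # probability equals target_recall.
        if detected:
            threshold += step * (1.0 - target_recall)
        else:
            threshold -= step * target_recall
    return threshold
```

At equilibrium the expected upward and downward steps balance, which happens exactly when the recovery fraction equals the target; the same closed-loop idea lets a live system track drifting noise conditions.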
Calibration and Stokes Imaging with Full Embedded Element Primary Beam Model for the Murchison Widefield Array
The Murchison Widefield Array (MWA), located in Western Australia, is one of the low-frequency precursors of the international Square Kilometre Array (SKA) project. In addition to pursuing its own ambitious science program, it is also a testbed for a wide range of future SKA activities, ranging from hardware and software to data analysis. The key science programs for the MWA and SKA require very high dynamic ranges, which challenge calibration and imaging systems. Correct calibration of the instrument and accurate measurements of source flux densities and polarisations require precise characterisation of the telescope's primary beam. Recent results from the GaLactic and Extragalactic All-sky MWA (GLEAM) survey show that the previously implemented Average Embedded Element (AEE) model still leaves residual polarisation errors of up to 10-20% in Stokes Q. We present a new simulation-based Full Embedded Element (FEE) model, which is the most rigorous realisation yet of the MWA's primary beam model. It enables efficient calculation of the MWA beam response in arbitrary directions without the necessity of spatial interpolation. In the new model, every dipole in the MWA tile (4 x 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200-231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduces the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U.
Comment: 15 pages, 11 figures. Accepted for publication in PASA. © Astronomical Society of Australia 2017
The Commensal Real-time ASKAP Fast Transients (CRAFT) survey
We are developing a purely commensal survey experiment for fast (<5s)
transient radio sources. Short-timescale transients are associated with the
most energetic and brightest single events in the Universe. Our objective is to
cover the enormous volume of transient parameter space made available by
ASKAP, with an unprecedented combination of sensitivity and field of view. Fast
timescale transients open new vistas on the physics of high brightness
temperature emission, extreme states of matter and the physics of strong
gravitational fields. In addition, the detection of extragalactic objects
affords us an entirely new and extremely sensitive probe on the huge reservoir
of baryons present in the IGM. We outline here our approach to the considerable
challenge involved in detecting fast transients, particularly the development
of hardware fast enough to dedisperse and search the ASKAP data stream at or
near real-time rates. Through CRAFT, ASKAP will provide a testbed for many of
the key technologies and survey modes proposed for high time resolution science
with the SKA.Comment: accepted for publication in PAS
