A new parameter space study of cosmological microlensing
Cosmological gravitational microlensing is a useful technique for
understanding the structure of the inner parts of a quasar, especially the
accretion disk and the central supermassive black hole. So far, most of the
cosmological microlensing studies have focused on single objects from ~90
currently known lensed quasars. However, present and planned all-sky surveys
are expected to discover thousands of new lensed systems. Using a graphics
processing unit (GPU) accelerated ray-shooting code, we have generated 2550
magnification maps uniformly across the convergence ({\kappa}) and shear
({\gamma}) parameter space of interest to microlensing. We examine the effect
of random realizations of the microlens positions on map properties such as the
magnification probability distribution (MPD). It is shown that for most of the
parameter space a single map is representative of an average behaviour. All of
the simulations have been carried out on the GPU-Supercomputer for Theoretical
Astrophysics Research (gSTAR).
Comment: 16 pages, 10 figures, accepted for publication in MNRA
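The core of the technique described above is inverse ray-shooting: rays are traced from the image plane back to the source plane, and the ray density per source-plane pixel gives the magnification map, whose histogram is the MPD. A minimal CPU sketch follows; all parameter values (shear, number of microlenses, grid sizes) are illustrative only and are not those of the survey or its GPU code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters only -- not the survey's actual (kappa, gamma) grid.
gamma = 0.4                                   # external shear
n_stars = 50                                  # unit-mass point microlenses
stars = rng.uniform(-10.0, 10.0, size=(n_stars, 2))

# Shoot a regular grid of rays from the image plane.
n = 300
x1, x2 = np.meshgrid(np.linspace(-8, 8, n), np.linspace(-8, 8, n))

# Point-mass deflections, vectorised over all microlenses.
d1 = x1[..., None] - stars[:, 0]
d2 = x2[..., None] - stars[:, 1]
r2 = d1**2 + d2**2
y1 = (1 - gamma) * x1 - (d1 / r2).sum(axis=-1)
y2 = (1 + gamma) * x2 - (d2 / r2).sum(axis=-1)

# Bin ray positions on the source plane; counts relative to the unlensed
# ray density give the magnification in each pixel.
bins, half = 100, 4.0
counts, _, _ = np.histogram2d(y1.ravel(), y2.ravel(),
                              bins=bins, range=[[-half, half], [-half, half]])
rays_per_area = n * n / 16.0**2               # rays shot per unit area
pix_area = (2 * half / bins) ** 2
mag_map = counts / (rays_per_area * pix_area)

# Magnification probability distribution (MPD): histogram of log10(mu).
mu = mag_map[mag_map > 0]
mpd, edges = np.histogram(np.log10(mu), bins=40, density=True)
```

The GPU implementation parallelises the ray loop, which is embarrassingly parallel: each ray's deflection sum is independent of every other ray's.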
Data Compression in the Petascale Astronomy Era: a GERLUMPH case study
As the volume of data grows, astronomers are increasingly faced with choices
on what data to keep -- and what to throw away. Recent work evaluating the
JPEG2000 (ISO/IEC 15444) standards as a future data format standard in
astronomy has shown promising results on observational data. However, there is
still a need to evaluate its potential on other types of astronomical data, such
as from numerical simulations. GERLUMPH (the GPU-Enabled High Resolution
cosmological MicroLensing parameter survey) represents an example of a data
intensive project in theoretical astrophysics. In the next phase of processing,
the ~27 terabyte GERLUMPH dataset is set to grow by a factor of 100 -- well
beyond the current storage capabilities of the supercomputing facility on which
it resides. In order to minimise bandwidth usage, file transfer time, and
storage space, this work evaluates several data compression techniques.
Specifically, we investigate off-the-shelf and custom lossless compression
algorithms as well as the lossy JPEG2000 compression format. Results of
lossless compression algorithms on GERLUMPH data products show small
compression ratios (1.35:1 to 4.69:1) that vary with the
nature of the input data. Our results suggest that JPEG2000 could be suitable
for other numerical datasets stored as gridded data or volumetric data. When
approaching lossy data compression, one should keep in mind the intended
purposes of the data to be compressed, and evaluate the effect of the loss on
future analysis. In our case study, lossy compression and a high compression
ratio do not significantly compromise the intended use of the data for
constraining quasar source profiles from cosmological microlensing.
Comment: 15 pages, 9 figures, 5 tables. Published in the Special Issue of
Astronomy & Computing on The future of astronomical data format
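Measuring a lossless compression ratio of the kind reported above is straightforward: compress the raw bytes of a gridded data product and divide original size by compressed size. The sketch below uses Python's standard-library codecs on a synthetic integer ray-count map; the data, grid size, and choice of codecs are illustrative assumptions, not the paper's actual products or algorithms.

```python
import zlib
import lzma

import numpy as np

# Hypothetical stand-in for a gridded data product: an integer ray-count map.
rng = np.random.default_rng(0)
grid = rng.poisson(lam=5.0, size=(256, 256)).astype(np.int32)
raw = grid.tobytes()

# Compression ratio = input size : compressed size.
for name, packed in (("zlib", zlib.compress(raw, level=9)),
                     ("lzma", lzma.compress(raw))):
    print(f"{name}: {len(raw) / len(packed):.2f}:1")
```

Low-entropy integer grids like this compress well; noisy floating-point grids, where the mantissa bits are effectively random, give the small ratios quoted in the abstract, which is what motivates considering lossy JPEG2000 instead.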
Accelerating the Rate of Astronomical Discovery with GPU-Powered Clusters
In recent years, the Graphics Processing Unit (GPU) has emerged as a low-cost
alternative for high performance computing, enabling impressive speed-ups for a
range of scientific computing applications. Early adopters in astronomy are
already benefiting from adapting their codes to take advantage of the GPU's
massively parallel processing paradigm. I give an introduction to, and overview
of, the use of GPUs in astronomy to date, highlighting the adoption and
application trends from the first ~100 GPU-related publications in astronomy. I
discuss the opportunities and challenges of utilising GPU computing clusters,
such as the new Australian GPU supercomputer, gSTAR, for accelerating the rate
of astronomical discovery.
Comment: To appear in the proceedings of ADASS XXI, ed. P.Ballester and
D.Egret, ASP Conf. Se
Advanced Architectures for Astrophysical Supercomputing
Astronomers have come to rely on the increasing performance of computers to
reduce, analyze, simulate and visualize their data. In this environment, faster
computation can mean more science outcomes or the opening up of new parameter
spaces for investigation. If we are to avoid major issues when implementing
codes on advanced architectures, it is important that we have a solid
understanding of our algorithms. A recent addition to the high-performance
computing scene that highlights this point is the graphics processing unit
(GPU). The hardware originally designed for speeding-up graphics rendering in
video games is now achieving substantial speed-ups in general-purpose
computation -- performance that cannot be ignored. We are using a generalized
approach, based on the analysis of astronomy algorithms, to identify the
optimal problem-types and techniques for taking advantage of both current GPU
hardware and future developments in computing architectures.
Comment: 4 pages, 1 figure, to appear in the proceedings of ADASS XIX, Oct 4-8
2009, Sapporo, Japan (ASP Conf. Series
GPU-Based Volume Rendering of Noisy Multi-Spectral Astronomical Data
Traditional analysis techniques may not be sufficient for astronomers to make
the best use of the data sets that current and future instruments, such as the
Square Kilometre Array and its Pathfinders, will produce. By utilizing the
incredible pattern-recognition ability of the human mind, scientific
visualization provides an excellent opportunity for astronomers to gain
valuable new insight and understanding of their data, particularly when used
interactively in 3D. The goal of our work is to establish the feasibility of a
real-time 3D monitoring system for data going into the Australian SKA
Pathfinder archive.
Based on CUDA, an increasingly popular development tool, our work utilizes
the massively parallel architecture of modern graphics processing units (GPUs)
to provide astronomers with an interactive 3D volume rendering for
multi-spectral data sets. Unlike other approaches, we are targeting real time
interactive visualization of datasets larger than GPU memory while giving
special attention to data with a low signal-to-noise ratio -- two critical aspects
for astronomy that are missing from most existing scientific visualization
software packages. Our framework enables the astronomer to interact with the
geometrical representation of the data, and to control the volume rendering
process to generate a better representation of their datasets.
Comment: 4 pages, 1 figure, to appear in the proceedings of ADASS XIX, Oct 4-8
2009, Sapporo, Japan (ASP Conf. Series
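A standard way to render a volume larger than GPU memory, as targeted above, is to split it into bricks and composite one brick at a time so that only a single brick is ever resident. The sketch below illustrates the idea in NumPy with a maximum-intensity projection along the spectral axis; the cube shape, brick size, and injected "source" are assumptions for demonstration, not details of the authors' CUDA framework.

```python
import numpy as np

# Hypothetical spectral cube (x, y, channel); in practice far larger than GPU memory.
rng = np.random.default_rng(1)
cube = rng.normal(0.0, 1.0, size=(64, 64, 128)).astype(np.float32)
cube[30:34, 30:34, 60:70] += 5.0          # a faint "source" buried in noise


def mip_bricked(cube, brick=32):
    """Maximum-intensity projection along the spectral axis, one brick at a
    time, so only a single brick needs to be resident in (GPU) memory."""
    out = np.full(cube.shape[:2], -np.inf, dtype=np.float32)
    for z0 in range(0, cube.shape[2], brick):
        out = np.maximum(out, cube[:, :, z0:z0 + brick].max(axis=2))
    return out


img = mip_bricked(cube)
```

Because the maximum is associative, compositing per brick gives exactly the same image as projecting the whole cube at once; other transfer functions need front-to-back alpha compositing in brick order instead.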
Spotting Radio Transients with the help of GPUs
Exploration of the time-domain radio sky has huge potential for advancing our
knowledge of the dynamic universe. Past surveys have discovered large numbers
of pulsars, rotating radio transients and other transient radio phenomena;
however, they have typically relied upon off-line processing to cope with the
high data and processing rate. This paradigm rules out the possibility of
obtaining high-resolution base-band dumps of significant events or of
performing immediate follow-up observations, limiting analysis power to what
can be gleaned from detection data alone. To overcome this limitation,
real-time processing and detection of transient radio events is required. By
exploiting the significant computing power of modern graphics processing units
(GPUs), we are developing a transient-detection pipeline that runs in real-time
on data from the Parkes radio telescope. In this paper we discuss the
algorithms used in our pipeline, the details of their implementation on the GPU
and the challenges posed by the presence of radio frequency interference.
Comment: 4 pages. To appear in the proceedings of ADASS XXI, ed. P.Ballester
and D.Egret, ASP Conf. Serie
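A key step in any such pipeline is incoherent dedispersion: each frequency channel of the filterbank is shifted by the dispersion delay (proportional to DM and to the inverse square of frequency) before the channels are summed, so that a dispersed pulse re-aligns into a single bright sample. A small NumPy sketch follows; the band, sampling time, DM, and injected pulse are illustrative assumptions, not the actual Parkes pipeline parameters.

```python
import numpy as np

# Hypothetical filterbank block: (n_channels, n_samples), 1.2-1.5 GHz band.
n_chan, n_samp, dt = 64, 1024, 1e-3           # 1 ms sampling (assumed)
freqs = np.linspace(1.5, 1.2, n_chan)         # GHz, highest frequency first
rng = np.random.default_rng(2)
data = rng.normal(size=(n_chan, n_samp))

# Inject a pulse dispersed with the cold-plasma delay law,
# delta_t = 4.15 ms * DM * (f^-2 - f_ref^-2) with f in GHz.
dm_true = 50.0
delays = 4.15e-3 * dm_true * (freqs**-2 - freqs[0]**-2)   # seconds
shifts = np.round(delays / dt).astype(int)
for c in range(n_chan):
    data[c, 200 + shifts[c]] += 8.0


def dedisperse(data, dm):
    """Shift each channel back by its dispersion delay, then sum channels."""
    delays = 4.15e-3 * dm * (freqs**-2 - freqs[0]**-2)
    out = np.empty_like(data)
    for c, k in enumerate(np.round(delays / dt).astype(int)):
        out[c] = np.roll(data[c], -k)
    return out.sum(axis=0)


series = dedisperse(data, dm_true)
```

Since the true DM of a new transient is unknown, a real-time pipeline repeats this shift-and-sum over many trial DMs, which is why the independent, regular structure of the computation maps so well onto a GPU.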