Real-Time Dedispersion for Fast Radio Transient Surveys, using Auto Tuning on Many-Core Accelerators
Dedispersion, the removal of deleterious smearing of impulsive signals by the
interstellar matter, is one of the most intensive processing steps in any radio
survey for pulsars and fast transients. Here we present a study of the
parallelization of this algorithm on many-core accelerators, including GPUs
from AMD and NVIDIA, and the Intel Xeon Phi. We find that dedispersion is
inherently memory-bound. Even in a perfect scenario, hardware limitations keep
the arithmetic intensity low, thus limiting performance. We next exploit
auto-tuning to adapt dedispersion to different accelerators, observations, and
even telescopes. We demonstrate that the optimal settings differ between
observational setups, and that auto-tuning significantly improves performance.
This impacts time-domain surveys from Apertif to SKA.
Comment: 8 pages, accepted for publication in Astronomy and Computing
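As an illustration of why the arithmetic intensity of dedispersion is low, below is a minimal brute-force incoherent dedispersion sketch in Python/NumPy. It shows the general technique, not the paper's tuned accelerator kernels; the interface and names are assumptions, while the cold-plasma dispersion delay (about 4.15e3 s x DM x f^-2, with f in MHz) is the standard formula.

```python
import numpy as np

def dedisperse(data, freqs_mhz, dm, tsamp):
    """Brute-force incoherent dedispersion for a single trial DM.

    data:      (nchan, nsamp) filterbank array
    freqs_mhz: per-channel sky frequencies in MHz
    dm:        trial dispersion measure in pc cm^-3
    tsamp:     sampling time in seconds
    """
    freqs = np.asarray(freqs_mhz, dtype=float)
    f_ref = freqs.max()
    # Cold-plasma dispersion delay relative to the highest frequency:
    # dt = 4.15e3 s * DM * (f^-2 - f_ref^-2), with f in MHz.
    delays = 4.15e3 * dm * (freqs ** -2.0 - f_ref ** -2.0)
    shifts = np.round(delays / tsamp).astype(int)
    out = np.zeros(data.shape[1] - shifts.max(), dtype=np.float64)
    for chan, s in enumerate(shifts):
        # One read and one add per input sample: very few operations
        # per byte moved, which is why dedispersion is memory-bound.
        out += data[chan, s:s + out.size]
    return out
```

A survey repeats this for many trial DMs; the auto-tuning described in the paper searches, per accelerator and per observational setup, over implementation parameters (for example, work-group dimensions and the amount of work per thread) to find the fastest configuration.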
08332 Executive Summary -- Distributed Verification and Grid Computing
The Dagstuhl Seminar on Distributed Verification and Grid
Computing took place from 10.08.2008 to 14.08.2008 and brought
together two groups of researchers to discuss their recent work and
recent trends related to the parallel verification of large-scale computer
systems on large-scale grids. In total, 29 experts from 12 countries
attended the seminar.
08332 Abstracts Collection -- Distributed Verification and Grid Computing
From 08/10/2008 to 08/14/2008 the Dagstuhl Seminar 08332 "Distributed Verification and Grid Computing" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
On optimising cost and value in eScience: Case studies in radio astronomy
Large-scale science instruments, such as the LHC and recent distributed radio telescopes such as LOFAR, show that we are in an era of data-intensive scientific discovery. All of these instruments rely critically on significant eScience resources, both hardware and software, to do science. Considering limited science budgets, and the small fraction of these that can be dedicated to compute hardware and software, there is a strong and obvious desire for low-cost computing. However, optimizing for cost is only half of the equation; the value potential over the lifetime of the instrument should also be taken into account. Using a tangible example, compute hardware, we introduce a conceptual model to approximate the lifetime relative science merit of such a system. With a number of case studies, focused on past, present, and future eScience applications in radio astronomy, we show that the hardware-based analysis can be applied more broadly. While the introduced model is not intended to result in a numeric value for merit, it does enumerate some of the components that define this metric.
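To make the cost-versus-value trade-off concrete, here is a purely illustrative toy in Python. It is not the paper's conceptual model (which deliberately avoids producing a single number for merit); the function, its parameters, and all figures in the example are invented for illustration.

```python
def lifetime_merit(capex, annual_opex, annual_value, years):
    """Toy estimate: science value delivered per unit of lifetime cost.

    capex:        one-off hardware purchase cost
    annual_opex:  yearly power, cooling, and maintenance cost
    annual_value: function year -> science value delivered that year
    years:        operational lifetime of the system
    """
    total_cost = capex + annual_opex * years
    total_value = sum(annual_value(y) for y in range(years))
    return total_value / total_cost

# Hypothetical example: a system whose relative scientific value decays
# as newer hardware appears, operated for five years.
merit = lifetime_merit(
    capex=1_000_000,
    annual_opex=150_000,
    annual_value=lambda y: 100.0 * 0.7 ** y,
    years=5,
)
```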
Distributed MAP in the SpinJa Model Checker
Spin in Java (SpinJa) is an explicit-state model checker for the Promela
modelling language, also used by the SPIN model checker. Designed to be
extensible and reusable, the implementation of SpinJa follows a layered
approach in which each new layer extends the functionality of the previous one.
While SpinJa has preliminary support for shared-memory model checking, it did
not yet support distributed-memory model checking. This tool paper presents a
distributed implementation of a maximal accepting predecessors (MAP) search
algorithm on top of SpinJa.
Comment: In Proceedings PDMC 2011, arXiv:1111.006
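For intuition, here is a minimal sequential sketch of the MAP idea in Python; the paper's contribution is a distributed implementation on top of SpinJa, and the graph encoding, names, and worklist scheme below are assumptions for illustration. Each state is assigned its maximal accepting predecessor under a total order on states; a state that is its own maximal accepting predecessor can reach itself and therefore lies on an accepting cycle.

```python
from collections import deque

def map_search(states, succ, accepting):
    """Accepting-cycle detection via maximal accepting predecessors.

    states:    list of hashable state ids; list order is the total order
    succ:      dict mapping a state to an iterable of its successors
    accepting: initial set of accepting states
    Returns an accepting state lying on an accepting cycle, or None.
    """
    order = {s: i for i, s in enumerate(states)}
    acc = set(accepting)
    while acc:
        # Fixpoint: mapv[v] = maximal accepting proper predecessor of v.
        mapv = {s: None for s in states}
        work = deque(states)
        while work:
            u = work.popleft()
            cand = mapv[u]
            if u in acc and (cand is None or order[u] > order[cand]):
                cand = u  # an accepting u propagates itself onward
            if cand is None:
                continue
            for v in succ.get(u, ()):
                if mapv[v] is None or order[cand] > order[mapv[v]]:
                    mapv[v] = cand
                    work.append(v)
        # Self-hit: v is its own maximal accepting predecessor, so it
        # reaches itself and closes an accepting cycle.
        for v in acc:
            if mapv[v] == v:
                return v
        # MAP deleting heuristic: drop accepting states that occur as
        # map values and recompute with the reduced accepting set.
        hit = {m for m in mapv.values() if m is not None} & acc
        if not hit:
            return None  # no accepting state precedes anything: done
        acc -= hit
    return None
```

Because map values can be propagated asynchronously along edges, the computation partitions naturally across workers that each own part of the state space, which is what makes MAP attractive for distributed-memory model checking compared to depth-first-search-based algorithms.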