
    Towards exascale real-time RFI mitigation

    We describe the design and implementation of an extremely scalable real-time RFI mitigation method, based on the offline AOFlagger. All algorithms scale linearly in the number of samples. We describe how we implemented the flagger in the LOFAR real-time pipeline, on both CPUs and GPUs. Additionally, we introduce a novel, simple history-based flagger that helps reduce the impact of our small window on the data. By examining an observation of a known pulsar, we demonstrate that our flagger can achieve much higher quality than a simple thresholder, even when running in real time on a distributed system. The flagger works not only on visibility data, but also on raw voltages and beamformed data. The algorithms are scale-invariant, and work on microsecond to second time scales. We are currently implementing a prototype for the time-domain pipeline of the SKA central signal processor. Comment: 2016 Radio Frequency Interference (RFI2016): Coexisting with Radio Frequency Interference, Socorro, New Mexico, USA, October 2016
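The abstract compares the flagger against a simple thresholder as a baseline. As a hedged illustration of what such a baseline might look like (a minimal sketch, not the AOFlagger or SumThreshold algorithm), a median-absolute-deviation thresholder over a time-frequency spectrogram can be written in a few lines:

```python
import numpy as np

def mad_threshold_flag(spectrogram, n_sigma=5.0):
    """Flag samples whose amplitude deviates strongly from the median.

    Robust statistics (median and MAD) keep the threshold insensitive
    to the very RFI it is trying to detect. Illustrative sketch only;
    not the AOFlagger/SumThreshold method described in the abstract.
    """
    med = np.median(spectrogram)
    mad = np.median(np.abs(spectrogram - med))
    sigma = 1.4826 * mad  # MAD -> standard deviation for Gaussian noise
    return np.abs(spectrogram - med) > n_sigma * sigma

# Example: Gaussian noise with one persistent narrow-band interferer
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(64, 256))  # (time, frequency)
data[:, 100] += 50.0                         # strong RFI in one channel
flags = mad_threshold_flag(data)
print(flags[:, 100].all(), flags.mean())
```

Because the median and MAD each require only a pass over the data, such a thresholder also scales linearly in the number of samples, matching the scaling constraint the abstract emphasizes.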

    Real-Time Dedispersion for Fast Radio Transient Surveys, using Auto Tuning on Many-Core Accelerators

    Dedispersion, the removal of the deleterious smearing of impulsive signals by the interstellar medium, is one of the most intensive processing steps in any radio survey for pulsars and fast transients. Here we present a study of the parallelization of this algorithm on many-core accelerators, including GPUs from AMD and NVIDIA, and the Intel Xeon Phi. We find that dedispersion is inherently memory-bound: even in a perfect scenario, hardware limitations keep the arithmetic intensity low, thus limiting performance. We next exploit auto-tuning to adapt dedispersion to different accelerators, observations, and even telescopes. We demonstrate that the optimal settings differ between observational setups, and that auto-tuning significantly improves performance. This impacts time-domain surveys from Apertif to SKA. Comment: 8 pages, accepted for publication in Astronomy and Computing
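To make the memory-bound nature of dedispersion concrete, here is a hedged sketch of brute-force incoherent dedispersion (a shift-and-add over frequency channels, using the standard cold-plasma delay formula; the channel counts and dispersion measure below are illustrative, not taken from the paper). Note that each output sample rereads many input samples while performing only additions, which is exactly the low arithmetic intensity the abstract identifies:

```python
import numpy as np

def dedisperse(dynspec, freqs_mhz, dm, dt_s):
    """Brute-force incoherent dedispersion of a (channel, time) array.

    Each channel is shifted by the cold-plasma dispersion delay relative
    to the highest frequency, then the channels are summed. Illustrative
    sketch only; real pipelines tile this loop for cache/GPU efficiency.
    """
    f_ref = freqs_mhz.max()
    # Delay in seconds: 4.149e3 * DM * (f^-2 - f_ref^-2), f in MHz
    delays_s = 4.149e3 * dm * (freqs_mhz**-2 - f_ref**-2)
    shifts = np.round(delays_s / dt_s).astype(int)
    out = np.zeros(dynspec.shape[1])
    for c in range(dynspec.shape[0]):
        out += np.roll(dynspec[c], -shifts[c])  # align channel c in time
    return out

# Example: inject a dispersed pulse and recover it
freqs = np.linspace(1500.0, 1200.0, 16)  # MHz, high to low (hypothetical)
dt, dm = 1e-3, 10.0
delays = 4.149e3 * dm * (freqs**-2 - freqs.max()**-2)
shifts = np.round(delays / dt).astype(int)
dynspec = np.zeros((16, 200))
dynspec[np.arange(16), 50 + shifts] = 1.0  # pulse sweeps later at low freq
series = dedisperse(dynspec, freqs, dm, dt)
print(series.argmax(), series.max())       # pulse realigned and summed
```

Surveying over many trial DM values repeats this loop per trial, which is why real pipelines reorganize the memory traffic rather than the (trivial) arithmetic.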

    Deep Learning Assisted Data Inspection for Radio Astronomy

    Modern radio telescopes combine thousands of receivers, long-distance networks, large-scale compute hardware, and intricate software. Due to this complexity, failures occur relatively frequently. In this work we propose a novel use of unsupervised deep learning to diagnose system health for modern radio telescopes. The model is a convolutional Variational Autoencoder (VAE) that enables the projection of high-dimensional time-frequency data to a low-dimensional prescriptive space. Using this projection, telescope operators are able to visually inspect failures, thereby maintaining system health. We have trained and evaluated the performance of the VAE quantitatively in controlled experiments on simulated data from HERA. Moreover, we present a qualitative assessment of the model trained and tested on real LOFAR data. Through the use of a naive SVM classifier on the projected synthesised data, we show that there is a trade-off between the dimensionality of the projection and the number of compounded features in a given spectrogram. The VAE and SVM combination scores between 65% and 90% accuracy, depending on the number of features in a given input. Finally, we show the prototype system-health-diagnostic web framework that integrates the evaluated model. The system is currently undergoing testing at the ASTRON observatory.
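The core mechanism that lets a VAE learn such a low-dimensional projection is the reparameterization trick: the encoder outputs a mean and log-variance per latent dimension, and the latent sample is drawn through a deterministic transform so gradients can flow back to the encoder. A minimal NumPy sketch of just this step (not the convolutional VAE of the abstract; the 2-D latent values are hypothetical):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Sampling through this deterministic transform keeps the latent draw
    differentiable with respect to the encoder outputs (mu, log_var).
    Sketch of the core step only.
    """
    sigma = np.exp(0.5 * log_var)   # log-variance -> standard deviation
    eps = rng.normal(size=np.shape(mu))
    return mu + sigma * eps

rng = np.random.default_rng(1)
mu = np.array([0.0, 2.0])          # hypothetical 2-D latent mean
log_var = np.array([0.0, -2.0])    # per-dimension log variance
z = np.stack([reparameterize(mu, log_var, rng) for _ in range(10000)])
print(z.mean(axis=0))              # concentrates around mu
```

At inspection time, only the mean `mu` is typically used as the projected coordinate, which is what an operator would see in a 2-D scatter plot of spectrograms.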

    Lightning Talk: "I solemnly pledge" A Manifesto for Personal Responsibility in the Engineering of Academic Software

    Software is fundamental to academic research work, both as part of the method and as the result of research. In June 2016, 25 people gathered at Schloss Dagstuhl for a week-long Perspectives Workshop and began to develop a manifesto which places emphasis on the scholarly value of academic software and on personal responsibility. Twenty pledges cover the recognition of academic software, the academic software process, and the intellectual content of academic software. This is still work in progress. Through this lightning talk, we aim to get feedback and hone these further, as well as to inspire the WSSSPE audience to think about actions they can take themselves rather than actions they want others to take. We aim to publish a more fully developed Dagstuhl Manifesto by December 2016.

    Using many-core hardware to correlate radio astronomy signals

    A recent development in radio astronomy is to replace traditional dishes with many small antennas. The signals are combined to form one large, virtual telescope. The enormous data streams are cross-correlated to filter out noise. This is especially challenging, since the computational demands grow quadratically with the number of data streams. Moreover, the correlator is not only computationally intensive, but also very I/O intensive. The LOFAR telescope, for instance, will produce over 100 terabytes per day. The future SKA telescope will even require in the order of exaflops, and petabits/s of I/O. A recent trend is to correlate in software instead of dedicated hardware. This is done to increase flexibility and to reduce development efforts. Examples include e-VLBI and LOFAR. In this paper, we evaluate the correlator algorithm on multi-core CPUs and many-core architectures, such as NVIDIA and ATI GPUs, and the Cell/B.E. The correlator is a streaming, real-time application, and is much more I/O intensive than applications that are typically implemented on many-core hardware today. We compare with the LOFAR production correlator on an IBM Blue Gene/P supercomputer. We investigate performance, power efficiency, and programmability. We identify several important architectural problems which cause architectures to perform suboptimally. Our findings are applicable to data-intensive applications in general. The results show that the processing power and memory bandwidth of current GPUs are highly imbalanced for correlation purposes. While the production correlator on the Blue Gene/P achieves a superb 96% of the theoretical peak performance, this is only 14% on ATI GPUs, and 26% on NVIDIA GPUs. The Cell/B.E. processor, in contrast, achieves an excellent 92%. We found that the Cell/B.E. is also the most energy-efficient solution: it runs the correlator 5-7 times more energy-efficiently than the Blue Gene/P. The research presented is an important pathfinder for next-generation telescopes.
Categories and Subject Descriptors: D.1.3 [Programming Techniques]: Concurrent programming
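The quadratic growth the abstract describes comes directly from the definition of the correlation: every antenna pair produces one visibility. A hedged sketch of the core computation (not the LOFAR production correlator; antenna count, sample count, and phase offset below are illustrative):

```python
import numpy as np

def correlate(voltages):
    """Full cross-correlation of n antenna voltage streams.

    voltages: complex array of shape (n_antennas, n_samples).
    Returns the (n, n) visibility matrix V[i, j] = <x_i * conj(x_j)>,
    time-averaged. Work and output both grow quadratically with n,
    which is the scaling challenge the abstract describes.
    """
    n_samples = voltages.shape[1]
    return voltages @ voltages.conj().T / n_samples

# Example: two antennas seeing the same signal, one with a phase offset
rng = np.random.default_rng(2)
signal = rng.normal(size=1024) + 1j * rng.normal(size=1024)
x = np.stack([signal, signal * np.exp(1j * 0.5)])
vis = correlate(x)
print(np.angle(vis[0, 1]))  # recovers the relative phase (-0.5 rad)
```

Each complex multiply-accumulate touches two input samples and produces little arithmetic per byte moved, which is why the paper finds the memory bandwidth of GPUs, rather than their raw compute, to be the limiting factor.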