8 research outputs found

    Multi-frequency image reconstruction for radio-interferometry with self-tuned regularization parameters

    As the world's largest radio telescope, the Square Kilometer Array (SKA) will provide radio interferometric data with unprecedented detail. Image reconstruction algorithms for radio interferometry are challenged to scale to terabyte image sizes never seen before. In this work, we investigate one such 3D image reconstruction algorithm, MUFFIN (MUlti-Frequency image reconstruction For radio INterferometry). In particular, we focus on the challenging task of automatically finding the optimal regularization parameter values. In practice, finding the regularization parameters using a classical grid search is computationally intensive and nontrivial due to the lack of ground truth. We adopt a greedy strategy where, at each iteration, the optimal parameters are found by minimizing the predicted Stein unbiased risk estimate (PSURE). The proposed self-tuned version of MUFFIN involves parallel and computationally efficient steps, and scales well with large-scale data. Finally, numerical results on a 3D image are presented to showcase the performance of the proposed approach.
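
    The core idea, selecting the regularization parameter that minimizes a Stein-type unbiased risk estimate rather than sweeping a grid against a known ground truth, can be illustrated on a toy problem. The sketch below uses classical SURE for soft-thresholding denoising of a synthetic sparse signal; it is not the MUFFIN/PSURE algorithm itself, and the signal, noise level and parameter grid are illustrative assumptions.

```python
# Toy illustration of SURE-driven selection of a regularization parameter.
# This is NOT the MUFFIN/PSURE algorithm; it only shows the core idea of
# picking the parameter that minimizes an unbiased estimate of the risk,
# here for soft-thresholding denoising where SURE has a closed form.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                                               # assumed known noise level
truth = np.zeros(2048)
truth[:64] = rng.normal(0, 3, 64)                         # synthetic sparse signal
y = truth + rng.normal(0, sigma, truth.size)              # noisy observations

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sure(y, lam, sigma):
    """Stein unbiased risk estimate of the MSE of soft-thresholding."""
    n = y.size
    residual = np.sum(np.minimum(y**2, lam**2))           # ||x_hat - y||^2
    divergence = np.count_nonzero(np.abs(y) > lam)        # trace of d x_hat / d y
    return residual + 2 * sigma**2 * divergence - n * sigma**2

lams = np.linspace(0.01, 3.0, 100)
risks = np.array([sure(y, lam, sigma) for lam in lams])
lam_star = lams[np.argmin(risks)]

# Because the data are simulated, we can sanity-check against the oracle MSE.
true_mse = np.array([np.sum((soft_threshold(y, lam) - truth) ** 2) for lam in lams])
print(f"SURE-selected lambda: {lam_star:.3f}")
print(f"Oracle-MSE lambda:    {lams[np.argmin(true_mse)]:.3f}")
```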

    An end-to-end computing model for the Square Kilometre Array

    For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model that lets engineers and astronomers understand these relations and make trade-offs in future instrument designs.
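
    A minimal sketch of what such a parametric model can look like is given below: a single analytic formula for the FX-correlator X-stage load as a function of design parameters. The formula (roughly one 8-FLOP complex multiply-accumulate per baseline, per polarization product, per complex sample) and the example station counts and bandwidth are back-of-the-envelope assumptions, not figures taken from the paper.

```python
# Toy back-of-the-envelope model of correlator compute load, in the spirit of
# a parametric telescope computing model. The formula and the example numbers
# are illustrative assumptions, not values from the paper.
def correlator_flops(n_stations: int, bandwidth_hz: float, n_pol: int = 2) -> float:
    """Rough FX-correlator X-stage load in FLOP/s.

    Assumes one complex multiply-accumulate (~8 real ops) per baseline,
    per polarization product, per complex sample of bandwidth.
    """
    baselines = n_stations * (n_stations + 1) / 2   # includes auto-correlations
    return 8 * n_pol**2 * bandwidth_hz * baselines

# Doubling the station count roughly quadruples the compute requirement,
# which is how "minor" design changes can push a system toward exascale.
for n in (256, 512, 1024):
    print(f"{n:5d} stations -> {correlator_flops(n, 300e6) / 1e15:6.2f} PFLOP/s")
```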

    Design of a novel X-section architecture for FX-correlator in large interferometers : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Auckland, New Zealand

    Figures 2-12 and 2-17 are re-used under the CC BY-NC 4.0 International and CC 3.0 Unported licences respectively. Published journal papers I-III in the Appendices were removed because they are subject to copyright restrictions. In large radio interferometers it is considerably challenging to perform signal correlations at input data rates of over 11 Tbps, which involves vast amounts of storage, memory bandwidth and computational hardware. The primary objective of this research is to reduce the memory-access and design complexity in the matrix-architecture Big Data processing of the complex X-section of an FX-correlator employed in large-array radio telescopes. This thesis presents a dedicated correlator-system-multiplier-and-accumulator (CoSMAC) cell architecture, based on the real input samples from antenna arrays, which produces two 16-bit complex multiplications in the same clock cycle. The correlator cell optimization is achieved by exploiting the flipped-mirror relationship between discrete Fourier transform (DFT) samples, owing to the symmetry and periodicity of the DFT coefficient vectors. The proposed CoSMAC structure is extended to build a new processing element (PE) which calculates both cross-correlation visibilities and auto-correlation functions simultaneously. Further, a novel mathematical model and hardware design are derived to calculate two visibilities per baseline for quadrature signals (IQ-sampled signals, where I is the in-phase signal and Q is the 90-degree phase-shifted signal), named the Processing Element for IQ-sampled signals (PE_IQ). These three dedicated correlator cells minimise the number of visibility calculations per baseline. The design methodology also targets the optimisation of the multiplier size in order to further reduce power and area in the CoSMAC, PE and PE_IQ. Various fast and efficient multiplier algorithms are compared and combined to achieve a novel multiplier, named the Modified-Booth-Wallace-Multiplier, implemented in the CoSMAC and PE cells. The dedicated multiplier mostly targets area and power optimisations without degrading performance. The conventional complex-multiplier-and-accumulators (CMACs) employed to perform the complex multiplications are replaced with these dedicated ASIC correlator cells, along with the optimised multipliers, to reduce the overall power and area requirements of a matrix correlator architecture. The proposed architecture lowers the number of ASIC processor cells required to compute all baselines in an interferometer by eliminating redundant cells. Hence the new matrix-architecture minimisation is very effective in reducing hardware complexity by nearly 50% without affecting the overall speed and performance of very large interferometers like the Square Kilometre Array (SKA).
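
    For readers unfamiliar with the X-section, the sketch below shows in plain NumPy what it computes: per-channel cross-multiply-and-accumulate of station spectra into visibilities, with the conjugate symmetry V[j,i] = conj(V[i,j]) that makes roughly half of a naive matrix of correlator cells redundant. It is a software illustration only and does not model the CoSMAC, PE or PE_IQ cell designs proposed in the thesis.

```python
# Plain software sketch of what the X-section of an FX correlator computes:
# per-channel cross-multiplication and accumulation of station spectra into
# visibilities. It illustrates the conjugate symmetry V[j,i] = conj(V[i,j])
# that lets hardware keep only the upper-triangular baselines; it does not
# model the CoSMAC/PE cell designs proposed in the thesis.
import numpy as np

def x_stage(spectra: np.ndarray) -> np.ndarray:
    """Accumulate visibilities from F-stage output.

    spectra: complex array of shape (n_dumps, n_stations, n_channels),
             i.e. one spectrum per station per time sample.
    returns: complex visibilities of shape (n_stations, n_stations, n_channels),
             with auto-correlations on the diagonal.
    """
    n_dumps, n_stations, n_channels = spectra.shape
    vis = np.zeros((n_stations, n_stations, n_channels), dtype=complex)
    for t in range(n_dumps):
        s = spectra[t]
        # Upper triangle (including the diagonal) only: i <= j.
        for i in range(n_stations):
            for j in range(i, n_stations):
                vis[i, j] += s[i] * np.conj(s[j])
    # The lower triangle is redundant: V[j,i] = conj(V[i,j]).
    i_low, j_low = np.tril_indices(n_stations, k=-1)
    vis[i_low, j_low] = np.conj(vis[j_low, i_low])
    return vis

rng = np.random.default_rng(1)
data = rng.normal(size=(16, 4, 8)) + 1j * rng.normal(size=(16, 4, 8))
v = x_stage(data)
print(v.shape, np.allclose(v[1, 0], np.conj(v[0, 1])))
```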

    Machine learning-based anomaly detection for radio telescopes

    Radio telescopes are getting bigger and are generating increasing amounts of data to improve their sensitivity and resolution. The growing system size and resulting complexity increase the likelihood of unexpected events occurring, producing datasets containing anomalies. These events include failures in instrument electronics, miscalibrated observations, environmental and astronomical effects such as lightning and solar storms, and problems in data processing systems, among many more. Currently, efforts to diagnose and mitigate these events are performed by human operators, who manually inspect intermediate data products to determine the success or failure of a given observation. The accelerating data rates, coupled with the lack of automation, make operator-based data quality inspection increasingly infeasible. This thesis focuses on applying machine learning-based anomaly detection to spectrograms obtained from the LOFAR telescope for the purpose of System Health Management (SHM). It does this across several chapters, each focusing on a different aspect of SHM in radio telescopes. We provide an overview of the data processing systems in LOFAR so as to create a workflow for SHM that can be integrated effectively into the scientific data processing pipeline.
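
    One common way to frame spectrogram anomaly detection is reconstruction error: a model trained on normal data reconstructs normal spectrograms well and anomalous ones poorly. The sketch below uses PCA on synthetic spectrograms purely for illustration; the thesis's actual LOFAR data and models are not reproduced here.

```python
# Generic reconstruction-error anomaly detector for spectrograms, as a sketch
# of the idea only: the thesis's actual LOFAR models and datasets are not
# reproduced here. Normal spectrograms are compressed and reconstructed; ones
# that reconstruct poorly are flagged as anomalous.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

def fake_spectrogram(anomalous: bool = False) -> np.ndarray:
    """Synthetic (time x frequency) dynamic spectrum, purely for illustration."""
    s = rng.normal(1.0, 0.1, size=(64, 128))
    if anomalous:                      # e.g. a broadband burst in a few time steps
        s[20:24, :] += 5.0
    return s

train = np.stack([fake_spectrogram().ravel() for _ in range(200)])
test = np.stack([fake_spectrogram(anomalous=(i % 10 == 0)).ravel() for i in range(50)])

model = PCA(n_components=16).fit(train)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    recon = model.inverse_transform(model.transform(x))
    return np.mean((x - recon) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(train), 99)
flags = reconstruction_error(test) > threshold
print(f"flagged {flags.sum()} of {len(test)} test spectrograms as anomalous")
```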

    Searching for Needles in the Cosmic Haystack

    Searching for pulsar signals in radio astronomy data sets is a difficult task. The data sets are extremely large, approaching the petabyte scale, and are growing larger as instruments become more advanced. Big Data brings with it big challenges. Processing the data to identify candidate pulsar signals is computationally expensive and must exploit parallelism to be scalable. Labeling benchmarks for supervised classification is costly. To compound the problem, pulsar signals are very rare, e.g., only 0.05% of the instances in one data set represent pulsars. Furthermore, there are many different approaches to candidate classification, with no consensus on a best practice. This dissertation focuses on identifying and classifying radio pulsar candidates from single pulse searches. First, to identify and classify Dispersed Pulse Groups (DPGs), we developed a supervised machine learning approach that consists of RAPID (a novel peak identification algorithm), feature extraction, and supervised machine learning classification. We tested six algorithms for classification with four imbalance treatments. Results showed that classifiers with imbalance treatments had higher recall values. Overall, classifiers using multiclass Random Forests combined with the Synthetic Minority Oversampling Technique (SMOTE) were the most efficient; they identified additional known pulsars not in the benchmark, with fewer false positives than other classifiers. Second, we developed a parallel single pulse identification method, D-RAPID, and introduced a novel automated multiclass labeling (ALM) technique that we combined with feature selection to improve execution performance. D-RAPID improved execution performance over RAPID by a factor of 5. We also showed that the combination of ALM and feature selection sped up the execution of Random Forest by 54% on average, with less than a 2% average reduction in classification performance. Finally, we proposed CoDRIFt, a novel classification algorithm that is distributed for scalability and employs semi-supervised learning to leverage unlabeled data to inform classification. We evaluated and compared CoDRIFt to eleven other classifiers. The results showed that CoDRIFt excelled at classifying candidates in imbalanced benchmarks with a majority of non-pulsar signals (>95%). Furthermore, CoDRIFt models created with very limited sets of labeled data (as few as 22 labeled minority class instances) were able to achieve high recall (mean = 0.98). In comparison to the other algorithms trained on similar sets, CoDRIFt outperformed them all, with recall 2.9% higher than the next best classifier and a 35% average improvement over all eleven classifiers. CoDRIFt is customizable for other problem domains with very large, imbalanced data sets, such as fraud detection and cyber attack detection.
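
    The imbalance treatment singled out in the first study (Random Forest combined with SMOTE oversampling of the rare pulsar class) can be sketched as follows on synthetic candidate features. The data, the 0.5% imbalance ratio (milder than the 0.05% quoted above) and the hyperparameters are placeholders, and RAPID, D-RAPID, ALM and CoDRIFt are not implemented here.

```python
# Sketch of Random Forest classification with SMOTE oversampling of the rare
# "pulsar" class, on synthetic stand-in features. It does not implement
# RAPID, D-RAPID, ALM or CoDRIFt from the dissertation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE   # pip install imbalanced-learn

# Highly imbalanced stand-in data: ~0.5% minority ("pulsar") instances.
X, y = make_classification(n_samples=20000, n_features=12, n_informative=6,
                           weights=[0.995, 0.005], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

baseline = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Oversample only the training set, then retrain.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
treated = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)

# Recall on the minority (pulsar) class is the figure of merit here.
print("recall without SMOTE:", recall_score(y_te, baseline.predict(X_te)))
print("recall with SMOTE:   ", recall_score(y_te, treated.predict(X_te)))
```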

    Exascale computer system design : the square kilometre array

    Analytical cost metrics: days of future past

    2019 Summer. Includes bibliographical references. Future exascale high-performance computing (HPC) systems are expected to be increasingly heterogeneous, consisting of several multi-core CPUs and a large number of accelerators: special-purpose hardware that increases the computing power of the system in a very energy-efficient way. Specialized, energy-efficient accelerators are also an important component in many diverse systems beyond HPC: gaming machines, general-purpose workstations, tablets, phones and other media devices. With Moore's law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. This work builds analytical cost models for metrics such as time, energy, memory access, and silicon area. These models are used to predict the performance of applications, to tune performance, and to guide chip design. The idea is to work with domain-specific accelerators where analytical cost models can be used accurately for performance optimization. The performance optimization problems are formulated as mathematical optimization problems. This work explores the analytical cost modeling and mathematical optimization approach in a few ways. For stencil applications and GPU architectures, analytical cost models are developed for execution time as well as energy. The models are used for performance tuning over existing architectures, and are coupled with silicon-area models of GPU architectures to generate highly efficient architecture configurations. For matrix chain products, analytical closed-form expressions for off-chip data movement are built and used to minimize the total data movement cost of a minimum-operation-count tree.
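
    The matrix-chain setting rests on the classic dynamic program that finds the minimum-operation-count evaluation tree; the dissertation's contribution is the closed-form off-chip data-movement cost layered on top of such a tree, which is not reproduced in the sketch below.

```python
# Classic dynamic-programming search for the minimum-operation-count
# parenthesization of a matrix chain product. The dissertation goes further
# (closed-form off-chip data-movement costs on top of such a tree), which is
# not reproduced here; this only shows the op-count model being optimized.
from functools import lru_cache

def matrix_chain(dims: list[int]) -> tuple[int, str]:
    """dims[i], dims[i+1] are the sizes of matrix A_i; returns (cost, plan)."""
    @lru_cache(maxsize=None)
    def best(i: int, j: int) -> tuple[int, str]:
        if i == j:
            return 0, f"A{i}"
        candidates = []
        for k in range(i, j):
            left_cost, left_plan = best(i, k)
            right_cost, right_plan = best(k + 1, j)
            cost = left_cost + right_cost + dims[i] * dims[k + 1] * dims[j + 1]
            candidates.append((cost, f"({left_plan} x {right_plan})"))
        return min(candidates)
    return best(0, len(dims) - 2)

# A0: 40x20, A1: 20x30, A2: 30x10, A3: 10x30
cost, plan = matrix_chain([40, 20, 30, 10, 30])
print(cost, plan)   # 26000 ((A0 x (A1 x A2)) x A3)
```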