10 research outputs found

    Low-power System-on-Chip Processors for Energy Efficient High Performance Computing: The Texas Instruments Keystone II

    The High Performance Computing (HPC) community recognizes energy consumption as a major problem, and extensive research is underway to identify means of increasing the energy efficiency of HPC systems, including consideration of alternative building blocks for future systems. This thesis considers one such building block, the Texas Instruments Keystone II, a heterogeneous Low-Power System-on-Chip (LPSoC) processor, first released in 2012, that combines a quad-core ARM CPU with an octa-core Digital Signal Processor (DSP). Four issues are considered: i) maximizing the Keystone II ARM CPU performance; ii) implementation and extension of the OpenMP programming model for the Keystone II; iii) simultaneous use of ARM and DSP cores across multiple Keystone SoCs; and iv) an energy model for applications running on LPSoCs like the Keystone II and on heterogeneous systems in general.

    Maximizing the performance of the ARM CPU on the Keystone II is fundamental to adoption of this system by the HPC community and of the ARM architecture more broadly. Key to achieving good performance is exploitation of the ARM vector instructions. This thesis presents the first detailed comparison of the use of ARM compiler intrinsic functions with automatic compiler vectorization across four generations of ARM processors. Comparisons are also made with x86-based platforms and the use of equivalent Intel vector instructions.

    Implementation of the OpenMP programming model on the Keystone II presents both challenges and opportunities: challenges, in that the OpenMP model was originally developed for a homogeneous programming environment with a common instruction set architecture, and in 2012 work had only just begun to consider how OpenMP might work with accelerators; opportunities, in that shared memory is accessible to all processing elements on the LPSoC, offering performance advantages over what typically exists with attached accelerators. This thesis presents an analysis of a prototype version of OpenMP implemented as a bare-metal runtime on the DSP of a Keystone I system. An implementation for the Keystone II that maps OpenMP 4.0 accelerator directives to OpenCL runtime library operations is presented and evaluated, and exploitation of some of the underlying hardware features of the Keystone II is also discussed.

    Simultaneous use of the ARM and DSP cores across multiple Keystone II boards is fundamental to the creation of commercially viable HPC offerings based on Keystone technology; the nCore BrownDwarf and HPE Moonshot systems represent two such systems. This thesis presents a proof-of-concept implementation of matrix multiplication (GEMM) for the BrownDwarf system, which utilizes both Keystone II and Keystone I SoCs through a point-to-point interconnect called Hyperlink, and details how a novel message-passing communication framework across Hyperlink was implemented to support this complex environment.

    An energy model that predicts energy usage as a function of the fraction of a computation performed on each of the available compute devices offers the opportunity to make runtime decisions on how best to minimize energy usage. This thesis presents a basic energy usage model that considers the rate of execution on each device together with its active and idle power usage. Using this model, it is shown that only under certain conditions does an energy-optimal work partition that uses multiple compute devices exist. To validate the model, a high-resolution energy measurement environment is developed and used to gather energy measurements for a matrix multiplication benchmark running on a variety of systems; the results presented support the model. Drawing on the four issues noted above and other developments that have occurred since the Keystone II was first announced, the thesis concludes with comments regarding the future of LPSoCs as building blocks for HPC systems.
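    As a concrete illustration of such a model, a minimal two-device instance can be written down directly. The notation below (total work W, split fraction α, per-device execution rates r_i, active powers P_i^A, and idle powers P_i^I) is an illustrative assumption, not necessarily the thesis's own:

```latex
% Hypothetical two-device energy model: device i works for t_i seconds,
% then idles until the slower device finishes at T = max(t_1, t_2).
\begin{align*}
  t_1 &= \frac{\alpha W}{r_1}, \qquad
  t_2  = \frac{(1-\alpha)\,W}{r_2}, \qquad
  T    = \max(t_1, t_2), \\
  E(\alpha) &= \sum_{i=1}^{2} \left[ P_i^{A}\, t_i + P_i^{I}\,(T - t_i) \right].
\end{align*}
```

    Choosing α so that t_1 = t_2 minimizes runtime, but E(α) may still be minimized by placing all work on one device when the other device's active-minus-idle power cost outweighs its contribution to shortening the run, which is consistent with the finding that an energy-optimal multi-device partition exists only under certain conditions.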

    Streaming Architectures for Medical Image Reconstruction

    Non-invasive imaging modalities have recently seen increased use in clinical diagnostic procedures. Unfortunately, emerging computational imaging techniques, such as those found in 3D ultrasound and iterative magnetic resonance imaging (MRI), are severely limited by high computational requirements and poor algorithmic efficiency on current parallel hardware, often leading to significant delays before a doctor or technician can review the image, which can negatively impact patients in need of fast, highly accurate diagnosis. To make matters worse, the high raw data bandwidth found in 3D ultrasound requires on-chip volume reconstruction within a tight power dissipation budget: dissipation of more than 5 W may burn the skin of the patient. The tight power constraints and high volume rates required by emerging applications demand orders-of-magnitude improvement over state-of-the-art systems in terms of both reconstruction time and energy efficiency.

    The goal of the research outlined in this dissertation is to reduce the time and energy required to perform medical image reconstruction through software/hardware co-design. By analyzing algorithms with a hardware-centric focus, we develop novel algorithmic improvements which simultaneously reduce computational requirements and map more efficiently to traditional hardware architectures. We then design and implement hardware accelerators which push the new algorithms to their full potential.

    In the first part of this dissertation, we characterize the performance bottlenecks of high-volume-rate 3D ultrasound imaging. By analyzing the 3D plane-wave ultrasound algorithm, we reduce computational and storage requirements with Delay Compression. Delay Compression recognizes additional symmetry in the planar transmission scheme found in 2D, 3D, and 3D-Separable plane-wave ultrasound implementations, enabling on-chip storage of the reconstruction constants for the first time and eliminating the most power-intensive component of the reconstruction process. We then design and implement Tetris, a streaming hardware accelerator for 3D-Separable plane-wave ultrasound. Tetris is enabled by the Tetris Reservation Station, a novel 2D register file that buffers incomplete voxels and eliminates the need for a traditional load-and-store memory interface. Utilizing a fully pipelined architecture, Tetris reconstructs volumes at physics-limited rates (i.e., limited by the physical propagation speed of sound through tissue).

    Next, we review a core component of several computational imaging modalities, the Non-uniform Fast Fourier Transform (NuFFT), focusing on its use in MRI reconstruction. We find that the non-uniform interpolation step therein requires over 99% of the reconstruction time due to poor spatial and temporal memory locality. While prior work has made great strides in improving the performance of the NuFFT, the most common algorithmic optimization severely limits the available parallelism, causing it to map poorly to the massively parallel processing available in modern GPUs and FPGAs. To this end, we create Slice-and-Dice, a processing model which enables efficient mapping of the NuFFT's most computationally intensive component onto traditional parallel architectures. We then demonstrate the full acceleration potential of Slice-and-Dice with Jigsaw, a custom hardware accelerator which performs the non-uniform interpolations found in the NuFFT in time approximately linear in the number of non-uniform samples, irrespective of sampling pattern, uniform grid size, or interpolation kernel width.

    The algorithms and architectures herein enable faster, more efficient medical image reconstruction without sacrificing image quality. By decreasing the time and energy required for image reconstruction, our work opens the door for future exploration into higher-resolution imaging and emerging, computationally complex reconstruction algorithms which improve the speed and quality of patient diagnosis.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/167986/1/westbl_1.pd
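    To make the locality problem concrete, the sketch below shows a naive sample-parallel CUDA gridding kernel of the kind Slice-and-Dice is designed to improve on; the kernel shape, names, and placeholder interpolation weight are illustrative assumptions, not the dissertation's design:

```cuda
#include <cuda_runtime.h>

// Illustrative naive NuFFT gridding kernel (not the Slice-and-Dice or
// Jigsaw design): one thread per non-uniform sample, scattering a
// kernel-weighted value onto a small neighborhood of the uniform grid.
// The scattered atomic updates are exactly the poor spatial/temporal
// memory locality described in the text.
__global__ void gridding_naive(const float2 *samples,  // complex sample values
                               const float2 *coords,   // (x, y) positions
                               int n_samples,
                               float *grid_re, float *grid_im,
                               int grid_size, int half_width)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_samples) return;

    float2 v = samples[s];
    float2 p = coords[s];
    int cx = (int)floorf(p.x), cy = (int)floorf(p.y);

    for (int dy = -half_width; dy <= half_width; ++dy) {
        for (int dx = -half_width; dx <= half_width; ++dx) {
            int gx = cx + dx, gy = cy + dy;
            if (gx < 0 || gx >= grid_size || gy < 0 || gy >= grid_size)
                continue;
            // Placeholder separable weight; a real implementation would
            // evaluate e.g. a Kaiser-Bessel window here.
            float w = 1.0f / ((abs(dx) + 1) * (abs(dy) + 1));
            // Threads handling nearby samples collide on the same grid
            // cells, so updates must be atomic -- the locality bottleneck.
            atomicAdd(&grid_re[gy * grid_size + gx], w * v.x);
            atomicAdd(&grid_im[gy * grid_size + gx], w * v.y);
        }
    }
}
```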

    Digital Signal Processor Based Real-Time Phased Array Radar Backend System and Optimization Algorithms

    This dissertation presents an implementation of a multifunctional large-scale phased array radar backend based on a scalable DSP platform. The challenge in building a large-scale phased array radar backend is addressing the compute-intensive operations and high data throughput requirements of both the front-end and the backend in real time. In most applications, FPGA or VLSI hardware is typically used to overcome these difficulties. However, with the rapid development of the IC industry, a parallel set of high-performance programmable chips can be an alternative. We present a hybrid high-performance backend system that uses DSPs as the core computing devices and MTCA as the system frame, and we discuss in depth the techniques for mapping the front-end and backend signal processing algorithms onto the DSPs. Besides a high-efficiency computing device, the system architecture is a major factor influencing the reliability and performance of the backend system; reliability requires that the system incorporate redundancy in both hardware and software. In this dissertation, we propose a parallel modular system based on an MTCA chassis that is reliable, scalable, and fault-tolerant. Finally, we present an example of a high-performance phased array radar backend that uses 220 DSPs to achieve 7,000 GFLOPS of computation across 768 channels. This example shows the potential of combining DSPs and MTCA as the computing platform for future multifunctional large-scale phased array radars.

    Energy Consumption of Iterative Methods for Sparse Systems on Graphics Processors

    The solution of large sparse linear systems of equations is one of the most common operations in scientific and engineering applications. Their growing size drives the development of Green Computing techniques, which make it possible to design energy-aware applications in which energy efficiency is the primary objective. In this doctoral thesis, a methodology based on CUDA kernel fusion techniques has been designed that reduces the number of kernels and, with it, launch overheads and data transfers. Its use, together with synchronizing the GPUs in blocking mode, reduces energy consumption in heterogeneous CPU-GPU computing systems. These techniques are of particular interest on GPUs that support dynamic parallelism. Applying this methodology to the solution of sparse linear systems of equations shows notable improvements in energy efficiency, achieving a good trade-off between performance and energy consumption.
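    A minimal sketch of the kernel fusion idea, using a conjugate-gradient-style pair of vector updates as the example; the kernels and names are illustrative, not the thesis's actual fused solvers:

```cuda
#include <cuda_runtime.h>

// Two element-wise solver update steps as separate kernels: each launch
// pays launch latency, and x/r are streamed through memory independently.
__global__ void axpy(int n, float a, const float *p, float *x) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += a * p[i];
}
__global__ void rmq(int n, float a, const float *q, float *r) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) r[i] -= a * q[i];
}

// Fused version: one launch performs both updates, removing one kernel
// launch and sharing the index computation and scheduling overhead.
__global__ void fused_update(int n, float a,
                             const float *p, const float *q,
                             float *x, float *r) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] += a * p[i];
        r[i] -= a * q[i];
    }
}
```

    The blocking-mode synchronization mentioned above corresponds to configuring the CUDA runtime with cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync), so the CPU thread sleeps rather than spin-waits while kernels run, which is where the host-side energy saving comes from.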

    Advanced Aviation Weather Radar Data Processing and Real-Time Implementations

    The objectives of this dissertation are to develop an enhanced intelligent radar signal and data processing framework for aviation hazard detection, classification, and monitoring, and to implement it in real time on massively parallel platforms. A variety of radar sensor platforms is used to prove the concept, including airborne precipitation radar and different ground weather radars.

    As a focused example of the proposed approach, this research applies evolutionary machine learning technology to turbulence level classification for civil aviation. An artificial neural network (ANN) machine learning approach based on radar observations is developed for classifying the cubed root of the Eddy Dissipation Rate (EDR), a widely accepted measure of turbulence intensity. The approach is validated using typhoon weather data collected by the Hong Kong Observatory's (HKO) Terminal Doppler Weather Radar (TDWR) located near Hong Kong International Airport (HKIA), comparing HKO-TDWR EDR^{1/3} detections and predictions with in situ EDR^{1/3} measurements from commercial aircraft. The testing results verify that the machine learning approach performs reasonably well for both detection and prediction tasks.

    As a preliminary step toward acceleration with General Purpose Graphics Processing Units (GPGPUs), this research introduces a practical approach to implementing real-time processing algorithms for general surveillance radar on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open-source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is investigated, and a statistical optimization approach is developed for this purpose that requires little knowledge of the physical configuration of the kernels. The kernel optimization approach is found to improve performance significantly. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.

    After this investigation of GPGPUs in the radar signal processing chain, the machine learning approach was benchmarked on an embedded GPU platform. The results indicate that the real-time requirements of the turbulence detection method developed in this research can be met on embedded GPGPU platforms within their Size, Weight and Power (SWaP) restrictions.
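    As a sketch of how FFT-based pulse compression maps onto the CUDA libraries mentioned above, the following assumes a precomputed reference spectrum and uses cuFFT's complex-to-complex interface; it is a minimal illustration, not the dissertation's optimized implementation:

```cuda
#include <cuda_runtime.h>
#include <cufft.h>

// Frequency-domain multiply by the conjugate of the reference spectrum:
// matched filtering for one pulse of length n.
__global__ void conj_multiply(int n, const cufftComplex *ref,
                              cufftComplex *sig) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex s = sig[i], r = ref[i];
        // s * conj(r), scaled by 1/n to undo cuFFT's unnormalized inverse.
        sig[i].x = (s.x * r.x + s.y * r.y) / n;
        sig[i].y = (s.y * r.x - s.x * r.y) / n;
    }
}

// Hypothetical driver: d_sig holds the received samples and d_ref the
// precomputed FFT of the transmitted waveform, both length n, on device.
void pulse_compress(cufftComplex *d_sig, const cufftComplex *d_ref, int n) {
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);              // one pulse per batch
    cufftExecC2C(plan, d_sig, d_sig, CUFFT_FORWARD);  // to frequency domain
    int threads = 256, blocks = (n + threads - 1) / threads;
    conj_multiply<<<blocks, threads>>>(n, d_ref, d_sig);
    cufftExecC2C(plan, d_sig, d_sig, CUFFT_INVERSE);  // back to time domain
    cufftDestroy(plan);
}
```

    In practice many pulses would be batched into a single plan to amortize launch and planning costs, which is where GPU implementations gain most of their acceleration over CPUs.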

    Use of Software Tools to Implement Quality Control of Ultrasound Images in a Large Clinical Trial

    Research Question: This thesis aims to answer the question of whether software tools might be developed to automate the analysis of images used to measure ovaries in transvaginal sonography (TVS) exams. Such tools would allow the routine collection of independent and objective metrics at low cost and might be used to drive a programme of continuous Quality Improvement (QI) in TVS scanning. The tools will be assessed by processing images from thousands of TVS exams performed by the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS).

    Background: This research is important because TVS is core to any ovarian cancer (OC) screening strategy, yet independent and objective quality control (QC) metrics for this procedure are not routinely obtained due to the high cost of manual image inspection. Improving the quality of TVS in the National Health Service (NHS) would assist in the early diagnosis of the disease and result in improved outcomes for some women. The research therefore has clear translational potential for the >1.2 million scans performed annually by the NHS.

    Research Findings: A study processing images from 1,000 TVS exams has shown that the tool produces accurate and reliable QC metrics. A further study revealed that over half of these exams should have been classified as unsatisfactory, as an expert review of the images showed that the sonographer had mistakenly measured a structure that was not an ovary. It also reported a correlation between such ovary visualisation and a novel metric (DCR) measured by the tools from the examination images.

    Conclusion: The research results suggest both a need to improve the quality of TVS scanning and the viability of achieving this objective by introducing a QI programme driven by metrics gathered by software tools able to analyse the images used to measure ovaries.