300 research outputs found

    Parallel Image Generation Using the Z-buffering Algorithm on a Medium Grained Distributed Memory Model Computer

    Computer Science

    Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines

    Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers. United States. Dept. of Energy (Award DE-SC0005288); National Science Foundation (U.S.) (Grant 0964004); Intel Corporation; Cognex Corporation; Adobe Systems.
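
    The separation the abstract highlights, between an algorithm (what each stage computes) and a schedule (how it is tiled, vectorized, parallelized, and where intermediates are recomputed), can be sketched with a small two-stage pipeline in Halide's C++ embedding. This is a hedged illustration rather than code from the paper: the 3x3 box blur, the tile sizes, and the vector widths are chosen for illustration, and details such as the realize call vary between Halide releases.

```cpp
// Two-stage Halide pipeline (3x3 box blur) with an explicit schedule.
// Assumes the Halide library is installed and linked (-lHalide).
#include "Halide.h"
using namespace Halide;

int main() {
    Func input("input"), blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // Stand-in input so the example is self-contained; a real pipeline
    // would take an ImageParam or a Buffer.
    input(x, y) = cast<uint16_t>(x + y);

    // Algorithm: defines only what is computed.
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // Schedule: one concrete point in the parallelism/locality/recomputation
    // tradeoff space. Tile the output, vectorize and parallelize the tiles,
    // and recompute blur_x per tile instead of storing it for the whole image.
    blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);

    // JIT-compile and run over a 2048x2048 region.
    Buffer<uint16_t> out = blur_y.realize({2048, 2048});
    return 0;
}
```

    Changing only the two schedule lines moves the implementation to a different point in the tradeoff space (for example, storing blur_x for the whole image, or inlining it entirely) without touching the algorithm.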

    The Computer Graphics Scene in the United States

    We briefly survey the major thrusts of computer graphics activities, examining trends and topics rather than offering a comprehensive survey of all that is happening. The directions of professional activities, hardware, software, and algorithms are outlined. Within hardware we examine workstations, personal graphics systems, high performance systems, and low level VLSI chips; within software, standards and interactive system design; within algorithms, visible surface rendering and shading, three-dimensional modeling techniques, and animation. Note: This paper was presented at Eurographics '84 in Copenhagen, Denmark.

    A massively parallel semi-Lagrangian solver for the six-dimensional Vlasov-Poisson equation

    This paper presents an optimized and scalable semi-Lagrangian solver for the Vlasov-Poisson system in six-dimensional phase space. Grid-based solvers of the Vlasov equation are known to give accurate results, but they are challenged by the curse of dimensionality, which leads to very high memory requirements and demands highly efficient parallelization schemes. In this paper, we consider the 6d Vlasov-Poisson problem discretized by a split-step semi-Lagrangian scheme, using successive 1d interpolations on 1d stripes of the 6d domain. Two parallelization paradigms are compared: a remapping scheme and a classical domain decomposition approach applied to the full 6d problem. Numerical experiments show the latter approach to be superior in the massively parallel case in various respects. We address the challenge of artificial time step restrictions imposed by the domain decomposition by introducing a blocked one-sided communication scheme for the purely electrostatic case and a rotating mesh for the case with a constant magnetic field. In addition, we propose a pipelining scheme that hides the cost of halo communication between neighboring processes behind useful computation. Parallel scalability on up to 65k processes is demonstrated for benchmark problems on a supercomputer.
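
    The split-step scheme described above reduces the 6d advection to successive 1d interpolations along stripes of the grid. The sketch below shows the kind of 1d building block involved: each grid point's characteristic is traced back by v*dt and the distribution function is interpolated at the departure point. It is a hedged illustration using linear interpolation and periodic boundaries, not the paper's solver, which uses higher-order interpolation and a parallel 6d domain decomposition with halo exchange.

```cpp
// One semi-Lagrangian substep along a single 1d stripe of the grid:
// f_new(x_i) = f_old(x_i - v*dt), evaluated by interpolation at the
// departure point. Linear interpolation and periodic boundaries for brevity.
#include <cmath>
#include <vector>

std::vector<double> advect_stripe_1d(const std::vector<double>& f,
                                     double v, double dt, double dx) {
    const int n = static_cast<int>(f.size());
    std::vector<double> f_new(n);
    for (int i = 0; i < n; ++i) {
        // Trace the characteristic back to the departure point.
        const double s = (i * dx - v * dt) / dx;
        const double base = std::floor(s);
        const double w = s - base;                   // weight in [0, 1)
        const int j0 = ((static_cast<int>(base) % n) + n) % n;
        const int j1 = (j0 + 1) % n;
        f_new[i] = (1.0 - w) * f[j0] + w * f[j1];    // linear interpolation
    }
    return f_new;
}
```

    In the full scheme such a substep is applied dimension by dimension, and the domain decomposition determines which stripes and halo cells each process owns, which is where the blocked one-sided communication and the pipelined halo exchange come into play.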

    Optimization of high-throughput real-time processes in physics reconstruction

    This thesis has been developed in collaboration between Universidad de Sevilla and the European Organization for Nuclear Research, CERN. The LHCb detector is one of the four large detectors placed along the Large Hadron Collider, LHC. In LHCb, particles are collided at high energies in order to understand the difference between matter and antimatter. Due to the massive quantity of data generated by the detector, it is necessary to filter data in real time. The filtering, also known as the High Level Trigger, processes a throughput of 40 Tb/s of data and performs a selection of approximately 1 000:1, reducing the throughput to roughly 40 Gb/s of output, which is then stored for later analysis. The High Level Trigger is subdivided into two stages: High Level Trigger 1 (HLT1) and High Level Trigger 2 (HLT2). HLT1 runs in real time and yields a data reduction of approximately 30:1. It consists of a series of software processes that reconstruct what happened in the particle collision. The HLT1 reconstruction only analyzes the trajectories of particles produced in the collision, a problem known as track reconstruction, to decide whether the collision data is kept or discarded. In contrast, HLT2 is a finer-grained process, which requires more time to execute and reconstructs all subdetectors composing LHCb.

    Towards 2020, the LHCb detector and all components of the data acquisition system will be upgraded. As part of the data acquisition system, the servers that process HLT1 and HLT2 will also be upgraded. At the same time, the LHC accelerator will be upgraded, increasing the data generated in every bunch crossing by roughly 5 times. Due to the accelerator and detector upgrades, the amount of data that the HLT must process is expected to increase by about 40 times. The scalability projections of the existing software towards 2020 underestimated the resources required to face this increase in throughput. As a consequence, all algorithms composing HLT1 and HLT2 were studied and the code was modernized, in order to improve performance and increase the processing capability of the hardware foreseen for the upgrade.

    In this thesis, several algorithms of the LHCb reconstruction are explored. The track reconstruction problem is analyzed in depth and new algorithms are proposed for its solution. Since the analyzed problems are massively parallel, these algorithms are implemented in languages specialized for modern graphics cards (GPUs), given their inherently parallel architecture. Two track reconstruction algorithms are designed in this work. Furthermore, four decoding algorithms and a clustering algorithm, problems also found in HLT1, have been designed and implemented. In addition, a parallel Kalman filter algorithm has been designed and implemented, which can be used in both HLT stages. The developed algorithms satisfy the requirements of the LHCb collaboration for the 2020 upgrade.

    In order to execute the algorithms efficiently on GPUs, a software framework specialized for GPUs has been developed, which allows GPU reconstruction sequences to be executed in parallel. Combining the developed algorithms with the framework, an execution sequence is completed that lays the foundations of a GPU HLT1. During the research carried out in this thesis, these developments, together with a small team of collaborators coordinated by the author, led to the completion of a full GPU HLT1 sequence. The performance obtained on GPUs makes it possible to execute a reconstruction sequence in real time under the upgraded LHCb conditions foreseen for 2020. The resulting GPU HLT1 is the first high level trigger running entirely on GPUs developed for any LHC experiment. Finally, several possible configurations for integrating graphics cards into the LHCb data acquisition system are detailed.
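
    Of the algorithms listed, the Kalman filter is the easiest to sketch in isolation. The scalar predict/update step below is a generic, hedged illustration of the filter the abstract refers to, not the LHCb implementation: the actual track fit propagates a multi-dimensional state (positions and slopes) through the detector and runs many such fits concurrently on the GPU.

```cpp
// Generic scalar Kalman filter: one predict/update step and a small driver.
#include <cstdio>

struct KalmanState {
    double x;  // estimated state (e.g. one track coordinate)
    double P;  // variance of the estimate
};

// Predict through a linear transition x -> F*x with process noise Q, then
// update with a measurement z of variance R.
KalmanState kalman_step(KalmanState s, double F, double Q, double z, double R) {
    const double x_pred = F * s.x;
    const double P_pred = F * s.P * F + Q;
    const double K = P_pred / (P_pred + R);              // Kalman gain
    return {x_pred + K * (z - x_pred), (1.0 - K) * P_pred};
}

int main() {
    KalmanState s{0.0, 1.0};
    const double measurements[] = {0.9, 1.1, 1.0, 0.95};
    for (double z : measurements)
        s = kalman_step(s, /*F=*/1.0, /*Q=*/1e-3, z, /*R=*/0.25);
    std::printf("estimate %.3f, variance %.3f\n", s.x, s.P);
    return 0;
}
```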

    A Method of Rendering CSG-Type Solids Using a Hybrid of Conventional Rendering Methods and Ray Tracing Techniques

    This thesis describes a fast, efficient and innovative algorithm for producing shaded, still images of complex objects, built using constructive solid geometry (CSG) techniques. The algorithm uses a hybrid of conventional rendering methods and ray tracing techniques. A description of existing modelling and rendering methods is given in chapters 1, 2 and 3, with emphasis on the data structures and rendering techniques selected for incorporation in the hybrid method. Chapter 4 gives a general description of the hybrid method. This method processes data in the screen coordinate system and generates images in scan-line order. Scan lines are divided into spans (or segments) using the bounding rectangles of primitives calculated in screen coordinates. Conventional rendering methods and ray tracing techniques are used interchangeably along each scan-line; the method used is determined by the number of primitives associated with a particular span. Conventional rendering methods are used when only one primitive is associated with a span, while ray tracing techniques are used for hidden surface removal when two or more primitives are involved. In the latter case each pixel in the span is evaluated by accessing the polygon that is visible within each primitive associated with the span. The depth values (i.e. z-coordinates derived from the 3-dimensional definition) of the polygons involved are deduced for the pixel's position using linear interpolation. These values are used to determine the visible polygon. The CSG tree is accessed from the bottom upwards via an ordered index that enables the 'visible' primitives on any particular scan-line to be efficiently located. Within each primitive an ordered path through the data structure provides the polygons potentially visible on a particular scan-line. Lists of the active primitives and paths to potentially visible polygons are maintained throughout the rendering step and enable span coherence and scan-line coherence to be fully utilised. The results of tests with a range of typical objects and scenes are provided in chapter 5. These results show that the hybrid algorithm is significantly faster than full ray tracing algorithms.
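
    The core of the hybrid method is a per-span dispatch: a span covered by a single primitive is filled with conventional scan conversion, while a span where two or more primitives overlap is resolved per pixel by comparing depths interpolated linearly across the span. The sketch below illustrates that dispatch and the linear depth interpolation; the Span structure and helpers are hypothetical stand-ins, not the thesis's data structures, and the CSG classification applied on top of the depth ordering is omitted.

```cpp
// Hypothetical per-span visibility resolution for a hybrid scan-line renderer.
#include <algorithm>
#include <limits>
#include <vector>

struct PolygonSpan {
    int primitive_id;
    double z_left, z_right;  // depths at the span ends, from the 3d definition
};

struct Span {
    int x_start, x_end;
    std::vector<PolygonSpan> candidates;  // visible polygon of each primitive
};

// A planar polygon's depth varies linearly in screen x across a span.
inline double depth_at(const PolygonSpan& p, int x, const Span& s) {
    if (s.x_end == s.x_start) return p.z_left;
    const double t = static_cast<double>(x - s.x_start) / (s.x_end - s.x_start);
    return p.z_left + t * (p.z_right - p.z_left);
}

// Returns, for each pixel of the span, the id of the primitive whose polygon
// is nearest to the viewer (smallest depth).
std::vector<int> resolve_span(const Span& s) {
    std::vector<int> visible(s.x_end - s.x_start + 1, -1);
    if (s.candidates.size() == 1) {
        // One primitive: conventional rendering, no per-pixel comparison needed.
        std::fill(visible.begin(), visible.end(), s.candidates[0].primitive_id);
        return visible;
    }
    for (int x = s.x_start; x <= s.x_end; ++x) {
        double z_min = std::numeric_limits<double>::infinity();
        for (const PolygonSpan& p : s.candidates) {
            const double z = depth_at(p, x, s);
            if (z < z_min) { z_min = z; visible[x - s.x_start] = p.primitive_id; }
        }
    }
    return visible;
}
```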