    Computing the fast Fourier transform on SIMD microprocessors

    This thesis describes how to compute the fast Fourier transform (FFT) of a power-of-two length signal on single-instruction, multiple-data (SIMD) microprocessors faster than, or very close to, the speed of state-of-the-art libraries such as FFTW ("Fastest Fourier Transform in the West"), SPIRAL and Intel Integrated Performance Primitives (IPP). The conjugate-pair algorithm has advantages in terms of memory bandwidth, and three implementations of this algorithm, which incorporate latency and spatial-locality optimizations, are automatically vectorized at the algorithm level of abstraction. Performance results on 2-way, 4-way and 8-way SIMD machines show that performance scales much better than that of FFTW or SPIRAL. The implementations presented in this thesis are compiled into a high-performance FFT library called SFFT ("Streaming Fast Fourier Transform"), and benchmarked against FFTW, SPIRAL, Intel IPP and Apple Accelerate on sixteen x86 machines and two ARM NEON machines, and shown to be, in many cases, faster than these state-of-the-art libraries, but without having to perform extensive machine-specific calibration, thus demonstrating that there are good heuristics for predicting the performance of the FFT on SIMD microprocessors (i.e., the need for empirical optimization may be overstated).
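
    For orientation, here is a minimal recursive sketch of the conjugate-pair split-radix decomposition the thesis builds on, written in plain NumPy. It shows why the algorithm is friendly to memory bandwidth (only one twiddle table is loaded; the second factor is its conjugate), but none of SFFT's vectorization, latency or locality optimizations; it is an illustration, not the SFFT code.

```python
import numpy as np

def conjugate_pair_fft(x):
    """Conjugate-pair split-radix FFT for power-of-two N (illustrative sketch)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    if N == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    u  = conjugate_pair_fft(x[0::2])                             # even samples, size N/2
    z  = conjugate_pair_fft(x[1::4])                             # x[4n+1], size N/4
    zc = conjugate_pair_fft(x[(4 * np.arange(N // 4) - 1) % N])  # x[4n-1], size N/4
    k = np.arange(N // 4)
    w = np.exp(-2j * np.pi * k / N)     # one twiddle table; the other factor is conj(w)
    s = w * z + np.conj(w) * zc
    d = w * z - np.conj(w) * zc
    X = np.empty(N, dtype=complex)
    X[k]              = u[k] + s
    X[k + N // 2]     = u[k] - s
    X[k + N // 4]     = u[k + N // 4] - 1j * d
    X[k + 3 * N // 4] = u[k + N // 4] + 1j * d
    return X
```

    For any power-of-two input, `np.allclose(conjugate_pair_fft(sig), np.fft.fft(sig))` holds; the x[4n-1] sub-sequence (with wrap-around) is what replaces the usual x[4n+3] term and turns the second twiddle into a conjugate.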

    On binaural spatialization and the use of GPGPU for audio processing

    3D recordings and audio, namely techniques that aim to create the perception of sound sources placed anywhere in 3-dimensional space, are becoming an interesting resource for composers, live performances and augmented reality. This thesis focuses on binaural spatialization techniques. We tackle the problem from three different perspectives. The first is the implementation of an engine for audio convolution, a practical implementation problem in which we compare against a number of already available systems, trying to achieve better performance. General Purpose computing on Graphics Processing Units (GPGPU) is a promising approach to problems where a high degree of task parallelism is desirable. In this thesis the GPGPU approach is applied to both offline and real-time convolution, with the spatialization of multiple sound sources in mind, which is one of the critical problems in the field. Comparisons between this approach and typical CPU implementations are presented, as well as between FFT and time-domain approaches. The second aspect is the implementation of an augmented reality system conceived as an "off the shelf" system available to most home computers without the need for specialized hardware. A system capable of detecting the position of the listener through head tracking and rendering a 3D audio environment by binaural spatialization is presented. Head tracking is performed through face-tracking algorithms that use a standard webcam, and the result is presented over headphones, as in other typical binaural applications. With this system users can choose audio files to play, provide virtual positions for sources in a Euclidean space, and then listen as if the sounds were coming from those positions. If users move their head, the signals provided by the system change accordingly in real time, providing the realistic effect of a coherent scene. The last aspect covered by this work is within the field of psychoacoustics, long-term research in which we are interested in understanding how binaural audio and recordings are perceived and how auralization systems can then be efficiently designed. Considerations regarding the quality and realism of such sounds in the context of ASA (Auditory Scene Analysis) are proposed.
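
    As a compact CPU-side sketch of the FFT-domain convolution at the heart of such an engine: a mono signal is convolved with a left/right pair of head-related impulse responses (the `hrir_left`/`hrir_right` arguments are hypothetical inputs, not the thesis's API). The thesis's contribution is moving this work to the GPU, which plain NumPy does not show.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Frequency-domain convolution of a mono signal with an HRIR pair
    (illustrative CPU baseline; the thesis offloads this to the GPU)."""
    n = len(mono) + len(hrir_left) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two >= n
    S = np.fft.rfft(mono, nfft)               # transform the signal once
    left  = np.fft.irfft(S * np.fft.rfft(hrir_left,  nfft), nfft)[:n]
    right = np.fft.irfft(S * np.fft.rfft(hrir_right, nfft), nfft)[:n]
    return np.stack([left, right], axis=1)    # (n, 2) stereo buffer
```

    The FFT-versus-time-domain comparison mentioned above comes down to this structure: the signal spectrum is computed once and reused per source and per ear, which is what makes multi-source spatialization attractive to parallelize.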

    Efficient algorithms for the fast computation of space charge effects caused by charged particles in particle accelerators

    In this dissertation, a Poisson solver is improved in three parts: an efficient integrated Green's function; the discrete cosine transform of the integrated Green's function values; and an implicitly zero-padded fast Fourier transform of the charge density. In addition, high-performance computing technology is used to further improve efficiency, including OpenMP, OpenMP+CUDA, MPI, and MPI+OpenMP parallelizations. The examples and simulation results match those of a commonly used Poisson solver, demonstrating the solver's accuracy.
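
    To make the zero-padding idea concrete, here is a minimal 2D sketch of the underlying free-space method (Hockney's doubled-grid FFT convolution), with physical constants omitted. It is only a stand-in: the dissertation works with an *integrated* Green's function, which properly handles the singularity that this sketch crudely regularizes at r = 0, and it avoids explicitly storing the padded charge array.

```python
import numpy as np

def free_space_potential(rho, h):
    """Potential of a charge density on a 2D grid via zero-padded FFT
    convolution with the free-space Green's function (Hockney's method)."""
    ny, nx = rho.shape
    # symmetric distances on the doubled (zero-padded) grid
    y = np.minimum(np.arange(2 * ny), 2 * ny - np.arange(2 * ny)) * h
    x = np.minimum(np.arange(2 * nx), 2 * nx - np.arange(2 * nx)) * h
    r = np.hypot(*np.meshgrid(x, y))
    r[0, 0] = h / 2                        # crude fix for the r = 0 singularity
    G = -np.log(r) / (2 * np.pi)           # 2D free-space Green's function
    rho_pad = np.zeros((2 * ny, 2 * nx))
    rho_pad[:ny, :nx] = rho                # zero-padding kills periodic images
    phi = np.fft.irfft2(np.fft.rfft2(G) * np.fft.rfft2(rho_pad), s=(2 * ny, 2 * nx))
    return phi[:ny, :nx] * h * h           # keep the physical quadrant
```

    The doubling of the grid is exactly the cost that the dissertation's implicitly zero-padded FFT attacks: the padded region of the charge array is known to be zero, so its transform need not be computed naively.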

    Language and compiler for algorithmic choice

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 55-60). It is often impossible to obtain a one-size-fits-all solution for high performance algorithms when considering different choices for data distributions, parallelism, transformations, and blocking. The best solution to these choices is often tightly coupled to different architectures, problem sizes, data, and available system resources. In some cases, completely different algorithms may provide the best performance. Current compiler and programming language techniques are able to change some of these parameters, but today there is no simple way for the programmer to express or the compiler to choose different algorithms to handle different parts of the data. Existing solutions normally can handle only coarse-grained, library level selections or hand coded cutoffs between base cases and recursive cases. We present PetaBricks, a new implicitly parallel language and compiler where having multiple implementations of multiple algorithms to solve a problem is the natural way of programming. We make algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The PetaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. by Jason Ansel. S.M.
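
    PetaBricks' own syntax is not reproduced here, but the classic motivating example translates to a rough Python analogy: a sort with two algorithmic choices, where the cutoff between the recursive case and the base case is picked empirically per machine rather than hand coded (the candidate cutoffs below are arbitrary).

```python
import random
import time

def insertion_sort(a):
    """Base-case algorithm: fast on small inputs."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a, cutoff):
    """Recursive-case algorithm: merge sort that switches to insertion
    sort below a tunable cutoff -- the 'algorithmic choice'."""
    if len(a) <= cutoff:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid], cutoff), hybrid_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def autotune_cutoff(candidates=(4, 16, 64, 256), n=50_000):
    """Pick the empirically fastest cutoff on this machine."""
    data = [random.random() for _ in range(n)]
    def cost(c):
        start = time.perf_counter()
        hybrid_sort(list(data), c)
        return time.perf_counter() - start
    return min(candidates, key=cost)
```

    In PetaBricks this choice is a first-class language construct and the search over cutoffs (and over entire algorithms) is driven by the compiler's autotuner, not written out by the programmer as above.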

    Accelerating incoherent dedispersion

    Incoherent dedispersion is a computationally intensive problem that appears frequently in pulsar and transient astronomy. For current and future transient pipelines, dedispersion can dominate the total execution time, meaning its computational speed acts as a constraint on the quality and quantity of science results. It is thus critical that the algorithm be able to take advantage of trends in commodity computing hardware. With this goal in mind, we present analysis of the 'direct', 'tree' and 'sub-band' dedispersion algorithms with respect to their potential for efficient execution on modern graphics processing units (GPUs). We find all three to be excellent candidates, and proceed to describe implementations in C for CUDA using insight gained from the analysis. Using recent CPU and GPU hardware, the transition to the GPU provides a speed-up of 9x for the direct algorithm when compared to an optimised quad-core CPU code. For realistic recent survey parameters, these speeds are high enough that further optimisation is unnecessary to achieve real-time processing. Where further speed-ups are desirable, we find that the tree and sub-band algorithms are able to provide 3-7x better performance at the cost of certain smearing, memory consumption and development time trade-offs. We finish with a discussion of the implications of these results for future transient surveys. Our GPU dedispersion code is publicly available as a C library at http://dedisp.googlecode.com/ (15 pages, 4 figures, 2 tables; accepted for publication in MNRAS).
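
    As a reference point, here is the 'direct' algorithm as a NumPy sketch: a brute-force shift-and-sum over frequency channels for each trial dispersion measure, using the standard cold-plasma delay formula for frequencies in MHz. The independent loops over DM trials and output samples are exactly the parallelism the paper maps to CUDA; the kernels themselves are not reproduced here.

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, MHz^2 s cm^3 pc^-1

def direct_dedisperse(data, freqs_mhz, dt, dm_trials):
    """Direct incoherent dedispersion of a (nchan, nsamp) filterbank block:
    for each trial DM, shift every channel by its dispersion delay
    (relative to the highest frequency) and sum over channels."""
    nchan, nsamp = data.shape
    f_ref = freqs_mhz.max()
    out = np.zeros((len(dm_trials), nsamp))
    for i, dm in enumerate(dm_trials):
        delays = K_DM * dm * (freqs_mhz ** -2.0 - f_ref ** -2.0)  # seconds
        shifts = np.round(delays / dt).astype(int)                # samples
        for c in range(nchan):
            s = shifts[c]
            if s < nsamp:
                out[i, :nsamp - s] += data[c, s:]
    return out
```

    The tree and sub-band algorithms reduce the O(nchan) work per (DM, sample) pair by sharing partial sums across nearby DM trials, which is the source of both their 3-7x advantage and their smearing trade-off.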

    Efficient computation of CPW2000 using a CPU-GPU heterogeneous platform

    Master's dissertation in Informatics Engineering. The modelling and simulation of complex systems in natural science usually require powerful and expensive computational resources. The study of plane-wave properties in crystals, based on quantum mechanics, poses challenging questions to computer scientists seeking to improve the efficiency of numerical methods and algorithms. Numerical libraries have had a significant boost in recent years, taking advantage of multi-threaded environments. This dissertation addresses efficiency improvements in a plane-wave package, CPW2000, developed by a physicist, targeted at a heterogeneous platform with a multicore CPU and CUDA-enabled GPU devices. The performance bottlenecks were previously identified as the module functions with FFT computations, and the study started with application analysis and profiling. This study shows that (i) over 90% of the code execution time was spent in two functions, DGEMM and FFT, (ii) the code efficiency of current numerical libraries is hard to improve, and (iii) DGEMM function calls were spread throughout the code, while FFT was concentrated in a single function. These features were explored to develop a new code version in which parts of the code are computed on a multicore CPU while others take advantage of the GPU's multistreaming and parallel computing power. Experimental results show that combined CPU-GPU solutions offer a near 10x speedup on the program routines we set out to improve, pointing to promising future work.
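
    A schematic of that work split, in plain Python: because the FFTs are concentrated in a single routine, they can be dispatched as one batch and overlapped with the DGEMM-style matrix work that stays on the CPU. NumPy and a worker thread stand in here for the GPU stream and CUDA libraries; this is a sketch of the scheduling idea, not the CPW2000 code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate_step(wavefunction_blocks, a, b):
    """Schematic CPU-GPU split: ship the batched FFTs off as one unit
    (a thread stands in for the GPU stream) while DGEMM-style matrix
    products keep the multicore CPU busy in the meantime."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        ffts = pool.submit(lambda: [np.fft.fftn(blk) for blk in wavefunction_blocks])
        gemm = pool.submit(np.dot, a, b)   # DGEMM stand-in on the CPU
        return ffts.result(), gemm.result()
```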

    OPTIMIZATION OF ALGORITHMS WITH THE OPAL FRAMEWORK

    The task of parameter tuning has been around for a long time, spread over most domains, and there have been many attempts to address it. Research on this question often lacks generality and reusability. A first reason is that these projects target specific systems. Moreover, some approaches do not concentrate on the fundamental questions of parameter tuning. Finally, there has been no powerful tool able to overcome the difficulties in this domain. As a result, the number of projects continues to grow, while users are unable to apply previous achievements to their own problems. The present work systematically approaches parameter tuning by figuring out the fundamental issues and identifying the basic elements of a general system. This provides the basis for developing a general and flexible framework called OPAL, which stands for OPtimization of ALgorithms. The milestones in developing the framework, as well as the main achievements, are presented through three papers corresponding to chapters 4, 5 and 6 of this thesis. The first paper introduces the framework by describing the crucial basic elements through some very simple examples. To this end, the paper considers three questions in constructing an automated parameter tuning framework. By answering these questions, we propose OPAL, consisting of the indispensable components of a parameter tuning framework. OPAL models the parameter tuning task as a nonsmooth blackbox optimization problem, which is then solved by a mesh adaptive direct search algorithm. This reduces the effort required of users in launching a tuning session. The second paper shows one of the opportunities to extend the framework. To take advantage of situations where multiple processors are available, we study various ways of embedding parallelism and develop a feature called "interruption of unnecessary tasks" in order to improve the performance of the framework. The third paper is a full description of the framework and a release of its Python implementation. In addition to confirming the methodology and the main features presented in previous works, integrability is introduced as a new feature of this release through an example of cooperation with a classification tool. More specifically, the work illustrates a cooperation of OPAL and a classification tool to solve a parameter optimization problem in which the test problem set is too large and a single assessment can take a day.
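
    In the same spirit, a toy blackbox tuning loop: the target algorithm is run as an opaque callable and its measured runtime is the objective. OPAL itself hands this problem to a mesh adaptive direct search solver; the random search below is only a stand-in for that loop, and `run_algorithm`/`param_space` are placeholders for whatever the user supplies, not OPAL's API.

```python
import random
import time

def tune(run_algorithm, param_space, budget=50):
    """Blackbox parameter tuning sketch: sample parameter settings,
    time the algorithm on each, and keep the fastest setting found."""
    best_params, best_cost = None, float("inf")
    for _ in range(budget):
        params = {name: random.choice(values) for name, values in param_space.items()}
        start = time.perf_counter()
        run_algorithm(**params)                 # one (possibly day-long) assessment
        cost = time.perf_counter() - start
        if cost < best_cost:
            best_params, best_cost = params, cost
    return best_params, best_cost
```

    The expense of each assessment is precisely why the thesis's parallelism and "interruption of unnecessary tasks" features matter: runs that can no longer beat the incumbent are killed rather than run to completion.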