
    JACC: An OpenACC Runtime Framework with Kernel-Level and Multi-GPU Parallelization

    The rapid development of computing technology has paved the way for directive-based programming models to take a principal role in maintaining software portability of performance-critical applications. Such models require minimal engineering effort to enable computational acceleration on multiple architectures: programmers only add meta-information to sequential code. Obtaining the best possible efficiency, however, is often challenging. Directives inserted by the programmer can have side effects that limit the compiler optimizations available, which can degrade performance. This is exacerbated when targeting multi-GPU systems, since pragmas do not automatically adapt to such systems and require expensive, time-consuming code adjustment by programmers. This paper introduces JACC, an OpenACC runtime framework that enables the dynamic extension of OpenACC programs by serving as a transparent layer between the program and the compiler. We add a versatile code-translation method for multi-device utilization by which manually optimized applications can be distributed automatically while keeping the original code structure and parallelism. We show nearly linear scaling of kernel execution in some cases on NVIDIA V100 GPUs. When using multiple GPUs adaptively, the resulting performance improvements amortize the latency of GPU-to-GPU communication. Comment: Extended version of a paper to appear in: Proceedings of the 28th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), December 17-18, 202
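    The kind of transformation JACC automates can be pictured with a hand-written multi-GPU version of a simple OpenACC loop. The sketch below is ours, not JACC code, and the function name and block-partitioning scheme are illustrative assumptions: it splits the iteration space of a SAXPY kernel across the available NVIDIA devices using standard OpenACC runtime calls, which is roughly what the framework derives automatically from the unmodified single-GPU program.

    /* Minimal sketch (not JACC itself) of the manual multi-GPU partitioning the
     * paper describes automating: each device gets a contiguous slice of the
     * iteration space and copies only the data that slice touches. */
    #include <openacc.h>

    void saxpy_multi_gpu(int n, float a, const float *x, float *y)
    {
        int ngpus = acc_get_num_devices(acc_device_nvidia);
        if (ngpus < 1) ngpus = 1;

        for (int dev = 0; dev < ngpus; ++dev) {
            acc_set_device_num(dev, acc_device_nvidia);

            int lo = (int)((long long)n * dev / ngpus);
            int hi = (int)((long long)n * (dev + 1) / ngpus);

            /* Each device works on y[lo..hi) asynchronously; the original
             * single-GPU code would be one parallel loop over [0, n). */
            #pragma acc parallel loop async \
                copyin(x[lo:hi-lo]) copy(y[lo:hi-lo])
            for (int i = lo; i < hi; ++i)
                y[i] = a * x[i] + y[i];
        }

        /* Wait for every device to finish its slice. */
        for (int dev = 0; dev < ngpus; ++dev) {
            acc_set_device_num(dev, acc_device_nvidia);
            #pragma acc wait
        }
    }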

    ACC Saturator: Automatic Kernel Optimization for Directive-Based GPU Code

    Automatic code optimization is a complex process that typically applies multiple discrete algorithms, each modifying the program structure irreversibly. These algorithms are often designed monolithically, and because they do not cooperate, similar analyses must be implemented repeatedly. Modern optimization techniques such as equality saturation address this by allowing exhaustive term rewriting over various kinds of input, thereby simplifying compiler design. In this paper, we propose equality saturation for optimizing the sequential code used in directive-based GPU programming. Our approach simultaneously achieves less computation, fewer memory accesses, and high memory throughput. Our fully automated framework constructs single-assignment forms from the input so that it can be rewritten in its entirety while preserving dependencies, and then extracts the optimal version. Through practical benchmarks, we demonstrate a significant performance improvement with several compilers. Furthermore, we highlight the advantages of computational reordering and emphasize the significance of memory-access order for modern GPUs.
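    To make the goal concrete, here is a hand-written before/after pair (ours, not ACC Saturator output) showing the kind of algebraic rewriting that equality saturation can discover: the same loop body with the loop-invariant product hoisted and a single load of a[i], so the rewritten kernel performs less computation and fewer memory accesses for an identical result.

    /* Original: a[i] is loaded three times per iteration and the product
     * c0*c1 is recomputed every time. */
    void kernel_before(int n, const float *a, float *b, float c0, float c1)
    {
        #pragma acc parallel loop
        for (int i = 1; i < n - 1; ++i)
            b[i] = c0 * c1 * a[i] + a[i] * a[i-1] + a[i] * a[i+1];
    }

    /* Rewritten: algebraically equal, but the invariant is hoisted and a[i]
     * is read once and factored out of the sum. */
    void kernel_after(int n, const float *a, float *b, float c0, float c1)
    {
        const float c01 = c0 * c1;          /* hoisted loop invariant */
        #pragma acc parallel loop
        for (int i = 1; i < n - 1; ++i) {
            const float ai = a[i];          /* single load, kept in a register */
            b[i] = ai * (c01 + a[i-1] + a[i+1]);
        }
    }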

    A symbolic emulator for shuffle synthesis on the NVIDIA PTX code

    Many kinds of applications take advantage of GPUs through automation tools that attempt to exploit the available performance of the GPU's parallel architecture automatically. Directive-based programming models such as OpenACC are one such method, enabling parallel computing simply by adding annotations to loops. Such abstract models, however, often prevent programmers from making additional low-level optimizations that exploit the advanced architectural features of GPUs, because the generated computation is hidden from the application developer. This paper describes and implements a novel, flexible optimization technique that inserts a code-emulator phase at the tail end of the compilation pipeline. Our tool emulates the generated code using symbolic analysis, substituting dynamic information and thereby allowing further low-level code optimizations to be applied. We implement our tool to support both CUDA and OpenACC directives as the frontend of the compilation pipeline, thus enabling low-level GPU optimizations for OpenACC that were not previously possible. We demonstrate the capabilities of our tool by automating warp-level shuffle instructions, which are difficult to use even for advanced GPU programmers. Lastly, evaluating our tool with a benchmark suite and complex application code, we provide a detailed study assessing the benefits of shuffle instructions across four generations of GPU architectures. We are funded by the EPEEC project from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 801051 and the Ministerio de Ciencia e Innovación-Agencia Estatal de Investigación (PID2019-107255GB-C21/AEI/10.13039/501100011033). This work has been partially carried out on the ACME cluster owned by CIEMAT and funded by the Spanish Ministry of Economy and Competitiveness project CODEC-OSE (RTI2018-096006-B-I00).
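    The warp-level shuffles at the heart of this work move data directly between the registers of threads in the same warp, without going through shared or global memory. The plain-C sketch below is ours and only emulates that behavior on the host; on a real GPU the exchange is performed by the CUDA __shfl_down_sync intrinsic, which is what the tool synthesizes.

    /* Host-side emulation (illustrative only) of a warp-shuffle reduction:
     * each of the 32 lanes of a warp holds one value, and shuffles exchange
     * those values between lanes without touching memory. */
    #include <stdio.h>

    #define WARP_SIZE 32

    /* Emulate "shuffle down": lane i reads the value held by lane i + delta;
     * lanes whose source would fall outside the warp keep their own value. */
    static void shfl_down(const float in[WARP_SIZE], float out[WARP_SIZE], int delta)
    {
        for (int i = 0; i < WARP_SIZE; ++i)
            out[i] = (i + delta < WARP_SIZE) ? in[i + delta] : in[i];
    }

    int main(void)
    {
        float sum[WARP_SIZE];
        for (int i = 0; i < WARP_SIZE; ++i)
            sum[i] = (float)(i + 1);           /* 1 + 2 + ... + 32 = 528 */

        /* Tree reduction: after log2(32) = 5 shuffle-and-add steps,
         * lane 0 holds the sum over all 32 lanes. */
        for (int delta = WARP_SIZE / 2; delta > 0; delta /= 2) {
            float shifted[WARP_SIZE];
            shfl_down(sum, shifted, delta);
            for (int i = 0; i < WARP_SIZE; ++i)
                sum[i] += shifted[i];
        }

        printf("lane 0 holds %.0f (expected 528)\n", sum[0]);
        return 0;
    }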

    Multi-GPU design and performance evaluation of homomorphic encryption on GPU clusters

    We present a multi-GPU design, implementation and performance evaluation of the Halevi-Polyakov-Shoup (HPS) variant of the Fan-Vercauteren (FV) levelled Fully Homomorphic Encryption (FHE) scheme. Our design follows a data-parallelism approach and uses partitioning methods to distribute the workload of FV primitives evenly across the available GPUs. The design addresses the space and runtime requirements of FHE computations. It is also suitable for distributed-memory architectures and includes efficient GPU-to-GPU data-exchange protocols. Moreover, it is user-friendly, as no user intervention is required for task decomposition, scheduling or load balancing. We implement and evaluate the performance of our design on two NVIDIA GPU clusters, one homogeneous and one heterogeneous: a K80 cluster and a customized P100 cluster. We also provide a comparison with a recent shared-memory-based multi-core CPU implementation using two homomorphic circuits as workloads: vector addition and multiplication. Moreover, we use our multi-GPU levelled FHE to implement the inference circuit of two Convolutional Neural Networks (CNNs) to perform image classification homomorphically on encrypted images from the MNIST and CIFAR-10 datasets. Our implementation provides one to three orders of magnitude speedup over the CPU implementation on vector operations. In terms of scalability, our design shows reasonable scaling curves when the GPUs are fully connected. This work is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Programmatic Programme (Award A19E3b0099).
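    The data-parallel partitioning described above amounts to giving each GPU an equal share of independent ciphertext operations. The sketch below is only an illustration of that idea, not the authors' code; the function name and layout are our assumptions. It computes the slice of N ciphertexts owned by each of G devices so that no device holds more than one extra item.

    /* Illustrative even partitioning of N independent ciphertext operations
     * across G GPUs: every device gets floor(N/G) items, and the first
     * N mod G devices get one more. */
    #include <stdio.h>

    static void partition(int n_items, int n_gpus, int dev, int *start, int *count)
    {
        int base  = n_items / n_gpus;   /* minimum share per device */
        int extra = n_items % n_gpus;   /* first `extra` devices get one more */
        *count = base + (dev < extra ? 1 : 0);
        *start = dev * base + (dev < extra ? dev : extra);
    }

    int main(void)
    {
        int n_ciphertexts = 10, n_gpus = 4;
        for (int dev = 0; dev < n_gpus; ++dev) {
            int start, count;
            partition(n_ciphertexts, n_gpus, dev, &start, &count);
            /* A real implementation would now bind to `dev` (e.g. with
             * cudaSetDevice) and run the FV primitive over its slice. */
            printf("GPU %d: ciphertexts [%d, %d)\n", dev, start, start + count);
        }
        return 0;
    }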

    Cation-Disordered Li3VO4: Reversible Li Insertion/Deinsertion Mechanism for Quasi Li-Rich Layered Li1+x[V1/2Li1/2]O2 (x = 0–1)

    The reversible lithiation/delithiation mechanism of the cation-disordered Li3VO4 material was elucidated, including an understanding of the structural and electrochemical signature changes during cycling. The initial exchange of two Li induces a progressive and irreversible migration of Li and V ions from tetrahedral to octahedral sites, confirmed by the combination of in situ/operando X-ray diffraction and X-ray absorption fine structure analyses. The resulting cation-disordered Li3VO4 can smoothly and reversibly accommodate two Li and shows a Li+ diffusion coefficient two orders of magnitude larger than that of pristine Li3VO4, leading to improved electrochemical performance. This cation-disordered Li3VO4 negative electrode offers new opportunities for designing high-energy and high-power supercapacitors. Furthermore, it opens new paths for preparing disordered compounds with the general hexagonal close-packed structure, including most polyanionic compounds, whose electrochemical performance can be easily improved by simple cation mixing.

    Effect of Heart Failure on Long‐Term Clinical Outcomes After Percutaneous Coronary Intervention Versus Coronary Artery Bypass Grafting in Patients With Severe Coronary Artery Disease

    [Background] Heart failure might be an important determinant in choosing coronary revascularization modalities, but no previous study has evaluated the effect of heart failure on long-term clinical outcomes after percutaneous coronary intervention (PCI) relative to coronary artery bypass grafting (CABG). [Methods and Results] Among 14 867 consecutive patients undergoing first coronary revascularization with PCI or isolated CABG between January 2011 and December 2013 in the CREDO-Kyoto PCI/CABG registry Cohort-3, we identified the current study population of 3380 patients with three-vessel or left main coronary artery disease and compared clinical outcomes between PCI and CABG in subgroups stratified by heart failure status. There were 827 patients with heart failure (PCI: N=511, CABG: N=316) and 2553 patients without heart failure (PCI: N=1619, CABG: N=934). In patients with heart failure, the PCI group more often had advanced age, severe frailty, acute and severe heart failure, and elevated inflammatory markers than the CABG group. During a median 5.9 years of follow-up, there was a significant interaction between heart failure and the mortality risk of PCI relative to CABG (interaction P=0.009), with excess mortality risk of PCI relative to CABG in patients with heart failure (HR, 1.75; 95% CI, 1.28–2.42; P<0.001) and no excess mortality risk in patients without heart failure (HR, 1.04; 95% CI, 0.80–1.34; P=0.77). [Conclusions] There was a significant interaction between heart failure and the mortality risk of PCI relative to CABG, with excess risk in patients with heart failure and neutral risk in patients without heart failure.

    Percutaneous coronary intervention using new-generation drug-eluting stents versus coronary arterial bypass grafting in stable patients with multi-vessel coronary artery disease: From the CREDO-Kyoto PCI/CABG registry Cohort-3

    AIMS: There is a scarcity of studies comparing percutaneous coronary intervention (PCI) using new-generation drug-eluting stents (DES) with coronary artery bypass grafting (CABG) in patients with multi-vessel coronary artery disease. METHODS AND RESULTS: The CREDO-Kyoto PCI/CABG registry Cohort-3 enrolled 14927 consecutive patients who underwent first coronary revascularization with PCI or isolated CABG between January 2011 and December 2013. The current study population consisted of 2464 patients who underwent multi-vessel coronary revascularization, including revascularization of the left anterior descending coronary artery (LAD), either with PCI using new-generation DES (N = 1565) or with CABG (N = 899). Patients in the PCI group were older and more often had severe frailty, but had less complex coronary anatomy and less complete revascularization than those in the CABG group. The cumulative 5-year incidence of a composite of all-cause death, myocardial infarction or stroke was not significantly different between the two groups (25.0% versus 21.5%, P = 0.15). However, after adjusting for confounders, the excess risk of PCI relative to CABG was significant for the composite endpoint (HR 1.27, 95% CI 1.04-1.55, P = 0.02). PCI compared with CABG was associated with a comparable adjusted risk for all-cause death (HR 1.22, 95% CI 0.96-1.55, P = 0.11) and stroke (HR 1.17, 95% CI 0.79-1.73, P = 0.44), but with an excess adjusted risk for myocardial infarction (HR 1.58, 95% CI 1.05-2.39, P = 0.03) and any coronary revascularization (HR 2.66, 95% CI 2.06-3.43, P < 0.0001). CONCLUSIONS: In this observational study, PCI with new-generation DES compared with CABG was associated with excess long-term risk for major cardiovascular events in patients who underwent multi-vessel coronary revascularization including the LAD.

    Advancing the state of the art of directive-based programming for GPUs: runtime and compilation

    Thesis with International Doctorate mention.

    The rapid development of computing technology has paved the way for directive-based programming models to take a principal role in maintaining software portability of performance-critical applications. Such models require minimal engineering effort to enable computational acceleration on multiple architectures: programmers only add meta-information to sequential code. Obtaining the best possible efficiency, however, is often challenging. Directives inserted by the programmer can have side effects that limit the compiler optimizations available, which can degrade performance. This is exacerbated when targeting asynchronous execution or multi-GPU systems, since pragmas do not automatically adapt to such mechanisms and require expensive, time-consuming code adjustment by programmers. Moreover, directive-based programming models such as OpenACC and OpenMP often prevent programmers from making additional optimizations that exploit the advanced architectural features of GPUs, because the generated computation is hidden from the application developer. This dissertation explores new possibilities for optimizing directive-based code from both runtime and compilation perspectives.

    First, we introduce a runtime framework for OpenACC to facilitate dynamic analysis and compilation. In particular, our framework realizes automatic asynchronous execution and multi-GPU use based on the status of kernel execution and data availability, while taking advantage of an on-the-fly mechanism for compilation and program optimization. We add a versatile code-translation method for multi-device utilization by which manually optimized applications can be distributed automatically while keeping the original code structure and parallelism.

    Second, we implement a novel, flexible optimization technique that inserts a code-emulator phase at the tail end of the compilation pipeline. Our tool emulates the generated code using symbolic analysis, substituting dynamic information and thereby allowing further low-level code optimizations to be applied. We implement our tool to support both CUDA and OpenACC directives as the frontend of the compilation pipeline, thus enabling low-level GPU optimizations for OpenACC that were not previously possible.

    Third, we propose the use of a modern optimization technique, equality saturation, to optimize the sequential code used in directive-based GPU programming. Our approach simultaneously achieves less computation, fewer memory accesses, and high memory throughput. Our fully automated framework constructs single-assignment forms from the input so that it can be rewritten in its entirety while preserving dependencies, and then extracts the optimal version.

    Overall, we cover runtime techniques and optimization methods based on dynamic information, low-level operations, and user-level opportunities. We evaluate our proposals on state-of-the-art GPUs and provide a detailed analysis of each technique. For multi-GPU use, we show nearly linear scaling of kernel execution in some cases on NVIDIA V100 GPUs; when using multiple GPUs adaptively, the resulting performance improvements amortize the latency of GPU-to-GPU communication. Regarding low-level optimization, we demonstrate the capabilities of our tool by automating warp-level shuffle instructions, which are difficult to use even for advanced GPU programmers. Evaluating our tool with a benchmark suite and complex application code, we provide a detailed study assessing the benefits of shuffle instructions across four generations of GPU architectures. Lastly, with sequential-code optimization, we demonstrate a significant performance improvement with several compilers through practical benchmarks, highlight the advantages of computational reordering, and emphasize the significance of memory-access order for modern GPUs.
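    One element of the runtime contribution mentioned only briefly above is automatic asynchronous execution. The sketch below is a hand-written illustration (ours, not the framework's output) of the overlap the runtime derives on its own: two independent OpenACC kernels are placed on separate async queues using standard async and wait clauses, so their data transfers and execution may overlap where the hardware allows it.

    /* Hand-written sketch of asynchronous OpenACC execution: two independent
     * kernels run on separate async queues, and a single wait synchronizes
     * before the results are consumed. */
    void scale_two_arrays(int n, float *a, float *b)
    {
        #pragma acc parallel loop async(1) copy(a[0:n])
        for (int i = 0; i < n; ++i)
            a[i] *= 2.0f;

        #pragma acc parallel loop async(2) copy(b[0:n])
        for (int i = 0; i < n; ++i)
            b[i] *= 3.0f;

        /* Block until both queues have finished. */
        #pragma acc wait
    }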