
    High performance astrophysics computing

    The application of high-end computing to astrophysical problems, mainly in the galactic environment, has been under development for many years at the Department of Physics of Sapienza University of Rome. The main scientific topic is the physics of self-gravitating systems, with the following specific subtopics: i) celestial mechanics and interplanetary probe transfers in the solar system; ii) dynamics of globular clusters and of globular cluster systems in their parent galaxies; iii) nuclear cluster formation and evolution; iv) massive black hole formation and evolution; v) early evolution of young star clusters. In this poster we describe the software and hardware computational resources available in our group and how we are developing both to reach the scientific aims itemized above. Comment: 2-page paper presented at the Conference "Advances in Computational Astrophysics: methods, tools and outcomes", to be published in the ASP Conference Series, 2012, vol. 453, R. Capuzzo-Dolcetta, M. Limongi and A. Tornambe', eds.
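
    As a rough illustration of the kind of self-gravitating-system computation underlying these topics (a minimal sketch only, not the group's actual codes), the following C++ fragment evaluates softened direct-summation gravitational accelerations; the particle data, softening length, and N-body units (G = 1) are arbitrary assumptions.

    // Minimal direct-summation N-body acceleration kernel (illustrative only).
    // Units, softening length, and particle data are arbitrary assumptions.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Body { double x, y, z, m; };

    // Softened gravitational acceleration on body i from all other bodies,
    // in N-body units (G = 1); eps avoids singularities at small separations.
    static void acceleration(const std::vector<Body>& bodies, std::size_t i,
                             double eps, double acc[3]) {
        acc[0] = acc[1] = acc[2] = 0.0;
        for (std::size_t j = 0; j < bodies.size(); ++j) {
            if (j == i) continue;
            const double dx = bodies[j].x - bodies[i].x;
            const double dy = bodies[j].y - bodies[i].y;
            const double dz = bodies[j].z - bodies[i].z;
            const double r2 = dx * dx + dy * dy + dz * dz + eps * eps;
            const double inv_r3 = 1.0 / (r2 * std::sqrt(r2));
            acc[0] += bodies[j].m * dx * inv_r3;
            acc[1] += bodies[j].m * dy * inv_r3;
            acc[2] += bodies[j].m * dz * inv_r3;
        }
    }

    int main() {
        std::vector<Body> bodies = {
            {0.0, 0.0, 0.0, 1.0},   // central massive body
            {1.0, 0.0, 0.0, 1e-3},  // light companion
        };
        double a[3];
        acceleration(bodies, 1, 1e-4, a);
        std::printf("acceleration on body 1: (%g, %g, %g)\n", a[0], a[1], a[2]);
        return 0;
    }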

    Quantum Accelerators for High-Performance Computing Systems

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, together with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration. Comment: "If you want to go quickly, go alone. If you want to go far, go together."
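
    As a purely illustrative sketch of the kind of small kernel such a framework would offload to a quantum processing unit, the following C++ program evaluates a two-qubit Bell-state circuit with a trivial host-side state-vector simulation; it has no connection to the specific framework, operating-system support, or software stack described in the paper.

    // Toy two-qubit state-vector evaluation of a Bell-state kernel
    // (illustrative only; not the framework described in the paper).
    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <utility>
    #include <vector>

    using Amp = std::complex<double>;

    // Apply a Hadamard gate to qubit q of the state vector.
    void hadamard(std::vector<Amp>& psi, int q) {
        const double s = 1.0 / std::sqrt(2.0);
        const std::size_t bit = 1u << q;
        for (std::size_t i = 0; i < psi.size(); ++i) {
            if (i & bit) continue;          // visit each |..0_q..>, |..1_q..> pair once
            const Amp a = psi[i], b = psi[i | bit];
            psi[i]       = s * (a + b);
            psi[i | bit] = s * (a - b);
        }
    }

    // Apply a CNOT with control c and target t.
    void cnot(std::vector<Amp>& psi, int c, int t) {
        const std::size_t cb = 1u << c, tb = 1u << t;
        for (std::size_t i = 0; i < psi.size(); ++i)
            if ((i & cb) && !(i & tb))      // swap the target pair when control is 1
                std::swap(psi[i], psi[i | tb]);
    }

    int main() {
        std::vector<Amp> psi(4, Amp(0.0, 0.0));
        psi[0] = 1.0;                       // start in |00>
        hadamard(psi, 0);
        cnot(psi, 0, 1);                    // Bell state (|00> + |11>)/sqrt(2)
        for (std::size_t i = 0; i < psi.size(); ++i)
            std::printf("P(|%zu>) = %.3f\n", i, std::norm(psi[i]));
        return 0;
    }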

    High fidelity imaging and high performance computing in nonlinear EIT

    We show that nonlinear EIT provides images with well-defined characteristics when smoothness of the image is used as a constraint in the reconstruction process. We use the gradient of the logarithm of resistivity as an effective measure of image smoothness, which has the advantage that resistivity and conductivity are treated with equal weight. We suggest that a measure of the fidelity of the image to the object requires the explicit definition and application of such a constraint. The algorithm is applied to the simulation of intra-ventricular haemorrhage (IVH) in a simple head model. The results indicate that a 5% increase in the blood content of the ventricles would be easily detectable with the noise performance of contemporary instrumentation. The possible implementation of the algorithm in real time via high-performance computing is discussed.
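
    As a minimal sketch of the smoothness measure described above (assuming a simple regular pixel grid rather than the finite-element head model used in practice), the following C++ fragment evaluates the summed squared finite-difference gradient of log-resistivity that a regularised reconstruction would penalise.

    // Smoothness penalty on log-resistivity over a 2D grid (illustrative only).
    // Working with log(rho) treats resistivity and conductivity symmetrically,
    // since log(1/rho) = -log(rho) gives the same squared-gradient penalty.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static double log_smoothness_penalty(const std::vector<double>& rho,
                                         int nx, int ny) {
        double penalty = 0.0;
        for (int j = 0; j < ny; ++j) {
            for (int i = 0; i < nx; ++i) {
                const double lr = std::log(rho[j * nx + i]);
                if (i + 1 < nx) {           // horizontal finite difference
                    const double d = std::log(rho[j * nx + i + 1]) - lr;
                    penalty += d * d;
                }
                if (j + 1 < ny) {           // vertical finite difference
                    const double d = std::log(rho[(j + 1) * nx + i]) - lr;
                    penalty += d * d;
                }
            }
        }
        return penalty;
    }

    int main() {
        // A 4x4 uniform image except for one low-resistivity "blood-like" pixel.
        std::vector<double> rho(16, 7.0);
        rho[5] = 1.5;
        std::printf("smoothness penalty = %g\n",
                    log_smoothness_penalty(rho, 4, 4));
        return 0;
    }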

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, in which we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increasing data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
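
    As one concrete example of the kind of transformation involved (an illustrative sketch in HLS-style C++, not necessarily using the paper's own terminology or taxonomy), the following fragment breaks the loop-carried dependency of a floating-point reduction by interleaving partial sums, allowing the loop to be pipelined with an initiation interval of one; the pragmas follow common Vivado/Vitis HLS usage, and the interleaving factor K is an assumption.

    // Interleaved floating-point reduction for HLS pipelining (illustrative only).
    #include <cstdio>

    constexpr int K = 8;  // assumed >= the latency of a floating-point add

    // Naive reduction: each iteration depends on the previous sum, so the
    // pipeline stalls for the full adder latency between iterations.
    float reduce_naive(const float* data, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i) {
            #pragma HLS PIPELINE II=1   // cannot be met due to the dependency
            sum += data[i];
        }
        return sum;
    }

    // Transformed reduction: K independent partial sums hide the adder latency;
    // the short combine loop at the end is negligible for large n.
    float reduce_interleaved(const float* data, int n) {
        float partial[K] = {0.0f};
        #pragma HLS ARRAY_PARTITION variable=partial complete
        for (int i = 0; i < n; ++i) {
            #pragma HLS PIPELINE II=1
            partial[i % K] += data[i];
        }
        float sum = 0.0f;
        for (int k = 0; k < K; ++k) sum += partial[k];
        return sum;
    }

    int main() {
        float data[32];
        for (int i = 0; i < 32; ++i) data[i] = 1.0f;
        std::printf("naive: %g  interleaved: %g\n",
                    reduce_naive(data, 32), reduce_interleaved(data, 32));
        return 0;
    }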

    High-Throughput Computing on High-Performance Platforms: A Case Study

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan---a DOE leadership facility---in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

    A Pattern Language for High-Performance Computing Resilience

    High-performance computing (HPC) systems provide powerful capabilities for modeling, simulation, and data analytics for a broad class of computational problems. They enable extreme performance, on the order of a quadrillion floating-point arithmetic calculations per second, by aggregating the power of millions of compute, memory, networking and storage components. With the rapidly growing scale and complexity of HPC systems for achieving even greater performance, ensuring their reliable operation in the face of system degradations and failures is a critical challenge. System fault events often lead scientific applications to produce incorrect results, or may even cause their untimely termination. The sheer number of components in modern extreme-scale HPC systems, and the complex interactions and dependencies among the hardware and software components, the applications, and the physical environment, make the design of practical solutions that support fault resilience a complex undertaking. To manage this complexity, we developed a methodology for designing HPC resilience solutions using design patterns. We codified the well-known techniques for handling faults, errors and failures that have been devised, applied and improved upon over the past three decades in the form of design patterns. In this paper, we present a pattern language to enable a structured approach to the development of HPC resilience solutions. The pattern language reveals the relations among the resilience patterns and provides the means to explore alternative techniques for handling a specific fault model that may have different efficiency and complexity characteristics. Using the pattern language enables the design and implementation of comprehensive resilience solutions as a set of interconnected resilience patterns that can be instantiated across layers of the system stack. Comment: Proceedings of the 22nd European Conference on Pattern Languages of Programs.
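
    As a minimal sketch of one classic technique such a pattern language codifies, checkpoint/rollback of application state, consider the following C++ fragment (illustrative only; the file format, checkpoint interval, and stand-in workload are arbitrary assumptions).

    // Minimal checkpoint/rollback pattern for an iterative workload (sketch).
    #include <cstdio>
    #include <fstream>
    #include <vector>

    // Write the iteration counter and state vector to a checkpoint file.
    static bool write_checkpoint(const char* path, int step,
                                 const std::vector<double>& state) {
        std::ofstream out(path, std::ios::binary | std::ios::trunc);
        if (!out) return false;
        const std::size_t n = state.size();
        out.write(reinterpret_cast<const char*>(&step), sizeof(step));
        out.write(reinterpret_cast<const char*>(&n), sizeof(n));
        out.write(reinterpret_cast<const char*>(state.data()),
                  static_cast<std::streamsize>(n * sizeof(double)));
        return static_cast<bool>(out);
    }

    // Restore state from the checkpoint; returns false if none exists.
    static bool read_checkpoint(const char* path, int& step,
                                std::vector<double>& state) {
        std::ifstream in(path, std::ios::binary);
        if (!in) return false;
        std::size_t n = 0;
        in.read(reinterpret_cast<char*>(&step), sizeof(step));
        in.read(reinterpret_cast<char*>(&n), sizeof(n));
        state.resize(n);
        in.read(reinterpret_cast<char*>(state.data()),
                static_cast<std::streamsize>(n * sizeof(double)));
        return static_cast<bool>(in);
    }

    int main() {
        std::vector<double> state(4, 0.0);
        int step = 0;
        // On restart, resume from the last checkpoint instead of step 0.
        read_checkpoint("demo.ckpt", step, state);
        for (; step < 10; ++step) {
            for (double& x : state) x += 1.0;   // stand-in for real work
            if (step % 5 == 0)                  // periodic checkpointing
                write_checkpoint("demo.ckpt", step + 1, state);
        }
        std::printf("final state[0] = %g\n", state[0]);
        return 0;
    }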

    High Energy Physics from High Performance Computing

    We discuss Quantum Chromodynamics calculations using the lattice regulator. The theory of the strong force is a cornerstone of the Standard Model of particle physics. We present USQCD collaboration results obtained on Argonne National Lab's Intrepid supercomputer that deepen our understanding of these fundamental theories of Nature and provide critical support to frontier particle physics experiments and phenomenology. Comment: Proceedings of invited plenary talk given at SciDAC 2009, San Diego, June 14-18, 2009, on behalf of the USQCD collaboration.
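
    As a toy illustration of the lattice-regulator idea only, the following C++ fragment performs Metropolis updates of a free scalar field on a one-dimensional periodic lattice; full lattice QCD involves SU(3) gauge links and fermion determinants and bears no resemblance to this sketch beyond the discretisation of a field theory on a lattice.

    // Metropolis sweeps for a free scalar field on a 1D periodic lattice
    // (toy sketch only; lattice size, mass, and step size are assumptions).
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Local action change when site i of phi is shifted by delta, for the
    // discretized action S = sum_i [ (phi[i+1]-phi[i])^2/2 + m^2 phi[i]^2/2 ].
    static double delta_action(const std::vector<double>& phi, int i,
                               double delta, double m2) {
        const int n = static_cast<int>(phi.size());
        const int ip = (i + 1) % n, im = (i + n - 1) % n;
        auto local = [&](double x) {
            return 0.5 * ((phi[ip] - x) * (phi[ip] - x) +
                          (x - phi[im]) * (x - phi[im])) + 0.5 * m2 * x * x;
        };
        return local(phi[i] + delta) - local(phi[i]);
    }

    int main() {
        const int n = 64;
        const double m2 = 0.5;
        std::vector<double> phi(n, 0.0);
        std::mt19937 rng(12345);
        std::uniform_real_distribution<double> step(-0.5, 0.5), unif(0.0, 1.0);

        for (int sweep = 0; sweep < 1000; ++sweep) {
            for (int i = 0; i < n; ++i) {
                const double d = step(rng);
                // Metropolis accept/reject with probability min(1, exp(-dS)).
                if (unif(rng) < std::exp(-delta_action(phi, i, d, m2)))
                    phi[i] += d;
            }
        }
        double phi2 = 0.0;
        for (double x : phi) phi2 += x * x;
        std::printf("<phi^2> estimate from final configuration: %g\n", phi2 / n);
        return 0;
    }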