361 research outputs found

    Evaluation of low-power architectures in a scientific computing environment

    HPC (High Performance Computing) represents, together with theory and experiments, the third pillar of science. Through HPC, scientists can simulate phenomena that would otherwise be impossible to study. The need to perform larger and more accurate simulations requires HPC to improve every day, and the field is constantly looking for new computational platforms that can improve cost and power efficiency. The Mont-Blanc project is an EU-funded research project that studies new hardware and software solutions to improve the efficiency of HPC systems. The vision of the project is to leverage the fast-growing market of mobile devices to develop the next generation of supercomputers. In this work we contribute to the objectives of the Mont-Blanc project by evaluating the performance of production scientific applications on innovative low-power architectures. To do so, we describe our experience porting and evaluating state-of-the-art scientific applications on the Mont-Blanc prototype, the first HPC system built with commodity low-power embedded technology. We then extend our study to compare off-the-shelf ARMv8 platforms. We finally discuss the most impactful issues encountered during the development of the Mont-Blanc prototype system.

    Heterogeneity, High Performance Computing, Self-Organization and the Cloud

    application; blueprints; self-management; self-organisation; resource management; supply chain; big data; PaaS; SaaS; HPCaaS

    Supercomputing Frontiers

    This open access book constitutes the refereed proceedings of the 7th Asian Supercomputing Conference, SCFA 2022, which took place in Singapore in March 2022. The 8 full papers presented in this book were carefully reviewed and selected from 21 submissions. They cover a range of topics including file systems, memory hierarchy, HPC cloud platforms, container image configuration workflows, large-scale applications, and scheduling.

    Accelerating K-mer Frequency Counting with GPU and Non-Volatile Memory

    The emergence of Next Generation Sequencing (NGS) platforms has increased the throughput of genomic sequencing and in turn the amount of data that needs to be processed, requiring highly efficient computation for its analysis. In this context, modern architectures including accelerators and non-volatile memory are essential to enable the mass exploitation of these bioinformatics workloads. This paper presents a redesign of the main component of a state-of-the-art reference-free method for variant calling, SMUFIN, which has been adapted to make the most of GPUs and NVM devices. SMUFIN relies on counting the frequency of k-mers (substrings of length k) in DNA sequences, which also constitutes a well-known problem for many bioinformatics workloads, such as genome assembly. We propose techniques to improve the efficiency of k-mer counting and to scale up workloads like SMUFIN that used to require 16 nodes of Marenostrum 3 to a single machine with a GPU and NVM drives. Results show that although the single machine is not able to improve the time to solution of 16 nodes, its CPU time is 7.5x shorter than the aggregate CPU time of the 16 nodes, with a reduction in energy consumption of 5.5x.
    This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639595). It is also partially supported by the Ministry of Economy of Spain under contract TIN2015-65316-P and Generalitat de Catalunya under contract 2014SGR1051, by the ICREA Academia program, and by the BSC-CNS Severo Ochoa program (SEV-2015-0493). We are also grateful to SanDisk for lending the FusionIO cards and to Nvidia, who donated the Tesla K40c.
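The k-mer counting problem at the heart of SMUFIN can be illustrated with a minimal sketch (the function name and the toy sequence below are illustrative, not taken from SMUFIN, which works at a much larger scale on GPUs and NVM):

```python
from collections import Counter

def count_kmers(sequence, k):
    """Count the frequency of each length-k substring (k-mer) in a DNA sequence."""
    # A sequence of length n contains n - k + 1 overlapping k-mers.
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# "GATTACA" with k = 3 yields the k-mers GAT, ATT, TTA, TAC, ACA.
counts = count_kmers("GATTACA", 3)
```

Production tools distribute this counting across threads and devices, but the underlying operation is this same sliding-window tally.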

    ASETS: A SDN Empowered Task Scheduling System for HPCaaS on the Cloud

    With increasing demands for High Performance Computing (HPC), new ideas and methods have emerged to utilize computing resources more efficiently. Cloud Computing appears to provide benefits such as resource pooling, broad network access and cost efficiency for HPC applications. However, moving HPC applications to the cloud faces several key challenges, primarily virtualization overhead, multi-tenancy and network latency. Software-Defined Networking (SDN), an emerging technology, appears to pave the road by providing dynamic manipulation of cloud networking, such as topology, routing, and bandwidth allocation. This paper presents a new scheme called ASETS, which targets dynamic configuration and monitoring of cloud networking using SDN to improve the performance of HPC applications, and in particular task scheduling for HPC as a service on the cloud (HPCaaS). Further, SETSA (SDN-Empowered Task Scheduler Algorithm) is proposed as a novel task scheduling algorithm for the offered ASETS architecture. SETSA monitors the network bandwidth to take advantage of its changes when submitting tasks to the virtual machines. Empirical analysis of the algorithm in different case scenarios shows that SETSA has significant potential to improve the performance of HPCaaS platforms by increasing bandwidth efficiency and decreasing task turnaround time. In addition, SETSAW (SETSA Window) is proposed as an improvement of the SETSA algorithm.
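The general idea of bandwidth-aware task submission can be sketched as a greedy loop that always sends the next task to the virtual machine with the best currently observed bandwidth. This is only an illustration of the approach, not the actual SETSA algorithm; the function names and the crude bandwidth-halving model are assumptions:

```python
def assign_tasks(tasks, vm_bandwidth):
    """Greedy sketch of bandwidth-aware scheduling: each task goes to the VM
    with the highest currently observed bandwidth (hypothetical model, not SETSA)."""
    schedule = {}
    for task in tasks:
        # Pick the VM whose monitored link currently has the most bandwidth.
        best_vm = max(vm_bandwidth, key=vm_bandwidth.get)
        schedule[task] = best_vm
        # Crude stand-in for SDN monitoring: assume submitting a task
        # temporarily consumes part of that VM's link capacity.
        vm_bandwidth[best_vm] *= 0.5
    return schedule
```

In a real SDN setting the bandwidth values would be refreshed from the controller between submissions rather than decayed by a fixed factor.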

    The Erasmus Computing Grid – Building a Super-Computer for Free

    Today, advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for new scientific results or clinical diagnostics and treatment. At the Hogeschool Rotterdam and the Erasmus MC there is a massive need for computation power on a scale of 10,000 to 15,000 computers, equivalent to ~20 to ~30 Tflops (10¹² floating point operations per second), for a variety of work areas ranging from MRI, CT scan and microscopic image analysis to DNA sequence analysis, protein and other structural simulations and analysis. Both institutions already have 13,000 computers, i.e. ~18 Tflops of computer power, available. To make the needed computer power accessible, we started to build the Erasmus Computing Grid (ECG), which connects local computers in each institution via central management systems. The system guarantees security and privacy rules through the software used, as well as through our set-up and a NAN and ISO certification process that is under way. Similar systems already run world-wide across entire institutions, including secured environments such as government institutions and banks. Currently, the ECG has a computational power of ~5 Tflops and is one of the largest, if not the largest, desktop grids in the world. At the Hogeschool Rotterdam, all computers have meanwhile been included in the ECG. Currently, 10 departments with ~15 projects at the Erasmus MC depend on using the ECG and are preparing or have prepared their analysis programs, or are already in production. The Erasmus Computing Grid office and an advisory and control board have been set up. To sustain the ECG, further infrastructure measures now have to be taken: central hardware and specialist personnel need to be put in place for capacity, security and usability reasons for the applications at Erasmus MC.
This is also necessary with respect to NAN and ISO certification towards diagnostic and commercial ECG use, for which there is great need and potential. Beyond that, links to the Dutch BigGrid Initiative and the German MediGRID should be prepared for and realized, given the great interest in cooperation. There is also strong political interest from the government to relieve the pressure on computational needs in The Netherlands and to strengthen the Dutch position in the field of high performance computing. In both fields the ECG should be brought into a leading position by establishing the Erasmus MC as a centre of excellence for high-performance computing in the medical field, in Europe and world-wide. In conclusion, we have successfully started to build a super-computer at the Hogeschool Rotterdam and Erasmus MC, with great opportunities for scientific research, clinical diagnostics and research as well as student training. This will put both institutions in a position to play a major world-wide role in high-performance computing, will open entirely new possibilities in terms of recognition and funding, and is of major importance for The Netherlands and the EU.