
    Performance tuning of applications for HPC systems employing Simulated Annealing optimization

    Building fast software in an HPC environment poses significant challenges, as software used for simulation and modelling is generally complex and has many dependencies. Current approaches involve manual tuning of compilation parameters in order to minimize the run time, starting from a set of predefined defaults, but such an approach requires expert knowledge, is not scalable and can be very expensive in person-hours. In this paper we propose and develop a modular framework called POHPC that uses the Simulated Annealing meta-heuristic algorithm to automatically search for the set of library options and compilation flags that gives the best execution time for a library-application pair on a selected hardware architecture. The framework can be used on modern HPC clusters with a variety of batch scheduling systems as execution backends for the optimization runs, and it discovers optimal combinations as well as invalid sets of options and flags that result in failed builds or application crashes. We demonstrate the optimization of the FFTW library working in conjunction with the high-profile community codes GROMACS and QuantumESPRESSO, which validates the suitability of the technique.
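    The core of such a search can be illustrated with a minimal simulated-annealing loop over flag sets. This is a hedged sketch only: the flag pool and the build_and_run helper below are hypothetical placeholders, not the actual POHPC interfaces.

        # Minimal sketch of simulated annealing over compilation flags.
        # FLAG_POOL and build_and_run() are hypothetical placeholders.
        import math
        import random

        FLAG_POOL = ["-O2", "-O3", "-funroll-loops", "-ffast-math", "-march=native"]

        def build_and_run(flags):
            """Stand-in for the real build-and-benchmark step: the framework would
            build with `flags`, submit a batch job and return the measured run time,
            or None on a failed build or crash. Here a fake time is returned."""
            return 10.0 - 0.5 * len(flags) + random.random()

        def neighbour(flags):
            # Toggle one randomly chosen flag to obtain a nearby configuration.
            candidate = set(flags)
            candidate.symmetric_difference_update({random.choice(FLAG_POOL)})
            return frozenset(candidate)

        def anneal(initial, steps=200, t0=1.0, cooling=0.97):
            current = best = frozenset(initial)
            current_time = best_time = build_and_run(current)
            t = t0
            for _ in range(steps):
                cand = neighbour(current)
                cand_time = build_and_run(cand)
                if cand_time is not None:            # failed builds/crashes are rejected
                    delta = cand_time - current_time
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        current, current_time = cand, cand_time
                    if current_time < best_time:
                        best, best_time = current, current_time
                t *= cooling                         # geometric cooling schedule
            return best, best_time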

    Performance Analysis of Cloud Environments on Top of Energy-Efficient Platforms Featuring Low Power Processors

    Energy efficiency remains a prevalent concern in the development of future HPC systems. The next generations of supercomputers are thus foreseen to be developed as hybrid systems featuring traditional processors, accelerators (such as GPGPUs) and/or low-power processor architectures (ARM, Intel Atom, etc.) primarily designed for the mobile and embedded devices market. A confluence with the Cloud Computing (CC) paradigm is also anticipated, driven by economic sustainability factors. However, the performance impact of running Cloud middleware on such crossbred platforms remains to be explored, especially on low power processors. In this context, this paper brings two main contributions: (1) the design and implementation of BACH, a framework able to execute automated performance evaluations of Cloud and HPC cluster environments; (2) the concrete validation of the framework for the evaluation of the modern OpenStack Infrastructure-as-a-Service (IaaS) middleware, deployed on a cutting-edge cluster based on ultra low power, energy-efficient ARM processors. Efficiency itself is measured with synthetic HPC benchmarks - HPCC (incorporating the well-known HPL) and HPCG - and with real-world applications from the bioinformatics domain, GROMACS and ABySS. The experimental evaluation revealed an average 24% drop in performance for compute-intensive tasks and a 65.6% drop in communication capacity compared to the native environment without the IaaS solution, showing a non-negligible impact on the tested platform. To our knowledge, this is one of the first studies of this type, since deployment attempts of the OpenStack infrastructure on top of ARM platforms are in early stages and are generally performed only for demonstration purposes.
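    The relative drops reported above follow directly from raw benchmark scores. A minimal illustration, with made-up placeholder numbers rather than measurements from the paper:

        # Fractional performance loss of the virtualized (IaaS) run vs. native.
        # The example figures are placeholders, not results from the paper.
        def relative_drop(native_score, virtualized_score):
            return (native_score - virtualized_score) / native_score

        print(relative_drop(100.0, 76.0))   # 0.24  -> a 24% drop, as in the compute case
        print(relative_drop(100.0, 34.4))   # 0.656 -> a 65.6% drop, as in the communication case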

    Evaluating the HPC Performance and Energy-Efficiency of Intel and ARM-based systems with synthetic and bioinformatics workloads

    The increasing demand for High Performance Computing (HPC), paired with the higher power requirements of ever-faster systems, has led to the search for architectures that are both performant and more energy-efficient. This article compares and contrasts the performance and energy efficiency of two modern clusters, a traditional Intel Xeon one and a low-power ARM-based one, which are tested with the recently developed High Performance Conjugate Gradient (HPCG) benchmark and the ABySS, FASTA and MrBayes bioinformatics applications. We show that the ARM cluster achieves a higher Performance per Watt value and lower energy usage during the tests, but this does not offset the much faster job completion times obtained by the Intel cluster, which makes the latter more suitable for the considered workloads given the disparity in the performance results.
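    The trade-off discussed above can be made concrete with two simple metrics, performance per Watt and energy to solution. The figures below are hypothetical placeholders, not results from the article:

        # Illustrative comparison of the two metrics; all numbers are hypothetical.
        def perf_per_watt(gflops, watts):
            return gflops / watts                    # GFLOPS/W

        def energy_to_solution(watts, runtime_s):
            return watts * runtime_s                 # Joules spent on one job

        arm  = {"gflops": 10.0,  "watts": 15.0,  "runtime_s": 4000.0}
        xeon = {"gflops": 120.0, "watts": 250.0, "runtime_s": 400.0}
        for name, n in (("ARM", arm), ("Xeon", xeon)):
            print(name,
                  round(perf_per_watt(n["gflops"], n["watts"]), 2),
                  energy_to_solution(n["watts"], n["runtime_s"]))
        # In this made-up example the ARM node wins on GFLOPS/W and on energy per
        # job, yet the Xeon node finishes the same job an order of magnitude earlier.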

    Amazon Elastic Compute Cloud (EC2) versus In-House HPC Platform: A Cost Analysis

    While High Performance Computing (HPC) centers continuously evolve to provide more computing power to their users, we observe a wish for convergence between Cloud Computing (CC) and HPC platforms, with the commercial hope that CC infrastructures will eventually replace in-house facilities. If we exclude the performance point of view, where many previous studies highlight a non-negligible overhead induced by the virtualization layer at the heart of every Cloud middleware when running an HPC workload, the question of real cost-effectiveness is often left aside, with the intuition that the instances offered by Cloud providers are most probably competitive from a cost point of view. In this article, we set out to confirm or refute this intuition by analyzing what composes the Total Cost of Ownership (TCO) of an in-house HPC facility operated internally since 2007. This TCO model is then used for comparison with the cost that would have been required to run the same platform (and the same workload) on a competitive Cloud IaaS offer. Our approach to this price comparison is three-fold. First, we propose a theoretical price-performance model based on a study of the actual Cloud instances proposed by one of the major Cloud IaaS actors: Amazon Elastic Compute Cloud (EC2). Then, based on the HPC facility TCO analysis, we propose an hourly price comparison between our in-house cluster and the equivalent EC2 instances. Finally, based on experimental benchmarking on the local cluster and on the Cloud instances, we update the former theoretical price model to reflect the real system performance. The results obtained generally advocate for the acquisition of an in-house HPC facility, which counterbalances the common intuition in favor of Cloud Computing platforms, even when they are provided by the worldwide reference Cloud provider.
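    At the heart of such a comparison is an amortised hourly cost for the in-house facility set against the equivalent cloud bill. A minimal sketch follows; all monetary figures and parameters are illustrative placeholders, not the TCO data or EC2 prices from the paper.

        # Amortised hourly cost of an in-house facility vs. an equivalent cloud bill.
        # Every figure below is an illustrative placeholder.
        def in_house_hourly_cost(tco, lifetime_years, utilisation=0.8):
            """Spread the total cost of ownership over the usable hours of the platform."""
            usable_hours = lifetime_years * 365 * 24 * utilisation
            return tco / usable_hours

        def cloud_hourly_cost(instance_price_per_hour, n_instances):
            return instance_price_per_hour * n_instances

        print(round(in_house_hourly_cost(2_000_000, lifetime_years=5), 2))  # cost/hour, in-house
        print(round(cloud_hourly_cost(1.50, n_instances=50), 2))            # cost/hour, EC2-like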

    Performance Analysis of Distributed and Scalable Deep Learning

    With renewed global interest in Artificial Intelligence (AI) methods, the past decade has seen a myriad of new programming models and tools that enable better and faster Machine Learning (ML). More recently, a subset of ML known as Deep Learning (DL) has raised increased interest due to its inherent ability to tackle novel cognitive computing applications efficiently. DL allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction in an automated way, and can deliver higher predictive accuracy when trained on larger data sets. Based on Artificial Neural Networks (ANN), DL is now at the core of state-of-the-art voice recognition systems (which enable easy control over, e.g., Internet-of-Things (IoT) smart home appliances), self-driving car engines, and online recommendation systems. The ecosystem of DL frameworks is fast evolving, as are the DL architectures that are shown to perform well on specialized tasks and to exploit GPU accelerators. For this reason, frequent performance evaluation of the DL ecosystem is required, especially since the advent of novel distributed training frameworks such as Horovod, which allow scalable training across multiple computing resources. In this paper, the scalability evaluation of the reference DL frameworks (TensorFlow, Keras, MXNet, and PyTorch) is performed over up-to-date High Performance Computing (HPC) resources to compare the efficiency of different implementations across several hardware architectures (CPU and GPU). Experimental results demonstrate that the DistributedDataParallel feature of the PyTorch library appears to be the most efficient approach for distributing the training process across many devices, allowing a throughput speedup of 10.11 to be reached when training ResNet-44 on the CIFAR-10 dataset with 12 NVIDIA Tesla V100 GPUs.
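    A minimal sketch of a DistributedDataParallel training loop of the kind benchmarked above is given below; the tiny linear model and random data are stand-ins, not the paper's ResNet-44/CIFAR-10 configuration, and one process per GPU launched by a tool such as torchrun is assumed.

        # Hedged sketch of PyTorch DistributedDataParallel (one process per GPU).
        # The linear model and random batches are placeholders for ResNet-44/CIFAR-10.
        import os
        import torch
        import torch.distributed as dist
        from torch.nn.parallel import DistributedDataParallel as DDP

        def main():
            dist.init_process_group(backend="nccl")        # NCCL backend for GPU nodes
            local_rank = int(os.environ["LOCAL_RANK"])     # set by the launcher (e.g. torchrun)
            torch.cuda.set_device(local_rank)

            model = torch.nn.Linear(1024, 10).cuda(local_rank)
            ddp_model = DDP(model, device_ids=[local_rank])
            optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

            for _ in range(10):                            # toy training loop
                x = torch.randn(32, 1024, device=local_rank)
                y = torch.randint(0, 10, (32,), device=local_rank)
                loss = torch.nn.functional.cross_entropy(ddp_model(x), y)
                optimizer.zero_grad()
                loss.backward()                            # gradients are all-reduced here
                optimizer.step()

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()

    With 12 GPUs, the reported speedup of 10.11 corresponds to a scaling efficiency of roughly 10.11 / 12 ≈ 84%.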

    HPC Performance and Energy-Efficiency of Xen, KVM and VMware Hypervisors

    With growing concern over the considerable energy consumed by HPC platforms and data centers, research efforts are targeting green approaches with higher energy efficiency. In particular, virtualization is emerging as the prominent approach to mutualize the energy consumed by a single server running multiple VM instances. Yet it remains unclear whether the overhead induced by virtualization and the corresponding hypervisor middleware suits an environment as demanding as an HPC platform. In this paper, we analyze from an HPC perspective the three most widespread virtualization frameworks, namely Xen, KVM, and VMware ESXi, and compare them with a baseline environment running in native mode. We performed our experiments on the Grid’5000 platform, measuring the results of the reference HPL benchmark. Power measurements were also taken in parallel to quantify the potential energy efficiency of the virtualized environments. In general, our study offers novel incentives toward in-house HPC platforms running without any virtualization framework.