Lessons Learned from a Decade of Providing Interactive, On-Demand High Performance Computing to Scientists and Engineers
For decades, the use of HPC systems was limited to those in the physical
sciences who had mastered their domain in conjunction with a deep understanding
of HPC architectures and algorithms. During these same decades, consumer
computing device advances produced tablets and smartphones that allow millions
of children to interactively develop and share code projects across the globe.
As the HPC community faces the challenges associated with guiding researchers
from disciplines using high productivity interactive tools to effective use of
HPC systems, it seems appropriate to revisit the assumptions surrounding the
necessary skills required for access to large computational systems. For over a
decade, MIT Lincoln Laboratory has been supporting interactive, on-demand high
performance computing by seamlessly integrating familiar high productivity
tools to provide users with an increased number of design turns, rapid
prototyping capability, and faster time to insight. In this paper, we discuss
the lessons learned while supporting interactive, on-demand high performance
computing from the perspectives of the users and the team supporting the users
and the system. Building on these lessons, we present an overview of current
needs and the technical solutions we are building to lower the barrier to entry
for new users from the humanities, social, and biological sciences.
Comment: 15 pages, 3 figures, First Workshop on Interactive High Performance
Computing (WIHPC) 2018, held in conjunction with ISC High Performance 2018 in
Frankfurt, Germany
System-Level Design of Energy-Proportional Many-Core Servers for Exascale Computing
Continuous advances in manufacturing technologies are enabling the development of more powerful and compact high-performance computing (HPC) servers made of many-core processing architectures.
However, the soaring demand for computing power in recent years has grown faster than semiconductor technology evolution can sustain, and has produced, as an undesirable side effect, a surge in power consumption and heat density in these new HPC servers, which results in significant performance degradation. In this keynote, I advocate completely revising current HPC
server architectures. In particular, inspired by the mammalian brain, I propose to design a disruptive three-dimensional (3D) computing
server architecture that overcomes the prevailing worst-case power and cooling provisioning paradigm for servers. This new 3D server design champions new system-level thermal modeling, which can be
used by novel proactive energy controllers for detailed heat and energy management in many-core HPC servers, thanks to micro-scale liquid cooling. Then, I will show the impact of new near-threshold
computing architectures on server design, and how we can integrate new on-chip microfluidic fuel cell networks to enable energy-scalability in future generations of many-core HPC servers
targeting Exascale computing.
Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech
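As a rough illustration of the kind of proactive control the abstract alludes to, the sketch below couples a first-order lumped thermal model of a compute die to a simple predictive controller that raises the liquid-cooling flow rate before a forecast temperature violation occurs. The single-node RC model, constants, and controller thresholds are illustrative assumptions, not the author's actual design.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative single-node RC thermal model: C*dT/dt = P - (T - T_cool)/R(flow).
// Higher coolant flow lowers the effective thermal resistance R.
struct ThermalModel {
    double C = 50.0;        // thermal capacitance [J/K] (assumed)
    double T = 45.0;        // current die temperature [C]
    double T_cool = 25.0;   // coolant inlet temperature [C]

    double resistance(double flow) const {   // [K/W], shrinks as flow grows
        return 0.30 / (0.2 + flow);          // illustrative fit
    }
    double step(double power, double flow, double dt) {
        T += dt / C * (power - (T - T_cool) / resistance(flow));
        return T;
    }
    // Predict the temperature after 'horizon' seconds under constant power/flow.
    double predict(double power, double flow, double horizon, double dt) const {
        ThermalModel copy = *this;
        for (double t = 0.0; t < horizon; t += dt) copy.step(power, flow, dt);
        return copy.T;
    }
};

int main() {
    ThermalModel die;
    double flow = 0.2;                 // normalized pump flow in [0.1, 1.0]
    const double T_max = 70.0;         // thermal limit [C]

    for (int s = 0; s < 600; ++s) {
        double power = (s < 300) ? 60.0 : 110.0;   // workload power trace [W]

        // Proactive step: if the 30 s forecast violates the limit, raise the
        // flow *before* the temperature actually gets there; otherwise relax it.
        if (die.predict(power, flow, 30.0, 1.0) > T_max - 2.0)
            flow = std::min(1.0, flow + 0.05);
        else
            flow = std::max(0.1, flow - 0.01);

        die.step(power, flow, 1.0);
        if (s % 60 == 0)
            std::printf("t=%3ds  P=%5.1fW  flow=%.2f  T=%.1fC\n", s, power, flow, die.T);
    }
}
```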
Efficient Generation of Parallel Spin-images Using Dynamic Loop Scheduling
High performance computing (HPC) systems have undergone a significant increase in
their processing capabilities. Modern HPC systems combine large numbers of
homogeneous and heterogeneous computing resources. Scalability is, therefore,
an essential aspect of scientific applications to efficiently exploit the
massive parallelism of modern HPC systems. This work introduces an efficient
version of the parallel spin-image algorithm (PSIA), called EPSIA. The PSIA is
a parallel version of the spin-image algorithm (SIA). The (P)SIA is used in
various domains, such as 3D object recognition, categorization, and 3D face
recognition. EPSIA refers to the extended version of the PSIA that integrates
various well-known dynamic loop scheduling (DLS) techniques. The present work:
(1) Proposes EPSIA, a novel flexible version of PSIA; (2) Showcases the
benefits of applying DLS techniques for optimizing the performance of the PSIA;
(3) Assesses the performance of the proposed EPSIA by conducting several
scalability experiments. The performance results are promising and show that,
by using well-known DLS techniques, the EPSIA outperforms the PSIA by factors
of 1.2 and 2 on homogeneous and heterogeneous computing resources, respectively.
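As a concrete illustration of what applying DLS techniques to a loop with irregular iteration costs looks like, the sketch below contrasts static, dynamic, and guided OpenMP schedules on a toy workload with unequal per-iteration work. The workload function and chunk sizes are placeholders; the actual EPSIA code and the specific DLS techniques it integrates are not shown in the abstract.

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>
#include <omp.h>

// Toy stand-in for an irregular per-point workload (e.g. one spin-image):
// some iterations are deliberately much more expensive than others.
static double work(int i) {
    double acc = 0.0;
    for (int k = 0; k < 100 * (i % 64 + 1); ++k)
        acc += std::sin(i * 0.001 + k);
    return acc;
}

template <typename Loop>
static void time_it(const char* name, Loop loop) {
    auto t0 = std::chrono::steady_clock::now();
    double sum = loop();
    auto t1 = std::chrono::steady_clock::now();
    std::printf("%-8s sum=%.3f  %.3f s\n", name, sum,
                std::chrono::duration<double>(t1 - t0).count());
}

int main() {
    const int n = 200000;
    std::printf("OpenMP threads available: %d\n", omp_get_max_threads());

    // Static scheduling: equal-sized chunks, no runtime load balancing.
    time_it("static", [&] {
        double s = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:s)
        for (int i = 0; i < n; ++i) s += work(i);
        return s;
    });

    // Dynamic self-scheduling: idle threads grab the next chunk of 256 iterations.
    time_it("dynamic", [&] {
        double s = 0.0;
        #pragma omp parallel for schedule(dynamic, 256) reduction(+:s)
        for (int i = 0; i < n; ++i) s += work(i);
        return s;
    });

    // Guided self-scheduling: chunk sizes shrink as the loop nears completion.
    time_it("guided", [&] {
        double s = 0.0;
        #pragma omp parallel for schedule(guided) reduction(+:s)
        for (int i = 0; i < n; ++i) s += work(i);
        return s;
    });
}
```

Which schedule wins depends on the imbalance pattern and on how heterogeneous the underlying cores are, which is precisely the trade-off the EPSIA experiments explore.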
EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud
Cloud computing has become more popular for provisioning computing resources
under the virtual machine (VM) abstraction for high performance computing (HPC)
users to run their applications. An HPC cloud is such a cloud computing
environment. One of the challenges of energy-efficient resource allocation for
VMs in an HPC cloud is the tradeoff between minimizing the total energy
consumption of physical machines (PMs) and satisfying Quality of Service (e.g.,
performance). On the one hand, cloud providers want to maximize their profit by
reducing the power cost (e.g., by using the smallest number of running PMs). On
the other hand, cloud customers (users) want the highest performance for their
applications. In this paper, we focus on the scenario in which the scheduler
has no global information about future user jobs and applications. Users
request short-term resources at fixed start times and with non-interrupted
durations. We then propose a new allocation heuristic, named Energy-aware and
Performance-per-watt oriented Best-fit (EPOBF), that uses a performance-per-watt
metric (e.g., maximum MIPS per Watt) to choose the most energy-efficient PM for
each VM. Using information from Feitelson's Parallel Workload Archive to model
HPC jobs, we compare the proposed EPOBF to state-of-the-art heuristics on
heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF
can significantly reduce total energy consumption compared with state-of-the-art
allocation heuristics.
Comment: 10 pages, in Proceedings of the International Conference on Advanced
Computing and Applications, Journal of Science and Technology, Vietnamese
Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
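The core idea, choosing for each VM the PM that maximizes performance per watt, can be sketched as a simple best-fit loop. The PM/VM fields, the MIPS-per-Watt rating, and the capacity check below are illustrative assumptions; the paper's actual EPOBF heuristic and simulation setup are more detailed.

```cpp
#include <cstdio>
#include <vector>

struct PM {                    // physical machine (illustrative fields)
    double mips_capacity;      // total MIPS the PM can deliver
    double mips_used;          // MIPS already allocated to VMs
    double max_power_watt;     // power draw at full load
    double perf_per_watt() const { return mips_capacity / max_power_watt; }
};

struct VM { double mips_demand; };   // requested MIPS for the VM's duration

// Energy-aware best-fit: among PMs that can still host the VM, pick the one
// with the highest MIPS-per-Watt rating. Returns the PM index or -1.
int epobf_place(std::vector<PM>& pms, const VM& vm) {
    int best = -1;
    for (int i = 0; i < static_cast<int>(pms.size()); ++i) {
        if (pms[i].mips_used + vm.mips_demand > pms[i].mips_capacity) continue;
        if (best < 0 || pms[i].perf_per_watt() > pms[best].perf_per_watt())
            best = i;
    }
    if (best >= 0) pms[best].mips_used += vm.mips_demand;
    return best;
}

int main() {
    // Heterogeneous PMs: newer multicore machines tend to have better MIPS/Watt.
    std::vector<PM> pms = {{8000.0, 0.0, 400.0},
                           {12000.0, 0.0, 450.0},
                           {6000.0, 0.0, 350.0}};
    std::vector<VM> vms = {{3000.0}, {5000.0}, {4000.0}, {2500.0}};

    for (std::size_t j = 0; j < vms.size(); ++j) {
        int pm = epobf_place(pms, vms[j]);
        std::printf("VM %zu (%.0f MIPS) -> PM %d\n", j, vms[j].mips_demand, pm);
    }
}
```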
Study of Raspberry Pi 2 Quad-core Cortex A7 CPU Cluster as a Mini Supercomputer
High performance computing (HPC) devices are no longer exclusive to academic,
R&D, or military purposes. The use of HPC devices such as supercomputers is now
growing rapidly as new areas arise, such as big data and computer simulation,
making the use of supercomputers more inclusive. Today's supercomputers have
huge computing power, but require an enormous amount of energy to operate. In
contrast, a single board computer (SBC) such as the Raspberry Pi has minimal
computing power, but requires a small amount of energy to operate, and as a
bonus it is small and cheap. This paper covers the results of clustering many
Raspberry Pi 2 SBCs, each a quad-core Cortex-A7 at 900 MHz, to compensate for
their limited computing power. The High Performance Linpack (HPL) benchmark is
used to measure the computing power, and a power meter with a resolution of
10 mV / 10 mA is used to measure the power consumption. The experiment shows
that increasing the number of cores used in each SBC member of the cluster does
not give a significant increase in computing power. The experiment suggests
that four nodes is the maximum number of nodes for an SBC cluster, based on its
computing performance and power consumption characteristics.
Comment: Pre-print of a conference paper at the International Conference on
Information Technology and Electrical Engineering
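As a reminder of how such HPL runs translate into the performance-per-watt figures behind the four-node recommendation, the snippet below computes GFLOPS from the conventional HPL operation count (2/3·N³ + 2·N²) and divides by the measured average power. The problem size, runtime, and power values are made-up placeholders, not the paper's measurements.

```cpp
#include <cstdio>

// GFLOPS reported by HPL for a problem of size N solved in t_seconds,
// using the conventional LU operation count 2/3*N^3 + 2*N^2.
double hpl_gflops(double n, double t_seconds) {
    double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    return flops / t_seconds / 1e9;
}

int main() {
    // Placeholder values for a small SBC cluster run (not measured data).
    double n = 10000.0;        // HPL problem size
    double t = 1800.0;         // wall-clock solve time [s]
    double avg_power_w = 14.0; // average cluster power from the meter [W]

    double gflops = hpl_gflops(n, t);
    std::printf("HPL: %.2f GFLOPS, %.3f GFLOPS/W\n", gflops, gflops / avg_power_w);
}
```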
The HPCG benchmark: analysis, shared memory preliminary improvements and evaluation on an Arm-based platform
The High-Performance Conjugate Gradient (HPCG) benchmark complements the LINPACK benchmark in the performance evaluation coverage of large High-Performance Computing (HPC) systems. Due to its lower arithmetic intensity and higher memory pressure, HPCG is recognized as a more representative benchmark for data-center and irregular memory access pattern workloads; therefore its popularity and acceptance are rising within the HPC community. As only a small fraction of the reference version of the HPCG benchmark is parallelized with shared memory techniques (OpenMP), we introduce in this report two OpenMP parallelization methods. Due to the increasing importance of the Arm architecture in the HPC scenario, we evaluate our HPCG code at scale on a state-of-the-art HPC system based on the Cavium ThunderX2 SoC. We consider our work a contribution to the Arm ecosystem: along with this technical report, we plan to release our code to boost the tuning of the HPCG benchmark within the Arm community.
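The report's OpenMP methods themselves are not reproduced in the abstract, but the dominant kernel they target, a sparse matrix-vector product inside the conjugate gradient iteration, gives a feel for the memory-bound, irregular-access behavior described above. Below is a generic OpenMP-parallel CSR SpMV sketch, not the HPCG reference implementation or the report's two methods.

```cpp
#include <cstdio>
#include <vector>

// Sparse matrix in compressed sparse row (CSR) form.
struct CSR {
    std::vector<int> row_ptr;   // size nrows+1
    std::vector<int> col_idx;   // column index of each stored value
    std::vector<double> val;    // stored values
};

// y = A*x, parallelized over rows: each row is independent, but the indirect
// access x[col_idx[k]] is what gives the kernel its low arithmetic intensity.
void spmv(const CSR& a, const std::vector<double>& x, std::vector<double>& y) {
    const int nrows = static_cast<int>(a.row_ptr.size()) - 1;
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < nrows; ++i) {
        double sum = 0.0;
        for (int k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k)
            sum += a.val[k] * x[a.col_idx[k]];
        y[i] = sum;
    }
}

int main() {
    // Tiny 3x3 example: a tridiagonal matrix with 2 on the diagonal, -1 off it.
    CSR a;
    a.row_ptr = {0, 2, 5, 7};
    a.col_idx = {0, 1, 0, 1, 2, 1, 2};
    a.val     = {2, -1, -1, 2, -1, -1, 2};
    std::vector<double> x = {1.0, 1.0, 1.0}, y(3, 0.0);

    spmv(a, x, y);
    std::printf("y = [%.1f, %.1f, %.1f]\n", y[0], y[1], y[2]);
}
```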
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance of an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper presents a
survey and taxonomy of efforts in HPC cloud and a vision of what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant due to the fast-increasing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR)
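To make the hybrid cost-benefit argument concrete, here is a toy model that compares a pure on-premise setup with a hybrid one in which steady load runs on-premise and only peak demand bursts to pay-as-you-go cloud nodes. All prices, node counts, and the flat-rate assumptions are invented for illustration and carry none of the survey's data.

```cpp
#include <cstdio>

// Toy annual cost model (all figures are invented placeholders).
struct Params {
    double onprem_node_year = 4000.0;  // amortized cost per on-premise node-year
    double cloud_node_hour  = 1.50;    // pay-as-you-go price per cloud node-hour
    double steady_nodes     = 100.0;   // nodes busy all year round
    double peak_extra_nodes = 60.0;    // additional nodes needed only at peak
    double peak_hours       = 500.0;   // hours per year spent at peak demand
};

// Pure on-premise: provision for the peak even though it is rarely reached.
double cost_onprem_only(const Params& p) {
    return (p.steady_nodes + p.peak_extra_nodes) * p.onprem_node_year;
}

// Hybrid: own the steady part, burst the peak to the cloud when needed.
double cost_hybrid(const Params& p) {
    return p.steady_nodes * p.onprem_node_year +
           p.peak_extra_nodes * p.peak_hours * p.cloud_node_hour;
}

int main() {
    Params p;
    std::printf("on-premise only: $%.0f / year\n", cost_onprem_only(p));
    std::printf("hybrid bursting: $%.0f / year\n", cost_hybrid(p));
    std::printf("hybrid wins while peak_hours < %.0f h/year\n",
                p.onprem_node_year / p.cloud_node_hour);
}
```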
Hierarchical Parallelisation of Functional Renormalisation Group Calculations -- hp-fRG
The functional renormalisation group (fRG) has evolved into a versatile tool
in condensed matter theory for studying important aspects of correlated
electron systems. Practical applications of the method often involve a high
numerical effort, motivating the question of how far High Performance Computing
(HPC) can be leveraged for the approach. In this work we report on a multi-level
parallelisation of the underlying computational machinery and show that this
can speed up the code by several orders of magnitude. This in turn can extend
the applicability of the method to otherwise inaccessible cases. We exploit
three levels of parallelisation: Distributed computing by means of Message
Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means
of SIMD units (single-instruction-multiple-data). Results are provided for two
distinct High Performance Computing (HPC) platforms, namely the IBM-based
BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster.
We discuss how certain issues and obstacles were overcome in the course of
adapting the code. Most importantly, we conclude that this vast improvement can
actually be accomplished by introducing only moderate changes to the code, such
that this strategy may serve as a guideline for other researchers to likewise
improve the efficiency of their codes.
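To illustrate the three parallelisation levels listed above in one place, the fragment below distributes an outer loop over MPI ranks, threads the middle loop with OpenMP, and asks the compiler to vectorise the innermost loop with `omp simd`. It is a generic pattern sketch, not the hp-fRG code, and the loop body is a placeholder for the actual vertex-function computations.

```cpp
#include <cmath>
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                       // level 1: distributed memory
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_outer = 256, n_mid = 128, n_inner = 1024;
    double local = 0.0;

    // Level 1 (MPI): split the outer frequency/momentum loop across ranks.
    for (int i = rank; i < n_outer; i += size) {
        // Level 2 (OpenMP): threads share the middle loop within one node.
        #pragma omp parallel for reduction(+:local)
        for (int j = 0; j < n_mid; ++j) {
            double acc = 0.0;
            // Level 3 (SIMD): vectorise the innermost, data-parallel loop.
            #pragma omp simd reduction(+:acc)
            for (int k = 0; k < n_inner; ++k)
                acc += std::cos(0.001 * i) * std::sin(0.002 * j + 0.003 * k);
            local += acc;                         // placeholder for vertex update
        }
    }

    // Combine the per-rank partial results on rank 0.
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("result = %.6f\n", global);

    MPI_Finalize();
}
```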
