52 research outputs found
Astrophysical code migration into Exascale Era
The ExaNeSt and EuroExa H2020 EU-funded projects aim to design and develop an
exascale ready computing platform prototype based on low-energy-consumption
ARM64 cores and FPGA accelerators. We participate in the application-driven
design of the hardware solutions and prototype validation. To carry out this
work we are using, among others, Hy-Nbody, a state-of-the-art direct N-body
code. The core algorithms of Hy-Nbody have been progressively improved to fit
the exascale target platform. While waiting for the ExaNeSt prototype release,
we are performing tests and code tuning on an ARM64 SoC facility: a
SLURM-managed HPC cluster based on the 64-bit ARMv8 Cortex-A72/Cortex-A53 core
design and powered by a Mali-T864 embedded GPU. In parallel, we are porting a
kernel of Hy-Nbody to FPGA, aiming to test and compare the performance-per-watt
of our algorithms on different platforms. In this paper we describe how we
re-engineered the application and we show first results on the ARM SoC.
Comment: 4 pages, 1 figure, 1 table; proceedings of ADASS XXVIII, accepted by
the ASP Conference Series
TIPS: an integrated service provider over open archives
The author presents a service provider, a European project among Trieste, Italy (SISSA), Udine, Italy (University), London, UK (City University), Grenoble, France (IMAG), Geneva, Switzerland (CERN) and Bristol, UK (IoP).
Interoperable geographically distributed astronomical infrastructures: technical solutions
The increase of astronomical data produced by a new generation of
observational tools poses the need to distribute data and to bring computation
close to the data. To answer this need, we set up a federated data and
computing infrastructure involving an EGI-federated international cloud
facility and a set of services implementing IVOA standards and
recommendations for authentication, data sharing and resource access. In this
paper we describe the technical problems faced; specifically, we show the
design, technological and architectural solutions adopted. We depict our
overall technological solution to bring data close to computation resources.
Besides the adopted solutions, we propose some points for an open discussion
on authentication and authorization mechanisms.
Comment: 4 pages, 1 figure, submitted to Astronomical Society of the Pacific
(ASP)
Rosetta: a container-centric science platform for resource-intensive, interactive data analysis
Rosetta is a science platform for resource-intensive, interactive data analysis which runs user tasks as software containers. It is built on top of a novel architecture based on framing user tasks as microservices - independent and self-contained units - which allows it to fully support custom and user-defined software packages, libraries and environments. These include complete remote desktop and GUI applications, in addition to common analysis environments such as Jupyter Notebooks. Rosetta relies on Open Container Initiative containers, which allow for safe, effective and reproducible code execution; it can use a number of container engines and runtimes, and seamlessly supports several workload management systems, thus enabling containerized workloads on a wide range of computing resources. Although developed in the astronomy and astrophysics space, Rosetta can virtually support any science and technology domain where resource-intensive, interactive data analysis is required
Software acceleration on Xilinx FPGAs using OmpSs@FPGA ecosystem
The OmpSs@FPGA programming model allows offloading application functionality to Xilinx Field Programmable Gate Arrays (FPGAs). The OmpSs compiler splits the code (written in the C/C++ high-level language) in two parts, targeting the host and the FPGA. The first is usually compiled by the GNU Compiler Collection (GCC), while the latter is given to the Xilinx Vivado HLS tool (hereafter HLS) for high-level synthesis to VHDL and the bitstream used to program the FPGA. OmpSs@FPGA is based on compiler directives, which allow the programmer to annotate parts of the code so as to automatically exploit all the Symmetric MultiProcessor (SMP) and FPGA resources available in the execution platform.
This technical report provides both descriptive and hands-on introductions to
building application-specific FPGA systems using the high-level OmpSs@FPGA tool.
The goal is to give the reader a baseline view of the process of creating an optimized hardware design by annotating C-based code with HLS directives. We assume the reader has a working knowledge of C/C++ and familiarity with basic computer architecture concepts (e.g. speedup, parallelism, pipelining)
All the Shades of the Cloud
Cloud computing is a powerful technology that in the last decade has revolutionised computing and storage, in particular for industry and the private sector. Today, large investments are ongoing to build Cloud infrastructures at the national or international level (e.g. the European Open Science Cloud initiative). Scientists, too, are approaching commercial and private Clouds at different scales: single researchers test Clouds for small research projects, while large international collaborations are evaluating Cloud technology to collect, process, analyse, archive and curate their data. In this paper, we discuss the use of Cloud computing in Astrophysics at different scales using some examples, and we present the future trends and possibilities that the use of Cloud computing and its convergence with high-performance computing will open: from high-end data analysis to high-performance data analytics, from scientific computing to data analytics
Building an interoperable, distributed storage and authorization system
A joint project between the Canadian Astronomy Data Center (CADC) of the National Research Council Canada and the Italian Istituto Nazionale di Astrofisica - Osservatorio Astronomico di Trieste (INAF-OATs), partially funded by the EGI-Engage H2020 European project, is devoted to deploying an integrated infrastructure, based on International Virtual Observatory Alliance (IVOA) standards, to access and exploit astronomical data. Currently, CADC-CANFAR provides scientists with an access, storage and computation facility based on software libraries implementing a set of IVOA standards. The deployment of a twin infrastructure, built on essentially the same open-source software libraries, has been started at INAF-OATs. This new infrastructure now provides users with an Access Control Service and a Storage Service. The final goal of the ongoing project is to build an integrated, geographically distributed infrastructure providing complete interoperability, both in user access control and in data sharing. This paper describes the target infrastructure, the main user requirements covered, the technical choices and the implemented solutions
Performance and energy footprint assessment of FPGAs and GPUs on HPC systems using Astrophysics application
New challenges in Astronomy and Astrophysics (AA) are driving the need for a
large number of exceptionally computationally intensive simulations.
"Exascale" (and beyond) computational facilities are mandatory to address the
size of theoretical problems and the data coming from the new generation of
observational facilities in AA. Currently, the High Performance Computing
(HPC) sector is undergoing a profound phase of innovation, in which the
primary challenge to achieving "Exascale" is power consumption. The goal of
this work is to give some insights about the performance and energy footprint
of contemporary architectures for a real astrophysical application in an HPC
context. We use a state-of-the-art N-body application that we re-engineered
and optimized to fully exploit the heterogeneous underlying hardware. We
quantitatively evaluate the impact of computation on energy consumption when
running on four different platforms. Two of them represent current HPC
systems (Intel-based and equipped with NVIDIA GPUs), one is a micro-cluster
based on ARM-MPSoCs, and one is a "prototype towards Exascale" equipped with
ARM-MPSoCs tightly coupled with FPGAs. We investigate the behavior of the
different devices, finding that the high-end GPUs excel in terms of
time-to-solution while MPSoC-FPGA systems outperform GPUs in power
consumption. Our experience reveals that considering FPGAs for
computationally intensive applications seems very promising, as their
performance is improving to meet the requirements of scientific applications.
This work can be a reference for future platform development for astrophysics
applications where computationally intensive calculations are required.
Comment: 15 pages, 4 figures, 3 tables; preprint (V2) submitted to MDPI
(Special Issue: Energy-Efficient Computing on Parallel Architectures)
- …