Applications, tools and techniques on the road to exascale computing
This volume of the book series "Advances in Parallel Computing" contains the proceedings of ParCo2011, the 14th biennial ParCo Conference, held from 31 August to 3 September 2011 in Ghent, Belgium. In an era when physical limitations have slowed advances in the performance of single processing units, and new scientific challenges require exascale speed, parallel processing has gained momentum as a key gateway to HPC (High Performance Computing).

Historically, the ParCo conferences have focused on three main themes: Algorithms, Architectures (both hardware and software) and Applications. Nowadays, the scenery has changed from traditional multiprocessor topologies to heterogeneous manycores incorporating standard CPUs, GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays). At a higher abstraction level, these platforms are integrated in clusters, grids, and clouds. This is reflected in the papers presented at the conference and in the contributions included in these proceedings. An increasing number of new algorithms are optimized for heterogeneous platforms, and performance tuning is targeting extreme-scale computing. Heterogeneous platforms utilising the compute power and energy efficiency of GPGPUs (General Purpose GPUs) are clearly becoming mainstream HPC systems for a large number of applications in a wide spectrum of application areas. These systems excel in areas such as complex system simulation and real-time image processing and visualisation. High performance computing accelerators may well become the cornerstone of exascale computing applications such as 3-D turbulent combustion flows, nuclear energy simulations, brain research, and financial and geophysical modelling. The exploration of new architectures, programming tools and techniques was evidenced by the mini-symposia "Parallel Computing with FPGAs" and "Exascale Programming Models". The need for exascale hardware and software was also stressed in the industrial session, with contributions from Cray and the European exascale software initiative.

Our sincere appreciation goes to the keynote speakers, who gave their perspectives on the impact of parallel computing today and the road to exascale computing tomorrow. Our heartfelt thanks go to the authors for their valuable scientific contributions and to the programme committee, who reviewed the papers and provided constructive remarks. The international audience was inspired by the quality of the presentations; attendance and interaction were high, and the conference was an agora where many fruitful ideas were exchanged and explored.

We wish to express our sincere thanks to the organizers for the smooth operation of the conference. The university conference centre Het Pand offered an excellent environment, as it allowed delegates to interact informally and easily. A special word of thanks is due to the management and support staff of Het Pand for their proficient and friendly support. The organizers also put together an extensive social programme, including a reception at the medieval Town Hall of Ghent and a memorable conference dinner. These social events stimulated interaction amongst delegates and resulted in many new contacts being made. Finally, we wish to thank the many supporters who assisted in the organization and successful running of the event.

Erik D'Hollander, Ghent University, Belgium
Koen De Bosschere, Ghent University, Belgium
Gerhard R. Joubert, TU Clausthal, Germany
David Padua, University of Illinois, USA
Frans Peters, Philips Research, Netherlands
04451 Abstracts Collection -- Future Generation Grids
The Dagstuhl Seminar 04451 "Future Generation Grid" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl, from 1st to 5th November 2004. The focus of the seminar was on open problems and future challenges in the design of next-generation Grid systems. A total of 45 participants presented their current projects, research plans, and new ideas in the area of Grid technologies. Several evening sessions with vivid discussions on future trends complemented the talks. This report gives an overview of the background and the findings of the seminar.
Achieving High Speed CFD simulations: Optimization, Parallelization, and FPGA Acceleration for the unstructured DLR TAU Code
Today, large-scale parallel simulations are fundamental tools for handling complex problems. The number of processors in current computation platforms has recently increased, and it is therefore necessary to optimize application performance and to enhance the scalability of massively parallel systems. In addition, new heterogeneous architectures, which combine conventional processors with specific hardware such as FPGAs to accelerate the most time-consuming functions, are considered a strong alternative for boosting performance.
In this paper, the performance of the DLR TAU code is analyzed and optimized. The improvement of the code's efficiency is addressed through three key activities: optimization, parallelization, and hardware acceleration. First, a profiling analysis of the most time-consuming processes of the Reynolds-averaged Navier-Stokes flow solver on a three-dimensional unstructured mesh is performed. Then, the scalability of the code is studied and new partitioning algorithms are tested to identify the most suitable ones for the selected applications. Finally, a feasibility study on the application of FPGAs and GPUs to the hardware acceleration of CFD simulations is presented.
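The first activity described above, profiling to locate the most time-consuming routines before optimizing, is a generic technique worth illustrating. The following Python sketch is purely illustrative and is not the TAU code's actual tooling (which the abstract does not specify): it times hypothetical solver stages over a few iterations and reports where the wall-clock time goes. The stage names are invented for the example.

```python
import time
from collections import defaultdict

# Hypothetical solver stages; in a real CFD code these would be the
# flux computation, gradient reconstruction, linear solve, etc.
def compute_fluxes(mesh):    time.sleep(0.02)
def reconstruct_grads(mesh): time.sleep(0.01)
def linear_solve(mesh):      time.sleep(0.05)

def profile_iteration(mesh, stages, totals):
    """Run one solver iteration, accumulating wall-clock time per stage."""
    for stage in stages:
        t0 = time.perf_counter()
        stage(mesh)
        totals[stage.__name__] += time.perf_counter() - t0

if __name__ == "__main__":
    totals = defaultdict(float)
    stages = [compute_fluxes, reconstruct_grads, linear_solve]
    for _ in range(10):  # several iterations to smooth out timing noise
        profile_iteration(None, stages, totals)
    grand = sum(totals.values())
    for name, t in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:20s} {t:6.3f}s  ({100 * t / grand:4.1f}%)")
```

A breakdown like this is what tells an optimization effort which stage to attack first; production codes would use a sampling profiler rather than manual timers, but the principle is the same.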
A classification of emerging and traditional grid systems
The grid has evolved in numerous distinct phases. It started in the early ’90s as a model of metacomputing in which supercomputers share resources; subsequently, researchers added the ability to share data. This is usually referred to as the first-generation grid. By the late ’90s, researchers had outlined the framework for second-generation grids, characterized by their use of grid middleware systems to “glue” different grid technologies together. Third-generation grids originated in the early millennium when Web technology was combined with second-generation grids. As a result, the invisible grid, in which grid complexity is fully hidden through resource virtualization, started receiving attention. Subsequently, grid researchers identified the requirement for semantically rich knowledge grids, in which middleware technologies are more intelligent and autonomic. Recently, the necessity for grids to support and extend the ambient intelligence (AmI) vision has emerged. In AmI, humans are surrounded by computing technologies that are unobtrusively embedded in their surroundings.
However, the current architecture of third-generation grids doesn’t meet the requirements of next-generation grids (NGG) and the service-oriented knowledge utility (SOKU) [4]. A few years ago, a group of independent experts convened by the European Commission identified these shortcomings while defining potential European grid research priorities for 2010 and beyond. The experts envision grid systems’ information, knowledge, and processing capabilities as a set of utility services [3]. Consequently, new grid systems are emerging to materialize these visions. Here, we review emerging grids and classify them to motivate further research and help establish a solid foundation in this rapidly evolving area.
Flexible multi-layer virtual machine design for virtual laboratory in distributed systems and grids.
We propose a flexible Multi-layer Virtual Machine (MVM) design intended to improve efficiency in distributed and grid computing and to overcome known problems of traditional virtual machine architectures and of those used in distributed and grid systems. This thesis presents a novel approach to building a virtual laboratory to support e-science by adapting MVMs within distributed systems and grids, providing enhanced flexibility and reconfigurability by raising the level of abstraction. The MVM consists of three layers: an OS-level VM, queue VMs, and component VMs. Together, the MVMs provide virtualized resources, virtualized networks, and a reconfigurable-components layer for virtual laboratories. We demonstrate how our reconfigurable virtual machine allows software designers and developers to reuse parallel communication patterns. In our framework, virtual machines can be created on demand, and their applications can be distributed at the source-code level, compiled, and instantiated at runtime. (Abstract shortened by UMI.) Thesis (M.Sc.), University of Windsor (Canada), 2005. Source: Masters Abstracts International, Volume 44-03, page 1405.
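The three-layer composition the abstract describes (OS-level VM at the bottom, queue VMs in the middle, component VMs on top) can be pictured as nested containers. The Python sketch below is a purely illustrative model under that reading, not code from the thesis; every class and method name is invented.

```python
from dataclasses import dataclass, field

# Illustrative model of the three MVM layers named in the abstract.
# All identifiers are hypothetical; the thesis defines no such API.

@dataclass
class ComponentVM:
    """Top layer: a reconfigurable component, e.g. a communication pattern."""
    pattern: str  # e.g. "scatter-gather", "pipeline"

@dataclass
class QueueVM:
    """Middle layer: holds component VMs scheduled onto the base VM."""
    components: list = field(default_factory=list)

    def submit(self, component: ComponentVM) -> None:
        self.components.append(component)

@dataclass
class OSLevelVM:
    """Bottom layer: the virtualized OS resources (CPU, network)."""
    host: str
    queues: list = field(default_factory=list)

    def create_queue(self) -> QueueVM:
        q = QueueVM()
        self.queues.append(q)
        return q

# On-demand creation, mirroring the abstract's description:
vm = OSLevelVM(host="node-01")                 # hypothetical host name
queue = vm.create_queue()
queue.submit(ComponentVM(pattern="pipeline"))  # reusable communication pattern
```

The point of the layering, as the abstract argues, is that the reusable parallel patterns live at the top and can be swapped without touching the virtualized resources underneath.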
Operating System Concepts for Reconfigurable Computing: Review and Survey
One of the key future challenges for reconfigurable computing is to enable higher design productivity and an easier way to use reconfigurable computing systems for users who are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system. This article gives a historical review and a summary of ideas and key concepts for including reconfigurable computing aspects in operating systems. The article also presents an overview of published and available operating systems targeting the area of reconfigurable computing. The purpose of this article is to identify and summarize common patterns among those systems that can be seen as a de facto standard. Furthermore, open problems not covered by these already available systems are identified.
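To make the idea of OS-level abstraction for reconfigurable computing concrete, here is a small hypothetical sketch of the kind of interface such an operating system might standardize: hardware tasks placed into FPGA regions the way threads are scheduled onto cores. The API is invented for illustration and is not taken from any of the surveyed systems.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical OS-style abstraction for reconfigurable computing.
# Purely illustrative; no surveyed operating system is being quoted.

@dataclass
class HardwareTask:
    name: str
    bitstream: bytes  # partial bitstream implementing the task

class ReconfigOS:
    def __init__(self, num_regions: int):
        # Each reconfigurable region can hold one hardware task at a time.
        self.regions: list[Optional[HardwareTask]] = [None] * num_regions

    def load(self, task: HardwareTask) -> int:
        """Place a task in a free region, analogous to spawning a thread."""
        for i, slot in enumerate(self.regions):
            if slot is None:
                self.regions[i] = task  # would trigger partial reconfiguration
                return i
        raise RuntimeError("no free reconfigurable region")

    def unload(self, region: int) -> None:
        self.regions[region] = None

rcos = ReconfigOS(num_regions=2)
region = rcos.load(HardwareTask("fir_filter", bitstream=b""))  # dummy bitstream
rcos.unload(region)
```

Hiding placement and reconfiguration behind a load/unload interface is exactly the kind of standardization the article argues an operating system should enforce, so that users need not understand the underlying FPGA mechanics.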
Innovative in silico approaches to address avian flu using grid technology
The recent years have seen the emergence of diseases which have spread very quickly all around the world, either through human travel, like SARS, or animal migration, like avian flu. Among the biggest challenges raised by emerging infectious diseases, one is related to the constant mutation of the viruses, which turns them into continuously moving targets for drug and vaccine discovery. Another challenge is related to the early detection and surveillance of the diseases, as new cases can appear just about anywhere due to the globalization of exchanges and the circulation of people and animals around the earth, as recently demonstrated by the avian flu epidemics. For three years now, a collaboration of teams in Europe and Asia has been exploring innovative in silico approaches to better tackle avian flu, taking advantage of the very large computing resources available on international grid infrastructures. Grids were used to study the impact of mutations on the effectiveness of existing drugs against H5N1 and to find potentially new leads active on mutated strains. Grids also allow the integration of distributed data in a completely secured way. The paper presents how we are currently exploring the integration of the existing data sources towards a global surveillance network for molecular epidemiology.

Comment: 7 pages, submitted to Infectious Disorders - Drug Targets
Parallel computing 2011, ParCo 2011: book of abstracts
This book contains the abstracts of the presentations at the conference Parallel Computing 2011, held from 30 August to 2 September 2011 in Ghent, Belgium.