
    The ESCAPE project: Energy-efficient Scalable Algorithms for Weather Prediction at Exascale

    In the simulation of complex multi-scale flows arising in weather and climate modelling, one of the biggest challenges is to satisfy strict service requirements in terms of time to solution and to satisfy budgetary constraints in terms of energy to solution, without compromising the accuracy and stability of the application. These simulations require algorithms that minimise the energy footprint along with the time required to produce a solution, maintain the physically required level of accuracy, are numerically stable, and are resilient in case of hardware failure. The European Centre for Medium-Range Weather Forecasts (ECMWF) led the ESCAPE (Energy-efficient Scalable Algorithms for Weather Prediction at Exascale) project, funded by Horizon 2020 (H2020) under the FET-HPC (Future and Emerging Technologies in High Performance Computing) initiative. The goal of ESCAPE was to develop a sustainable strategy to evolve weather and climate prediction models to next-generation computing technologies. The project partners incorporate the expertise of leading European regional forecasting consortia, university research, experienced high-performance computing centres, and hardware vendors. This paper presents an overview of the ESCAPE strategy: (i) identify domain-specific key algorithmic motifs in weather prediction and climate models (which we term Weather & Climate Dwarfs), (ii) categorise them in terms of computational and communication patterns while (iii) adapting them to different hardware architectures with alternative programming models, (iv) analyse the challenges in optimising, and (v) find alternative algorithms for the same scheme. The participating weather prediction models are the following: IFS (Integrated Forecasting System); ALARO, a combination of AROME (Application de la Recherche à l'Opérationnel à Meso-Echelle) and ALADIN (Aire Limitée Adaptation Dynamique Développement International); and COSMO–EULAG, a combination of COSMO (Consortium for Small-scale Modeling) and EULAG (Eulerian and semi-Lagrangian fluid solver). For many of the weather and climate dwarfs ESCAPE provides prototype implementations on different hardware architectures (mainly Intel Skylake CPUs, NVIDIA GPUs, Intel Xeon Phi, Optalysys optical processor) with different programming models. The spectral transform dwarf represents a detailed example of the co-design cycle of an ESCAPE dwarf. The dwarf concept has proven to be extremely useful for the rapid prototyping of alternative algorithms and their interaction with hardware; e.g. the use of a domain-specific language (DSL). Manual adaptations have led to substantial accelerations of key algorithms in numerical weather prediction (NWP) but are not a general recipe for the performance portability of complex NWP models. Existing DSLs are found to require further evolution but are promising tools for achieving the latter. Measurements of energy and time to solution suggest that a future focus needs to be on exploiting the simultaneous use of all available resources in hybrid CPU–GPU arrangements.
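
    To make the co-design discussion concrete, the following is a minimal, illustrative sketch of the kind of FFT-based motif the spectral transform dwarf isolates; the array sizes, the crude spectral truncation, and the use of NumPy are assumptions for illustration only and do not represent the ESCAPE prototypes or the IFS spectral transform.

```python
# Illustrative sketch only: a toy FFT-based transform motif of the kind the
# spectral transform dwarf isolates. This is NOT the ESCAPE/IFS implementation;
# the field dimensions and the truncation rule are hypothetical placeholders.
import time
import numpy as np

def toy_spectral_step(fields: np.ndarray) -> np.ndarray:
    """Forward FFT along longitudes, crude spectral truncation, inverse FFT."""
    spec = np.fft.rfft(fields, axis=-1)          # grid space -> spectral space
    spec[..., spec.shape[-1] // 2:] = 0.0        # drop the upper half of the spectrum
    return np.fft.irfft(spec, n=fields.shape[-1], axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fields = rng.standard_normal((10, 640, 1280))   # levels x latitudes x longitudes (hypothetical)
    t0 = time.perf_counter()
    out = toy_spectral_step(fields)
    print(f"time to solution: {time.perf_counter() - t0:.3f} s, output shape {out.shape}")
```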

    Integrating Hybrid Off-grid Systems with Battery Storage: Key Performance Indicators

    A clear opportunity exists for the integration of Battery Energy Storage Systems (BESS) in hybrid off-grid applications, i.e., isolated grids with renewable sources (e.g. PV, wind) and small-scale diesel generators. In these applications, renewable sources have the potential to reduce the consumption of petroleum derivatives (diesel, lubricants, etc.) and to reduce Greenhouse Gas emissions. Therefore, in recent years the changes in the economics of renewables and, particularly, of PV sources have led to their integration with diesel generators in order to reduce the Operational Expenditure (OPEX) of off-grid systems. BESS offer the capability of maximising the integration of renewable energy and, consequently, of further offsetting the use of diesel-fired generating units. The purpose of this work is twofold: first, to identify the Key Performance Indicators (KPIs) for assessing the integration of hybrid off-grid systems with BESS; second, to show how these KPIs, reflecting the potential impacts of battery storage within a hybrid off-grid system, enable the assessment of the business case for BESS integration.
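
    As a rough illustration of how such indicators could be computed from metered time series, the sketch below evaluates two commonly used KPIs; the KPI definitions, the 24 h profiles, and the units are hypothetical and are not the specific KPI set proposed in this work.

```python
# Minimal sketch, assuming hourly energy time series in kWh. The two KPIs below
# (renewable penetration and diesel offset) are common illustrative formulations,
# not the specific KPI set defined in this work; the profiles are invented.
import numpy as np

def renewable_penetration(pv_kwh: np.ndarray, diesel_kwh: np.ndarray) -> float:
    """Share of the total served energy supplied by the renewable source."""
    total = pv_kwh.sum() + diesel_kwh.sum()
    return float(pv_kwh.sum() / total) if total > 0 else 0.0

def diesel_offset(diesel_baseline_kwh: np.ndarray, diesel_with_bess_kwh: np.ndarray) -> float:
    """Fractional reduction of diesel generation relative to a diesel-only baseline."""
    base = diesel_baseline_kwh.sum()
    return float(1.0 - diesel_with_bess_kwh.sum() / base) if base > 0 else 0.0

# Hypothetical 24 h generation profiles (kWh per hour)
pv     = np.array([0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0], float)
diesel = np.array([4, 4, 4, 4, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5, 4, 4], float)
print(f"renewable penetration: {renewable_penetration(pv, diesel):.1%}")
print(f"diesel offset vs. diesel-only baseline: {diesel_offset(pv + diesel, diesel):.1%}")
```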

    Optimisation of stand-alone hybrid energy systems for power and thermal loads

    Stand-alone hybrid energy systems are an attractive option for remote communities without a connection to a main power grid. However, the intermittent nature of solar and other renewable sources adversely affects the reliability with which these systems respond to load demands. Hybridisation, achieved by combining renewables with combustion-based supplementary prime movers, improves the ability to meet electric load requirements. In addition, the waste heat generated from backup Internal Combustion Engines (ICEs) or Micro Gas Turbines (MGTs) can be used to satisfy local heating and cooling loads. As a result, there is an expectation that the overall efficiency and Greenhouse Gas emissions of stand-alone systems can be significantly improved through waste heat recovery. The aim of this PhD project is to identify how incremental increases in the hardware complexity of hybridised stand-alone energy systems affect their cost, efficiency, and CO2 footprint. The research analyses a range of systems, from those designed to meet only power requirements to others satisfying power and heating (Combined Heat and Power, CHP) or power plus both heating and cooling (Combined Cooling, Heating, and Power, CCHP). The methods centre on MATLAB-based Genetic Algorithms (GAs): the modelling finds the optimal selection of hardware configurations which satisfy single- or multi-objective functions (i.e. Cost of Energy (COE), energy efficiency, and exergy efficiency), in the context of highly dynamic meteorological data (e.g. solar irradiation) and load data (i.e. electric, heating, and cooling). Results indicate that the type of supplementary prime mover (ICE or MGT) and its minimum starting threshold have insignificant effects on COE but do affect Renewable Penetration (RP), Life Cycle Emissions (LCE), CO2 emissions, and waste heat generation when the system is sized to meet the electric load only. However, the transient start-up time of the supplementary prime movers and the temporal resolution have no significant effects on sizing optimisation. The type of Power Management Strategy (Following Electric Load, FEL, versus combined Following Electric Load/Following Thermal Load, FEL/FTL) affects overall CHP efficiency and the share of thermal demand met through recovered heat for a system sized to meet electric and heating loads at a specified reliability (Loss of Power Supply Probability, LPSP), but has only marginal effects on COE. The Electric to Thermal Load Ratio (ETLR) has no effect on COE for PV/Batt/ICE configurations but strongly affects PV/Batt/MGT-based hybridised CHP systems; thermal loads that are higher than the electric loads lead to higher efficiency and a better environmental footprint. Results from this study also indicate that, for a stand-alone hybridised system operating under an FEL/FTL-type PMS, the power-only system has a lower cost than the CHP and CCHP systems, at the expense of overall energy and exergy efficiencies. Additionally, the relative magnitude of heating and cooling loads has an insignificant effect on COE for PV/Batt/ICE-based system configurations but substantially affects PV/Batt/MGT-based hybridised CCHP systems. Although there are no significant changes in the overall energy efficiency of CCHP systems in relation to variations in heating and cooling loads, systems with higher heating demand than cooling demand achieve greater environmental benefits and renewable penetration at the cost of Duty Factor. Results also reveal that the choice of objective function does not significantly affect the system optimisation.
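
    As a rough illustration of the optimisation approach (and not the thesis's MATLAB models), the sketch below runs a simplified selection-and-mutation genetic loop over three hypothetical component sizes against a toy Cost of Energy objective; all cost coefficients, bounds, and the crude energy balance are invented placeholders.

```python
# Minimal sketch of GA-style sizing against a toy Cost of Energy (COE) objective.
# The cost coefficients, component bounds, energy balance and penalty term are
# hypothetical stand-ins for the thesis's MATLAB GA and techno-economic models.
import numpy as np

rng = np.random.default_rng(1)
BOUNDS = np.array([[1.0, 50.0],     # PV size, kW
                   [1.0, 200.0],    # battery capacity, kWh
                   [1.0, 30.0]])    # ICE/MGT rating, kW
ANNUAL_LOAD_KWH = 40_000.0

def coe(x: np.ndarray) -> float:
    """Toy annualised cost per kWh served, penalising undersized systems."""
    pv_kw, batt_kwh, ice_kw = x
    annual_cost = 120.0 * pv_kw + 35.0 * batt_kwh + 90.0 * ice_kw    # currency/yr (invented)
    served = min(ANNUAL_LOAD_KWH, 1400.0 * pv_kw + 800.0 * ice_kw)   # crude energy balance
    penalty = 10.0 * max(0.0, ANNUAL_LOAD_KWH - served) / ANNUAL_LOAD_KWH
    return annual_cost / served + penalty

def optimise(pop_size=40, generations=100, mut_sigma=0.05):
    """Truncation selection plus Gaussian mutation (no crossover, for brevity)."""
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([coe(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]
        kids = parents[rng.integers(0, len(parents), pop_size)]
        kids += rng.normal(0.0, mut_sigma, kids.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(kids, BOUNDS[:, 0], BOUNDS[:, 1])
    best = pop[np.argmin([coe(ind) for ind in pop])]
    return best, coe(best)

best, best_coe = optimise()
print(f"PV {best[0]:.1f} kW, battery {best[1]:.1f} kWh, ICE {best[2]:.1f} kW, COE {best_coe:.3f}")
```

    A multi-objective variant would replace the scalar objective with, for example, a Pareto ranking over COE, energy efficiency, and exergy efficiency.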

    VIRTUE: integrating CFD ship design

    Novel ship concepts, increasing size and speed, and strong competition in the global maritime market require that a ship's hydrodynamic performance be studied at the highest level of sophistication. All hydrodynamic aspects need to be considered so as to optimise trade-offs between resistance, propulsion (and cavitation), seakeeping and manoeuvring. VIRTUE takes a holistic approach to hydrodynamic design and focuses on integrating advanced CFD tools in a software platform that can control and launch multi-objective hydrodynamic design projects. In this paper, current practice, future requirements and a potential software integration platform are presented. The necessity of parametric modelling as a means of effectively generating and efficiently varying geometry, and the added value of advanced visualisation, are discussed. An illustrative example, a container carrier investigation, is given as a test case, and the requirements and a proposed architecture for the platform are outlined.

    LHCb distributed data analysis on the computing grid

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Whereas the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid, allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

    Peer to Peer Information Retrieval: An Overview

    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.

    The reconstruction of digital holograms on a computational grid

    Digital holography is greatly extending the range of holography's applications and moving it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical development and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count leads to the possibility of larger sample volumes, while smaller-area pixels enable the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to an extent where the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing - a recent innovation in large-scale distributed processing - provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. We describe here the reconstruction of digital holograms on a trans-national computational Grid with over 10 000 nodes available at over 100 sites. A simplistic scheme of deployment was found to provide no computational advantage over a single powerful workstation. Based on these experiences we suggest an improved strategy for workflow and job execution for the replay of digital holograms on a Grid.
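
    For context, the per-slice numerical reconstruction can be sketched with the angular spectrum method as below; the pixel pitch, wavelength, propagation distance, and the random stand-in hologram are hypothetical, and the Grid workflow described in the paper is not shown.

```python
# Minimal sketch of reconstructing one depth slice of a digital in-line hologram
# with the angular spectrum method. The sensor size, pixel pitch, wavelength and
# the synthetic hologram are hypothetical; the Grid deployment is not modelled.
import numpy as np

def angular_spectrum_slice(hologram: np.ndarray, pitch: float,
                           wavelength: float, z: float) -> np.ndarray:
    """Propagate the recorded field a distance z and return the replayed intensity."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pitch)                             # spatial frequencies, 1/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)   # evanescent terms clamped
    transfer = np.exp(2j * np.pi * z * np.sqrt(arg))
    field = np.fft.ifft2(np.fft.fft2(hologram) * transfer)
    return np.abs(field) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    holo = rng.random((1024, 1024))                              # stand-in for a captured CCD frame
    intensity = angular_spectrum_slice(holo, pitch=4.65e-6, wavelength=532e-9, z=0.05)
    print(intensity.shape, float(intensity.mean()))
```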

    Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models

    Pipelined wavefront computations are a ubiquitous class of high performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement the platforms chosen are best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication, and synchronisation patterns, and as a result there exist a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high performance computing system, which has often introduced a previously unexplored architectural trait, requiring previous performance models to be rewritten and re-evaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high performance computing systems. The performance model is based on the LogGP parameterisation and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations for each application and insights into alternative wavefront application design. The models are applied to three industry-strength wavefront codes and are validated on several systems including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for the performance engineering of high performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configuration and resulting performance; (2) evaluating hardware platform issues, including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
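
    As an illustration of how a LogGP-style parameterisation can yield a reusable wavefront cost estimate, the sketch below combines a pipeline-fill term with a per-step compute-plus-communication cost; the formula and all parameter values are simplified placeholders rather than the validated model presented in the thesis.

```python
# Minimal sketch of a LogGP-style cost estimate for one pipelined wavefront sweep
# over a 2-D processor array. The fill + steady-state formula and the parameter
# values are illustrative simplifications, not the validated model in the thesis.
from dataclasses import dataclass

@dataclass
class LogGP:
    L: float   # network latency (s)
    o: float   # per-message CPU overhead (s)
    g: float   # gap between messages (s); ignored in this simplified cost
    G: float   # gap per byte (s/byte)

def wavefront_sweep_time(px: int, py: int, nsteps: int,
                         w_tile: float, msg_bytes: int, net: LogGP) -> float:
    """Pipeline fill across the processor diagonal plus nsteps of compute-and-forward."""
    t_msg = 2.0 * net.o + net.L + msg_bytes * net.G   # single message: send + wire + receive
    t_step = w_tile + 2.0 * t_msg                     # compute one tile, forward east and south
    fill = (px + py - 2) * t_step                     # time for the wave to reach the far corner
    return fill + nsteps * t_step

net = LogGP(L=5e-6, o=2e-6, g=4e-6, G=2e-10)          # hypothetical cluster parameters
print(f"predicted sweep time: {wavefront_sweep_time(16, 16, 400, 1.5e-4, 8192, net):.4f} s")
```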