Regional-scale fault-to-structure earthquake simulations with the EQSIM framework: Workflow maturation and computational performance on GPU-accelerated exascale platforms
Continuous advancements in scientific and engineering understanding of earthquake phenomena, combined with the development of representative physics-based models, are providing a foundation for high-performance, fault-to-structure earthquake simulations. However, regional-scale applications of high-performance models have been challenged by the computational requirements at the resolutions needed for engineering risk assessments. The EarthQuake SIMulation (EQSIM) framework, a software application developed under the US Department of Energy (DOE) Exascale Computing Project, is focused on overcoming these computational barriers and enabling routine regional-scale simulations at resolutions relevant to a breadth of engineered systems. This multidisciplinary software development, drawing upon expertise in geophysics, engineering, applied mathematics, and computer science, is preparing the advanced computational workflow necessary to fully exploit the DOE's exaflop computing platforms coming online in the 2023 to 2024 timeframe. Achieving the computational performance required for high-resolution regional models containing upward of hundreds of billions to trillions of model grid points demands numerical efficiency in every phase of a regional simulation. This includes run-time start-up and regional model generation, effective distribution of the computational workload across thousands of compute nodes, efficient coupling of regional geophysics and local engineering models, and application-tailored, highly efficient transfer, storage, and interrogation of very large volumes of simulation data.
This article summarizes the most recent advancements and refinements incorporated in the workflow design for the EQSIM integrated fault-to-structure framework, which are based on extensive numerical testing across multiple graphics processing unit (GPU)-accelerated platforms, and demonstrates the computational performance achieved on the world's first exaflop computer platform through representative regional-scale earthquake simulations for the San Francisco Bay Area in California, USA.
European White Book on Real-Time Power Hardware-in-the-Loop Testing: DERlab Report No. R-005.0
The European White Book on Real-Time Power Hardware-in-the-Loop testing is intended to serve as a reference document on the future of testing of electrical power equipment, with specific focus on the emerging hardware-in-the-loop activities and their application within testing facilities and procedures. It provides an outlook on how this powerful tool can be utilised to support the development, testing, and validation of DER equipment in particular. It reports on international experience gained thus far and provides case studies on developments and specific technical issues, such as the hardware/software interface. This white book complements the existing series of DERlab European white books, which cover topics such as grid inverters and grid-connected storage.
A DevOps approach to integration of software components in an EU research project
We describe the development and deployment infrastructure created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and integrated deployments within virtual machines is provided by Buildbot.
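The single-playbook approach described in this abstract (one set of Ansible roles provisioning local Vagrant VMs, the static testbed, and Grid5000 deployments alike) can be sketched as a minimal playbook. The role names, inventory paths, and image names below are illustrative assumptions, not the actual HARNESS configuration:

```yaml
# Hypothetical sketch: the same roles provision every target environment;
# only the inventory changes between a developer VM, the testbed, and Grid5000.
- hosts: all
  become: true
  roles:
    - docker_engine      # container runtime for component images
    - openstack_client   # tooling to interact with the OpenStack platform
    - harness_services   # the project's own service components

# The target is selected purely by inventory, e.g.:
#   ansible-playbook -i inventories/local   site.yml   # Vagrant VM
#   ansible-playbook -i inventories/testbed site.yml   # static testbed
#   ansible-playbook -i inventories/grid5000 site.yml  # ephemeral deployment
```

Keeping the provisioning logic in one place and varying only the inventory is what makes a developer's local environment a faithful replica of the shared testbed.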
International White Book on DER Protection : Review and Testing Procedures
This white book provides an insight into the issues surrounding the impact of increasing levels of DER on generator and network protection and the resulting necessary improvements in protection testing practices. Particular focus is placed on ever-increasing inverter-interfaced DER installations and the challenges of utility network integration. This white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards, and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing that can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure, and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the process of de-risking new DER protection technologies. To support the issues discussed in the white book, a comparative case study between UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with standardisation and approval mechanisms adopted by different countries.
The Locus Algorithm III: A Grid Computing system to generate catalogues of optimised pointings for Differential Photometry
This paper discusses the hardware and software components of the Grid Computing system used to implement the Locus Algorithm to identify optimum pointings for differential photometry of 61,662,376 stars and 23,799 quasars. The scale of the data, together with initial operational assessments, demanded a High Performance Computing (HPC) system to complete the data analysis. Grid computing was chosen as the optimum HPC solution available within this project. The physical and logical structure of the National Grid Computing Infrastructure informed the approach taken: a layered separation of the different project components to enable maximum flexibility and extensibility.