VisIVO - Integrated Tools and Services for Large-Scale Astrophysical Visualization
VisIVO is an integrated suite of tools and services specifically designed for
the Virtual Observatory. This suite constitutes a software framework for
effective visual discovery in currently available (and next-generation) very
large-scale astrophysical datasets. VisIVO consists of VisIVO Desktop, a
stand-alone application for interactive visualization on standard PCs; VisIVO
Server, a grid-enabled platform for high-performance visualization; and VisIVO
Web, a custom-designed web portal supporting services based on the VisIVO Server
functionality. The main characteristic of VisIVO is support for
high-performance, multidimensional visualization of very large-scale
astrophysical datasets. Users can obtain meaningful visualizations rapidly
while preserving full and intuitive control of the relevant visualization
parameters. This paper focuses on newly developed integrated tools in VisIVO
Server allowing intuitive visual discovery with 3D views being created from
data tables. VisIVO Server can be installed easily on any web server with a
database repository. We briefly discuss aspects of our implementation of VisIVO
Server on a computational grid and also outline the functionality of the
services offered by VisIVO Web. Finally, we conclude with a summary of our work
and pointers to future developments.
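As an illustration of the general idea, building a 3D view from a data table can be sketched in a few lines of generic Python. This is not VisIVO's actual API; the input file and column names are hypothetical:

```python
# Minimal sketch of 3D visual discovery from a tabular dataset.
# Not the VisIVO API: the CSV file and column names are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

# Load a point dataset: one row per particle, columns x, y, z, mass.
data = np.genfromtxt("particles.csv", delimiter=",", names=True)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Map one column to a visual channel (here: mass -> color).
sc = ax.scatter(data["x"], data["y"], data["z"],
                c=data["mass"], s=1, cmap="viridis")
fig.colorbar(sc, label="mass")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```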
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance from an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant due to the fast-increasing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures. Published in ACM Computing Surveys (CSUR).
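The hybrid pattern described above (steady and sensitive workloads on-premise, peak demand burst to the cloud in a pay-as-you-go manner) can be sketched as a minimal placement policy. All thresholds, costs, and names below are hypothetical, not taken from the survey:

```python
# Illustrative cloud-bursting policy: keep steady (and sensitive) work
# on-premise, burst peak demand to a pay-as-you-go cloud.
# All thresholds and per-hour costs are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    node_hours: float
    sensitive: bool  # e.g. regulated data that must stay on-premise

ON_PREM_FREE_NODE_HOURS = 128.0   # current spare on-premise capacity
CLOUD_COST_PER_NODE_HOUR = 3.0    # pay-as-you-go rate (USD)

def place(job: Job, free_node_hours: float) -> str:
    if job.sensitive or job.node_hours <= free_node_hours:
        return "on-premise"
    return "cloud"  # burst: costs job.node_hours * CLOUD_COST_PER_NODE_HOUR

print(place(Job(node_hours=64, sensitive=False), ON_PREM_FREE_NODE_HOURS))   # on-premise
print(place(Job(node_hours=512, sensitive=False), ON_PREM_FREE_NODE_HOURS))  # cloud
```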
High Performance Air Quality Simulation in the European CrossGrid Project
This paper focuses on one of the applications involved in the CrossGrid project: the STEM-II air pollution model, used to simulate the environment of the As Pontes Power Plant in A Coruña (Spain). The CrossGrid project offers us a Grid environment oriented towards computation- and data-intensive applications that need interaction with an external user. The air pollution model needs the interaction of an expert in order to make decisions about modifications to the industrial process to fulfil the European standards on emissions and air quality. The benefits of using different CrossGrid components for running the application on a Grid infrastructure are shown in this paper, and some preliminary results on the CrossGrid testbed are presented.
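The decision-support step the abstract describes, checking simulated concentrations against regulatory limits before alerting an expert, reduces to a comparison like the following sketch. The pollutant fields and limit values are placeholders, not actual STEM-II outputs or the exact EU limit values:

```python
# Sketch of the decision step: flag grid cells where simulated
# concentrations exceed a regulatory limit so an expert can react.
# Limit values and fields are placeholders, not STEM-II outputs.
import numpy as np

LIMITS_UG_M3 = {"SO2": 350.0, "NO2": 200.0}  # hypothetical hourly limits

def exceedances(field: np.ndarray, pollutant: str) -> np.ndarray:
    """Return indices of grid cells exceeding the limit for `pollutant`."""
    return np.argwhere(field > LIMITS_UG_M3[pollutant])

so2 = np.random.uniform(0, 400, size=(50, 50))  # fake hourly SO2 field
hot = exceedances(so2, "SO2")
print(f"{len(hot)} cells exceed the SO2 limit; notify the operator")
```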
A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing
Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC, as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing for scaling bioinformatics applications, as an alternative to owning large GPU-based local infrastructures. As a benchmark we use a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.
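A performance/cost evaluation of this kind can be illustrated with back-of-the-envelope arithmetic comparing an owned GPU cluster against volunteer computing; every figure below is hypothetical, not a result from the paper:

```python
# Back-of-the-envelope cost-per-simulation comparison between an owned
# GPU cluster and volunteer computing. All figures are hypothetical.

# Owned cluster: amortized hardware + power/cooling (TCO) over its lifetime.
cluster_tco_eur = 200_000.0        # purchase + 4 years power and cooling
cluster_sims_lifetime = 400_000    # docking simulations over 4 years
owned_cost_per_sim = cluster_tco_eur / cluster_sims_lifetime

# Volunteer computing: near-zero marginal cost, but slower turnaround.
volunteer_cost_per_sim = 0.02      # server/bandwidth overhead per simulation
volunteer_slowdown = 3.0           # e.g. 3x longer wall-clock time

print(f"owned:     {owned_cost_per_sim:.2f} EUR/simulation")
print(f"volunteer: {volunteer_cost_per_sim:.2f} EUR/simulation "
      f"(~{volunteer_slowdown:.0f}x slower turnaround)")
```

Under these assumed numbers, volunteer computing wins on cost whenever turnaround time is not critical, which is exactly the trade-off the abstract identifies.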
A Supportive Framework for the Development of a Digital Twin for Wind Turbines Using Open-Source Software
The world is facing a global climate crisis. Renewable energy is one of the major solutions; nevertheless, there are technological challenges. Wind power is an important part of the renewable energy system. With the digitalization of industry, smart monitoring and operation are an important step towards efficient use of resources. Thus, Digital Twins (DTs) should be applied to enhance power output.
Digital Twins for energy systems combine many fields of study, such as smart monitoring, big data technology, and advanced physical modeling. Frameworks for the structure of Digital Twins are plentiful, but there are few standardized methods based on experience with Digital Twins that have actually been developed.
An integrative review of Digital Twins is performed, with the goal of creating a conceptual development framework for DTs built with open-source software. The framework has yet to be tested experimentally but is nevertheless an important contribution toward the understanding of DT technology development.
The result of the review is a seven-step framework identifying potential components and methods needed to create a fully developed DT for the aerodynamics of a wind turbine. The suggested steps are Assessment, Create, Communicate, Aggregate, Analyze, Insight, and Act (see the sketch below). The goal is for the framework to stimulate more research on digital twins for small-scale wind power, thus making small-scale wind power more accessible and affordable.
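Read as a pipeline, the seven steps might be wired together as in the following sketch; this is only one interpretation of the framework, and the function bodies are hypothetical placeholders, not the thesis's implementation:

```python
# Skeleton of the seven-step framework (Assessment, Create, Communicate,
# Aggregate, Analyze, Insight, Act) read as a pipeline. The function
# bodies are hypothetical placeholders.

def assessment():   return {"turbine": "small-scale", "goal": "aerodynamics"}
def create(spec):   return {**spec, "model": "aerodynamic model built"}
def communicate(m): return {**m, "stream": "sensor feed attached"}
def aggregate(m):   return {**m, "data": "raw + historical data merged"}
def analyze(m):     return {**m, "result": "predicted power output"}
def insight(m):     return {**m, "finding": "yaw misalignment suspected"}
def act(m):         print("actuate / advise operator:", m["finding"])

state = assessment()
for step in (create, communicate, aggregate, analyze, insight):
    state = step(state)
act(state)
```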
Scheduling and Tuning Kernels for High-performance on Heterogeneous Processor Systems
Accelerated parallel computing techniques using devices such as GPUs and Xeon Phis (along with CPUs) offer promising solutions for extending the cutting edge of high-performance computer systems. A significant performance improvement can be achieved when suitable workloads are handled by the accelerator, while traditional CPUs handle those workloads not well suited to accelerators. A computer system combining multiple types of processors is referred to as a heterogeneous system. This dissertation addresses tuning and scheduling issues in heterogeneous systems. The first section presents work on tuning scientific workloads on three different types of processors: multi-core CPUs, the Xeon Phi massively parallel processor, and NVIDIA GPUs; common tuning methods and platform-specific tuning techniques are presented. An analysis then demonstrates the performance characteristics of the heterogeneous system on different input data. This section of the dissertation is part of the GeauxDock project, which prototyped several state-of-the-art bioinformatics algorithms and delivered a fast molecular docking program. The second section of this work studies the performance model of the GeauxDock computing kernel. Specifically, it extracts features from the input data set and the target systems, and then uses various regression models to predict the computation time. This helps explain why a certain processor is faster for certain sets of tasks, and it provides the essential information for scheduling on heterogeneous systems. In addition, this dissertation investigates a high-level task scheduling framework for heterogeneous processor systems in which the pros and cons of different heterogeneous processors can complement each other, so that higher performance can be achieved on heterogeneous computing systems. A new scheduling algorithm with four innovations is presented: Ranked Opportunistic Balancing (ROB), Multi-subject Ranking (MR), Multi-subject Relative Ranking (MRR), and Automatic Small Tasks Rearranging (ASTR). The new algorithm consistently outperforms previously proposed algorithms, with better scheduling results, lower computational complexity, and more consistent results over a range of performance prediction errors. Finally, this work extends the heterogeneous task scheduling algorithm to handle power capping. It demonstrates that a power-aware scheduler significantly improves power efficiency and saves energy. This suggests that, in addition to performance benefits, heterogeneous systems may have certain advantages in overall power efficiency.
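The coupling between the performance model and the scheduler can be illustrated generically: per-device regression models predict task runtimes, and each task goes to the device with the earliest predicted finish time. This sketch does not reproduce the dissertation's ROB, MR, MRR, or ASTR algorithms (their details are not given in the abstract); the features, timings, and model choice are hypothetical:

```python
# Generic sketch: regression-predicted runtimes drive heterogeneous
# scheduling. NOT the dissertation's ROB/MR/MRR/ASTR algorithms; the
# features, timings, and model choice are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

devices = ["cpu", "xeon_phi", "gpu"]
# Training data: task features (e.g. atoms, rotatable bonds) -> seconds,
# with one regression model per device.
X = np.array([[1000, 4], [5000, 8], [20000, 16], [80000, 32]])
y = {"cpu":      [1.0, 4.8, 21.0, 90.0],
     "xeon_phi": [2.0, 3.5, 10.0, 35.0],
     "gpu":      [3.0, 3.2,  6.0, 15.0]}
models = {d: LinearRegression().fit(X, y[d]) for d in devices}

def schedule(tasks: np.ndarray):
    """Assign each task to the device with the earliest predicted finish."""
    busy_until = {d: 0.0 for d in devices}
    for t in tasks:
        pred = {d: float(models[d].predict([t])[0]) for d in devices}
        best = min(devices, key=lambda d: busy_until[d] + pred[d])
        busy_until[best] += pred[best]
        yield best

print(list(schedule(np.array([[3000, 8], [60000, 32], [1500, 4]]))))
```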
Regional-scale fault-to-structure earthquake simulations with the EQSIM framework: Workflow maturation and computational performance on GPU-accelerated exascale platforms
Continuous advancements in the scientific and engineering understanding of earthquake phenomena, combined with the associated development of representative physics-based models, are providing a foundation for high-performance, fault-to-structure earthquake simulations. However, regional-scale applications of high-performance models have been challenged by the computational requirements at the resolutions required for engineering risk assessments. The EarthQuake SIMulation (EQSIM) framework, a software application developed under the US Department of Energy (DOE) Exascale Computing Project, is focused on overcoming the existing computational barriers and enabling routine regional-scale simulations at resolutions relevant to a breadth of engineered systems. This multidisciplinary software development, drawing upon expertise in geophysics, engineering, applied math, and computer science, is preparing the advanced computational workflow necessary to fully exploit the DOE's exaflop computer platforms coming online in the 2023 to 2024 timeframe. Achieving the computational performance required for high-resolution regional models containing upward of hundreds of billions to trillions of model grid points requires numerical efficiency in every phase of a regional simulation. This includes run-time start-up and regional model generation, effective distribution of the computational workload across thousands of computer nodes, efficient coupling of regional geophysics and local engineering models, and application-tailored, highly efficient transfer, storage, and interrogation of very large volumes of simulation data. This article summarizes the most recent advancements and refinements incorporated in the workflow design for the EQSIM integrated fault-to-structure framework, which are based on extensive numerical testing across multiple graphics processing unit (GPU)-accelerated platforms, and demonstrates the computational performance achieved on the world's first exaflop computer platform through representative regional-scale earthquake simulations for the San Francisco Bay Area in California, USA.
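To make the claim of "hundreds of billions to trillions of model grid points" concrete, a quick estimate helps; the domain dimensions and grid spacing below are illustrative, not EQSIM's actual configuration:

```python
# Illustrative grid-point count for a regional earthquake model. The
# domain size and spacing are hypothetical, not EQSIM's configuration.
length_m, width_m, depth_m = 120_000, 100_000, 30_000  # regional domain
spacing_m = 6.25                                        # grid spacing

points = (length_m / spacing_m) * (width_m / spacing_m) * (depth_m / spacing_m)
print(f"{points:.2e} grid points")  # ~1.47e+12, i.e. on the order of trillions
```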
CFD Simulation and 3D Visualization on Cultural Heritage Sites: The Castle of Mytilene
This paper presents a CFD and 3D visualization pipeline to simulate wind flow over a heritage site and then visualize the results. As a case study, a coarse 3D geometry model of the Fortress of Mytilene, Lesvos island, Greece, and its surroundings was generated from open-access Digital Elevation Models. The CFD simulation of the air flow over the wider heritage site area was performed using the steady-state version of the in-house flow solver IBOFlow (Immersed Boundary Octree Flow Solver) developed at the Fraunhofer-Chalmers Research Centre. All the simulations were run using the mean wind direction and wind speed over the last 22 years, derived from actual weather data retrieved from Open Weather Map. The visualization results were achieved through Unreal Engine, using built-in visualization tools and a tailor-made plugin to visualize the air flow over the monument and on the monument's walls. As discussed in the conclusion section, the overall process proposed in this paper can be applied for an initial assessment of the effect of environmental parameters on any heritage site and, moreover, may form the basis of a valuable assistive tool for conservators and engineers.
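Deriving a mean wind direction from 22 years of records is not a plain average of angles, since direction is circular; a vector-mean sketch follows. The record format is a hypothetical simplification, not the Open Weather Map schema:

```python
# Vector-mean wind from historical records. Wind direction is circular,
# so vector components are averaged, not the angles themselves. The
# record format is hypothetical, not the Open Weather Map schema.
import math

records = [(5.2, 350.0), (6.1, 10.0), (4.8, 355.0)]  # (speed m/s, dir deg)

u = sum(s * math.sin(math.radians(d)) for s, d in records) / len(records)
v = sum(s * math.cos(math.radians(d)) for s, d in records) / len(records)

mean_speed = math.hypot(u, v)
mean_dir = math.degrees(math.atan2(u, v)) % 360.0
print(f"mean wind: {mean_speed:.1f} m/s from {mean_dir:.0f} deg")
```

Note that a naive average of 350, 10, and 355 degrees would give roughly 238 degrees, while the vector mean correctly lands near north.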