2,309 research outputs found

    VisIVO - Integrated Tools and Services for Large-Scale Astrophysical Visualization

    VisIVO is an integrated suite of tools and services specifically designed for the Virtual Observatory. This suite constitutes a software framework for effective visual discovery in currently available (and next-generation) very large-scale astrophysical datasets. VisIVO consists of VisIVO Desktop - a stand-alone application for interactive visualization on standard PCs, VisIVO Server - a grid-enabled platform for high-performance visualization, and VisIVO Web - a custom-designed web portal supporting services based on the VisIVO Server functionality. The main characteristic of VisIVO is support for high-performance, multidimensional visualization of very large-scale astrophysical datasets. Users can obtain meaningful visualizations rapidly while preserving full and intuitive control of the relevant visualization parameters. This paper focuses on newly developed integrated tools in VisIVO Server that allow intuitive visual discovery with 3D views created from data tables. VisIVO Server can be installed easily on any web server with a database repository. We briefly discuss aspects of our implementation of VisIVO Server on a computational grid and also outline the functionality of the services offered by VisIVO Web. Finally, we conclude with a summary of our work and pointers to future developments.
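    The table-to-3D-view workflow described above can be illustrated with a minimal sketch in Python. This is not VisIVO code or its actual API; the column names (x, y, z, mass) and the use of matplotlib are assumptions made purely to illustrate turning a data table into a 3D point view.

```python
# Minimal illustration of turning a data table into a 3D point view.
# NOT VisIVO code; column names (x, y, z, mass) are assumed for the example.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 10_000
# Stand-in for a data table loaded from a catalogue or simulation snapshot.
table = {
    "x": rng.normal(size=n),
    "y": rng.normal(size=n),
    "z": rng.normal(size=n),
    "mass": rng.lognormal(size=n),
}

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Map a scalar field (mass) to colour so the 3D view carries more than geometry.
sc = ax.scatter(table["x"], table["y"], table["z"],
                c=np.log10(table["mass"]), s=1, cmap="viridis")
fig.colorbar(sc, label="log10(mass)")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.savefig("view.png", dpi=150)
```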

    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-premise resources and peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, which range from how to extract the best performance of an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision of what we believe is ahead of us, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-increasing wave of new HPC applications coming from big data and artificial intelligence. Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR).
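    A minimal sketch of the hybrid bursting idea the survey describes, assuming invented capacity and price figures: steady demand stays on the local cluster and only the peak spills over to pay-as-you-go cloud resources. This is an illustration of the concept, not a policy or tool from the paper.

```python
# Toy hybrid-placement policy: steady load stays on-premise, peaks burst to cloud.
# All numbers (capacity, hourly price, demand pattern) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Demand:
    hour: int
    node_hours: float  # resource demand in that hour

ON_PREM_CAPACITY = 100.0          # node-hours available per hour on-premise (assumed)
CLOUD_PRICE_PER_NODE_HOUR = 0.9   # pay-as-you-go price (assumed)

def place(demands):
    """Return (hour, on_prem_used, cloud_used, cloud_cost) for each hour."""
    plan = []
    for d in demands:
        on_prem = min(d.node_hours, ON_PREM_CAPACITY)
        cloud = max(0.0, d.node_hours - ON_PREM_CAPACITY)
        plan.append((d.hour, on_prem, cloud, cloud * CLOUD_PRICE_PER_NODE_HOUR))
    return plan

if __name__ == "__main__":
    # Synthetic workload: a daily peak that exceeds local capacity for a few hours.
    workload = [Demand(h, 80.0 if h % 24 < 20 else 180.0) for h in range(48)]
    for hour, on_prem, cloud, cost in place(workload):
        if cloud:
            print(f"hour {hour}: burst {cloud:.0f} node-hours to cloud (${cost:.2f})")
```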

    High Performance Air Quality Simulation in the European CrossGrid Project

    This paper focuses on one of the applications involved in the CrossGrid project, the STEM-II air pollution model used to simulate the environment of the As Pontes Power Plant in A Coruna (Spain). The CrossGrid project offers us a Grid environment oriented towards computation- and data-intensive applications that need interaction with an external user. The air pollution model needs the interaction of an expert in order to make decisions about modifications in the industrial process to fulfil the European standard on emissions and air quality. The benefits of using different CrossGrid components for running the application on a Grid infrastructure are shown in this paper, and some preliminary results on the CrossGrid testbed are presented.
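    The interactive-steering idea, where an expert inspects the simulated air quality and adjusts the industrial process, can be sketched as a simple feedback loop. The toy decay model and the expert_adjustment callback below are assumptions made only for illustration; they are not part of STEM-II or the CrossGrid middleware.

```python
# Illustrative steering loop: run a simulation step, check a concentration limit,
# and let an external decision (a callback standing in for the human expert)
# adjust the emission rate. Model and thresholds are invented for the example.
def simulate_step(concentration, emission_rate, decay=0.2):
    """One toy time step: emissions add pollutant, dispersion removes a fraction."""
    return concentration * (1.0 - decay) + emission_rate

def expert_adjustment(concentration, limit, emission_rate):
    """Stand-in for the interactive expert: cut emissions 10% while over the limit."""
    return emission_rate * 0.9 if concentration > limit else emission_rate

def run(steps=50, limit=100.0):
    concentration, emission_rate = 0.0, 30.0
    for t in range(steps):
        concentration = simulate_step(concentration, emission_rate)
        emission_rate = expert_adjustment(concentration, limit, emission_rate)
        print(f"t={t:02d} concentration={concentration:7.2f} emission={emission_rate:6.2f}")

if __name__ == "__main__":
    run()
```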

    A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF, whose computational requirements go beyond a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.
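    A rough sketch of how a large docking campaign might be split into work units and spread across volunteer hosts of different speeds, in the spirit of BOINC-style volunteer computing. The unit size, host names, and speed figures are invented; this is not the BINDSURF code base or an actual volunteer-computing client/server API.

```python
# Sketch: split a ligand library into work units and assign them greedily to
# volunteer hosts. All sizes and speeds are illustrative assumptions.
from itertools import islice

def work_units(ligand_ids, unit_size=500):
    """Yield batches of ligand IDs; each batch is one work unit for a volunteer."""
    it = iter(ligand_ids)
    while batch := list(islice(it, unit_size)):
        yield batch

def assign(units, host_speeds):
    """Greedy assignment: always hand the next unit to the least-loaded host."""
    load = {host: 0.0 for host in host_speeds}
    assignment = {host: [] for host in host_speeds}
    for unit in units:
        host = min(load, key=lambda h: load[h])
        assignment[host].append(unit)
        load[host] += len(unit) / host_speeds[host]  # estimated seconds of work
    return assignment, load

if __name__ == "__main__":
    ligands = [f"lig{i:06d}" for i in range(10_000)]
    hosts = {"volunteer-gpu-a": 120.0, "volunteer-gpu-b": 45.0, "volunteer-cpu-c": 8.0}
    plan, load = assign(work_units(ligands), hosts)
    for host, units in plan.items():
        print(f"{host}: {len(units)} work units, ~{load[host]:.0f}s estimated")
```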

    A Supportive Framework for the Development of a Digital Twin for Wind Turbines Using Open-Source Software

    The world is facing a global climate crisis. Renewable energy is one of the major solutions; nevertheless, there are technological challenges. Wind power is an important part of the renewable energy system. With the digitalization of industry, smart monitoring and operation are an important step towards efficient use of resources, and Digital Twins (DT) should therefore be applied to enhance power output. Digital Twins for energy systems combine many fields of study, such as smart monitoring, big data technology, and advanced physical modeling. Many frameworks for the structure of Digital Twins exist, but there are few standardized methods based on experience from Digital Twins that have actually been developed. This work performs an integrative review of Digital Twins with the goal of creating a conceptual development framework for DTs built with open-source software. The framework has yet to be tested experimentally, but it is nevertheless an important contribution toward the understanding of DT technology development. The result of the review is a seven-step framework identifying potential components and methods needed to create a fully developed DT for the aerodynamics of a wind turbine. The suggested steps are Assessment, Create, Communicate, Aggregate, Analyze, Insight, and Act. The goal is for the framework to stimulate more research on digital twins for small-scale wind power, thus making small-scale wind power more accessible and affordable.
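    Only the seven step names come from the framework; as a reading aid they can be arranged as a minimal pipeline skeleton, with placeholder step bodies standing in for the actual open-source components the thesis discusses.

```python
# The seven steps named in the abstract, arranged as a minimal pipeline skeleton.
# Step contents are placeholders; only the step names come from the framework.
from typing import Any, Callable

def assessment(ctx):  ctx["scope"] = "small-scale wind turbine aerodynamics"; return ctx
def create(ctx):      ctx["model"] = "aerodynamic model (placeholder)"; return ctx
def communicate(ctx): ctx["telemetry"] = "sensor stream connected (placeholder)"; return ctx
def aggregate(ctx):   ctx["data"] = "merged sensor + model data (placeholder)"; return ctx
def analyze(ctx):     ctx["metrics"] = "power-output estimate (placeholder)"; return ctx
def insight(ctx):     ctx["finding"] = "deviation from expected output (placeholder)"; return ctx
def act(ctx):         ctx["action"] = "adjust pitch / flag maintenance (placeholder)"; return ctx

PIPELINE: list[Callable[[dict], dict]] = [
    assessment, create, communicate, aggregate, analyze, insight, act,
]

def run_twin_cycle(context: dict[str, Any]) -> dict[str, Any]:
    """Run one pass through the seven steps, threading a shared context dict."""
    for step in PIPELINE:
        context = step(context)
    return context

if __name__ == "__main__":
    print(run_twin_cycle({}))
```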

    Scheduling and Tuning Kernels for High-performance on Heterogeneous Processor Systems

    Accelerated parallel computing techniques using devices such as GPUs and Xeon Phis (along with CPUs) offer promising solutions for extending the cutting edge of high-performance computer systems. A significant performance improvement can be achieved when suitable workloads are handled by the accelerator, while traditional CPUs handle the workloads not well suited for accelerators. The combination of multiple types of processors in a single computer system is referred to as a heterogeneous system. This dissertation addresses tuning and scheduling issues in heterogeneous systems. The first section presents work on tuning scientific workloads on three different types of processors: multi-core CPU, the Xeon Phi massively parallel processor, and NVIDIA GPU; common tuning methods and platform-specific tuning techniques are presented. An analysis then demonstrates the performance characteristics of the heterogeneous system on different input data. This section of the dissertation is part of the GeauxDock project, which prototyped several state-of-the-art bioinformatics algorithms and delivered a fast molecular docking program. The second section of this work studies the performance model of the GeauxDock computing kernel. Specifically, the work extracts features from the input data set and the target systems and then uses various regression models to estimate the expected computation time. This helps explain why a certain processor is faster for certain sets of tasks, and it provides the essential information for scheduling on heterogeneous systems. In addition, this dissertation investigates a high-level task scheduling framework for heterogeneous processor systems in which the pros and cons of different processors can complement each other, so that higher performance can be achieved on heterogeneous computing systems. A new scheduling algorithm with four innovations is presented: Ranked Opportunistic Balancing (ROB), Multi-subject Ranking (MR), Multi-subject Relative Ranking (MRR), and Automatic Small Tasks Rearranging (ASTR). The new algorithm consistently outperforms previously proposed algorithms, with better scheduling results, lower computational complexity, and more consistent results over a range of performance prediction errors. Finally, this work extends the heterogeneous task scheduling algorithm to handle power capping. It demonstrates that a power-aware scheduler significantly improves power efficiency and reduces energy consumption. This suggests that, in addition to performance benefits, heterogeneous systems may have certain advantages in overall power efficiency.
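    A generic sketch of prediction-driven scheduling on a heterogeneous system, assuming synthetic data: fit a simple regression of runtime against problem size for each processor type, then assign each task to the device with the earliest predicted finish time. This illustrates the general idea only; it is not the ROB, MR, MRR, or ASTR algorithms presented in the dissertation.

```python
# Prediction-driven heterogeneous scheduling sketch. Devices, scaling factors,
# and the feature (problem size) are synthetic assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: feature = problem size; each device scales differently.
sizes = rng.uniform(1e3, 1e6, size=200)
runtimes = {
    "cpu":      1.0e-4 * sizes + rng.normal(0, 5, 200),
    "gpu":      1.5e-5 * sizes + 10 + rng.normal(0, 5, 200),  # fast, but fixed overhead
    "xeon_phi": 3.0e-5 * sizes + 6 + rng.normal(0, 5, 200),
}

# Fit a least-squares line (runtime ~ a*size + b) per device.
models = {dev: np.polyfit(sizes, t, deg=1) for dev, t in runtimes.items()}

def predict(dev, size):
    a, b = models[dev]
    return max(a * size + b, 0.0)

def schedule(task_sizes):
    """Greedy: each task goes to the device with the earliest predicted finish."""
    finish = {dev: 0.0 for dev in models}
    plan = []
    for size in task_sizes:
        dev = min(models, key=lambda d: finish[d] + predict(d, size))
        finish[dev] += predict(dev, size)
        plan.append((size, dev))
    return plan, finish

if __name__ == "__main__":
    plan, finish = schedule(rng.uniform(1e3, 1e6, size=20))
    print(finish)
```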

    CFD Simulation and 3D Visualization on Cultural Heritage sites: The Castle of Mytilene

    This paper presents a CFD and 3D visualization pipeline to simulate wind flow over a heritage site and then visualize the results. As a case study, a coarse 3D geometry model of the Fortress of Mytilene on Lesvos island, Greece, and its surroundings was generated from open-access Digital Elevation Models. The CFD simulation of the air flow over the wider heritage site area was performed using the steady-state version of the in-house flow solver IBOFlow (Immersed Boundary Octree Flow Solver) developed at the Fraunhofer-Chalmers Research Centre. All simulations used the mean wind direction and wind speed over the last 22 years, derived from actual weather data retrieved from Open Weather Map. The visualization results were achieved through Unreal Engine, using built-in visualization tools and a tailor-made plugin to visualize the air flow over the monument and on the monument's walls. As discussed in the conclusion section, the overall process proposed in this paper can be used for an initial assessment of the effect of environmental parameters on any heritage site; moreover, it may form the basis for a valuable assistive tool for conservators and engineers.
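    Deriving a long-term mean wind direction and speed from historical records, as described above, requires averaging directions as vectors rather than as raw angles (a plain arithmetic mean fails across the 0/360 boundary). The record format below is an assumption made for illustration; the paper's actual data came from Open Weather Map.

```python
# Vector-average wind speed and direction from a list of (speed, direction) records.
# The sample records are invented; real input would come from a weather API dump.
import math

records = [
    # (wind speed in m/s, direction in degrees)
    (5.2, 350.0), (6.1, 10.0), (4.8, 20.0), (7.3, 355.0), (5.9, 5.0),
]

def mean_wind(records):
    """Return the magnitude and direction of the mean wind vector."""
    u = sum(s * math.sin(math.radians(d)) for s, d in records) / len(records)
    v = sum(s * math.cos(math.radians(d)) for s, d in records) / len(records)
    speed = math.hypot(u, v)
    direction = math.degrees(math.atan2(u, v)) % 360.0
    return speed, direction

if __name__ == "__main__":
    speed, direction = mean_wind(records)
    print(f"mean wind: {speed:.2f} m/s at {direction:.0f} deg")
```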