
    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods for designing distributed applications that use high-speed networks to improve scalability, performance, and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces. The main focus of this research is interactive, collaborative visualization of large datasets. This work describes how a visualization application can use distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, and collaborative visualization, and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also investigates remote data access systems. Because wide-area networks may have high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, which also supports high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to determine which can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching experiment illustrates the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution, and deployment scenarios.
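
    A sketch can make the latency-hiding idea above concrete: instead of issuing one blocking remote read at a time, a pipelined client keeps several requests in flight so that network transfer overlaps with local processing. This is a minimal Python sketch, not the system described in the abstract; fetch_block() is a hypothetical stand-in for a real remote read.

        # Sketch of pipelined remote reads that hide WAN latency by
        # keeping several requests in flight at once.
        from concurrent.futures import ThreadPoolExecutor

        BLOCK = 4 * 1024 * 1024   # 4 MiB per request
        DEPTH = 8                 # worker threads, i.e. requests in flight

        def fetch_block(offset, size):
            # Hypothetical placeholder for a real remote read.
            return b"\0" * size

        def pipelined_read(total_size, process):
            offsets = range(0, total_size, BLOCK)
            with ThreadPoolExecutor(max_workers=DEPTH) as pool:
                # Up to DEPTH transfers proceed concurrently, so network
                # latency is overlapped with processing of earlier blocks.
                for data in pool.map(lambda o: fetch_block(o, BLOCK), offsets):
                    process(data)

        pipelined_read(64 * BLOCK, process=lambda data: None)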

    An innovative collaborative high-performance platform for simulation

    This paper presents an innovative collaborative visualization platform for simulation-based design applications. After outlining the scope and main objectives, it explains the general architecture, which is based on standard internet technologies. Following a multi-domain approach, several demonstrators are involved, crossing the interests of industrial and academic communities. In the field of process engineering, we adapt and deploy a web-based architecture research application on the targeted platform.

    Real-time visualization of parallel simulations in CERN material design

    This work presents the implementation of an in situ visualization module for the multiscale-multiphysics simulation code FEMOCS and demonstrates its behavior in a simulation of vacuum breakdown. The visualization module makes it possible to observe the course of a FEMOCS simulation in real time, and makes it more straightforward to set up a new simulation or to develop additional features for the code. The first and second chapters briefly introduce the vacuum breakdown phenomenon and describe general aspects of numerical simulations. The third chapter describes the in situ method as a way of improving FEMOCS. The fourth and fifth chapters present the final solution and its impact on the overall running time of the simulation.
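
    A common implementation pattern for such an in situ module is a hook that the solver invokes every timestep and that renders or exports the current state every Nth step, avoiding full snapshots on disk. The following minimal Python sketch illustrates the pattern; it is not FEMOCS's actual interface, and the solver loop is a placeholder.

        # Sketch of an in situ hook: the solver calls it each step and
        # it visualizes every Nth state without writing snapshot files.
        class InSituHook:
            def __init__(self, every=10, render=print):
                self.every = every    # visualize every Nth timestep
                self.render = render  # hypothetical rendering callback

            def __call__(self, step, state):
                if step % self.every == 0:
                    self.render(f"step {step}: state = {state}")

        def run_simulation(steps, hook):
            state = 0.0
            for step in range(steps):
                state += 1.0          # placeholder for one solver step
                hook(step, state)     # in situ observation point

        run_simulation(100, InSituHook(every=25))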

    GPU Accelerated Particle Visualization with Splotch

    Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are the production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in redesigning Splotch to exploit emerging HPC architectures, which are increasingly populated with GPUs. A performance model for data transfers, computations, and memory access is introduced to guide our refactoring of Splotch. A number of parallelization issues are discussed, in particular those relating to race conditions and workload balancing, toward achieving optimal performance. Our implementation was accomplished using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organisation and classification of particles. We deploy a reference simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work, including possibilities for further optimisations and the exploitation of emerging technologies.
    25 pages, 9 figures. Astronomy and Computing (2014).
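
    One of the race conditions mentioned above arises when many threads splat particles into the same image pixels. A standard remedy, sketched below in Python rather than CUDA, is to give each worker a private accumulation buffer and then sum the buffers in a reduction step; on a GPU the same idea appears as per-block images or atomic adds. The data and image size here are illustrative only.

        # Sketch of race-free particle splatting via private buffers
        # plus a final reduction.
        import numpy as np

        def splat(particles, shape):
            img = np.zeros(shape)
            for x, y, brightness in particles:
                img[int(y), int(x)] += brightness  # additive splat
            return img

        def render(particles, shape, workers=4):
            chunks = np.array_split(particles, workers)
            # Each chunk writes only to its own buffer, so there are
            # no concurrent writes to shared pixels.
            partials = [splat(chunk, shape) for chunk in chunks]
            return np.sum(partials, axis=0)

        pts = np.array([[3.2, 4.7, 1.0], [3.4, 4.6, 0.5], [8.0, 1.0, 2.0]])
        image = render(pts, shape=(16, 16))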

    I-Light Symposium 2005 Proceedings

    I-Light was made possible by a special appropriation by the State of Indiana. The research described at the I-Light Symposium has been supported by numerous grants from several sources. Any opinions, findings and conclusions, or recommendations expressed in the 2005 I-Light Symposium Proceedings are those of the researchers and authors and do not necessarily reflect the views of the granting agencies.
    Indiana University Office of the Vice President for Research and Information Technology; Purdue University Office of the Vice President for Information Technology and CI

    Immersive Visualization for Enhanced Computational Fluid Dynamics Analysis

    Modern biomedical computer simulations produce spatiotemporal results that are often viewed at a single point in time on standard 2D displays. An immersive visualization environment (IVE) with 3D stereoscopic capability can mitigate some shortcomings of 2D displays via improved depth cues and active movement, helping viewers appreciate the spatial localization of imaging data together with temporal computational fluid dynamics (CFD) results. We present a semi-automatic workflow for the import, processing, rendering, and stereoscopic visualization of high-resolution, patient-specific imaging data and CFD results in an IVE. The versatility of the workflow is highlighted with current clinical sequelae known to be influenced by adverse hemodynamics, illustrating its potential clinical utility.
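
    The stereoscopic rendering stage of such a workflow can be sketched with VTK's Python bindings. This is a generic illustration, not the paper's pipeline: a sphere stands in for the patient-specific geometry, and anaglyph stereo stands in for the active stereo typically used in an IVE.

        # Sketch of stereoscopic 3D rendering with VTK; the sphere is a
        # placeholder for patient-specific geometry with CFD results.
        import vtk

        source = vtk.vtkSphereSource()
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(source.GetOutputPort())
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)

        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        window.StereoCapableWindowOn()     # enable stereo support
        window.SetStereoTypeToAnaglyph()   # red/cyan; IVEs use active stereo
        window.StereoRenderOn()

        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()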

    Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT): Semi-Annual Progress Report


    MEVA - An interactive visualization application for validation of multifaceted meteorological data with multiple 3D devices

    To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software suited to the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern science (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work.
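
    As one concrete example of the "quasi-standard data formats" requirement, meteorological model output is commonly distributed as NetCDF. The sketch below loads such a file with xarray and applies a standard scalar visualization technique; the file name and variable name are hypothetical.

        # Sketch: open a NetCDF dataset and plot one timestep of a
        # scalar field; "forecast.nc" and "t2m" are placeholders.
        import xarray as xr
        import matplotlib.pyplot as plt

        ds = xr.open_dataset("forecast.nc")  # quasi-standard format
        field = ds["t2m"].isel(time=0)       # first timestep of a scalar
        field.plot()                         # standard 2D scalar technique
        plt.show()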