41 research outputs found

    General Purpose Flow Visualization at the Exascale

    Exascale computing, i.e., supercomputers that can perform 10¹⁸ math operations per second, provides a significant opportunity for improving the computational sciences. That said, these machines can be difficult to use efficiently due to their massive parallelism, their use of accelerators, and the diversity of accelerators employed. All areas of the computational science stack need to be reconsidered to address these problems. With this dissertation, we consider flow visualization, which is critical for analyzing vector field data from simulations. We specifically consider flow visualization techniques that use particle advection, i.e., tracing particle trajectories, which presents performance and implementation challenges. The dissertation makes four primary contributions. First, it synthesizes previous work on particle advection performance and introduces a high-level analytical cost model. Second, it proposes an approach for performance portability across accelerators. Third, it studies expected speedups from using accelerators, including the importance of factors such as duration, particle count, data set, and others. Finally, it proposes an exascale-capable particle advection system that addresses diversity in many dimensions, including accelerator type, parallelism approach, analysis use case, underlying vector field, and more.
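    The particle advection at the core of these techniques amounts to numerically integrating particle positions through the vector field. The sketch below illustrates the idea with a fourth-order Runge-Kutta integrator on an analytic 2D rotational field; the field, step size, and step count are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np

def velocity(p):
    """Analytic 2D vector field (a simple rotational flow), standing in for the
    gridded simulation data a real flow visualization pipeline would interpolate."""
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h):
    """Advance one particle by a single fourth-order Runge-Kutta step of size h."""
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def advect(seed, h=0.01, steps=1000):
    """Trace one particle trajectory (a streamline of this steady field) from a seed."""
    trajectory = [np.asarray(seed, dtype=float)]
    for _ in range(steps):
        trajectory.append(rk4_step(trajectory[-1], h))
    return np.array(trajectory)

if __name__ == "__main__":
    path = advect([1.0, 0.0])
    print(path[-1])  # the rotational flow keeps the particle near radius 1
```

    In practice the velocity would be interpolated from simulation data and many particles would be advected concurrently, which is where the accelerator and parallelism questions studied in the dissertation arise.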

    Institute for Scientific Computing Research Annual Report: Fiscal Year 2004


    ISCR Annual Report: Fiscal Year 2004


    Advanced Simulation and Computing FY12-13 Implementation Plan, Volume 2, Revision 0.5


    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery can provide tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas to move towards virtual reality-based laser ablation and perform a user study in order to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction (1 Fluorescence Microscopy, 2 Introduction to Visual Processing, 3 A Short Introduction to Cross Reality, 4 Eye Tracking and Gaze-based Interaction); Part II - VR and AR for Systems Biology (5 scenery — VR/AR for Systems Biology, 6 Rendering, 7 Input Handling and Integration of External Hardware, 8 Distributed Rendering, 9 Miscellaneous Subsystems, 10 Future Development Directions); Part III - Case Studies (11 Bionic Tracking: Using Eye Tracking for Cell Tracking, 12 Towards Interactive Virtual Reality Laser Ablation, 13 Rendering the Adaptive Particle Representation, 14 sciview — Integrating scenery into ImageJ2 & Fiji); Part IV - Conclusion (15 Conclusions and Outlook); Backmatter & Appendices (A Questionnaire for VR Ablation User Study, B Full Correlations in VR Ablation Questionnaire, C Questionnaire for Bionic Tracking User Study); List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung (Declaration of Authorship).

    Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics


    Hypersweeps, Convective Clouds and Reeb Spaces

    Isosurfaces are one of the most prominent tools in scientific data visualisation. An isosurface is a surface that defines the boundary of a feature of interest in space for a given threshold. This is integral to analysing data from the physical sciences, which observe and simulate three- or four-dimensional phenomena. However, it is time-consuming and impractical to discover surfaces of interest by manually selecting different thresholds. The systematic way to discover significant isosurfaces in data is with a topological data structure called the contour tree. The contour tree encodes the connectivity and shape of each isosurface at all possible thresholds. The first part of this work has been devoted to developing algorithms that use the contour tree to discover significant features in data using high performance computing systems. Those algorithms provided a clear speedup over previous methods and were used to visualise physical plasma simulations. A major limitation of isosurfaces and contour trees is that they are only applicable when a single property is associated with data points. However, scientific data sets often take multiple properties into account. A recent breakthrough generalised isosurfaces to fiber surfaces. Fiber surfaces define the boundary of a feature where the threshold is defined in terms of multiple parameters, instead of just one. In this work we used fiber surfaces together with isosurfaces and the contour tree to create a novel application that helps atmospheric scientists visualise convective cloud formation. Using this application, they were able, for the first time, to visualise the physical properties of certain structures that trigger cloud formation. Contour trees can also be generalised to handle multiple parameters. The natural extension of the contour tree is called the Reeb space, and it comes from the pure mathematical field of fiber topology. The Reeb space is not yet fully understood mathematically, and algorithms for computing it have significant practical limitations. A key difficulty is that while the contour tree is a traditional one-dimensional data structure made up of points and lines between them, the Reeb space is far more complex. The Reeb space is made up of two-dimensional sheets, attached to each other in intricate ways. The last part of this work focuses on understanding the structure of Reeb spaces and the rules that are followed when sheets are combined. This theory builds towards developing robust combinatorial algorithms to compute and use Reeb spaces for practical data analysis.
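    The contour tree machinery this abstract relies on can be pictured with a simpler relative, the join (merge) tree, which records where superlevel-set components are born and where they merge as the threshold is lowered. The sketch below is a minimal illustration using a union-find sweep over a toy graph; the vertex names, values, and event format are illustrative assumptions, not the thesis's parallel algorithms.

```python
from collections import defaultdict

def join_tree_events(values, edges):
    """Sweep a scalar field from high to low values and report where superlevel-set
    components are born (local maxima) and where they merge (saddles).
    `values` maps vertex -> scalar; `edges` is a list of (u, v) pairs."""
    neighbours = defaultdict(set)
    for u, v in edges:
        neighbours[u].add(v)
        neighbours[v].add(u)

    parent = {}

    def find(x):
        # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    events, processed = [], set()
    for v in sorted(values, key=values.get, reverse=True):
        parent[v] = v
        roots = {find(n) for n in neighbours[v] if n in processed}
        if not roots:
            events.append(("birth", v, values[v]))   # local maximum
        elif len(roots) > 1:
            events.append(("merge", v, values[v]))   # saddle joining components
        for r in roots:
            parent[r] = v
        processed.add(v)
    return events

if __name__ == "__main__":
    vals = {"a": 5, "b": 1, "c": 4, "d": 0}
    print(join_tree_events(vals, [("a", "b"), ("b", "c"), ("c", "d")]))
    # [('birth', 'a', 5), ('birth', 'c', 4), ('merge', 'b', 1)]
```

    A full contour tree is conventionally built by combining this join tree with the symmetric split tree over sublevel sets; the algorithms described in the abstract operate on that richer structure at scale.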

    CELLmicrocosmos - Integrative cell modeling at the molecular, mesoscopic and functional level

    Sommer B. CELLmicrocosmos - Integrative cell modeling at the molecular, mesoscopic and functional level. Bielefeld: Bielefeld University; 2012.
    The modeling of cells is an important application area of Systems Biology. In the context of this work, three cytological levels are defined: the mesoscopic, the molecular and the functional level. A number of quite diverse related approaches, which can be categorized into these disciplines, are introduced in this work, but none of them covers all areas. In this work, the combination of all three aforementioned cytological levels is presented, realized by the CELLmicrocosmos project, which combines and extends different Bioinformatics-related methods. The mesoscopic level is covered by CellEditor, a simple tool to generate eukaryotic or prokaryotic cell models. These are based on cell components represented by three-dimensional shapes. Different methods to generate these shapes are discussed, partly using external tools such as Amira, 3ds Max and/or Blender: abstract, interpretative, 3D-microscopy-based and molecular-structure-based cell component modeling. To communicate with these tools, CellEditor provides import as well as export capabilities based on the VRML97 format. In addition, different cytological coloring methods are discussed which can be applied to the cell models. MembraneEditor operates at the molecular level. This tool solves heterogeneous Membrane Packing Problems by distributing lipids on rectangular areas using collision detection. It provides fast and intuitive methods supporting a wide range of application areas based on the PDB format. Moreover, a plugin interface enables the use of custom algorithms. In the context of this work, a high-density-generating lipid packing algorithm is evaluated: The Wanderer. The semi-automatic integration of proteins into the membrane is enabled by using data from the OPM and PDBTM databases. Contrasting with the aforementioned structural levels, the third level covers the functional aspects of the cell. Here, protein-related networks or data sets can be imported and mapped into the previously generated cell models using PathwayIntegration. For this purpose, data integration methods are applied, represented by the data warehouse DAWIS-M.D., which includes a number of established databases. This information is enriched by text-mining data acquired from the ANDCell database. The localization of proteins is supported by different tools such as the interactive Localization Table and the Localization Charts. The correlation of partly multi-layered cell components with protein-related networks is covered by the Network Mapping Problem. A special implementation of the ISOM layout is used for this purpose. Finally, a first approach to combine all these interrelated levels is presented: CellExplorer, which integrates CellEditor as well as PathwayIntegration and imports structures generated with MembraneEditor. For this purpose, the shape-based cell components can be correlated with networks as well as molecular membrane structures using Membrane Mapping. It is shown that the tools discussed here can be applied to scientific as well as educational tasks: educational cell visualization, initial membrane modeling for molecular simulations, analysis of interrelated protein sets, and cytological disease mapping. These are supported by the user-friendly combination of Java, Java 3D and Web Start technology.
    In the last part of this thesis, the future of Integrative Cell Modeling is discussed. While the approaches presented here are essentially three-dimensional snapshots of the cell, prospective approaches will have to be extended into the fourth dimension: time.
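    The Membrane Packing Problem mentioned above can be pictured with a toy rejection-sampling packer that scatters non-overlapping circular lipid footprints over a rectangular patch. The patch size, radii, and parameters below are illustrative assumptions; MembraneEditor's actual algorithms, and The Wanderer in particular, are considerably more sophisticated.

```python
import random

def pack_lipids(width, height, radii, attempts_per_lipid=200, seed=0):
    """Toy rejection-sampling packer: place circular lipid footprints of the given
    radii on a width x height patch so that no two overlap.  Returns (x, y, r)
    triples for every lipid that could be placed."""
    rng = random.Random(seed)
    placed = []
    for r in radii:
        for _ in range(attempts_per_lipid):
            x = rng.uniform(r, width - r)
            y = rng.uniform(r, height - r)
            # simple collision detection: reject positions overlapping a placed lipid
            if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break
    return placed

if __name__ == "__main__":
    lipids = pack_lipids(100.0, 100.0, radii=[4.0] * 50 + [2.5] * 100)
    print(f"placed {len(lipids)} of 150 lipids")
```

    A real membrane model must additionally handle lipid orientation, membrane leaflets, and protein placement, which MembraneEditor addresses with data from the OPM and PDBTM databases.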