7 research outputs found

    On the Scalability of Data Reduction Techniques in Current and Upcoming HPC Systems from an Application Perspective

    We implement and benchmark parallel I/O methods for the fully manycore-driven particle-in-cell code PIConGPU. Identifying throughput and overall I/O size as a major challenge for applications on today's and future HPC systems, we present a scaling law characterizing performance bottlenecks in state-of-the-art approaches for data reduction. Consequently, we propose, implement and verify multi-threaded data transformations for the I/O library ADIOS as a feasible way to trade underutilized host-side compute potential on heterogeneous systems for reduced I/O latency. (15 pages, 5 figures; accepted for DRBSD-1 in conjunction with ISC'17)
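    The trade-off described in this abstract can be illustrated with a small, self-contained sketch. It is not the ADIOS data-transform implementation from the paper; it only shows the general idea of spending otherwise idle host threads on compression (here zlib, assumed to be available) so that fewer bytes reach the parallel file system.

        // Conceptual sketch only: multi-threaded host-side compression of a
        // simulation buffer before it is handed to the I/O layer (link with -lz).
        #include <zlib.h>
        #include <algorithm>
        #include <cstddef>
        #include <iostream>
        #include <thread>
        #include <vector>

        // Compress one chunk of the source buffer into its own output vector.
        static void compress_chunk(const unsigned char* src, std::size_t len,
                                   std::vector<unsigned char>& out) {
            uLongf bound = compressBound(len);
            out.resize(bound);
            // Level 1 favors speed over ratio, mirroring the latency-oriented trade-off.
            if (compress2(out.data(), &bound, src, len, 1) == Z_OK)
                out.resize(bound);
            else
                out.clear();
        }

        int main() {
            std::vector<unsigned char> data(1 << 26, 42);  // stand-in for simulation output
            const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
            const std::size_t chunk = data.size() / nThreads;

            std::vector<std::vector<unsigned char>> compressed(nThreads);
            std::vector<std::thread> workers;
            for (unsigned t = 0; t < nThreads; ++t)
                workers.emplace_back(compress_chunk, data.data() + t * chunk,
                                     t + 1 == nThreads ? data.size() - t * chunk : chunk,
                                     std::ref(compressed[t]));
            for (auto& w : workers) w.join();

            std::size_t total = 0;
            for (auto& c : compressed) total += c.size();
            std::cout << "reduced " << data.size() << " bytes to " << total << " bytes\n";
            // The compressed chunks would now be passed to the parallel I/O layer.
        }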

    Entwicklung eines Partikelvisualisierers für In-Situ-Simulationen (Development of a Particle Visualizer for In Situ Simulations)

    Scientific computer simulations on high-performance computers require ever more resources. On a current petascale or a future exascale system, several terabytes of data per second can be generated or modified. Because transfer rates in most networks are severely limited, it is hardly possible to store the data at every time step or to process it on external resources. To avoid being limited by the speed of the network, current approaches try to exploit the local hardware as well as possible and to run the rendering on the same resources on which the simulation is already running. This is where the in situ visualization library ISAAC, developed at Helmholtz-Zentrum Dresden-Rossendorf (HZDR), comes in. It makes it possible to visualize simulations that are distributed across several GPUs or CPUs on those same resources in a distributed fashion, producing multiple partial images that are composited in a subsequent processing step. As a result, only one image and a few steering commands per time step have to be transferred over the network. In this work, ISAAC is extended with interactive particle rendering: several particle sources can be defined, colored by selected attributes, and filtered at runtime. In addition to ISAAC's volume sources, particle sources can also be activated and deactivated at runtime, which also enables a combined view.
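    A minimal sketch of the idea behind such a particle source follows. The type and member names are hypothetical and are not ISAAC's actual interface; the sketch only illustrates coloring particles by a chosen attribute and filtering them against a threshold that could be changed at runtime.

        // Hypothetical sketch, not ISAAC's real API: a particle source that colors
        // particles by one attribute and filters them before they reach the renderer.
        #include <iostream>
        #include <vector>

        struct Particle { float x, y, z; float energy; };
        struct Color { float r, g, b; };

        struct ParticleSource {
            std::vector<Particle> particles;  // data already resident on the compute node
            float filterMin = 0.0f;           // adjustable from the steering client at runtime
            bool active = true;               // source can be toggled like a volume source

            // Map the selected attribute (here: energy) onto a simple blue-to-red ramp.
            static Color colorize(float value, float vmin, float vmax) {
                float t = (value - vmin) / (vmax - vmin);
                if (t < 0.f) t = 0.f;
                if (t > 1.f) t = 1.f;
                return Color{t, 0.f, 1.f - t};
            }

            // Produce the render list for the current frame, honoring filter and toggle.
            template <typename Emit>
            void render(float vmin, float vmax, Emit emit) const {
                if (!active) return;
                for (const Particle& p : particles)
                    if (p.energy >= filterMin)
                        emit(p, colorize(p.energy, vmin, vmax));
            }
        };

        int main() {
            ParticleSource src;
            src.particles = {{0, 0, 0, 0.2f}, {1, 0, 0, 0.9f}, {0, 1, 0, 0.5f}};
            src.filterMin = 0.4f;  // pretend this was just changed via a steering command
            src.render(0.f, 1.f, [](const Particle& p, Color c) {
                std::cout << "particle at " << p.x << "," << p.y << "," << p.z
                          << " color r=" << c.r << " b=" << c.b << "\n";
            });
        }

    Several such sources could be registered alongside volume sources and switched on and off per frame, which matches the combined view described above.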

    In situ, steerable, hardware-independent and data-structure agnostic visualization with ISAAC

    The computation power of supercomputers grows faster than the bandwidth of their storage and network. In particular, applications using hardware accelerators like Nvidia GPUs cannot save enough data to be analyzed in a later step, so there is a high risk of losing important scientific information. We introduce the in situ template library ISAAC, which enables arbitrary applications such as scientific simulations to visualize their data live, without deep copy operations or data transformations, using the very same compute node and hardware accelerator on which the data already resides. Arbitrary metadata can be added to the renderings, and user-defined steering commands can be sent back asynchronously to the running application. Using an aggregating server, ISAAC streams the interactive visualization video and enables users to access their applications from everywhere.
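    The integration pattern this describes can be sketched as follows. The type and function names (InSituRenderer, registerSource, renderAndPoll) are placeholders rather than ISAAC's actual classes; the sketch only shows how a simulation's main loop might hand its existing buffers to an in situ renderer and consume steering commands between time steps.

        // Illustrative C++ sketch of an in situ visualization loop; all names are
        // hypothetical placeholders, not ISAAC's real API.
        #include <optional>
        #include <string>

        struct Field { /* device pointer, extents, ... */ };
        struct Steering { std::string command; };  // e.g. camera move, filter change

        // Placeholder for the in situ library: it renders from the buffers the
        // simulation already owns (no deep copy) and talks to an aggregating server.
        struct InSituRenderer {
            void registerSource(const char* name, const Field& f) { /* keep reference only */ }
            std::optional<Steering> renderAndPoll(int timestep) { return std::nullopt; }
        };

        void simulate(int steps) {
            Field density, energy;  // data stays on the accelerator
            InSituRenderer vis;
            vis.registerSource("density", density);
            vis.registerSource("energy", energy);

            for (int t = 0; t < steps; ++t) {
                // ... advance the simulation one time step ...

                // Render on the same node/accelerator; only the finished image and a
                // few steering commands per step cross the network.
                if (auto cmd = vis.renderAndPoll(t)) {
                    // apply user feedback received asynchronously from the client
                }
            }
        }

        int main() { simulate(100); }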

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data derived from analyses of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, has confronted users with an ever-growing amount of data, with terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas to move towards virtual reality-based laser ablation and perform a user study in order to gain insight into performance, acceptance and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
    Table of contents:
    Abstract
    Foreword and Acknowledgements
    Overview and Contributions
    Part I - Introduction
    1 Fluorescence Microscopy
    2 Introduction to Visual Processing
    3 A Short Introduction to Cross Reality
    4 Eye Tracking and Gaze-based Interaction
    Part II - VR and AR for Systems Biology
    5 scenery - VR/AR for Systems Biology
    6 Rendering
    7 Input Handling and Integration of External Hardware
    8 Distributed Rendering
    9 Miscellaneous Subsystems
    10 Future Development Directions
    Part III - Case Studies
    11 Bionic Tracking: Using Eye Tracking for Cell Tracking
    12 Towards Interactive Virtual Reality Laser Ablation
    13 Rendering the Adaptive Particle Representation
    14 sciview - Integrating scenery into ImageJ2 & Fiji
    Part IV - Conclusion
    15 Conclusions and Outlook
    Backmatter & Appendices
    A Questionnaire for VR Ablation User Study
    B Full Correlations in VR Ablation Questionnaire
    C Questionnaire for Bionic Tracking User Study
    List of Tables
    List of Figures
    Bibliography
    Selbstständigkeitserklärung

    PIConGPU and ISAAC software and results bundle for Supercomputing Frontiers and Innovations submission 2016

    This is the archive containing the software used for the evaluations and the results of the publication "In situ, steerable, hardware-independent and data-structure agnostic visualization with ISAAC", submitted to Supercomputing Frontiers and Innovations 2016 and based on the talk at the ISC Workshop on In Situ Visualization 2016. The archive has the following content:

    PIConGPU Kelvin-Helmholtz simulation code (picongpu/):
    Remote: https://github.com/psychocoderHPC/picongpu.git (copy will be removed)
    Branch: topic-scalingPizDaintISAAC
    Commit: 500f896ff8dbed768b2e62800072f6416645fc8d
    The network communication code was removed for the evaluations on Piz Daint. The customized ISAAC version is part of PIConGPU (picongpu/src/picongpu/include/plugins/isaac/). It is based on the following repository:
    Remote: https://github.com/ComputationalRadiationPhysics/isaac.git
    Branch: dev
    Commit: a381a31caf9cf568d33568efb2f83d356448abc9

    Results of the Piz Daint run (results/): the subfolder output contains the raw simulation output, and the folder csv contains CSV tables derived from it. The simulation was executed for 30 time steps with the following configuration:
    particle shape: TSC (one order higher than CIC)
    pusher: Boris
    current solver: Esirkepov (optimized, generalized)
    field solver: Yee
    trilinear interpolation in field gathering
    16 particles per cell

    Compile flags:
    CPU (g++-4.9.2): -g -O3 -m64
    GPU (nvcc): --use_fast_math --ftz=false -g -Xcompiler=-pthread -O3 -m64