
    Visualization Tools for Lattice QCD - Final Report

    Our research project concerns the development of visualization tools for Lattice QCD. We developed various tools by extending existing libraries, adding new algorithms, exposing new APIs, and creating web interfaces (including the new NERSC gauge connection web site). Our tools cover the full stack of operations, from automating the download of data, to generating VTK files (topological charge, plaquette, Polyakov lines, quark and meson propagators, currents), to turning the VTK files into images, movies, and web pages. Some of the tools have their own web interfaces. Some Lattice QCD visualization tools have been created in the past, but to our knowledge ours are the only ones of their kind, since they are general purpose, customizable, and relatively easy to use. We believe they will be valuable to physicists working in the field. They can be used to better teach Lattice QCD concepts to new graduate students; to observe changes in topological charge density and detect possible sources of bias in computations; to observe the convergence of the algorithms at a local level and identify possible problems; to probe heavy-light mesons with currents and determine their spatial distribution; and to detect corrupted gauge configurations. There are also indirect results of this grant that will benefit a broader audience than Lattice QCD physicists.
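    As an illustration of the VTK-generation step described above, the following is a minimal sketch (not the project's actual code or API) of how a scalar lattice observable, such as a topological charge density sampled on one time slice, could be written to a legacy ASCII VTK structured-points file that standard VTK-based viewers can load; the function name and array layout are assumptions made for the example.

        # Minimal sketch, assuming field[x, y, z] holds a scalar lattice observable
        # (e.g. topological charge density) on a single time slice.
        import numpy as np

        def write_scalar_field_vtk(field, filename, name="topological_charge"):
            nx, ny, nz = field.shape
            with open(filename, "w") as f:
                f.write("# vtk DataFile Version 3.0\n")
                f.write("lattice scalar field\n")
                f.write("ASCII\n")
                f.write("DATASET STRUCTURED_POINTS\n")
                f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
                f.write("ORIGIN 0 0 0\n")
                f.write("SPACING 1 1 1\n")
                f.write(f"POINT_DATA {nx * ny * nz}\n")
                f.write(f"SCALARS {name} float 1\n")
                f.write("LOOKUP_TABLE default\n")
                # Legacy VTK expects the x index to vary fastest; Fortran-order
                # flattening of field[x, y, z] produces exactly that ordering.
                for value in field.flatten(order="F"):
                    f.write(f"{value:.6e}\n")

        # Example: a random 8x8x8 slice standing in for a measured charge density.
        write_scalar_field_vtk(np.random.randn(8, 8, 8), "charge_density.vtk")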

    Requirement analysis for building practical accident warning systems based on vehicular ad-hoc networks

    An Accident Warning System (AWS) is a safety application that provides collision-avoidance notifications for next-generation vehicles, whilst Vehicular Ad-hoc Networks (VANETs) provide the communication functionality to exchange these notifications. Despite much previous research, there is little agreement on the requirements for accident warning systems. In order to build a practical warning system, it is important to ascertain the system requirements, the information to be exchanged, and the protocols needed for communication between vehicles. This paper presents a practical model of an accident warning system by stipulating the requirements in a realistic manner and thoroughly reviewing previous proposals with a view to identifying gaps in this area.
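    To make the kind of information exchange discussed above concrete, the sketch below shows one plausible shape for a warning payload; the field set, names, and values are illustrative assumptions for this listing, not the requirements or message format specified in the paper.

        # Hypothetical warning-message payload for a VANET-based AWS; the fields
        # (position, speed, heading, hazard type, hop count) are assumptions,
        # chosen only to illustrate what such a notification might carry.
        from dataclasses import dataclass, asdict
        from enum import Enum
        import json
        import time

        class HazardType(Enum):
            COLLISION = "collision"
            HARD_BRAKING = "hard_braking"
            ROAD_OBSTACLE = "road_obstacle"

        @dataclass
        class WarningMessage:
            vehicle_id: str      # pseudonymous sender identifier
            timestamp: float     # seconds since epoch, for freshness checks
            latitude: float
            longitude: float
            speed_mps: float     # metres per second
            heading_deg: float   # 0-360, clockwise from north
            hazard: HazardType
            hop_count: int = 0   # incremented on each rebroadcast

            def to_json(self) -> str:
                d = asdict(self)
                d["hazard"] = self.hazard.value
                return json.dumps(d)

        msg = WarningMessage("veh-042", time.time(), 51.5074, -0.1278,
                             13.9, 270.0, HazardType.HARD_BRAKING)
        print(msg.to_json())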

    National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    Under its SciDAC-1 and SciDAC-2 grants, the USQCD Collaboration developed software and algorithmic infrastructure for the numerical study of lattice gauge theories.

    VGC 2023 - Unveiling the dynamic Earth with digital methods: 5th Virtual Geoscience Conference: Book of Abstracts

    Conference proceedings of the 5th Virtual Geoscience Conference, held 21-22 September 2023 in Dresden. The VGC is a multidisciplinary forum for researchers in geoscience, geomatics, and related disciplines to share their latest developments and applications. Contents: Short Courses; Workshop Streams 1-3; Session 1 – Point Cloud Processing: Workflows, Geometry & Semantics; Session 2 – Visualisation, Communication & Teaching; Session 3 – Applying Machine Learning in Geosciences; Session 4 – Digital Outcrop Characterisation & Analysis; Session 5 – Airborne & Remote Mapping; Session 6 – Recent Developments in Geomorphic Process and Hazard Monitoring; Session 7 – Applications in Hydrology & Ecology; Poster Contributions.

    Visualization of Time-Varying Data from Atomistic Simulations and Computational Fluid Dynamics

    Time-varying data from simulations of dynamical systems are rich in spatio-temporal information. A key challenge is how to analyze such data to extract useful information and to display spatially evolving features in the space-time domain of interest. We develop and implement multiple approaches to visualization-based analysis of time-varying data obtained from two common types of dynamical simulations, molecular dynamics (MD) and computational fluid dynamics (CFD), and present application case studies. Parallel first-principles molecular dynamics simulations produce massive amounts of time-varying, three-dimensional scattered data representing the atomic (molecular) configurations of the material system being simulated. Rendering the atomic position-time series along with extracted additional information helps us understand the microscopic processes in complex material systems at atomic length and time scales. Radial distribution functions, coordination environments, and clusters are computed and rendered to visualize the structural behavior of the simulated material systems. Atom (particle) trajectories and displacement data are extracted and rendered to visualize the dynamical behavior of the system. While improving our atomistic visualization system to make it versatile, stable, and scalable, we focus mainly on atomic trajectories. Trajectory rendering can represent complete simulation information in a single display; however, trajectories become crowded, and the associated clutter/occlusion problem becomes serious even for moderate data sizes. We present and assess various approaches for clutter reduction, including constrained rendering, basic and adaptive position merging, and information encoding. A data model based on HDF5 with partial I/O, together with GLSL shading, is adopted to enhance the rendering speed and quality of the trajectories. As an application, a detailed visualization-based analysis is carried out for simulated silicate melts such as model basalt systems. CFD, on the other hand, produces temporally and spatially resolved numerical data for fluid systems consisting of a million to tens of millions of cells (mesh points). We implement time surfaces (in particular, evolving surfaces of spheres) for visualizing the vector (flow) field to study the simulated mixing of fluids in a stirred tank.
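    As an example of one of the quantities mentioned above, the sketch below computes a radial distribution function g(r) for a single atomic configuration in a cubic box with periodic boundaries; it is a minimal NumPy illustration under those assumptions, not the dissertation's implementation.

        # Minimal sketch: radial distribution function g(r) for one configuration
        # of N atoms in a cubic box of side box_length with periodic boundaries.
        import numpy as np

        def radial_distribution(positions, box_length, n_bins=100):
            n = len(positions)
            r_max = box_length / 2.0
            # Pairwise separation vectors under the minimum-image convention.
            diff = positions[:, None, :] - positions[None, :, :]
            diff -= box_length * np.round(diff / box_length)
            dist = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
            hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
            r = 0.5 * (edges[1:] + edges[:-1])
            shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
            density = n / box_length**3
            # Normalise pair counts by the ideal-gas expectation for each shell.
            return r, hist / (shell_vol * density * n / 2.0)

        # Example: 500 uniformly random atoms in a 20 Angstrom box give g(r) near 1.
        r, g = radial_distribution(np.random.rand(500, 3) * 20.0, box_length=20.0)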

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are constantly increasing. Technical advances allow for capturing smaller features and more complex structures in the data. To make these data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field.
    Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience combines three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators; through it, standard and novel visualization approaches become available to neuroscientific researchers as well. Afterwards, I introduce a specialized method to illustrate the causal relations between brain areas, which previously could only be represented via abstract graph models. I conclude the first part with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the techniques for the neuroscientific community, exemplified using clinically relevant scenarios.
    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- improving the interface. Unfortunately, visual improvements based on computer graphics methods from the game industry are often viewed sceptically. In this part, I show that such methods can be applied to existing visualization techniques to improve spatial perception and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering"; its advantage, amongst others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces; these mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point renderings. With this technique it is possible, for the first time, to emphasize both local details and global spatial relations in dense line and point data.
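    The screen-space idea described above can be illustrated with a small CPU-side sketch: unsharp-masking the depth buffer to darken pixels that lie behind their local neighbourhood (often called depth darkening). This is only an example of the general class of screen-space post-processes, under assumed buffer layouts and parameters; the thesis's own techniques are implemented as GLSL shaders on the GPU.

        # Illustrative screen-space post-process: darken pixels whose depth is
        # greater than the locally smoothed depth, adding spatial cues to dense
        # line/point renderings. Buffers and parameters here are assumptions.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def depth_darkening(color, depth, sigma=8.0, strength=2.0):
            # color: (H, W, 3) image in [0, 1]; depth: (H, W), larger = farther.
            blurred = gaussian_filter(depth, sigma=sigma)
            # Positive where a pixel sits behind its smoothed surroundings.
            occlusion = np.clip(depth - blurred, 0.0, None)
            shade = np.clip(1.0 - strength * occlusion, 0.0, 1.0)
            return color * shade[..., None]

        # Example: a synthetic depth step darkens the far side of the edge.
        depth = np.zeros((128, 128)); depth[:, 64:] = 0.5
        color = np.ones((128, 128, 3))
        shaded = depth_darkening(color, depth)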

    Annual Report, 2017-2018
