7 research outputs found

    Visuelle Analyse großer Partikeldaten

    Particle simulations are a well-established and widely used numerical method in research and engineering. For example, particle simulations are employed to study fuel atomization in aircraft turbines, and the formation of the universe is investigated by simulating dark-matter particles. The data volumes produced are immense: current simulations contain trillions of particles that move over time and interact with one another. Visualization offers great potential for the exploration, validation, and analysis of scientific data sets and their underlying models. However, the focus is usually on structured data with a regular topology. Particles, in contrast, move freely through space and time, a viewpoint known in physics as the Lagrangian frame of reference. Particles can be converted from the Lagrangian into a regular Eulerian frame of reference, such as a uniform grid, but for large numbers of particles this conversion is costly; moreover, it usually causes a loss of precision while increasing memory consumption. In this dissertation I will investigate new visualization techniques that operate directly on the Lagrangian view, enabling efficient and effective visual analysis of large particle data.
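The Lagrangian-to-Eulerian conversion mentioned above can be sketched as depositing particle masses onto a uniform grid. This is a minimal illustration, not the dissertation's method; the function name, the nearest-grid-point deposition scheme, and the cubic domain are assumptions:

```python
import numpy as np

def particles_to_grid(positions, masses, grid_shape, bounds):
    """Deposit Lagrangian particle masses onto a uniform Eulerian grid.

    positions:  (N, 3) particle coordinates
    masses:     (N,) particle masses
    grid_shape: e.g. (64, 64, 64)
    bounds:     (min, max) of an assumed cubic domain
    """
    lo, hi = bounds
    # Map each position to a cell index (nearest-grid-point deposition).
    norm = (positions - lo) / (hi - lo)            # -> [0, 1)
    idx = np.clip((norm * grid_shape).astype(int), 0,
                  np.array(grid_shape) - 1)
    grid = np.zeros(grid_shape)
    # Accumulate mass per cell; density = mass / cell volume.
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), masses)
    cell_volume = np.prod((hi - lo) / np.array(grid_shape))
    return grid / cell_volume
```

Even this simple scheme touches every particle and allocates the full grid, which hints at the memory and precision costs the abstract describes.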

    Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets

    Volumetric 3D data sets are generated in many application areas: CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numerical simulations of plasma turbulence, and stacks of confocal microscopy images of cells. As these data sets grow, analysis and visualization techniques must become correspondingly faster to keep up. Recent advances in processor technology have stopped increasing clock speed and instead increased parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be written to exploit parallelism explicitly. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to modern parallel architectures. We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). An RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that creates images of the data directly from the RBF representation. Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is progressive, returning a rough answer almost immediately and refining it over time to the desired level of accuracy. After this we discuss the concept of Confluent Visualization, which shows the interaction between a pair of volumetric data sets through volume rendering, an operation well suited to parallel architectures. Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on one that does not. It takes as input 3D volumetric models of the cells generated from confocal microscopy, builds on our surface area measurement algorithm, and is likewise suitable for GPU implementation and progressive.
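The progressive, stochastic surface area measurement can be illustrated for a molecule modeled as a union of spheres. This is a generic Monte Carlo sketch under that assumption, not the thesis's GPU implementation; sample points on each sphere that are buried inside another sphere are discarded, and more samples refine the estimate:

```python
import math
import random

def exposed_area(spheres, samples_per_sphere=1000, seed=0):
    """Monte Carlo estimate of the surface area of a union of spheres.

    spheres: list of (cx, cy, cz, r). For each sphere, sample points
    uniformly on its surface and keep the fraction not buried inside
    any other sphere; the estimate improves progressively with the
    sample count.
    """
    rng = random.Random(seed)
    total = 0.0
    for i, (cx, cy, cz, r) in enumerate(spheres):
        exposed = 0
        for _ in range(samples_per_sphere):
            # Uniform direction on the sphere via a normalized Gaussian vector.
            x, y, z = (rng.gauss(0.0, 1.0) for _ in range(3))
            n = math.sqrt(x * x + y * y + z * z) or 1.0
            px, py, pz = cx + r * x / n, cy + r * y / n, cz + r * z / n
            buried = any(
                (px - ox) ** 2 + (py - oy) ** 2 + (pz - oz) ** 2 < orad ** 2
                for j, (ox, oy, oz, orad) in enumerate(spheres) if j != i)
            if not buried:
                exposed += 1
        total += 4.0 * math.pi * r * r * exposed / samples_per_sphere
    return total
```

Every sample is independent, which is what makes this kind of estimator both progressive and embarrassingly parallel on a GPU.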

    Real-time rendering of large surface-scanned range data natively on a GPU

    This thesis presents research carried out on the visualisation of surface anatomy data stored as large range images, such as those produced by stereo-photogrammetric and other triangulation-based capture devices. As part of this research, I explored the use of points as a rendering primitive, as opposed to polygons, and the use of range images as the native data representation. Using points as a display primitive required the creation of a pipeline that solved problems associated with point-based rendering. The problems investigated were scattered-data interpolation (a common problem with point-based rendering), multi-view rendering, multi-resolution representations, anti-aliasing, and hidden-point removal. In addition, an efficient real-time implementation on the GPU was carried out.
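Using range images as the native representation amounts to back-projecting each depth pixel through the camera model into a 3D point, with no triangulation step. A minimal sketch, assuming a pinhole camera; the intrinsics `fx, fy, cx, cy` and the function name are illustrative, not the thesis's pipeline:

```python
import numpy as np

def range_image_to_points(depth, fx, fy, cx, cy):
    """Back-project a range image into a 3D point cloud.

    depth: (H, W) array of depths along the view axis (0 = no sample)
    fx, fy, cx, cy: pinhole camera intrinsics (hypothetical values)
    Returns an (N, 3) array with one point per valid pixel -- the
    point-based representation used instead of a polygon mesh.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Invert the pinhole projection u = fx * x / z + cx (and likewise for v).
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack([x, y, z])
```

Because each pixel maps independently to a point, this step parallelizes trivially, which is one reason range images suit a GPU pipeline.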

    Interactive volume ray tracing

    Volume rendering is one of the most interesting but also most demanding topics within scientific visualization. In contrast to surface models, such data represent a semi-transparent medium in a 3D field. Applications range from medical examinations and the simulation of physical processes to visual art. Most of these applications demand interactivity with respect to viewing and visualization parameters. The ray tracing algorithm, although it inherently simulates light interaction with participating media, was always considered too slow. Instead, most researchers followed object-order rasterization algorithms better suited to graphics adapters, although such approaches often suffer from either low quality or a lack of flexibility. The alternative is to speed up the ray tracing algorithm until it is competitive for volumetric visualization tasks. Since the advent of modern graphics adapters, research in this area has somewhat ceased, even though GPUs still have limitations such as limited graphics board memory and a tedious programming model. The two methods discussed in this thesis are therefore purely software-based, since software implementations allow for a far better optimization process before algorithms are ported to hardware. The first method is called the implicit kd-tree, a hierarchical spatial acceleration structure originally developed for iso-surface rendering of regular grid data sets; it now also supports semi-transparent rendering and time-dependent data visualization, and has been used successfully in non-volume-rendering applications.
The second algorithm uses so-called Plücker coordinates, which allow the implementation of a fast incremental traverser for data sets whose primitives are tetrahedra or hexahedra. Both algorithms are highly optimized to support interactive rendering of volumetric data sets and are therefore major contributions towards a flexible and interactive volume ray tracing framework.
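The Plücker-coordinate traversal rests on the permuted inner product of two directed lines: a ray crosses a triangular face exactly when this test yields the same sign for all three edges, and an incremental traverser can reuse edge signs between neighboring cells. A minimal orientation-only sketch (it ignores intersections behind the ray origin, and is not the thesis's optimized traverser):

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (d, m) of the directed line through p then q."""
    return q - p, np.cross(p, q)

def side(line_a, line_b):
    """Permuted inner product of two Plücker lines.

    Its sign tells on which side one directed line passes the other;
    a ray crosses a triangle iff all three edge tests agree in sign.
    """
    (d1, m1), (d2, m2) = line_a, line_b
    return np.dot(d1, m2) + np.dot(d2, m1)

def ray_hits_triangle(orig, direction, a, b, c):
    ray = plucker(orig, orig + direction)
    signs = [side(ray, plucker(u, v)) for u, v in ((a, b), (b, c), (c, a))]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```

Shared edges of adjacent tetrahedra produce the same `side` value, which is what makes the incremental, cell-to-cell traversal cheap.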

    GPU-Accelerated Volume Splatting With Elliptical RBFs

    Radial Basis Functions (RBFs) have become a popular rendering primitive in both surface and volume rendering. This paper focuses on volume visualization, giving rise to 3D kernels. RBFs are especially convenient for the representation of scattered and irregularly distributed point samples, where the RBF kernel is used as a blending function for the space in between samples. Common representations employ radially symmetric RBFs, and various techniques have been introduced to render these, including efficient implementations on programmable graphics hardware (GPUs). In this paper, we extend the existing work to more generalized, ellipsoidal RBF kernels for the rendering of scattered volume data. We devise a post-shaded, kernel-centric rendering approach, specifically designed to run efficiently on GPUs, and we demonstrate our renderer using datasets from subdivision volumes and computational science. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Display Algorithms
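The step from radially symmetric to ellipsoidal kernels replaces the scalar squared distance in the Gaussian with a quadratic form. A small sketch of this generalization; the symmetric positive-definite matrix `M` is an assumption standing in for the paper's per-kernel parameters, encoding per-axis extents and rotation:

```python
import numpy as np

def elliptical_rbf(x, center, M, weight=1.0):
    """Evaluate an ellipsoidal Gaussian RBF kernel at point(s) x.

    The radially symmetric kernel w * exp(-||x - c||^2) generalizes to
    w * exp(-(x - c)^T M (x - c)), where M is a symmetric positive-
    definite 3x3 matrix. M = I recovers the isotropic case.
    """
    d = np.atleast_2d(x) - center
    # einsum computes the quadratic form d^T M d row-wise.
    return weight * np.exp(-np.einsum('ij,jk,ik->i', d, M, d))

def blend(x, kernels):
    """Scattered-sample reconstruction: a sum of ellipsoidal kernels."""
    return sum(elliptical_rbf(x, c, M, w) for c, M, w in kernels)
```

The blending sum is exactly the reconstruction evaluated along each viewing ray; a kernel-centric renderer instead splats each kernel's footprint and lets the GPU accumulate the contributions.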

    Microgravity Science and Applications: Program Tasks and Bibliography for Fiscal Year 1996

    NASA's Microgravity Science and Applications Division (MSAD) sponsors a program that expands the use of space as a laboratory for the study of important physical, chemical, and biochemical processes. The primary objective of the program is to broaden the value and capabilities of human presence in space by exploiting the unique characteristics of the space environment for research. However, since flight opportunities are rare and flight research development is expensive, a vigorous ground-based research program, from which only the best experiments evolve, is critical to the continuing strength of the program. The microgravity environment affords unique characteristics that allow the investigation of phenomena and processes that are difficult or impossible to study on Earth. The ability to control gravitational effects such as buoyancy-driven convection, sedimentation, and hydrostatic pressure makes it possible to isolate phenomena and make measurements with significantly greater accuracy than can be achieved in normal gravity. Space flight gives scientists the opportunity to study the fundamental states of physical matter (solids, liquids, and gases) and the forces that affect those states. Because the orbital environment allows the treatment of gravity as a variable, research in microgravity leads to a greater fundamental understanding of the influence of gravity on the world around us. With appropriate emphasis, the results of space experiments lead to both knowledge and technological advances that have direct applications on Earth. Microgravity research also provides the practical knowledge essential to the development of future space systems. The Office of Life and Microgravity Sciences and Applications (OLMSA) is responsible for planning and executing research stimulated by the Agency's broad scientific goals. OLMSA's Microgravity Science and Applications Division (MSAD) is responsible for guiding and focusing a comprehensive program, and currently manages its research and development tasks through five major scientific areas: biotechnology, combustion science, fluid physics, fundamental physics, and materials science. FY 1996 was an important year for MSAD, as NASA continued to build a solid research community for the coming space station era. During FY 1996, the NASA Microgravity Research Program continued investigations selected from the 1994 combustion science, fluid physics, and materials science NRAs. MSAD also released a NASA Research Announcement in microgravity biotechnology, with more than 130 proposals received in response; selection of research for funding is expected in early 1997. The principal investigators chosen from these NRAs will form the core of the MSAD research program at the beginning of the space station era. The third United States Microgravity Payload (USMP-3) and the Life and Microgravity Spacelab (LMS) missions yielded a wealth of microgravity data in FY 1996. The USMP-3 mission included a fluids facility and three solidification furnaces, each designed to examine a different type of crystal growth.