31 research outputs found

    Efficient algorithms for the realistic simulation of fluids

    Nowadays there is great demand for realistic simulation in computer graphics. Physically based animation is commonly used, and one of the more complex problems in this field is fluid simulation, especially when real-time applications are the goal. Video games, in particular, often represent fluids by simulating the effect rather than the cause, using procedural or parametric methods and frequently disregarding the physical solution. This need motivates the present thesis: the interactive simulation of free-surface flows, usually liquids, which are the feature of interest in most common applications. Given the complexity of fluid simulation, we rely on the high parallelism provided by current consumer-level GPUs to achieve real-time frame rates. The simulation algorithm, the Lattice Boltzmann Method, was chosen accordingly for its efficiency and its direct mapping to the hardware architecture thanks to its local operations. We have created two free-surface simulations on the GPU: one fully in 3D and another restricted to the upper surface of a large bulk of fluid, limiting the simulation domain to 2D. We have extended the latter to track dry regions and coupled it with obstacles in a geometry-independent fashion. Being restricted to 2D, this simulation loses some features, since vertical separation of the fluid cannot be represented. To account for this, we couple the surface simulation to a generic particle system triggered by breaking-wave conditions; the two simulations are completely independent, and only the coupling binds the LBM to the chosen particle system. Furthermore, both systems are visualized realistically at interactive frame rates; raycasting techniques provide the expected light effects such as refractions, reflections and caustics, and additional techniques such as small-scale ripples and surface foam improve the overall detail.
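
    As a rough illustration of the simulation kernel mentioned above, the following is a minimal single-phase D2Q9 lattice Boltzmann step (BGK collision followed by streaming) written in NumPy. It is only a CPU sketch of the general method, not the thesis's GPU free-surface implementation; the grid size, relaxation time tau and periodic boundaries are arbitrary assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each lattice direction."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step on distributions f of shape (9, nx, ny)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # relax toward equilibrium
    for i, (cx, cy) in enumerate(c):                      # stream along each direction
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# toy usage: quiescent fluid with a density bump that relaxes into outgoing waves
nx, ny, tau = 128, 128, 0.6
rho0 = np.ones((nx, ny)); rho0[60:68, 60:68] = 1.05
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(200):
    f = lbm_step(f, tau)
```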

    Exploiting spatial and temporal coherence in GPU-based volume rendering

    Efficiency is a key aspect in volume rendering, even if powerful graphics hardware is employed, since increasing data set sizes and growing demands on visualization techniques outweigh improvements in graphics processor performance. This dissertation examines how spatial and temporal coherence in volume data can be used to optimize volume rendering. Several new approaches for static as well as for time-varying data sets are introduced, which exploit different types of coherence in different stages of the volume rendering pipeline. The presented acceleration algorithms include empty space skipping using occlusion frustums, a slab-based cache structure for raycasting, and a lossless compression scheme for time-varying data. The algorithms were designed for use with GPU-based volume raycasting and to efficiently exploit the features of modern graphics processors, especially stream processing.
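
    The dissertation's specific structures (occlusion frustums, slab caches, the compression scheme) are not reproduced here. As a generic illustration of empty space skipping, the NumPy sketch below precomputes per-brick min/max values and flags bricks that the current transfer function leaves fully transparent; the brick size, random volume and ramp transfer function are invented for the example.

```python
import numpy as np

def brick_min_max(volume, brick=8):
    """Per-brick min/max of a scalar volume (dimensions assumed divisible by brick)."""
    nz, ny, nx = volume.shape
    v = volume.reshape(nz // brick, brick, ny // brick, brick, nx // brick, brick)
    return v.min(axis=(1, 3, 5)), v.max(axis=(1, 3, 5))

def brick_is_empty(bmin, bmax, opacity_tf, samples=64):
    """A brick can be skipped if the transfer function is transparent over its value range.
    The coarse sampling here is a simplification; a real renderer would query a
    precomputed maximum-opacity table over [bmin, bmax]."""
    return np.all(opacity_tf(np.linspace(bmin, bmax, samples)) == 0.0)

# toy usage with a ramp transfer function that only becomes opaque above 0.7
volume = np.random.rand(64, 64, 64).astype(np.float32)
vmin, vmax = brick_min_max(volume, brick=8)
tf = lambda v: np.clip((v - 0.7) / 0.3, 0.0, 1.0)
skip = np.vectorize(lambda a, b: brick_is_empty(a, b, tf))(vmin, vmax)
print("skippable bricks:", int(skip.sum()), "of", skip.size)
```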

    An Interactive Concave Volume Clipping Method Based on GPU Ray Casting with Boolean Operation

    Volume clipping techniques can reveal inner structures and avoid the difficulty of specifying an appropriate transfer function. We present an interactive concave volume clipping method that implements both rendering and Boolean operations on the GPU. Common analytical convex objects, such as polyhedrons and spheres, are described by a few parameters, so implementing concave volume clipping with Boolean operations on the GPU consumes very little video memory. The intersection, subtraction and union operations are performed on the GPU by converting the 3D Boolean operation into a 1D Boolean operation along each viewing ray. To enhance the visual result, a pseudo-color rendering model is proposed and Phong illumination is applied to the clipped surfaces. Users can select a color scheme from several pre-defined or user-specified schemes to obtain clear views of inner anatomical structures. Finally, several experiments were performed on a standard PC with a GeForce FX8600 graphics card. The results show that the three basic Boolean operations are performed correctly and that our approach can freely clip and visualize volumetric datasets at interactive frame rates.
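
    To make the 3D-to-1D reduction concrete, the sketch below performs union, intersection and subtraction on the entry/exit intervals that clipping objects would produce along a single viewing ray. It is an illustrative stand-in, not the paper's GPU code, and the interval representation and example values are assumptions.

```python
from typing import List, Tuple

Interval = Tuple[float, float]          # (t_enter, t_exit) along the ray, t_enter < t_exit

def _merge(segments: List[Interval]) -> List[Interval]:
    """Union of possibly overlapping 1D segments."""
    out: List[Interval] = []
    for a, b in sorted(segments):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def union(p: List[Interval], q: List[Interval]) -> List[Interval]:
    return _merge(p + q)

def intersection(p: List[Interval], q: List[Interval]) -> List[Interval]:
    out = []
    for a0, a1 in p:
        for b0, b1 in q:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return _merge(out)

def subtraction(p: List[Interval], q: List[Interval]) -> List[Interval]:
    """p minus q: keep the parts of p's segments not covered by q."""
    out, qm = [], _merge(q)
    for a0, a1 in p:
        cur = a0
        for b0, b1 in qm:
            if b1 <= cur or b0 >= a1:
                continue
            if b0 > cur:
                out.append((cur, b0))
            cur = max(cur, b1)
        if cur < a1:
            out.append((cur, a1))
    return out

# e.g. a ray spanning the volume on [0, 10], clipped by a sphere covering [3, 6]
print(subtraction([(0.0, 10.0)], [(3.0, 6.0)]))   # [(0.0, 3.0), (6.0, 10.0)]
```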

    Interactive volume visualization with WebGL

    Web-based applications have become increasingly popular in many areas, and web-based 3D graphics has advanced accordingly. In this context, we present a web-based implementation of volume rendering using the relatively new WebGL API for interactive 3D graphics. We give an overview of the theoretical background of volume rendering and of the common approaches for a GPU implementation, followed by a detailed description of our WebGL implementation. We then cover the implementation of advanced features, before giving a short introduction to X3DOM as a possible alternative for web-based volume visualization. The aim of this work is to incorporate both basic and advanced volume rendering methods and to achieve interactive frame rates with WebGL, using the power of client-side graphics hardware. Accordingly, we discuss the result of our implementation by evaluating its performance and comparing it to an alternative solution. Finally, we draw conclusions and point out possible future work and improvements.
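
    The heart of such a renderer is the per-ray compositing loop, which in WebGL lives in a fragment shader; the NumPy sketch below only illustrates front-to-back emission-absorption compositing with early ray termination on one ray's samples. The step-size opacity correction, termination threshold and toy transfer function are assumptions, not details of the cited implementation.

```python
import numpy as np

def composite_ray(samples, tf_color, tf_alpha, step=1.0):
    """Front-to-back emission-absorption compositing of scalar samples along one ray.
    tf_color(v) -> RGB, tf_alpha(v) -> opacity per unit step."""
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:
        a = 1.0 - (1.0 - tf_alpha(v)) ** step      # opacity correction for the step size
        color += (1.0 - alpha) * a * np.asarray(tf_color(v))
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                           # early ray termination
            break
    return color, alpha

# toy usage: a ramp transfer function on synthetic samples
samples = np.linspace(0.0, 1.0, 128)
rgb, a = composite_ray(samples,
                       tf_color=lambda v: (v, 0.3, 1.0 - v),
                       tf_alpha=lambda v: 0.02 * v)
print(rgb, a)
```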

    Volume Ray Casting with Peak Finding and Differential Sampling

    Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions.
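
    The peak-finding idea can be illustrated outside a renderer: whenever two successive ray samples bracket the isovalue, solve for the crossing explicitly instead of relying on a uniform sample landing on it. The sketch below brackets and bisects a 1D scalar function along the ray; the test function, isovalue and tolerance are made up for the example, and this is not the paper's implementation.

```python
import math

def find_isovalue_crossings(f, t0, t1, isovalue, n_samples=64, tol=1e-6):
    """Locate ray parameters t where f(t) crosses the isovalue: bracket between
    uniform samples, then refine each bracket by bisection."""
    crossings = []
    dt = (t1 - t0) / n_samples
    prev_t, prev_v = t0, f(t0) - isovalue
    for i in range(1, n_samples + 1):
        t = t0 + i * dt
        v = f(t) - isovalue
        if prev_v == 0.0:                       # a sample hit the isovalue exactly
            crossings.append(prev_t)
        elif prev_v * v < 0.0:                  # sign change: a crossing is bracketed
            lo, hi, flo = prev_t, t, prev_v
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                fm = f(mid) - isovalue
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            crossings.append(0.5 * (lo + hi))
        prev_t, prev_v = t, v
    return crossings

# toy usage: a wavy scalar field along the ray, isovalue 0.5
print(find_isovalue_crossings(lambda t: 0.5 + 0.4 * math.sin(3.0 * t), 0.0, 6.0, 0.5))
```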

    Interactive feature detection in volumetric data

    This dissertation presents three volumetric feature detection approaches that focus on an efficient interplay between user and system. The first technique exploits the LH transfer function space in order to enable the user to classify boundaries by directly marking them in the volume rendering image, without requiring interaction in the data domain. Second, we propose a shape-based feature detection approach that blurs the border between fast but limited classification and powerful but laborious segmentation techniques. Third, we present a guided probabilistic volume segmentation workflow that focuses on the minimization of uncertainty in the resulting segmentation. In an iterative process, the system continuously assesses the uncertainty of an intermediate random-walker-based segmentation in order to detect regions of high ambiguity, to which the user’s attention is directed to support the correction of potential segmentation errors.
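
    For context, the random walker segmentation used in the third approach amounts to solving one sparse linear system per label on the image graph. The minimal 2D NumPy/SciPy sketch below computes per-label probabilities from user seeds; it is a generic textbook version without the dissertation's guidance or uncertainty assessment, and the 4-connectivity and edge-weight parameter beta are arbitrary choices.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def random_walker_2d(image, seeds, beta=90.0):
    """Minimal random walker segmentation of a 2D image.
    seeds: integer array, 0 = unlabeled, 1..K = user-provided seed labels."""
    h, w = image.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # 4-connected edges, weighted by intensity similarity
    edges = np.vstack([
        np.column_stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()]),
        np.column_stack([idx[:-1, :].ravel(), idx[1:, :].ravel()])])
    diff = image.ravel()[edges[:, 0]] - image.ravel()[edges[:, 1]]
    wgt = np.exp(-beta * diff ** 2) + 1e-10
    # combinatorial Laplacian L = D - W of the pixel graph
    i = np.concatenate([edges[:, 0], edges[:, 1]])
    j = np.concatenate([edges[:, 1], edges[:, 0]])
    W = coo_matrix((np.concatenate([wgt, wgt]), (i, j)), shape=(n, n)).tocsr()
    D = coo_matrix((np.asarray(W.sum(axis=1)).ravel(), (np.arange(n), np.arange(n))),
                   shape=(n, n)).tocsr()
    L = D - W
    labels = seeds.ravel()
    seeded = labels > 0
    Lu = L[~seeded][:, ~seeded]               # unseeded block of the Laplacian
    B = L[~seeded][:, seeded]                 # coupling to the seeded nodes
    label_values = np.unique(labels[seeded])
    # one linear solve per label: probability that a walker reaches that label first
    probs = np.column_stack([spsolve(Lu, -B @ (labels[seeded] == k).astype(float))
                             for k in label_values])
    out = labels.copy()
    out[~seeded] = label_values[np.argmax(probs, axis=1)]
    return out.reshape(h, w)

# toy usage: two homogeneous regions with one seed each
img = np.zeros((32, 32)); img[:, 16:] = 1.0
seeds = np.zeros((32, 32), dtype=int); seeds[16, 2] = 1; seeds[16, 30] = 2
print(np.unique(random_walker_2d(img, seeds), return_counts=True))
```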

    A Survey of GPU-Based Large-Scale Volume Visualization

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks: the current subset of data that is minimally required to produce an output image of the desired display resolution. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.
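
    As a back-of-the-envelope illustration of the working-set notion (not any particular system from the survey), the sketch below picks, for each visible brick, the coarsest resolution level whose voxels still project to at most about one pixel, and gathers those choices into the working set. The pinhole camera model, brick list and thresholds are assumptions.

```python
import math

def choose_lod(distance, voxel_size, fov_y, viewport_h, max_level):
    """Coarsest mip level whose voxels still project to <= ~1 pixel at this distance."""
    pixels_per_unit = viewport_h / (2.0 * distance * math.tan(0.5 * fov_y))
    level = 0
    while level < max_level and voxel_size * (2 ** (level + 1)) * pixels_per_unit <= 1.0:
        level += 1
    return level

def working_set(bricks, cam_pos, fov_y=math.radians(45), viewport_h=1080,
                voxel_size=1.0, max_level=4):
    """bricks: list of (center_xyz, visible) pairs; returns [(brick_index, level)]."""
    ws = []
    for i, (center, visible) in enumerate(bricks):
        if not visible:                       # e.g. frustum-culled or fully occluded bricks
            continue
        d = math.dist(cam_pos, center)
        ws.append((i, choose_lod(d, voxel_size, fov_y, viewport_h, max_level)))
    return ws

# toy usage: a near brick, a very distant brick, and an occluded one
bricks = [((0, 0, 100), True), ((0, 0, 50000), True), ((0, 0, 5000), False)]
print(working_set(bricks, cam_pos=(0, 0, 0)))
```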