36 research outputs found

    A Framework for Dynamic Terrain with Application in Off-road Ground Vehicle Simulations

    Get PDF
    The dissertation develops a framework for the visualization of dynamic terrains for use in interactive real-time 3D systems. Terrain visualization techniques may be classified as either static or dynamic. Static terrain solutions simulate rigid surface types exclusively, whereas dynamic solutions can also represent non-rigid surfaces. Systems that employ a static terrain approach lack realism due to their rigid nature. Disregarding the accurate representation of terrain surface interaction is usually rationalized by the inherent difficulty of providing run-time dynamism. Nonetheless, dynamic terrain systems are the more correct solution because they allow the terrain database to be modified at run-time for the purpose of deforming the surface. Many established techniques in terrain visualization rely on invalid assumptions and weak computational models that hinder the use of dynamic terrain. Moreover, many existing techniques do not exploit the capabilities offered by current computer hardware. In this research, we present a component framework for terrain visualization that is useful in research, entertainment, and simulation systems. In addition, we present a novel method for deforming the terrain that can be used in real-time, interactive systems. The development of a component framework unifies disparate works under a single architecture. The high-level nature of the framework makes it flexible and adaptable for developing a variety of systems, independent of the static or dynamic nature of the solution. Currently, there are only a handful of documented deformation techniques and, in particular, none make explicit use of graphics hardware. The approach developed in this research offloads extra work to the graphics processing unit in an effort to alleviate the overhead associated with deforming the terrain. Off-road ground vehicle simulation is used as an application domain to demonstrate the practical nature of the framework and the deformation technique. In order to realistically simulate terrain surface interaction with the vehicle, the solution balances visual fidelity and speed. Accurately depicting terrain surface interaction in off-road ground vehicle simulations improves visual realism, thereby increasing the significance and worth of the application. Systems in academia, government, and commercial institutes can make use of the research findings to achieve the real-time display of interactive terrain surfaces.
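
    The abstract does not spell out the deformation algorithm itself, so the following is only a minimal sketch of the kind of height-field deformation that could be offloaded to the GPU as per-vertex displacement: a hypothetical wheel footprint depresses a regular-grid heightmap. All names and parameters (deform_heightmap, radius, depth) are illustrative assumptions, not taken from the dissertation.

        import numpy as np

        def deform_heightmap(heights, center, radius, depth):
            """Depress a regular-grid heightmap inside a circular footprint.

            A Gaussian-shaped rut is subtracted around `center`; this kind of
            per-vertex displacement is the sort of work a vertex shader could
            take over from the CPU.
            """
            rows, cols = heights.shape
            y, x = np.mgrid[0:rows, 0:cols]
            dist2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
            rut = depth * np.exp(-dist2 / (2.0 * radius ** 2))
            return heights - rut

        # Example: drag a hypothetical wheel footprint across a flat 64x64 patch.
        terrain = np.zeros((64, 64))
        for step in range(10, 50, 4):
            terrain = deform_heightmap(terrain, center=(step, 32), radius=3.0, depth=0.05)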

    Interactive 3D video editing

    Get PDF
    We present a generic and versatile framework for interactive editing of 3D video footage. Our framework combines the advantages of conventional 2D video editing with the power of more advanced, depth-enhanced 3D video streams. Our editor takes 3D video as input and writes both 2D and 3D video formats as output. Its underlying core data structure is a novel 4D spatio-temporal representation which we call the video hypervolume. Conceptually, the processing loop comprises three fundamental operators: slicing, selection, and editing. The slicing operator allows users to visualize arbitrary hyperslices of the 4D data set. The selection operator labels subsets of the footage for spatio-temporal editing; it includes a 4D graph-cut based algorithm for object selection. The actual editing operators include cut & paste, affine transformations, and compositing with other media, such as images and 2D video. For high-quality rendering, we employ EWA splatting with view-dependent texturing and boundary matting. We demonstrate the applicability of our methods to post-production of 3D video.
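
    As a rough illustration of the hypervolume idea, the sketch below stores 3D video as a 4D (x, y, z, t) array and extracts axis-aligned hyperslices; the paper's slicing operator supports arbitrary hyperslices, and the function name and array layout used here are assumptions for illustration only.

        import numpy as np

        def hyperslice(hypervolume, axis, index):
            """Extract an axis-aligned 3D slice from a 4D (x, y, z, t) hypervolume.

            The paper's slicing operator handles arbitrary hyperslices; only the
            axis-aligned case is shown here for brevity.
            """
            return np.take(hypervolume, index, axis=axis)

        # Example: a tiny synthetic hypervolume, 16^3 spatial samples over 8 frames.
        video = np.random.rand(16, 16, 16, 8)
        frame_at_t3 = hyperslice(video, axis=3, index=3)  # one spatial volume
        depth_plane = hyperslice(video, axis=2, index=8)  # one depth plane over time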

    Particle Systems for Efficient and Accurate High-Order Finite Element Visualization

    Full text link

    Surface modeling and rendering with line segments

    Get PDF
    Master's thesis (Master of Science)

    Neural Radiance Fields: Past, Present, and Future

    Full text link
    Aspects such as modeling and interpreting 3D environments and surroundings have enticed humans to progress their research in 3D Computer Vision, Computer Graphics, and Machine Learning. The paper on NeRFs (Neural Radiance Fields) by Mildenhall et al. led to a boom in Computer Graphics, Robotics, and Computer Vision, and the possible scope of high-resolution, low-storage Augmented Reality and Virtual Reality-based 3D models has gained traction from researchers, with more than 1000 preprints related to NeRFs published. This paper serves as a bridge for people starting to study these fields by building on the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics up to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world. In doing so, this survey categorizes all the NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications.
    Comment: 413 pages, 9 figures, 277 citations
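
    For readers new to the topic, the core rendering step behind NeRFs is the volume-rendering quadrature of Mildenhall et al.: C = sum_i T_i (1 - exp(-sigma_i delta_i)) c_i. The sketch below implements that standard formula; the variable names are chosen here for illustration rather than taken from any particular implementation.

        import numpy as np

        def composite_ray(sigmas, colors, deltas):
            """Quadrature of the NeRF volume-rendering integral along one ray.

            sigmas: (N,) densities at the samples, colors: (N, 3) emitted RGB,
            deltas: (N,) spacing between adjacent samples.  Returns
            C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i.
            """
            alpha = 1.0 - np.exp(-sigmas * deltas)
            trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + 1e-10)))
            weights = trans * alpha
            return (weights[:, None] * colors).sum(axis=0)

        # Example: 64 samples along a ray through a random toy field.
        n = 64
        ray_rgb = composite_ray(np.random.rand(n), np.random.rand(n, 3), np.full(n, 1.0 / n))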

    Interactive volume ray tracing

    Get PDF
    Volume rendering is one of the most interesting, but certainly also most demanding, topics in scientific visualization. In contrast to surface models, volumetric data represent a semi-transparent medium in a 3D field. Applications range from medical examinations and the simulation of physical processes to visual art. Most of these applications demand interactivity with respect to the viewing and visualization parameters. The ray tracing algorithm, although it inherently simulates light interaction with participating media, was always considered too slow. Instead, most researchers followed object-order algorithms better suited to graphics adapters, although such approaches often suffer from either low quality or a lack of flexibility. Another alternative is to speed up the ray tracing algorithm to make it competitive for volumetric visualization tasks. Since the advent of modern graphics adapters, research in this area has somewhat ceased, although GPU limitations such as limited graphics board memory and a tedious programming model remain a problem. The two methods discussed in this thesis are therefore purely software-based, since it seems more sensible to realize as many optimizations as possible in software before porting algorithms to hardware. The first method is called the implicit kd-tree, a hierarchical spatial acceleration structure originally developed for iso-surface rendering of regular grid data sets; it now also supports semi-transparent rendering and time-dependent data visualization, and has been used successfully in non-volume-rendering applications. The second algorithm uses so-called Plücker coordinates, providing a fast incremental traversal for data sets consisting of tetrahedral or hexahedral primitives. Both algorithms are highly optimized to support interactive rendering of volumetric data sets and are therefore major contributions towards a flexible and interactive volume ray tracing framework.
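
    The thesis's traversers are not reproduced here, but the Plücker-coordinate machinery they build on is standard: a line through points p and q has coordinates (q - p, p x q), and the sign of the permuted inner product between a ray and a triangle's edge lines tells on which side the ray passes each edge. The sketch below is a generic illustration under these assumptions: it tests which faces of a tetrahedron a ray's supporting line crosses; a full incremental traverser would additionally use face orientation to pick the exit face.

        import numpy as np

        def pluecker(p, q):
            """Pluecker coordinates (direction, moment) of the line through p and q."""
            return q - p, np.cross(p, q)

        def side(l1, l2):
            """Permuted inner product; its sign says on which side l1 passes l2."""
            d1, m1 = l1
            d2, m2 = l2
            return np.dot(d1, m2) + np.dot(d2, m1)

        def line_crosses_triangle(origin, direction, v0, v1, v2):
            """True if the ray's supporting line pierces the triangle (all edge signs agree)."""
            ray = direction, np.cross(origin, origin + direction)
            signs = [side(ray, pluecker(a, b)) for a, b in ((v0, v1), (v1, v2), (v2, v0))]
            return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

        # Example: which faces of a unit tetrahedron does a ray's line cross?
        verts = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
        faces = [(1, 2, 3), (0, 3, 2), (0, 1, 3), (0, 2, 1)]   # face opposite each vertex
        o, d = np.array([0.1, 0.1, 0.1]), np.array([1.0, 0.2, 0.1])
        crossed = [f for f in faces if line_crosses_triangle(o, d, *(verts[i] for i in f))]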

    Incremental volume rendering using hierarchical compression

    Get PDF
    The research is based on the thesis that efficient volume rendering of datasets hosted on the Internet can be achieved on average personal workstations. We present a new algorithm here for efficient incremental rendering of volumetric datasets. The primary goal of this algorithm is to give average workstations the ability to efficiently render volume data received over relatively low bandwidth network links in such a way that rapid user feedback is maintained. Common limitations of workstation rendering of volume data include large memory overheads, the requirement for expensive rendering hardware, and the need for high-speed processing. The rendering algorithm presented here overcomes these problems by making use of the efficient Shear-Warp Factorisation method, which does not require specialised graphics hardware. However, the original Shear-Warp algorithm suffers from a high memory overhead and does not provide for incremental rendering, which is required if rapid user feedback is to be maintained. Our algorithm represents the volumetric data using a hierarchical data structure which provides for the incremental classification and rendering of volume data, exploiting the multiscale nature of the octree data structure. The algorithm reduces the memory footprint of the original Shear-Warp Factorisation algorithm by a factor of more than two, while maintaining good rendering performance. These factors make our octree algorithm more suitable for implementation on average desktop workstations for the purposes of interactive exploration of volume models over a network. This dissertation covers the theory and practice of developing the octree-based Shear-Warp algorithms, and then presents the results of extensive empirical testing. The results, using typical volume datasets, demonstrate the ability of the algorithm to achieve high rendering rates for both incremental rendering and standard rendering while reducing the runtime memory requirements.
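
    A minimal sketch of the kind of hierarchical structure described: a min-max octree over the volume lets whole subtrees be skipped during classification when their value range falls outside the opacity window of the transfer function. The class and function names, the leaf block size, and the threshold-window transfer function are assumptions for illustration, not the dissertation's actual data structure.

        import numpy as np

        class MinMaxNode:
            """Octree node storing the value range of its block of voxels."""
            def __init__(self, volume, origin, size):
                x, y, z = origin
                block = volume[x:x + size, y:y + size, z:z + size]
                self.vmin, self.vmax = float(block.min()), float(block.max())
                self.origin, self.size = origin, size
                self.children = []
                if size > 4:                       # subdivide down to 4^3 leaf blocks
                    h = size // 2
                    self.children = [MinMaxNode(volume, (x + dx, y + dy, z + dz), h)
                                     for dx in (0, h) for dy in (0, h) for dz in (0, h)]

        def classify(node, lo, hi, visible):
            """Collect leaf blocks whose value range overlaps the opacity window [lo, hi]."""
            if node.vmax < lo or node.vmin > hi:
                return                             # whole subtree is transparent: skip it
            if not node.children:
                visible.append((node.origin, node.size))
            for child in node.children:
                classify(child, lo, hi, visible)

        volume = np.random.rand(32, 32, 32)
        root = MinMaxNode(volume, (0, 0, 0), 32)
        blocks = []
        classify(root, 0.7, 1.0, blocks)           # only blocks that can contribute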

    Interactive global illumination on the CPU

    Get PDF
    Computing realistic physically-based global illumination in real time remains one of the major goals in the fields of rendering and visualisation; one that has not yet been achieved due to its inherent computational complexity. This thesis focuses on CPU-based interactive global illumination approaches with an aim to develop generalisable, hardware-agnostic algorithms. Interactive ray tracing relies on spatial and cache coherency to achieve interactive rates, which conflicts with the needs of global illumination solutions, which require a large number of incoherent secondary rays to be computed. Methods that reduce the total number of rays that need to be processed, such as selective rendering, were investigated to determine how best they can be utilised. The impact that selective rendering has on interactive ray tracing was analysed and quantified, and two novel global illumination algorithms were developed, with the structured methodology used presented as a framework. Adaptive Interleaved Sampling is a generalisable approach that combines interleaved sampling with an adaptive approach, using efficient component-specific adaptive guidance methods to drive the computation. Results of up to 11 frames per second were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of diffuse interreflections to interactive rates. This approach achieved frame rates exceeding 9 frames per second for the majority of scenes. Validation of the results for both approaches showed little perceptual difference when compared against a gold-standard path-traced image. Further research into caching led to the development of a new wait-free data access control mechanism for sharing the irradiance cache among multiple rendering threads on a shared-memory parallel system. By not serialising accesses to the shared data structure, the irradiance values were shared among all the threads without any overhead or contention when reading and writing simultaneously. This new approach achieved efficiencies between 77% and 92% for 8 threads when calculating static images and animations. This work demonstrates that, due to the flexibility of the CPU, CPU-based algorithms remain a valid and competitive choice for achieving global illumination interactively, and an alternative to the generally brute-force, GPU-centric algorithms.
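
    The sketch below is a generic illustration of combining interleaved sampling with adaptive guidance, not the thesis's Adaptive Interleaved Sampling algorithm itself: neighbouring pixels are assigned different precomputed sample sets, and a cheap per-pixel error estimate steers extra secondary rays towards difficult regions. All names and the error-estimate stand-in are assumptions.

        import numpy as np

        def interleaved_sample_sets(width, height, tile=3):
            """Assign each pixel one of tile*tile precomputed sample sets.

            Neighbouring pixels draw from different sets, so filtering over a
            pixel's neighbourhood approximates using all sets at once.
            """
            x = np.arange(width)[None, :] % tile
            y = np.arange(height)[:, None] % tile
            return y * tile + x                    # sample-set index per pixel

        def adaptive_budget(error_estimate, base=4, extra=12):
            """Spend more secondary rays where a cheap per-pixel error estimate is high."""
            e = (error_estimate - error_estimate.min()) / (np.ptp(error_estimate) + 1e-8)
            return base + np.round(extra * e).astype(int)

        error = np.random.rand(32, 32)             # stand-in for a real error estimate
        sets = interleaved_sample_sets(32, 32)
        rays_per_pixel = adaptive_budget(error)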

    Volumetric particle modeling

    Get PDF
    This dissertation presents a robust method of modeling objects and forces for computer animation. Within this method, objects and forces are represented as particles. As in most modeling systems, the movement of objects is driven by physically based forces. The use of particles, however, allows more artistically motivated behavior to be achieved and also allows the modeling of heterogeneous objects and objects in different state phases: solid, liquid, or gas. By using invisible particles to propagate forces through the modeling environment, complex behavior is achieved through the interaction of relatively simple components. In sum, 'macroscopic' behavior emerges from 'microscopic' modeling. We present a newly developed modeling framework expanding on related work. This framework allows objects and forces to be modeled using particle representations and provides the details on how objects are created, how they interact, and how they may be displayed. We present examples to demonstrate the viability and robustness of the developed method of modeling; they illustrate the breaking and fracturing of solids, the interaction of objects in different phase states, and the achievement of a reasonable balance between artistic and physically based behaviors.
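
    As a minimal sketch of the particle idea, and not the dissertation's actual framework, the code below advances visible material particles under radial forces exerted by invisible force particles, using explicit Euler integration; the force law, names, and parameters are illustrative assumptions.

        import numpy as np

        def step(positions, velocities, force_particles, dt=0.01, strength=1.0):
            """Advance material particles under radial forces from invisible force particles.

            Each force particle pushes (or, for negative strength, pulls) material
            particles along the line between them with inverse-square falloff.
            """
            forces = np.zeros_like(positions)
            for fp in force_particles:
                delta = positions - fp                       # vectors from the force particle
                dist2 = (delta ** 2).sum(axis=1, keepdims=True) + 1e-6
                forces += strength * delta / dist2 ** 1.5    # unit direction / distance^2
            velocities = velocities + dt * forces
            positions = positions + dt * velocities
            return positions, velocities

        # Example: 100 material particles disturbed by one invisible force particle.
        pos, vel = np.random.rand(100, 3), np.zeros((100, 3))
        pos, vel = step(pos, vel, force_particles=[np.array([0.5, 0.5, 0.5])])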

    Flexible Attenuation Fields: Tomographic Reconstruction From Heterogeneous Datasets

    Get PDF
    Traditional reconstruction methods for X-ray computed tomography (CT) are highly constrained in the variety of input datasets they admit. Many of the imaging settings -- the incident energy, field-of-view, effective resolution -- remain fixed across projection images, and the only real variance is in the detector's position and orientation with respect to the scene. In contrast, methods for 3D reconstruction of natural scenes are extremely flexible to the geometric and photometric properties of the input datasets, readily accepting and benefiting from images captured under varying lighting conditions, with different cameras, and at disparate points in time and space. Extending CT to support similar degrees of flexibility would significantly enhance what can be learned from tomographic datasets. We propose that traditionally complicated or time-consuming tomographic tasks, such as multi-resolution and multi-energy analysis, can be more readily achieved with a reconstruction framework which explicitly accepts datasets with varied imaging settings. This work presents a CT reconstruction framework specifically designed for datasets with heterogeneous capture properties, which we call Flexible Attenuation Fields (FlexAF). Built on differentiable ray tracing and continuous neural volumes, FlexAF accepts X-ray images captured from any position and orientation in the world coordinate frame, including images which differ in size, resolution, field-of-view, and photometric settings. This method produces reconstructions for regular CT scans which are comparable to those produced by filtered backprojection, demonstrating that the additional flexibility does not fundamentally hinder the ability to reconstruct high-quality volumes. Our experiments test the expanded capabilities of FlexAF for addressing challenging reconstruction tasks, including automatic camera calibration and reconstruction of multi-resolution and multi-energy volumes.
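
    The physical forward model underlying attenuation reconstruction is the Beer-Lambert line integral, I / I0 = exp(-integral of mu ds). The sketch below discretises it along a ray against an arbitrary attenuation field standing in for a continuous neural volume, and is differentiable whenever the field is; it is a generic CT forward projector under these assumptions, not FlexAF's implementation.

        import numpy as np

        def forward_project(mu, ray_origin, ray_dir, n_samples=128, ray_length=1.0):
            """Beer-Lambert forward model: I / I0 = exp(-sum_i mu(x_i) * delta).

            `mu` is any callable attenuation field over 3D points (a stand-in for
            a continuous neural volume); the quadrature is differentiable if mu is.
            """
            delta = ray_length / n_samples
            ts = (np.arange(n_samples) + 0.5) * delta
            points = ray_origin[None, :] + ts[:, None] * ray_dir[None, :]
            return np.exp(-mu(points).sum() * delta)

        # Example: transmission through a uniform ball of attenuation 2.0 in the unit cube.
        mu_ball = lambda p: 2.0 * (((p - 0.5) ** 2).sum(axis=1) < 0.16)
        transmission = forward_project(mu_ball, np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]))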