
    Volumetric Cloud Rendering: An Animation of Clouds

    This paper demonstrates a production workflow for a volumetric-rendering-based short animation about clouds. The animation is based on the image of a giant fish swimming in the sky from Zhuangzi's philosophical story. The algorithms and implementation for modeling and rendering the clouds are also presented. A renderer was developed that uses the OpenVDB library for data storage, fast retrieval, and grid manipulation. A user-friendly pipeline was also developed for cloud modeling and rendering, using Python and XML to adjust rendering parameters. The pipeline uses Maya to build the rough cloud model and Houdini to compute the interior light points; final compositing was done in Nuke. Several MEL and Python scripts retrieve camera and light information from Maya and Houdini, facilitating the production process.
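    The abstract mentions that rendering parameters were adjusted through Python and XML. A minimal sketch of what such a parameter loader might look like is below; the element and parameter names are illustrative assumptions, not taken from the paper's actual pipeline files.

```python
import xml.etree.ElementTree as ET

# Hypothetical settings block; the parameter names (step_size,
# light_samples, density_scale) are invented for illustration.
SETTINGS = """\
<render>
  <param name="step_size" value="0.5"/>
  <param name="light_samples" value="32"/>
  <param name="density_scale" value="1.2"/>
</render>
"""

def load_render_params(xml_text):
    """Parse renderer parameters out of an XML settings document."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): float(p.get("value")) for p in root.iter("param")}

params = load_render_params(SETTINGS)
```

    Keeping the parameters in a plain XML file lets both the Python tooling and the artists edit render settings without touching renderer code.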

    Towards real-time simulation of the sidescan sonar imaging process

    This paper describes the functional theory and design of a modular simulator developed to generate physically representative spatio-temporal sidescan sonar echo data from a fractal model of the seafloor topography. The main contribution of this paper is in significantly reducing the computational bottleneck inherent in existing simulation models due to the size and resolution of the complex seafloor models required for acoustic reverberation modelling. Discovery of the individual faces within the footprint of the acoustic beam at each ping is considerably accelerated by adapting and integrating an optimised mesh refinement scheme originally intended for interactive rendering of large-scale complex surfaces described by polygonal meshes. Operational features of the simulator permit direct visualisation of the sonar image formed from successive echo lines, and synthetic images generated during simulation are presented.
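    The seafloor here is generated from a fractal model. As a companion sketch, a 1-D midpoint-displacement profile is a standard fractal terrain construction; the paper's actual seafloor model may differ, so this is illustrative only.

```python
import random

def fractal_profile(n_levels, roughness=0.5, seed=0):
    """1-D fractal height profile via midpoint displacement.

    A common fractal terrain construction (illustrative; not
    necessarily the model used in the paper).
    """
    rng = random.Random(seed)
    heights = [0.0, 0.0]
    amplitude = 1.0
    for _ in range(n_levels):
        refined = []
        for left, right in zip(heights, heights[1:]):
            refined.append(left)
            # Displace each midpoint by a random offset whose range
            # shrinks at every refinement level, giving fractal roughness.
            refined.append((left + right) / 2 + rng.uniform(-amplitude, amplitude))
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness
    return heights
```

    Each refinement level doubles the resolution, so a profile refined n times has 2^n + 1 samples; the same idea extends to 2-D via the diamond-square scheme.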

    Real-time Lattice Boltzmann shallow waters method for breaking wave simulations

    We present a new approach for the simulation of surface-based fluids based on a hybrid formulation of the Lattice Boltzmann Method for Shallow Waters and particle systems. The modified LBM can handle arbitrary underlying terrain conditions and arbitrary fluid depth. It also introduces a novel method for tracking dry-wet regions and moving boundaries. Dynamic rigid bodies are also included in our simulations using a two-way coupling. Certain features of the simulation that the LBM cannot handle because of its heightfield nature, such as breaking waves, are detected and automatically turned into splash particles. Here we use a ballistic particle system, but our hybrid method can handle more complex systems such as SPH. Both the LBM and the particle systems are implemented in CUDA, although dynamic rigid bodies are simulated on the CPU. We show the effectiveness of our method with various examples, which achieve real-time performance on consumer-level hardware.
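    A central step above is detecting where the heightfield representation breaks down so those cells can spawn splash particles. A minimal sketch of such a detector, using a simple slope threshold (the exact criterion is an assumption, not the paper's test), could be:

```python
def breaking_cells(heights, dx, max_slope):
    """Flag heightfield cells whose free-surface slope exceeds a
    threshold; in a hybrid scheme like the one described, such cells
    would hand excess fluid to a splash-particle system.
    """
    flagged = []
    for i in range(len(heights) - 1):
        # Finite-difference slope between adjacent water columns.
        slope = abs(heights[i + 1] - heights[i]) / dx
        if slope > max_slope:
            flagged.append(i)
    return flagged
```

    In the real simulator this check would run per frame on the GPU alongside the LBM update, with the flagged fluid reinjected into the heightfield when the particles fall back onto the surface.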

    Interactive Visual Analytics for Large-scale Particle Simulations

    Particle-based model simulations are widely used in scientific visualization. In cosmology, particles are used to simulate the evolution of dark matter in the universe. Clusters of particles that have special statistical properties are called halos. From a visualization point of view, halos are clusters of particles, each having a position, mass, and velocity in three-dimensional space, and they can be represented as point clouds containing various structures of geometric interest such as filaments, membranes, satellites of points, clusters, and clusters of clusters. The thesis investigates methods for interacting with large-scale datasets represented as point clouds. The work mostly aims at the interactive visualization of cosmological simulations based on large particle systems. The study consists of three components: a) two human-factors experiments into the perceptual factors that make it possible to see features in point clouds; b) the design and implementation of a user interface making it possible to rapidly navigate through and visualize features in the point cloud; and c) software development and integration to support visualization.
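    Halos in cosmological particle data are commonly defined with a friends-of-friends (FoF) grouping under a linking length. The thesis itself focuses on visualizing such clusters rather than finding them, so the naive O(n^2) finder below is only an illustrative companion.

```python
def friends_of_friends(points, linking_length):
    """Group 3-D points into clusters: any two points closer than the
    linking length belong to the same group (naive O(n^2) union-find).
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find root lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= linking_length ** 2:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

    Production halo finders replace the double loop with a spatial grid or k-d tree so the linking step scales to the billions of particles typical of cosmological runs.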

    Application performance evaluation of the HTMT architecture.


    Advanced Underwater Image Restoration in Complex Illumination Conditions

    Underwater image restoration has been a challenging problem for decades, since the advent of underwater photography. Most solutions focus on shallow-water scenarios, where the scene is uniformly illuminated by sunlight. However, the vast majority of uncharted underwater terrain lies beyond 200 meters depth, where natural light is scarce and artificial illumination is needed. In such cases, light sources co-moving with the camera dynamically change the scene appearance, which makes shallow-water restoration methods inadequate. In particular, for multi-light-source systems (nowadays composed of dozens of LEDs), calibrating each light is time-consuming, error-prone, and tedious, and we observe that only the integrated illumination within the viewing volume of the camera is critical, rather than the individual light sources. The key idea of this paper is therefore to exploit the appearance changes of objects or the seafloor as they traverse the viewing frustum of the camera. Through new constraints assuming Lambertian surfaces, corresponding image pixels constrain the light field in front of the camera, and for each voxel a signal factor and a backscatter value are stored in a volumetric grid. This grid enables very efficient image restoration for camera-light platforms, which facilitates consistently texturing large 3D models and maps that would otherwise be dominated by lighting and medium artifacts. To validate the effectiveness of our approach, we conducted extensive experiments on simulated and real-world datasets. The results demonstrate the robustness of our approach in restoring the true albedo of objects while mitigating the influence of lighting and medium effects. Furthermore, we demonstrate that our approach can be readily extended to other scenarios, including in-air imaging with artificial illumination and other similar cases.
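    The per-voxel signal factor and backscatter value suggest an image-formation model of the form observed = albedo * signal + backscatter, which can be inverted per pixel. The model form here is an assumption consistent with the abstract's description, not the paper's exact equations.

```python
def restore(observed, signal_factor, backscatter):
    """Invert an assumed per-voxel image-formation model:
        observed = albedo * signal_factor + backscatter
    so that albedo = (observed - backscatter) / signal_factor.
    """
    return (observed - backscatter) / signal_factor

def restore_image(observed, signal, backscatter):
    """Apply the per-voxel correction across an image stored as nested
    lists; illumination varies spatially, hence per-pixel grids.
    """
    return [[restore(o, s, b) for o, s, b in zip(ro, rs, rb)]
            for ro, rs, rb in zip(observed, signal, backscatter)]
```

    Because the grid stores integrated illumination rather than individual lights, the same lookup serves a platform with any number of LEDs, which is what removes the per-light calibration burden.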