
    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments remains one of the major challenges in computer graphics. New data acquisition techniques such as 3D modeling and scanning have drastically increased the complexity of available models and the demand for higher display resolutions in recent years. Most existing acceleration techniques that render on a single GPU suffer from the limited GPU memory budget, time-consuming sequential execution, and finite display resolution. Recently, researchers have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a large tiled display configuration). However, a multi-GPU workstation may not always deliver the desired rendering performance because of imbalanced rendering workloads among GPUs and the overhead of inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments that supports high-performance, high-quality rendering of both static and dynamic 3D scenes. A novel parallel load-balancing algorithm based on a screen-partitioning strategy dynamically balances the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by a novel frame-exchanging algorithm that transfers only a small number of image pixels rather than chunks of 3D primitives. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to further accelerate rendering.
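    The screen-partitioning idea lends itself to a compact illustration. Below is a minimal Python sketch of one plausible variant, assuming per-column triangle counts (e.g., estimated from the previous frame) are available; it splits the screen into contiguous vertical strips of roughly equal triangle load using prefix sums. The function and its inputs are hypothetical, not taken from the dissertation.

```python
import numpy as np

def balance_screen_partitions(tri_count_per_column: np.ndarray, num_gpus: int):
    """Split screen columns into contiguous strips so each GPU renders
    roughly the same number of triangles.

    tri_count_per_column: triangles overlapping each pixel column,
    e.g. estimated from the previous frame (hypothetical input).
    """
    total = tri_count_per_column.sum()
    prefix = np.cumsum(tri_count_per_column)
    bounds = [0]
    for g in range(1, num_gpus):
        # First column where accumulated load reaches g/num_gpus of the total.
        target = total * g / num_gpus
        bounds.append(int(np.searchsorted(prefix, target)))
    bounds.append(len(tri_count_per_column))
    # GPU g renders columns [bounds[g], bounds[g+1]).
    return list(zip(bounds[:-1], bounds[1:]))

# Example: 4 GPUs, load concentrated on the left of a 1920-column screen.
load = np.concatenate([np.full(480, 100), np.full(1440, 10)])
print(balance_screen_partitions(load, 4))
```

    Rebalancing each frame from the previous frame's counts keeps the strips tracking the scene as the camera moves, at the cost of occasional transient imbalance after abrupt view changes.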

    Two-dimensional beam tracing from visibility diagrams for real-time acoustic rendering

    We present an extension of the fast beam-tracing method of Antonacci et al. (2008) for simulating acoustic propagation in reverberant environments that accounts for diffraction and diffusion. More specifically, we show that visibility maps are suitable for modeling propagation phenomena more complex than specular reflections, and that the beam-tree lookup for path tracing can be performed entirely on visibility maps as well. We then apply the method to two different cases: channel (point-to-point) rendering over a headset, and rendering of a wave field using arrays of loudspeakers. Finally, we provide experimental results and comparisons with real data that show the effectiveness and accuracy of the approach in simulating the sound field in an environment.
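    The full visibility-diagram beam tracer is beyond a short excerpt, but the specular building block it generalizes, the image-source construction, is easy to show. The Python sketch below mirrors a 2D source across a wall and derives the first-order reflection delay; it is a generic textbook construction, not the authors' implementation, and it omits the visibility check that the beam tracer handles.

```python
import math

def mirror_across_wall(src, p0, p1):
    """Mirror a 2D source across the infinite line through wall endpoints p0, p1."""
    (sx, sy), (x0, y0), (x1, y1) = src, p0, p1
    dx, dy = x1 - x0, y1 - y0
    # Projection of (src - p0) onto the wall direction.
    t = ((sx - x0) * dx + (sy - y0) * dy) / (dx * dx + dy * dy)
    fx, fy = x0 + t * dx, y0 + t * dy      # foot of the perpendicular
    return (2 * fx - sx, 2 * fy - sy)      # image source

def first_order_delay(src, rcv, p0, p1, c=343.0):
    """Propagation delay (s) of the specular reflection off wall p0-p1.
    Note: does not verify the reflection point lies on the segment."""
    ix, iy = mirror_across_wall(src, p0, p1)
    d = math.hypot(rcv[0] - ix, rcv[1] - iy)  # image-to-receiver distance
    return d / c

# Source and receiver 1 m above a floor along y = 0, 4 m apart.
print(first_order_delay((0.0, 1.0), (4.0, 1.0), (0.0, 0.0), (10.0, 0.0)))
```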

    Human Performance Modeling and Rendering via Neural Animated Mesh

    We have recently seen tremendous progress in neural advances for photo-real human modeling and rendering. However, it is still challenging to integrate them into an existing mesh-based pipeline for downstream applications. In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos. Our core intuition is to bridge the traditional animated-mesh workflow with a new class of highly efficient neural techniques. We first introduce a neural surface reconstructor for high-quality surface generation in minutes, which marries implicit volumetric rendering of a truncated signed distance field (TSDF) with multi-resolution hash encoding. We further propose a hybrid neural tracker that generates animated meshes by combining explicit non-rigid tracking with implicit dynamic deformation in a self-supervised framework: the former provides coarse warping back into the canonical space, while the latter predicts residual displacements using the same 4D hash encoding as our reconstructor. We then discuss rendering schemes using the obtained animated meshes, ranging from dynamic texturing to lumigraph rendering under various bandwidth settings. To strike a balance between quality and bandwidth, we propose a hierarchical solution that first renders six virtual views covering the performer and then performs occlusion-aware neural texture blending. We demonstrate the efficacy of our approach in a variety of mesh-based applications and photo-realistic free-view experiences on various platforms, e.g., inserting virtual human performances into real environments through mobile AR or immersively watching talent shows with VR headsets. (Comment: 18 pages, 17 figures.)
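    For readers unfamiliar with the multi-resolution hash encoding the reconstructor builds on, the following toy Python sketch shows the core lookup: quantize a 3D point at several grid resolutions, spatially hash each cell into a learned feature table, and concatenate the per-level features. It is simplified to nearest-corner lookup (real encoders interpolate corner features), and all sizes here are illustrative, not the paper's.

```python
import numpy as np

# Per-axis hashing primes, as in common spatial-hash implementations.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_encode(xyz, tables, base_res=16, growth=2.0):
    """xyz: (N, 3) points in [0, 1)^3.
    tables: list of (table_size, feat_dim) learned feature arrays, one per level.
    Returns (N, levels * feat_dim) concatenated features."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        corner = np.floor(xyz * res).astype(np.uint64)       # grid cell per point
        h = np.bitwise_xor.reduce(corner * PRIMES, axis=1)   # spatial hash
        feats.append(table[h % np.uint64(len(table))])       # (N, feat_dim)
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
tables = [rng.normal(size=(2**14, 2)).astype(np.float32) for _ in range(4)]
pts = rng.random((5, 3))
print(hash_encode(pts, tables).shape)  # (5, 8): 4 levels x 2 features
```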

    Shaping the Future of Animation towards Role of 3D Simulation Technology in Animation Film and Television

    The application of 3D simulation technology has revolutionized the field of animation film and television art, providing new possibilities and creative opportunities for visual storytelling. This research explores the application of 3D simulation technology in animation film and television art. It examines how 3D simulation enhances the creation of realistic characters, environments, and special effects, contributing to immersive and captivating storytelling experiences. The research also investigates the technical aspects of integrating 3D cloud simulation technology into the animation production pipeline, including modeling, texturing, rigging, and animation techniques. The paper then explores the application of optimization algorithms in cloud-based 3D environments, focusing on improving the efficiency and performance of 3D simulations. Black Widow Optimization and Spider Monkey Optimization can be used to optimize the placement and distribution of 3D assets in cloud storage systems, improving data access and retrieval times; they can also optimize the scheduling of rendering tasks in cloud-based rendering pipelines, leading to more efficient and cost-effective rendering. Integrating 3D cloud environments with optimization algorithms enables real-time optimization and adaptation of 3D simulations, allowing dynamic adjustment of simulation parameters as conditions change and thereby improving accuracy and responsiveness. Moreover, the research examines the impact of 3D cloud simulation technology on the artistic process, including how it influences artistic vision, aesthetics, and narrative possibilities in animation film and television. The findings highlight the advantages and challenges of using 3D simulation technology in animation, shedding light on its potential future developments and its role in shaping the future of animation film and television art.
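    The abstract leaves the optimizers at a high level; to make the render-scheduling problem they target concrete, here is a simple greedy longest-processing-time baseline in Python that assigns render tasks to cloud nodes to reduce the makespan. Swarm methods such as Spider Monkey Optimization would instead search over such assignments; this baseline is purely illustrative and not from the paper.

```python
import heapq

def schedule_lpt(task_costs, num_nodes):
    """Greedy longest-processing-time assignment of render tasks to nodes.
    Returns per-node task lists and the resulting makespan (max node load)."""
    # Min-heap of (current_load, node_id): always give the next-largest
    # task to the least-loaded node.
    heap = [(0.0, n) for n in range(num_nodes)]
    assignment = [[] for _ in range(num_nodes)]
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, node = heapq.heappop(heap)
        assignment[node].append(task)
        heapq.heappush(heap, (load + cost, node))
    return assignment, max(load for load, _ in heap)

# Frame render-time estimates (s) for 8 shots across 3 render nodes.
assignment, makespan = schedule_lpt([40, 35, 20, 15, 10, 8, 5, 3], 3)
print(assignment, makespan)
```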

    Spatial Sound Rendering – A Survey

    Simulating sound propagation and audio rendering can improve the sense of realism and immersion in both complex acoustic environments and dynamic virtual scenes. In studies of sound auralization, the focus has traditionally been on room acoustics modeling, but most of the same methods also apply to the construction of virtual environments such as those developed for computer gaming, cognitive research, and simulated training scenarios. This paper reviews state-of-the-art techniques based on acoustic principles that apply not only to real rooms but also to 3D virtual environments. The paper also highlights the need to expand the field of immersive sound to web-based browsing environments because, despite the interest and the many benefits, few developments seem to have taken place in this context. Moreover, the paper lists the most effective algorithms used for modeling spatial sound propagation and reports their advantages and disadvantages. Finally, the paper emphasizes the evaluation of the surveyed works.
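    As a concrete anchor for the propagation models such a survey covers, the Python sketch below computes two of the basic cues most spatial renderers start from: inverse-distance attenuation and the interaural time difference from Woodworth's spherical-head formula. These are standard textbook relations, not any particular surveyed system.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
HEAD_RADIUS = 0.0875     # m, average adult head

def distance_gain(distance_m, ref_m=1.0):
    """Inverse-distance (1/r) attenuation relative to a reference distance."""
    return ref_m / max(distance_m, ref_m)

def itd_woodworth(azimuth_rad):
    """Interaural time difference via Woodworth's spherical-head formula:
    ITD = (a / c) * (theta + sin theta), azimuth in [-pi/2, pi/2]."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# A source 5 m away, 45 degrees to the right:
# ~0.2 linear gain and ~0.38 ms left-right delay.
print(distance_gain(5.0), itd_woodworth(math.radians(45.0)))
```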

    X3D Earth Terrain-Tile Production Chain for Georeferenced Simulation

    Web3D '09: Proceedings of the 14th International Conference on 3D Web Technology, June 2009, pages 159–166. The article of record as published may be found at https://doi.org/10.1145/1559764.1559789.

    Broad needs for digital models of real environments, such as 3D terrain or cyber cities, are increasing. Many applications related to modeling and simulation require virtual environments constructed from real-world geospatial information in order to guarantee relevance and accuracy in the simulation. The most fundamental data for building virtual environments, terrain elevation and orthogonal imagery, is typically acquired using optical sensors mounted on satellites or airplanes. Providing interoperable and reusable digital models in 3D is important for promoting practical applications of high-resolution airborne imagery. This paper presents research results on virtual-environment representations of geospatial information, especially the 3D shape and appearance of virtual terrain. It describes a framework for constructing real-time 3D models of large terrain from high-resolution satellite imagery; the approach is also suitable for underwater bathymetry. The Extensible 3D Graphics (X3D) Geospatial Component standard is applied to produce X3D Earth models with global scope. Efficient rendering, network retrieval, and data caching/removal must all be optimized simultaneously, across servers, networks, and clients, in order to accomplish these goals properly. Details of this standards-based approach to providing an infrastructure for real-time 3D simulation that merges high-resolution geometry and imagery are also presented. This work facilitates open interchange and interoperability across diverse simulation systems and is independently usable by governments, industry, scientists, and the general public.
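    Terrain-tile production chains of this kind rest on quadtree indexing arithmetic. For concreteness, here is the common Web-Mercator "slippy map" variant of that arithmetic in Python; the actual X3D Earth tiling scheme may differ in projection and axis conventions.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Map a WGS84 lat/lon to (x, y) tile indices at a quadtree zoom level,
    using the Web-Mercator 'slippy map' convention: 2**zoom tiles per axis,
    x increasing eastward, y increasing southward."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

# Monterey, CA at zoom 10.
print(latlon_to_tile(36.6, -121.9, 10))
```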

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. Its key contributions are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it can combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicles in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicles. While a vehicle flies in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams advancing in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. (Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision describes new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.)
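    The vehicle-in-the-loop pattern described above reduces to a simple sensing loop: read the real vehicle's pose from motion capture, render the synthetic exteroceptive sensors at that pose, and feed the result to the algorithm under test. The Python sketch below illustrates the structure; the mocap, renderer, and perception interfaces are hypothetical stand-ins, not FlightGoggles' actual API.

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple      # (x, y, z) in meters, world frame
    orientation: tuple   # quaternion (w, x, y, z)

class MocapStub:
    """Stand-in for a motion-capture client (hypothetical interface)."""
    def latest_pose(self):
        return Pose((0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0))

class RendererStub:
    """Stand-in for the photorealistic renderer (hypothetical interface)."""
    def render_camera(self, pose):
        return f"<frame rendered at {pose.position}>"

def vehicle_in_the_loop(mocap, renderer, on_image, rate_hz=60.0, steps=3):
    """Core vehicle-in-the-loop pattern: real dynamics come from the tracked
    vehicle (in motio); exteroceptive sensing is rendered at its pose (in silico)."""
    period = 1.0 / rate_hz
    for _ in range(steps):
        t0 = time.monotonic()
        pose = mocap.latest_pose()            # real vehicle state
        image = renderer.render_camera(pose)  # synthetic camera frame
        on_image(image, pose)                 # perception algorithm under test
        # Hold the loop at the camera rate.
        time.sleep(max(0.0, period - (time.monotonic() - t0)))

vehicle_in_the_loop(MocapStub(), RendererStub(),
                    lambda img, pose: print(img, pose.position))
```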