Deformable Neural Radiance Fields using RGB and Event Cameras
Modeling Neural Radiance Fields for fast-moving deformable objects from
visual data alone is a challenging problem. A major difficulty arises from the
combination of large deformations and low acquisition rates. To address this problem, we propose
to use event cameras that offer very fast acquisition of visual change in an
asynchronous manner. In this work, we develop a novel method to model the
deformable neural radiance fields using RGB and event cameras. The proposed
method uses the asynchronous stream of events and calibrated sparse RGB frames.
In our setup, the camera poses at the individual events, which are required to
integrate the events into the radiance field, remain unknown. Our method jointly
optimizes these poses and the radiance field. It does so efficiently by leveraging
the full collection of events at once and by actively sampling events during learning.
Experiments conducted on both realistically rendered graphics and real-world
datasets demonstrate a significant benefit of the proposed method over the
state-of-the-art and the compared baseline.
This shows a promising direction for modeling deformable neural radiance
fields in real-world dynamic scenes.
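The joint supervision from events and sparse RGB frames described above might be sketched roughly as follows. Everything here (the `render` stand-in, the contrast threshold, the event layout) is an illustrative assumption, not the paper's actual interface:

```python
import numpy as np

def accumulate_events(events, shape, t0, t1):
    """Sum signed event polarities per pixel over [t0, t1).

    `events` is an (N, 4) array of (x, y, t, polarity) with polarity in
    {-1, +1}; the result approximates the log-intensity change per pixel.
    """
    img = np.zeros(shape)
    mask = (events[:, 2] >= t0) & (events[:, 2] < t1)
    for x, y, _, p in events[mask]:
        img[int(y), int(x)] += p
    return img

def event_loss(render, pose_t0, pose_t1, events, shape, t0, t1, contrast=0.2):
    """Compare accumulated events with the rendered log-intensity difference.

    `render(pose)` is a stand-in for volume-rendering the radiance field at
    a camera pose; both it and the contrast threshold are assumptions. In a
    full system this loss would be minimized jointly over the field weights
    and the unknown event-camera poses.
    """
    pred = np.log(render(pose_t1) + 1e-6) - np.log(render(pose_t0) + 1e-6)
    target = contrast * accumulate_events(events, shape, t0, t1)
    return np.mean((pred - target) ** 2)
```

Because the loss depends on the poses only through `render`, the same objective can drive both pose refinement and radiance-field training.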
Real-time Physics Based Simulation for 3D Computer Graphics
Producing realistic animation is a critical part of computer graphics. The goal of this sort of simulation is to imitate the behavior of real-world transformations to the greatest extent possible. Physics-based simulation provides a solid background and proficient theories that can be applied to such simulation. In this dissertation, I present real-time, physics-based simulations in the areas of terrain deformation and ship oscillation.
When ground vehicles navigate soft terrains such as sand, snow and mud, they often leave distinctive tracks. The realistic simulation of such vehicle-terrain interaction is important for ground-based visual simulations and many video games. However, existing research in terrain deformation has not addressed this issue effectively. In this dissertation, I present a new terrain deformation algorithm for simulating vehicle-terrain interaction in real time. The algorithm is based on classic terramechanics theories and calculates terrain deformation according to the vehicle load, velocity, tire size, and soil concentration. As a result, this algorithm can simulate different vehicle tracks on different types of terrain with different vehicle properties. I demonstrate my algorithm with vehicle tracks on soft terrain.
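As a rough illustration of the terramechanics style of calculation the abstract refers to, the classic Bekker pressure-sinkage relation gives static sinkage from vehicle load and tire geometry; the function name and parameter values below are mine, not the dissertation's:

```python
def sinkage(load, contact_area, tire_width, k_c, k_phi, n):
    """Static sinkage from the Bekker pressure-sinkage relation.

    p = (k_c / b + k_phi) * z**n  =>  z = (p / (k_c / b + k_phi))**(1 / n)

    where p is the ground pressure under the tire and b the tire width.
    The soil parameters k_c, k_phi and n are per-terrain constants
    (different for sand, snow, mud, ...), which is what lets one algorithm
    produce different tracks on different terrains.
    """
    pressure = load / contact_area
    return (pressure / (k_c / tire_width + k_phi)) ** (1.0 / n)
```

A real-time track renderer would evaluate something like this per contact patch each frame and lower the terrain heightfield by the resulting depth.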
In the field of ship oscillation simulation, I propose a new method for simulating ship motion in waves. Although there has been plenty of previous work on physics-based fluid-solid simulation, most of these methods are not suitable for real-time applications. In particular, few methods are designed specifically for simulating ship motion in waves. My method is based on the physical theories of ship motion, but with necessary simplifications to ensure real-time performance. My results show that this method is well suited to simulating sophisticated ship motions in real-time applications.
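A minimal stand-in for the kind of simplified ship physics the abstract describes is a single degree-of-freedom heave model: a damped oscillator with buoyancy as the restoring force and regular-wave forcing. This is a generic sketch under my own assumptions, not the dissertation's method:

```python
import math

def simulate_heave(mass, damping, stiffness, wave_amp, wave_freq, dt, steps):
    """Integrate m*z'' + c*z' + k*z = F0*cos(w*t) with semi-implicit Euler.

    z is vertical displacement (heave); the stiffness term stands in for
    buoyancy restoring force and the cosine forcing for regular waves.
    Semi-implicit Euler keeps the oscillation stable at game frame rates.
    """
    z, v = 0.0, 0.0
    trajectory = []
    for i in range(steps):
        t = i * dt
        force = wave_amp * math.cos(wave_freq * t)
        a = (force - damping * v - stiffness * z) / mass
        v += a * dt          # update velocity first (semi-implicit)
        z += v * dt          # then position with the new velocity
        trajectory.append(z)
    return trajectory
```

Roll and pitch can be handled the same way with angular analogues, which is roughly how simplified real-time ship models stay cheap.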
Neural 3D Video Synthesis
We propose a novel approach for 3D video synthesis that is able to represent
multi-view video recordings of a dynamic real-world scene in a compact, yet
expressive representation that enables high-quality view synthesis and motion
interpolation. Our approach takes the high quality and compactness of static
neural radiance fields in a new direction: to a model-free, dynamic setting. At
the core of our approach is a novel time-conditioned neural radiance field
that represents scene dynamics using a set of compact latent codes. To exploit
the fact that changes between adjacent frames of a video are typically small
and locally consistent, we propose two novel strategies for efficient training
of our neural network: 1) An efficient hierarchical training scheme, and 2) an
importance sampling strategy that selects the next rays for training based on
the temporal variation of the input videos. In combination, these two
strategies significantly boost the training speed, lead to fast convergence of
the training process, and enable high quality results. Our learned
representation is highly compact and able to represent a 10-second, 30 FPS
multi-view video recording from 18 cameras with a model size of just 28 MB. We
demonstrate that our method can render high-fidelity wide-angle novel views at
over 1K resolution, even for highly complex and dynamic scenes. We perform an
extensive qualitative and quantitative evaluation that shows that our approach
outperforms the current state of the art. We include additional video and
information at the project website: https://neural-3d-video.github.io/
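The temporal-variation importance sampling strategy (2) could be sketched as follows; the weighting scheme and the `floor` parameter are illustrative assumptions rather than the paper's exact heuristic:

```python
import numpy as np

def ray_sampling_weights(frames, floor=0.01):
    """Per-pixel sampling distribution from temporal variation.

    `frames` is a (T, H, W) stack of grayscale video frames. Pixels that
    change a lot over time receive proportionally more training rays; the
    floor keeps static regions from being ignored entirely.
    """
    variation = frames.std(axis=0)                      # (H, W) temporal std-dev
    weights = variation + floor * max(variation.max(), 1e-8)
    return weights / weights.sum()

def sample_rays(weights, n, rng):
    """Draw n ray (pixel) indices according to the weight map."""
    flat = weights.ravel()
    idx = rng.choice(flat.size, size=n, p=flat)
    return np.unravel_index(idx, weights.shape)
```

Training batches drawn this way concentrate on the dynamic parts of the scene, which is what makes the reported fast convergence plausible.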
Enaction and Visual Arts: Towards Dynamic Instrumental Visual Arts
This theoretical paper presents how the concept of Enaction, centered on the action and interaction paradigm and coupled with the new properties of contemporary computer tools, is able to provoke deep changes in the arts. It examines how this concept accompanies the historical trends in the Musical, Visual and Choreographic Arts. It enumerates the new correlated fundamental questions, scientific as well as artistic, that the author identifies. It then focuses on Dynamic Visual Arts, trying to elicit the revolution brought by these deep conceptual and technological changes. It argues that contemporary conditions shift the art of visual motion from a ''Kinema'' to a ''Dyname'', allowing artists ''to play images'' as one ''plays the violin'', and that this shift could not have appeared before our era. It illustrates these new possibilities with examples from the scientific and artistic work of the author and her co-workers. In conclusion, it suggests that this shift could open the door to a genuine new connection between arts that were believed to cooperate but remained separate for ages: music, dance and animation. This possible new ALLIANCE could lead society to consider a new type of art, which we call ''Dynamic Instrumental Arts'', that will be truly multisensorial: simultaneously Musical, Gestural and Visual.
Physical simulation of wood combustion by using particle system
Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2010. Thesis (Master's), Bilkent University, 2010. Includes bibliographical references, leaves 50-54.
In computer graphics, one of the most challenging problems is modeling natural phenomena
such as water, fire, smoke, etc. The reason behind this challenge is the
structural complexity, as the simulation of natural phenomena depends on
physical equations that are difficult to implement and model. In complex physically
based simulations, it is required to keep track of several properties of the
object that participates in the simulation. These properties can change, and their
alteration may affect other physical and thermal properties of the object. As one
of these natural phenomena, burning wood has various properties, such as the combustion
reaction, heat transfer, heat distribution, fuel consumption and object shape, in
which a change in one during the simulation alters the effects of the
others.
There have been several models for animating and modeling fire phenomena.
The problem with most of the existing studies related to fire modeling is that
decomposition of the burning solid is not addressed; instead, solids are treated
only as a fuel source.
In this thesis, we present a physically based, particle-based method for
simulating the decomposition of burning wood and the combustion process. In our work,
besides treating wood as a fuel source, the physical and thermal effects of the combustion
process on the wood are observed. A particle-based system has been modeled in
order to simulate the decomposition of a wood object, depending on internal and
external properties and their interactions, and the motion of the spreading fire
according to the combustion process.
Gürcüoğlu, Gizem. M.S.
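A minimal sketch of a particle-based combustion step in the spirit of this thesis, tracking ignition, fuel consumption and heat exchange between neighbouring particles; all constants and names are illustrative assumptions, not measured wood properties:

```python
import dataclasses

@dataclasses.dataclass
class Particle:
    temp: float          # temperature in Kelvin
    fuel: float          # remaining combustible mass
    ignited: bool = False

IGNITION_TEMP = 523.0    # ~250 C, a rough pyrolysis onset (assumption)
BURN_RATE = 0.1          # fuel consumed per second while burning (assumption)
HEAT_RELEASE = 40.0      # temperature added to each neighbour per unit fuel burned

def step(particles, neighbours, dt):
    """One simulation step: ignition check, fuel consumption, heat spread.

    `neighbours[i]` lists indices of particles adjacent to particle i.
    Returns the indices of particles that exhausted their fuel this step;
    the caller can remove them to model the decomposition of the object.
    """
    heat = [0.0] * len(particles)
    decomposed = []
    for i, p in enumerate(particles):
        if not p.ignited and p.temp >= IGNITION_TEMP:
            p.ignited = True
        if p.ignited and p.fuel > 0.0:
            burned = min(p.fuel, BURN_RATE * dt)
            p.fuel -= burned
            for j in neighbours[i]:
                heat[j] += HEAT_RELEASE * burned
            if p.fuel <= 0.0:
                decomposed.append(i)
    for p, h in zip(particles, heat):
        p.temp += h
    return decomposed
```

Heat spreading between neighbours is what lets fire propagate across the object, and culling fuel-exhausted particles is one way the shape change of the burning wood can emerge.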
Real-time hybrid cutting with dynamic fluid visualization for virtual surgery
It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery.
Interactive simulation of fire, burn and decomposition
This work presents an approach that effectively integrates the major processes
related to fire into one unified, modular fire simulation framework, namely: a
burning process, chemical combustion, heat distribution, decomposition and
deformation of burning solids, and rigid body simulation of the residue. Simulators for every stage
are described, and the modular structure enables switching to different simulators if
more accuracy or more interactivity is desired. A “Stable Fluids”-based three-gas
system is used to model the combustion process, and the heat generated during the
combustion is used to drive the flow of the hot air. Objects, if exposed to enough
heat, ignite and start burning. The decomposition of the burning object is modeled with
a level set method, driven by the pyrolysis process, where the burning object releases
combustible gases. Secondary deformation effects, such as bending burning matches
and crumpling burning paper, are modeled as a proxy based deformation.
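The level-set decomposition step described above can be sketched as a simple explicit update; this is a generic level-set erosion under my own conventions, not the author's implementation:

```python
import numpy as np

def advance_front(phi, speed, dt, dx=1.0):
    """One explicit step of the level-set equation phi_t + S * |grad phi| = 0.

    Convention here: phi > 0 inside the solid, phi = 0 on the burning
    surface. A positive burn speed S (scalar or per-cell array, e.g. driven
    by local pyrolysis rate) erodes the solid, so the zero level set moves
    inward and the object visibly decomposes.
    """
    gx, gy = np.gradient(phi, dx)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return phi - dt * speed * grad_mag
```

Making `speed` depend on local heat ties the geometric erosion to the combustion simulation, which is roughly how pyrolysis can drive the front.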
Physically based simulation, done at interactive rates, enables the user to
efficiently test different setups, as well as interact with and change the conditions during
the simulation. The graphics card is used to generate additional frames for real-time
visualization.
This work further proposes a method for controlling and directing high resolution
simulations. An interactive coarse resolution simulation is provided to the user as a “preview” to control and achieve the desired simulation behavior. A higher resolution
“final” simulation that creates all the fine scale behavior is matched to the preview
simulation such that the preview and final simulations behave in a similar manner.
In this dissertation, we highlighted a gap within the CG community in the
simulation of fire: there had not previously been a physically based yet interactive
fire simulation. This dissertation describes a unified framework for the
physically based simulation of fire and burning. Our results show that our implementation
can model fire, objects catching fire, burning objects, the decomposition of
burning objects, and additional secondary deformations. The results are plausible
even at interactive frame rates, and controllable.
Real-time Rendering of Burning Objects in Video Games
In recent years there has been growing interest in ever-greater realism in computer graphics applications. Among these, my foremost concentration falls on complex physical simulation and modeling, with diverse applications in the gaming industry. Various simulations have succeeded by replicating the details of physical processes; as a result, some are convincing enough to draw the user into believable virtual worlds. In this research, I focus on fire simulation and the deformation it causes in various virtual objects. In most game engines, model loading takes place at the beginning of the game or when the game transitions between levels, and game models are stored in large data structures. Changing or adjusting a large data structure while the game is running may adversely affect performance, so developers may choose to avoid procedural simulations to save resources and avoid interruptions in performance. I introduce a process to implement real-time model deformation while maintaining performance. It is a challenging task to achieve high-quality simulation while utilizing minimal resources to represent multiple events in a timely manner; in video games especially, the method must be robust enough to sustain the player's willing suspension of disbelief. I have implemented and tested my method on a relatively modest GPU using CUDA. My experiments conclude that this method gives a believable visual effect while using a small fraction of CPU and GPU resources.
HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling
Volumetric scene representations enable photorealistic view synthesis for
static scenes and form the basis of several existing 6-DoF video techniques.
However, the volume rendering procedures that drive these representations
necessitate careful trade-offs in terms of quality, rendering speed, and memory
efficiency. In particular, existing methods fail to simultaneously achieve
real-time performance, small memory footprint, and high-quality rendering for
challenging real-world scenes. To address these issues, we present HyperReel --
a novel 6-DoF video representation. The two core components of HyperReel are:
(1) a ray-conditioned sample prediction network that enables high-fidelity,
high frame rate rendering at high resolutions and (2) a compact and
memory-efficient dynamic volume representation. Our 6-DoF video pipeline
achieves the best performance compared to prior and contemporary approaches in
terms of visual quality with small memory requirements, while also rendering at
up to 18 frames per second at megapixel resolution without any custom CUDA
code. Project page: https://hyperreel.github.io
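Component (1) might be pictured with a toy stand-in: a tiny untrained MLP that maps a ray to K monotonically increasing sample distances, so volume rendering evaluates the scene at only those K points per ray instead of dense stratified samples. This is purely illustrative and not HyperReel's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 8                                        # samples per ray (assumption)
W1 = rng.normal(scale=0.5, size=(6, 32))     # random weights; trained jointly
W2 = rng.normal(scale=0.5, size=(32, K))     # with the scene in the real system

def predict_samples(origin, direction, near=0.1, far=10.0):
    """Map a ray (origin, direction) to K increasing sample distances.

    A softmax over the MLP output gives positive bin widths; their cumulative
    sum yields sorted fractions of the [near, far] span, so the predicted
    samples are guaranteed to be monotonic along the ray.
    """
    h = np.tanh(np.concatenate([origin, direction]) @ W1)
    raw = h @ W2
    frac = np.cumsum(np.exp(raw) / np.exp(raw).sum())
    return near + (far - near) * frac
```

Conditioning the sample placement on the ray is what removes the need for dense evaluation, which is the source of the speed/memory trade-off the abstract claims to break.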