KiloNeuS: A Versatile Neural Implicit Surface Representation for Real-Time Rendering
NeRF-based techniques fit wide and deep multi-layer perceptrons (MLPs) to a
continuous radiance field that can be rendered from any unseen viewpoint.
However, the lack of a surface and normal definition and high rendering times
limit their usage in typical computer graphics applications. Such limitations
have recently been overcome separately, but solving them together remains an
open problem. We present KiloNeuS, a neural representation reconstructing an
implicit surface represented as a signed distance function (SDF) from
multi-view images and enabling real-time rendering by partitioning the space
into thousands of tiny MLPs that are fast to evaluate. Because we learn the
implicit surface locally with independent models, obtaining a globally coherent
geometry is non-trivial and must be addressed during training. We evaluate rendering
performance on a GPU-accelerated ray-caster with in-shader neural network
inference, achieving an average of 46 FPS at high resolution: a satisfying
tradeoff between storage costs and rendering quality. In fact, our
evaluation of rendering quality and surface recovery shows that KiloNeuS
outperforms its single-MLP counterpart. Finally, to exhibit the versatility of
KiloNeuS, we integrate it into an interactive path-tracer taking full advantage
of its surface normals. We consider our work a crucial first step toward
real-time rendering of implicit neural representations under global
illumination.
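The space-partitioning idea at the core of KiloNeuS can be sketched as a uniform grid whose cells each own an independent tiny network, with a query point routed to the MLP of its cell. The class names and the randomly initialized (untrained) networks below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class TinyMLP:
    """A small two-layer perceptron standing in for one per-cell SDF network.
    Weights are random here; in the paper each MLP would be trained."""
    def __init__(self, rng, hidden=8):
        self.w1 = rng.standard_normal((3, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 1)) * 0.1
        self.b2 = np.zeros(1)

    def __call__(self, p):
        h = np.maximum(p @ self.w1 + self.b1, 0.0)  # ReLU hidden layer
        return (h @ self.w2 + self.b2)[0]           # scalar signed distance

class KiloGrid:
    """Uniform grid over [0,1]^3; each cell owns an independent tiny MLP."""
    def __init__(self, res=4, seed=0):
        rng = np.random.default_rng(seed)
        self.res = res
        self.mlps = [TinyMLP(rng) for _ in range(res ** 3)]

    def query_sdf(self, p):
        # Route the query point to the MLP owning its grid cell.
        i, j, k = np.clip((np.asarray(p) * self.res).astype(int), 0, self.res - 1)
        idx = (i * self.res + j) * self.res + k
        return self.mlps[idx](np.asarray(p))

grid = KiloGrid(res=4)                      # 4^3 = 64 tiny networks
d = grid.query_sdf([0.3, 0.7, 0.5])         # signed-distance estimate at a point
```

Because each query touches only one tiny network, inference cost stays constant regardless of how many cells the scene is split into, which is what makes in-shader evaluation feasible.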
Playable Environments: Video Manipulation in Space and Time
We present Playable Environments, a new representation for interactive video generation and manipulation in space and time. With a single image at inference time, our novel framework allows the user to move objects in 3D while generating a video by providing a sequence of desired actions. The actions are learnt in an unsupervised manner. The camera can be controlled to get the desired viewpoint. Our method builds an environment state for each frame, which can be manipulated by our proposed action module and decoded back to the image space with volumetric rendering. To support diverse appearances of objects, we extend neural radiance fields with style-based modulation. Our method trains on a collection of various monocular videos requiring only the estimated camera parameters and 2D object locations. To set a challenging benchmark, we introduce two large-scale video datasets with significant camera movements. As evidenced by our experiments, playable environments enable several creative applications not attainable by prior video synthesis works, including playable 3D video generation, stylization and manipulation. willi-menapace.github.io/playable-environments-website
View dependent fluid dynamics
This thesis presents a method for simulating fluids on a view dependent grid structure to
exploit level-of-detail with distance to the viewer. Current computer graphics techniques,
such as the Stable Fluid and Particle Level Set methods, are modified to support a nonuniform
simulation grid. In addition, infinite fluid boundary conditions are introduced that
allow fluid to flow freely into or out of the simulation domain to achieve the effect of
large, boundary free bodies of fluid. Finally, a physically based rendering method known
as photon mapping is used in conjunction with ray tracing to generate realistic images of
water with caustics. These methods were implemented as a C++ application framework
capable of simulating and rendering fluid in a variety of user-defined coordinate systems
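The view-dependent grid idea, that cell size grows with distance to the viewer, can be sketched in one dimension. The `cell_size` helper and its linear growth rule are illustrative assumptions, not the thesis's actual refinement scheme:

```python
import numpy as np

def cell_size(distance, base=0.05, falloff=1.0):
    """Grid spacing grows linearly with distance from the viewer, so nearby
    fluid is resolved finely and distant fluid coarsely (illustrative rule)."""
    return base * (1.0 + falloff * distance)

def build_view_dependent_samples(viewer_x, domain=(0.0, 10.0)):
    """Place 1D sample points whose spacing widens away from the viewer."""
    xs = [domain[0]]
    while xs[-1] < domain[1]:
        xs.append(xs[-1] + cell_size(abs(xs[-1] - viewer_x)))
    return np.array(xs)

xs = build_view_dependent_samples(viewer_x=0.0)
near = xs[1] - xs[0]      # fine spacing close to the viewer
far = xs[-1] - xs[-2]     # coarse spacing at the far end of the domain
```

The same falloff applied per axis of a 3D grid is what lets the simulation spend its budget where the viewer can actually see detail.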
I M Avatar: Implicit Morphable Head Avatars from Videos
Traditional morphable face models provide fine-grained control over
expression but cannot easily capture geometric and appearance details. Neural
volumetric representations approach photo-realism but are hard to animate and
do not generalize well to unseen expressions. To tackle this problem, we
propose IMavatar (Implicit Morphable avatar), a novel method for learning
implicit head avatars from monocular videos. Inspired by the fine-grained
control mechanisms afforded by conventional 3DMMs, we represent the expression-
and pose-related deformations via learned blendshapes and skinning fields.
These attributes are pose-independent and can be used to morph the canonical
geometry and texture fields given novel expression and pose parameters. We
employ ray tracing and iterative root-finding to locate the canonical surface
intersection for each pixel. A key contribution is our novel analytical
gradient formulation that enables end-to-end training of IMavatars from videos.
We show quantitatively and qualitatively that our method improves geometry and
covers a more complete expression space compared to state-of-the-art methods
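The ray-tracing-plus-root-finding step can be sketched against a toy analytic SDF. Bisection below stands in for the paper's iterative root-finding, and `sdf_sphere` replaces the learned canonical geometry; both are illustrative, not IMavatar's implementation:

```python
import numpy as np

def sdf_sphere(p, center=np.array([0.0, 0.0, 3.0]), radius=1.0):
    """Toy implicit surface (a sphere) standing in for the learned geometry."""
    return np.linalg.norm(p - center) - radius

def ray_surface_intersection(origin, direction, sdf, t0=0.0, t1=10.0, iters=40):
    """Find the first zero of the SDF along a ray: coarse march to bracket
    the sign change, then bisection to refine it."""
    ts = np.linspace(t0, t1, 200)
    for a, b in zip(ts[:-1], ts[1:]):
        if sdf(origin + a * direction) > 0 >= sdf(origin + b * direction):
            lo, hi = a, b
            break
    else:
        return None  # ray misses the surface
    for _ in range(iters):  # bisection refinement
        mid = 0.5 * (lo + hi)
        if sdf(origin + mid * direction) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
t = ray_surface_intersection(o, d, sdf_sphere)  # front of the sphere, t ≈ 2.0
```

The paper's key contribution, an analytical gradient through this intersection point, is what a sketch like this cannot show: it is what lets the surface location itself receive training signal.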
Graph-Based Fracture Models for Rigid Body Explosions
Explosions are one of the most powerful and devastating natural phenomena. The pressure front from the blast wave of an explosion can cause fracture of objects in its vicinity and create flying debris. In this thesis, I outline a previously proposed explosion model. An explosion is treated as a fluid with its behaviour governed by the Navier-Stokes equations and the gaseous products modeled using particles. Explosions are simulated as a means for initiating fracture of rigid bodies in the vicinity of an explosion. In contrast to fracture models that are based on physics, I propose a new approach to simulating fracture which treats fracturing the rigid body as a pre-processing step. A rigid body can be pre-fractured by treating it as a graph and using one of the two proposed graph partitioning algorithms to divide the object into the desired number of pieces. By treating fracture as a pre-processing step, much less computation needs to be done during the simulation than in models based on physics. It is shown that the recursive breadth-first search graph partitioning algorithm produces physically realistic results for shattering windows that are consistent with observations of real broken windows. The curvature-driven spectral partitioning algorithm fractures objects into two pieces where the object is weakest, defined as the area of largest curvature. Numerical simulations of explosions and fracture were conducted to produce data that was used by a ray tracer and volume renderer to create images which were assembled into animations.
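A recursive breadth-first-search bisection of this kind can be sketched as follows. The region-growing rule (grow a BFS region to half the nodes, then recurse on each half) is an illustrative reading of the approach, not the thesis's exact formulation:

```python
from collections import deque

def bfs_half(graph, nodes, seed):
    """Grow a BFS region from `seed` until it contains half of `nodes`."""
    target = len(nodes) // 2
    region, queue, seen = [], deque([seed]), {seed}
    while queue and len(region) < target:
        u = queue.popleft()
        region.append(u)
        for v in graph[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                queue.append(v)
    return set(region)

def recursive_bfs_partition(graph, nodes, pieces):
    """Recursively bisect the node set with BFS regions until the desired
    number of fragments is reached (illustrative sketch)."""
    if pieces <= 1 or len(nodes) <= 1:
        return [nodes]
    half = bfs_half(graph, nodes, next(iter(nodes)))
    rest = nodes - half
    return (recursive_bfs_partition(graph, half, pieces // 2) +
            recursive_bfs_partition(graph, rest, pieces - pieces // 2))

# A 4x2 grid of vertices standing in for a window pane's mesh.
graph = {
    0: [1, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [2, 7],
    4: [0, 5], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 6],
}
parts = recursive_bfs_partition(graph, set(graph), pieces=2)
```

Because the fragments are computed before the simulation starts, the runtime cost of fracture reduces to swapping in the precomputed pieces when the blast wave hits.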
STORY: A hierarchical animation and storyboarding system for alpha-1
We introduce an integrated animation and storyboarding system that simplifies the creation and refinement of computer generated animations. The framework models both the process and product of an animated sequence, making animation more accessible for communication and as an art form. The system adopts a novel approach to animation by integrating storyboards and the traditional film hierarchy in a computer animation system. Traditional animation begins with storyboards representing important moments in a film. These storyboards are structured into shots and scenes which form a standard hierarchy. This hierarchy is important to long animations because it reduces the complexity to manageable proportions. We also introduce the animation proofreader, a tool for identifying awkward camera placement and motion sequences using traditional film production rules.
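The film hierarchy the system builds on can be sketched as a small nested data model. The class names and fields are hypothetical, chosen only to show the storyboard-shot-scene-film nesting:

```python
from dataclasses import dataclass, field

@dataclass
class Shot:
    name: str
    storyboard: str  # the key moment this shot originates from

@dataclass
class Scene:
    name: str
    shots: list = field(default_factory=list)

@dataclass
class Film:
    title: str
    scenes: list = field(default_factory=list)

# A film contains scenes, scenes contain shots, and each shot points back
# to the storyboard panel it was refined from.
film = Film("Demo", [Scene("Opening", [Shot("wide establishing", "board-1")])])
n_shots = sum(len(scene.shots) for scene in film.scenes)
```

Keeping the storyboard reference on each shot is what lets a long animation stay navigable: edits can be located by the moment they affect rather than by frame number.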
Efficient algorithms for the realistic simulation of fluids
Nowadays there is great demand for realistic simulations in the computer graphics field. Physically-based animations are commonly used, and one of the more complex problems in this field is fluid simulation, more so if real-time applications are the goal. Videogames, in particular, resort to different techniques that, in order to represent fluids, simulate only the consequence and not the cause, using procedural or parametric methods and often forgoing the physical solution.
This need motivates the present thesis: the interactive simulation of free-surface flows, usually liquids, which are the feature of interest in most common applications. Due to the complexity of fluid simulation, in order to achieve real-time framerates, we have resorted to the high parallelism provided by current consumer-level GPUs. The simulation algorithm, the
Lattice Boltzmann Method, has been chosen accordingly for its efficiency and its direct mapping to the hardware architecture thanks to its local operations.
We have created two free-surface simulations in the GPU: one fully in 3D and another restricted only to the upper surface of a big bulk of fluid, limiting the simulation domain to 2D. We have extended the latter to track dry regions and coupled it with obstacles in a geometry-independent fashion. As it is restricted to 2D, the simulation loses some features owing to the impossibility of simulating vertical separation of the fluid. To account for this we have coupled the surface simulation to a generic particle system with breaking wave conditions; the simulations are totally independent and only the coupling binds the LBM with the chosen particle system.
Furthermore, the visualization of both systems is also done in a realistic way within the interactive framerates; raycasting techniques are used to provide the expected light-related effects such as refractions, reflections and caustics. Other techniques that improve the overall detail, such as small-scale ripples and surface foam, are also applied.
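A minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann step illustrates the local, hardware-friendly structure the thesis exploits. This sketch uses periodic boundaries and omits the free-surface tracking and GPU machinery entirely:

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their standard weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """BGK equilibrium distribution for density rho and velocity field u."""
    cu = np.einsum('qd,xyd->xyq', C, u)                # c_q . u per direction
    usq = np.sum(u * u, axis=-1, keepdims=True)
    return W * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    """One stream-and-collide step of single-relaxation-time LBM.
    Every operation is local to a cell or its neighbours, which is why
    the method maps so directly onto GPU hardware."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, C) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau            # collide
    for q, (cx, cy) in enumerate(C):                   # stream to neighbours
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

f = np.tile(W, (16, 16, 1)).astype(float)  # uniform fluid at rest
f = lbm_step(f)                            # mass is conserved by the step
```

Because collision touches only one cell and streaming only its immediate neighbours, each lattice site can be updated by an independent GPU thread, which is the property the thesis's real-time goal depends on.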