RGB-D Mapping and Tracking in a Plenoxel Radiance Field
Building on the success of Neural Radiance Fields (NeRFs), the domain of novel view synthesis has seen significant advances in recent years. These models
capture the scene's volumetric radiance field, creating highly convincing dense
photorealistic models through the use of simple, differentiable rendering
equations. Despite their popularity, these algorithms suffer from the severe ambiguities inherent in RGB visual data, which means that although images generated with view synthesis can appear very believable, the underlying 3D model is often wrong. This considerably limits the usefulness of these models in practical applications such as robotics and Extended Reality (XR), where an accurate, dense 3D reconstruction would be of significant value. In this technical report, we present the vital
differences between view synthesis models and 3D reconstruction models. We also
comment on why a depth sensor is essential for modeling accurate geometry in
general outward-facing scenes using the current paradigm of novel view
synthesis methods. Focusing on the structure-from-motion task, we practically demonstrate this need by extending the Plenoxel radiance field model, presenting an analytical differential approach for dense mapping and tracking with radiance fields based on RGB-D data, without a neural network. Our method achieves state-of-the-art results in both the mapping and tracking tasks while also being faster than competing neural network-based approaches.
Comment: The two authors contributed equally to this paper.
Removing Adverse Volumetric Effects From Trained Neural Radiance Fields
While the use of neural radiance fields (NeRFs) in different challenging
settings has been explored, only very recently have there been any
contributions that focus on the use of NeRF in foggy environments. We argue that traditional NeRF models are able to replicate scenes filled with fog and propose a method to remove the fog when synthesizing novel views. By
calculating the global contrast of a scene, we can estimate a density threshold
that, when applied, removes all visible fog. This makes it possible to use NeRF
as a way of rendering clear views of objects of interest located in fog-filled
environments. Additionally, to benchmark performance on such scenes, we
introduce a new dataset that expands some of the original synthetic NeRF scenes
through the addition of fog and natural environments. The code, dataset, and
video results can be found on our project page: https://vegardskui.com/fognerf/
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
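As an illustrative sketch of the density-thresholding idea (not the authors' implementation: the contrast-to-threshold mapping below is a hypothetical stand-in, and all names such as density_grid and estimate_threshold_from_contrast are invented for illustration), in Python with NumPy:

import numpy as np

def estimate_threshold_from_contrast(rendered_rgb: np.ndarray) -> float:
    # Hypothetical stand-in for the paper's contrast-based estimate:
    # low global contrast suggests heavy fog, hence a higher density cutoff.
    gray = rendered_rgb.mean(axis=-1)                 # luminance proxy, H x W
    rms_contrast = gray.std() / max(float(gray.mean()), 1e-8)
    return 5.0 * (1.0 - min(rms_contrast, 1.0))       # illustrative scaling only

def remove_fog(density_grid: np.ndarray, threshold: float) -> np.ndarray:
    # Fog appears as low but nonzero volume density spread through free
    # space; zeroing every voxel below the cutoff keeps only solid surfaces.
    cleaned = density_grid.copy()
    cleaned[cleaned < threshold] = 0.0
    return cleaned

Rendering the thresholded density grid with the usual volume rendering equation would then produce fog-free novel views of the objects of interest.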