5 research outputs found

    Removing Adverse Volumetric Effects From Trained Neural Radiance Fields

    While the use of neural radiance fields (NeRFs) in different challenging settings has been explored, only very recently have there been contributions that focus on the use of NeRF in foggy environments. We argue that traditional NeRF models are able to replicate scenes filled with fog and propose a method to remove the fog when synthesizing novel views. By calculating the global contrast of a scene, we can estimate a density threshold that, when applied, removes all visible fog. This makes it possible to use NeRF to render clear views of objects of interest located in fog-filled environments. Additionally, to benchmark performance on such scenes, we introduce a new dataset that expands some of the original synthetic NeRF scenes through the addition of fog and natural environments. The code, dataset, and video results can be found on our project page: https://vegardskui.com/fognerf/
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
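    The abstract gives no code, but the core idea (suppressing volume-density samples below a contrast-derived threshold before compositing) can be sketched as below. This is a minimal illustration assuming NumPy arrays of per-ray NeRF outputs; remove_fog_render and sigma_threshold are hypothetical names, not the authors' implementation.

```python
import numpy as np

def remove_fog_render(sigmas, rgbs, deltas, sigma_threshold):
    """Standard NeRF alpha compositing after zeroing densities below a
    threshold (assumed here to correspond to fog).

    sigmas: (n_rays, n_samples)    volume densities from a trained NeRF
    rgbs:   (n_rays, n_samples, 3) per-sample colours
    deltas: (n_rays, n_samples)    distances between consecutive samples
    """
    # Hypothetical fog-removal step: treat low-density samples as fog.
    sigmas = np.where(sigmas < sigma_threshold, 0.0, sigmas)

    # Usual volume-rendering quadrature on the filtered densities.
    alphas = 1.0 - np.exp(-sigmas * deltas)                  # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10, axis=-1)        # accumulated transmittance
    trans = np.concatenate([np.ones_like(trans[..., :1]),
                            trans[..., :-1]], axis=-1)       # shift so T_1 = 1
    weights = alphas * trans
    return (weights[..., None] * rgbs).sum(axis=-2)          # (n_rays, 3) pixel colours
```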

    Boundary artifact analysis in diffusion modelling with stochastic differential equations

    In many research areas, it is common to model advection-diffusion problems with Lagrangian particle methods. This amounts to solving a stochastic differential equation (SDE) whose drift and diffusion coefficients are derived from the advection-diffusion equation. A necessary condition for the particle method to be equivalent to the Eulerian advection-diffusion equation is that it satisfies the well-mixed condition (Thomson, 1987), which states that particles that are well mixed must remain well mixed at later times; this is essentially a statement of consistency with the second law of thermodynamics. A commonly used implementation of reflecting boundary conditions for particle methods is analysed. We find that in some cases this reflecting scheme gives rise to oscillations in concentration close to the boundary, which we call the boundary artifact. We analyse the reflection scheme in the Lagrangian model and compare it to Neumann boundary conditions in the Eulerian model. We find that if the diffusivity has a non-zero derivative at the boundary, one of the conditions for equivalence with the advection-diffusion equation is violated, namely that the drift coefficient in the SDE must be Lipschitz continuous. This appears to be the origin of the boundary artifact. We analyse the artifact further and describe two different types of boundary artifact. We suggest different approaches to dealing with the problem and find that, in practice, it can be handled by adjusting the diffusivity close to the boundary. Support and motivation for such a change is found in the concept of the "unresolved basal layer" (Wilson & Flesch, 1993), a pragmatic idea stating that closer than some distance to the boundary, we simply cannot know the details of the turbulent motion.
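    For concreteness, here is a minimal 1-D sketch of the kind of scheme the abstract analyses: an Euler-Maruyama step with the well-mixed drift K'(z) (for constant density) and mirror reflection at the lower boundary. Function names, the diffusivity profile, and all numerical values are illustrative assumptions, not taken from the paper; the example deliberately sets up the regime where K'(z) is non-zero at the boundary.

```python
import numpy as np

def reflecting_step(z, K, dKdz, dt, rng, z_min=0.0):
    """One Euler-Maruyama step of  dZ = K'(Z) dt + sqrt(2 K(Z)) dW,
    with mirror reflection at the lower boundary z_min."""
    dW = rng.normal(0.0, np.sqrt(dt), size=z.shape)
    z_new = z + dKdz(z) * dt + np.sqrt(2.0 * K(z)) * dW
    # Mirror reflection: fold particles that crossed the boundary back in.
    crossed = z_new < z_min
    z_new[crossed] = 2.0 * z_min - z_new[crossed]
    return z_new

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 10.0, size=100_000)      # particles start well mixed
K = lambda s: 0.1 + 0.05 * s                  # assumed linear diffusivity profile
dK = lambda s: np.full_like(s, 0.05)          # K'(0) != 0: the artifact regime
for _ in range(1_000):
    z = reflecting_step(z, K, dK, dt=0.1, rng=rng)
```

    A histogram of the particle positions near z_min after such a run is where the concentration oscillations described above would show up.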

    Data capture and real-time data quality analysis

    This report presents results obtained in the CageReporter project regarding the development of a 3D vision system for data capture in fish cages. The developed system makes it possible to obtain high-quality data, with the overall goal of identifying fish conditions and performing cage inspections during daily operations, as well as providing robotic vision for an underwater vehicle during adaptive operation planning in the cage. A compact and robust sensor with optical components and a lighting system was developed. In addition, this activity presents the development of methods to evaluate the quality of the captured data. Based on defined quality criteria associated with fish conditions and cage inspection operations, algorithms have been developed to evaluate whether the quality criteria are met. The algorithms have been validated using image data obtained from 24/7 video streams from a full-scale fish cage. The work furthermore includes the development of image processing algorithms to estimate the distance and orientation relative to the inspected object of interest, such as the fish or the net. The developed algorithms have been validated on vision data obtained during tests at both lab and full scale.
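    The report's actual quality criteria are not given in the abstract, so the sketch below only illustrates how per-frame checks of this kind are commonly implemented with OpenCV, using mean brightness and a Laplacian-variance sharpness measure; frame_quality_ok, the stream URL, and both thresholds are hypothetical.

```python
import cv2

def frame_quality_ok(frame, min_brightness=40.0, min_sharpness=100.0):
    """Illustrative per-frame quality gate: reject frames that are too
    dark or too blurred for downstream fish/net inspection algorithms."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness = gray.mean()                              # mean grey level, 0..255
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()     # common focus measure
    return brightness >= min_brightness and sharpness >= min_sharpness

# Gate frames from a continuous (24/7) stream before further processing.
cap = cv2.VideoCapture("rtsp://example/cage-stream")      # hypothetical URL
ok, frame = cap.read()
if ok and frame_quality_ok(frame):
    pass  # hand the frame on to the inspection pipeline
```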

    The VAROS Synthetic Underwater Data Set: Towards realistic multi-sensor underwater data with ground truth

    Underwater visual perception must deal with poor and rapidly varying illumination and with reduced visibility due to water turbidity. The verification of such algorithms is crucial for safe and efficient underwater exploration and intervention operations. Ground truth data play an important role in evaluating vision algorithms. However, obtaining ground truth from real underwater environments is in general very hard, if possible at all. In a synthetic underwater 3D environment, by contrast, (nearly) all parameters are known and controllable, and ground truth data can be absolutely accurate in terms of geometry. In this paper, we present the VAROS environment, our approach to generating highly realistic underwater video and auxiliary sensor data with precise ground truth, built around the Blender modeling and rendering environment. VAROS allows for physically realistic motion of the simulated underwater (UW) vehicle, including moving illumination. Pose sequences are created by first defining waypoints for the simulated underwater vehicle, which are then expanded into a smooth vehicle course sampled at the IMU data rate (200 Hz). This expansion uses a vehicle dynamics model and a discrete-time controller algorithm that simulates the sequential following of the waypoints. The scenes are rendered using ray tracing, which generates realistic images by integrating direct light and indirect volumetric scattering. The VAROS dataset version 1 provides images, inertial measurement unit (IMU) and depth gauge data, as well as ground truth poses, depth images, and surface normal images.
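    As a rough illustration of the waypoint expansion described above, the sketch below resamples a waypoint list into a 200 Hz trajectory using a capped proportional velocity command with a first-order lag as a stand-in for the vehicle dynamics model and discrete-time controller; all names, gains, and tolerances are assumptions, since the abstract does not specify them.

```python
import numpy as np

def expand_waypoints(waypoints, speed=0.5, tol=0.05, rate_hz=200):
    """Expand 3-D waypoints into a smooth pose sequence at the IMU rate."""
    dt = 1.0 / rate_hz
    pos = np.array(waypoints[0], dtype=float)
    vel = np.zeros(3)
    path = [pos.copy()]
    for wp in waypoints[1:]:
        wp = np.asarray(wp, dtype=float)
        while np.linalg.norm(wp - pos) > tol:
            cmd = wp - pos                           # proportional command
            n = np.linalg.norm(cmd)
            if n > speed:
                cmd *= speed / n                     # cap commanded speed
            vel += (cmd - vel) * min(1.0, 5.0 * dt)  # first-order lag "dynamics"
            pos += vel * dt
            path.append(pos.copy())
    return np.array(path)                            # (n_samples, 3) at 200 Hz

course = expand_waypoints([(0.0, 0.0, -5.0), (10.0, 0.0, -5.0), (10.0, 5.0, -8.0)])
```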