Underwater image restoration has been a challenging problem since the advent of
underwater photography. Most solutions focus on shallow-water scenarios, where
the scene is uniformly illuminated by sunlight. However,
the vast majority of uncharted underwater terrain lies below 200 meters depth,
where natural light is scarce and artificial illumination is needed. In
such cases, light sources co-moving with the camera dynamically change the
scene appearance, which makes shallow-water restoration methods inadequate. In
particular, for multi-light-source systems (nowadays often composed of dozens
of LEDs), calibrating each light individually is time-consuming, error-prone,
and tedious. We observe that only the integrated illumination within the
camera's viewing volume is critical, rather than the individual light sources. The key
idea of this paper is therefore to exploit the appearance changes of objects or
the seafloor as they traverse the camera's viewing frustum. Through new
constraints that assume Lambertian surfaces, corresponding image pixels
constrain the light field in front of the camera. For each voxel, a signal
factor and a backscatter value are stored in a volumetric grid, which enables
very efficient image restoration for camera-light platforms and facilitates
consistent texturing of large 3D models and maps that would otherwise be
dominated by lighting and medium artifacts.
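To make the restoration step concrete, the sketch below illustrates how such a per-voxel grid could be applied. It assumes the common image formation model I = J * S + B (observed intensity equals albedo times signal factor plus backscatter) and a nearest-neighbor voxel lookup; all function and variable names are hypothetical and not the actual interface of our implementation.

```python
import numpy as np

def restore_image(image, points_cam, grid_S, grid_B, grid_origin, voxel_size):
    """Minimal sketch: restore per-pixel albedo from a raw image.

    image      : (H, W, 3) raw image captured under co-moving lights.
    points_cam : (H, W, 3) 3D point of each pixel in the camera frame
                 (e.g., back-projected from a depth map).
    grid_S     : (X, Y, Z, 3) per-voxel signal factors.
    grid_B     : (X, Y, Z, 3) per-voxel backscatter values.
    """
    # Map each 3D point to the index of its containing voxel.
    idx = np.floor((points_cam - grid_origin) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_S.shape[:3]) - 1)  # stay inside grid

    # Look up the precomputed signal factor and backscatter per pixel.
    S = grid_S[idx[..., 0], idx[..., 1], idx[..., 2]]
    B = grid_B[idx[..., 0], idx[..., 1], idx[..., 2]]

    # Invert the image formation model I = J * S + B; the epsilon
    # guards against division by near-zero factors in poorly lit voxels.
    albedo = (image - B) / np.maximum(S, 1e-6)
    return np.clip(albedo, 0.0, 1.0)
```

Because the grid is precomputed once per camera-light configuration, restoring each frame reduces to a lookup and a per-pixel division, which is what makes the approach efficient enough for texturing large maps.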
To validate the effectiveness of our approach, we conducted extensive
experiments on simulated and real-world datasets. The results demonstrate the
robustness of our approach in restoring the true albedo of objects while
mitigating the influence of lighting and medium effects. Furthermore, we show
that our approach readily extends to other scenarios, such as in-air imaging
with artificial illumination and other comparable settings.