Neural Radiance Field (NeRF) is a promising approach for synthesizing novel
views of a scene, given a set of images and their corresponding camera poses.
However, images captured in a low-light scene are poorly suited for training
a NeRF model to produce high-quality results, owing to their low pixel
intensities, heavy noise, and color distortion. Combining existing low-light
image enhancement methods with NeRF methods also works poorly, because
enhancing each view independently in 2D introduces view inconsistencies. In this
paper, we propose a novel approach, called Low-Light NeRF (or LLNeRF), to
enhance the scene representation and synthesize normal-light novel views
directly from sRGB low-light images in an unsupervised manner. The core of our
approach is a decomposition of radiance field learning, which allows us to
enhance the illumination, reduce noise and correct the distorted colors jointly
with the NeRF optimization process. Given a collection of camera-finished,
low dynamic range (8 bits/channel) images of a low-light scene, our method
produces novel views with proper lighting, vivid colors, and fine details.
Experiments demonstrate that our method outperforms existing low-light
enhancement methods and NeRF methods.
Comment: ICCV 2023. Project website: https://whyy.site/paper/llner
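
The abstract's central idea, factoring the learned radiance into a
reflectance-like color and a lighting-related component so that illumination
enhancement can happen jointly with NeRF optimization, can be sketched roughly
as follows. This is a minimal illustration under assumptions of my own (a
PyTorch-style MLP head, a scalar lighting term, and a simple gain-based
enhancement); it is not the paper's actual formulation, and all names are
hypothetical.

```python
# Illustrative sketch only: the radiance at each sample is decomposed into a
# view-independent, reflectance-like color and a non-negative scalar lighting
# component, so enhancement amounts to remapping the lighting term. The MLP
# layout, heads, and gain-based remapping are assumptions, not the paper's.
import torch
import torch.nn as nn

class DecomposedRadianceHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Reflectance-like color in [0, 1], intended to be lighting-invariant.
        self.color_head = nn.Sequential(nn.Linear(feat_dim, 3), nn.Sigmoid())
        # Non-negative scalar lighting component per sample.
        self.light_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Softplus())

    def forward(self, features: torch.Tensor, gain: float = 1.0):
        base_color = self.color_head(features)   # (N, 3)
        lighting = self.light_head(features)     # (N, 1)
        # Rendering the dark scene uses the learned lighting as-is; an
        # enhanced view simply scales the lighting component.
        radiance = base_color * (gain * lighting)
        return radiance.clamp(0.0, 1.0)

# Usage: gain=1.0 reproduces the low-light scene; gain>1 brightens it while
# the reflectance-like color (and hence view consistency) is shared.
head = DecomposedRadianceHead()
feats = torch.randn(1024, 256)  # stand-in for per-sample NeRF features
dark_rgb = head(feats, gain=1.0)
enhanced_rgb = head(feats, gain=4.0)
```

Because the decomposition lives inside the shared 3D representation rather
than in a per-image 2D post-process, any remapping of the lighting component
is applied consistently across all rendered viewpoints.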