In this paper, we present an embarrassingly simple yet effective solution to
a seemingly impossible mission: low-light image enhancement (LLIE) without
access to any task-related data. The proposed solution, Noise SElf-Regression
(NoiSER), simply learns a convolutional neural network equipped with an
instance-normalization layer by taking a random noise image, drawn from
$\mathcal{N}(0,\sigma^2)$ for each pixel, as both input and output for each
training pair, and then the low-light image is fed to the learned network for
predicting the normal-light image. Technically, an intuitive explanation for
its effectiveness is as follows: 1) the self-regression reconstructs the
contrast between adjacent pixels of the input image, 2) the
instance-normalization layer may naturally remediate the overall
magnitude/lighting of the input image, and 3) the $\mathcal{N}(0,\sigma^2)$
assumption for each pixel enforces the output image to follow the well-known
gray-world hypothesis \cite{Gary-world_Hypothesis} when the image size is large
enough, namely, the averages of the three RGB components of an image converge to
the same value. Compared to existing SOTA LLIE methods with access to different
task-related data, NoiSER is surprisingly competitive in enhancement
quality, yet with a much smaller model size and much lower training and
inference costs. With only $\sim$1K parameters, NoiSER takes about 1 minute
to train and 1.2 ms per inference at $600\times400$ resolution on an RTX 2080 Ti.
As a bonus, NoiSER possesses an automatic over-exposure suppression capability
and performs excellently on over-exposed photos.
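The noise self-regression recipe described above can be sketched in a few lines of PyTorch. The network width, depth, noise scale $\sigma$, and step count below are illustrative assumptions rather than the paper's exact $\sim$1K-parameter configuration:

```python
# Sketch of Noise SElf-Regression (NoiSER): train a tiny CNN with an
# instance-normalization layer on pure Gaussian noise (input == target),
# then apply it to a low-light image. Hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small CNN with instance normalization, in the spirit of the abstract's
# ~1K-parameter model (exact architecture is an assumption here).
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.InstanceNorm2d(8),
    nn.ReLU(),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
sigma = 0.1  # assumed noise scale

# Self-regression: each "training pair" is the SAME random noise image,
# N(0, sigma^2) per pixel, used as both input and target; no task-related
# (low-light or normal-light) data is ever seen during training.
for _ in range(50):
    noise = sigma * torch.randn(1, 3, 64, 64)
    opt.zero_grad()
    loss = loss_fn(net(noise), noise)
    loss.backward()
    opt.step()

# Inference: feed a (here synthetic) low-light image to the trained net.
low_light = 0.2 * torch.rand(1, 3, 64, 64)  # dark-image stand-in
with torch.no_grad():
    enhanced = net(low_light)
```

The instance-normalization layer is what rescales the overall magnitude of the dark input at inference time, while the convolutional layers, trained only to reproduce noise, preserve local contrast between adjacent pixels.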