Deep deraining networks, while successful in laboratory benchmarks,
consistently encounter substantial generalization issues when deployed in
real-world applications. A prevailing perspective in deep learning encourages
the use of highly complex training data, with the expectation that a richer
image content knowledge will facilitate overcoming the generalization problem.
However, through comprehensive and systematic experimentation, we discovered
that this strategy does not enhance the generalization capability of these
networks. On the contrary, it exacerbates the tendency of networks to overfit
to specific degradations. Our experiments reveal that better generalization in
a deraining network can be achieved by simplifying the complexity of the
training data. This is because networks tend to "slack off" during training,
that is, they learn the least complex elements in the image content and
degradation that suffice to minimize the training loss. When the complexity of the background
image is less than that of the rain streaks, the network will prioritize the
reconstruction of the background, thereby avoiding overfitting to the rain
patterns and resulting in improved generalization performance. Our research not
only offers a valuable perspective and methodology for better understanding the
generalization problem in low-level vision tasks, but also shows promising
practical potential.