Generative Single Image Reflection Separation
Single image reflection separation is an ill-posed problem since two scenes,
a transmitted scene and a reflected scene, must be inferred from a single
observation. To make the problem tractable, in this work we assume that the
categories of the two scenes are known. This allows us to address the problem
by generating both scenes so that each belongs to its category while their
contents are constrained to match the observed image. A novel network
architecture is proposed to render realistic images of both scenes based on
adversarial learning. The network can be trained in a weakly supervised
manner, i.e., it learns to separate an observed image without corresponding
ground-truth images of the transmission and reflection scenes, which are
difficult to collect in practice. Experimental results on real and synthetic
datasets demonstrate that the proposed algorithm performs favorably against
existing methods.
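To see why the task is ill-posed, consider the linear mixing model commonly assumed in reflection separation: the observation is a blend of a transmitted scene and a reflected scene. The sketch below is illustrative only; the scene arrays and the blending weight `alpha` are assumptions, not the paper's exact formulation. It shows that infinitely many scene pairs reproduce the same observation, which is why additional constraints (here, known scene categories and adversarial priors) are needed.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((4, 4))   # hypothetical transmitted scene
R = rng.random((4, 4))   # hypothetical reflected scene
alpha = 0.7              # assumed blending weight (illustrative)

# Single observation: a weighted blend of the two scenes.
I = alpha * T + (1 - alpha) * R

# Ill-posedness: a different scene pair (T2, R2) yields the exact same image.
delta = 0.1
T2 = T + delta * (1 - alpha) / alpha
R2 = R - delta
print(np.allclose(I, alpha * T2 + (1 - alpha) * R2))  # → True
```

Because the reconstruction constraint alone cannot distinguish `(T, R)` from `(T2, R2)`, the generative approach scores candidate separations by how well each scene matches its known category.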