Restoring reasonable and realistic content for arbitrary missing regions in
images is an important yet challenging task. Although recent image inpainting
models have made significant progress in generating vivid visual details, they
may still produce blurred textures or distorted structures in complex scenes
due to contextual ambiguity. To address this issue, we
propose the Semantic Pyramid Network (SPN) motivated by the idea that learning
multi-scale semantic priors from specific pretext tasks can greatly benefit the
recovery of locally missing content in images. SPN consists of two components.
First, it distills semantic priors from a pretext model into a multi-scale
feature pyramid, achieving a consistent understanding of the global context and
local structures. Within the prior learner, we present an optional module for
variational inference to realize probabilistic image inpainting driven by
various learned priors. The second component of SPN is a fully context-aware
image generator, which adaptively and progressively refines low-level visual
representations at multiple scales with the (stochastic) prior pyramid. We
train the prior learner and the image generator as a unified model without any
post-processing. Our approach achieves the state of the art on multiple
datasets, including Places2, Paris StreetView, CelebA, and CelebA-HQ, under
both deterministic and probabilistic inpainting setups.

Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
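The abstract describes distilling a semantic prior into a multi-scale feature pyramid and then refining generator features coarse-to-fine with that pyramid. The paper's actual architecture is not given here, so the following is only a toy numpy sketch of that general coarse-to-fine fusion pattern; all function names (`build_prior_pyramid`, `refine_with_pyramid`), the average-pool/nearest-neighbor resampling, and the simple gated blend are illustrative assumptions, not SPN's real layers.

```python
import numpy as np

def downsample(x, factor):
    """Average-pool an (H, W, C) feature map by an integer factor."""
    h, w, c = x.shape
    return x.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def upsample(x, factor):
    """Nearest-neighbor upsampling of an (H, W, C) feature map."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def build_prior_pyramid(prior, levels=3):
    """Stand-in for the prior learner: turn a full-resolution semantic
    prior into a coarse-to-fine pyramid (coarsest level first)."""
    return [downsample(prior, 2 ** i) for i in reversed(range(levels))]

def refine_with_pyramid(features, pyramid, gate=0.5):
    """Stand-in for the context-aware generator: start from the coarsest
    scale and progressively blend in the prior at each resolution.
    The fixed scalar `gate` replaces whatever learned fusion SPN uses."""
    x = downsample(features, 2 ** (len(pyramid) - 1))
    for i, prior in enumerate(pyramid):
        x = (1 - gate) * x + gate * prior  # fuse the prior at this scale
        if i < len(pyramid) - 1:
            x = upsample(x, 2)             # move to the next finer scale
    return x
```

For example, with 32x32 features and a 3-level pyramid, refinement passes through 8x8 and 16x16 before returning a 32x32 map, mirroring the progressive, multi-scale refinement the abstract describes.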