A tough problem in image inpainting is restoring complex structures in
corrupted regions. This motivates interactive image inpainting, which leverages
additional hints, e.g., sketches, to assist the inpainting process. Sketches are
additional hints, e.g., sketches, to assist the inpainting process. Sketch is
simple and intuitive to end users, but meanwhile has free forms with much
randomness. Such randomness may confuse the inpainting models, and incur severe
artifacts in completed images. To address this problem, we propose a two-stage
image inpainting method termed SketchRefiner. In the first stage, we propose
using a cross-correlation loss function to robustly calibrate and refine the
user-provided sketches in a coarse-to-fine fashion. In the second stage, we
learn to extract informative features from the abstracted sketches in the
feature space and modulate the inpainting process. We also propose an algorithm
to simulate real sketches automatically and build a test protocol with
different applications. Experimental results on public datasets demonstrate
that SketchRefiner effectively utilizes sketch information and eliminates the
artifacts caused by free-form sketches. Our method consistently outperforms
state-of-the-art methods both qualitatively and quantitatively, while showing
great potential in real-world applications. Our code and dataset are
available
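
To make the cross-correlation idea concrete, here is a minimal sketch of a normalized cross-correlation (NCC) loss in NumPy. This is only one plausible formulation under our own assumptions; the paper's actual loss definition may differ in normalization and scope (e.g., patch-wise rather than global).

```python
import numpy as np

def ncc_loss(pred, target, eps=1e-8):
    """Hypothetical normalized cross-correlation loss: 0 when the two
    maps are perfectly (positively) correlated, up to 2 when anti-correlated."""
    # Flatten and zero-center both maps
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    # Normalized cross-correlation in [-1, 1]
    ncc = (p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + eps)
    # Penalize low correlation between refined sketch and reference
    return 1.0 - ncc
```

Because NCC is invariant to global shifts and scaling of intensity, such a loss would compare the structure of a refined sketch against a reference rather than raw pixel values, which is one motivation for correlation-based objectives in this setting.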