    Segmentation Guided Image Inpainting

    Deep learning-based approaches have shown promising results for the task of image inpainting, generating semantically correct and plausible inpainted images. For object removal, these methods require the input image to be masked roughly around the object region. Masking the input image causes a loss of useful information, because background pixels are also masked out by the rough mask. This loss makes inpainting networks highly dependent on the mask shape and size, and the quality of the inpainted image deteriorates as the mask size increases. In our work, we propose a segmentation-guided inpainting network for object removal that is not dependent on the mask shape or size. It learns to classify the foreground and background spatial locations within the mask region and uses them accordingly for image reconstruction. The network takes the complete image as input, along with the mask as a separate channel, and outputs the inpainted image with the object removed. We also generate a paired dataset of images with and without the object, which is required to train this fully supervised network.
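    The information-loss argument in the abstract can be illustrated with a small NumPy sketch (all shapes, pixel values, and the thresholding "classifier" below are hypothetical stand-ins, not the paper's actual network): a rough rectangular mask discards every pixel it covers, while a segmentation-guided approach would discard only the locations classified as foreground, leaving background pixels under the mask intact.

    ```python
    import numpy as np

    # Toy 6x6 grayscale "image": background value 0.2, a 2x2 object of value 0.9.
    img = np.full((6, 6), 0.2)
    img[2:4, 2:4] = 0.9  # the object to remove

    # Rough rectangular mask drawn around the object (hypothetical one-pixel
    # margin), as in conventional object-removal inpainting.
    rough = np.zeros((6, 6), dtype=bool)
    rough[1:5, 1:5] = True

    # Conventional pipeline: everything under the rough mask is discarded,
    # including useful background pixels.
    masked_input = img.copy()
    masked_input[rough] = 0.0
    lost_background = np.sum(rough) - 4   # mask covers 16 px, object is only 4

    # Segmentation-guided idea (sketch): classify spatial locations inside the
    # mask as foreground (object) vs background, and discard only the foreground.
    foreground = rough & (img > 0.5)      # stand-in for a learned classifier
    guided_input = img.copy()
    guided_input[foreground] = 0.0        # background pixels survive intact

    print(lost_background)                       # background pixels the rough mask wasted → 12
    print(int(np.sum(guided_input[rough] == 0.2)))  # background pixels preserved under mask → 12
    ```

    In the described network the mask itself is not applied destructively; it is concatenated to the full image as an extra input channel, so the network sees all pixels and decides internally which ones to reconstruct.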

    Learning to Incorporate Structure Knowledge for Image Inpainting

    No full text