SA-RFR: self-attention based recurrent feature reasoning for image inpainting with large missing area
With the recent emergence of artificial intelligence, deep learning image inpainting methods have achieved fruitful results. These methods generate plausible structures and textures when repairing images with small missing areas. When inpainting an image with an excessively large missing area (a mask ratio above 50%), however, they usually produce distorted structures or fuzzy textures that are inconsistent with the surrounding area. We therefore propose a self-attention based recurrent feature reasoning (SA-RFR) network. First, SA-RFR uses self-attention (SA) to strengthen the correlation between known and unknown pixels and the constraints on the hole center, so that the repaired content has clearer details and smoother edges. In addition, because ordinary convolution produces redundant feature maps, it generates unnecessary information and makes some models difficult to train. We therefore also propose an adaptive ghost convolution (AGC) to replace part of the ordinary convolutions. By using the PReLU activation function instead of ReLU in the ghost module, AGC effectively alleviates overfitting and improves the quality of the repaired image without increasing the computational cost. We evaluate the proposed model extensively on several public datasets, and the results show that our method outperforms state-of-the-art methods.
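The following is a minimal sketch of the ghost-convolution idea with PReLU as the abstract describes, assuming a GhostNet-style module (a primary convolution plus cheap depthwise "ghost" features); the class name, ratio, and kernel sizes are illustrative assumptions, not the paper's exact AGC design.

```python
import torch
import torch.nn as nn

class GhostConvPReLU(nn.Module):
    """Ghost-style convolution sketch with PReLU activations (hypothetical AGC stand-in)."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio      # intrinsic feature maps from the primary conv
        ghost_ch = out_ch - init_ch    # remaining maps produced by cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.PReLU(init_ch),         # PReLU in place of ReLU, per the abstract
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),   # depthwise conv generates "ghost" features cheaply
            nn.BatchNorm2d(ghost_ch),
            nn.PReLU(ghost_ch),
        )

    def forward(self, x):
        primary = self.primary(x)
        ghost = self.cheap(primary)
        return torch.cat([primary, ghost], dim=1)    # concatenate intrinsic and ghost maps

# Example: stand in for an ordinary conv producing 64 channels
# x = torch.randn(1, 32, 64, 64); y = GhostConvPReLU(32, 64)(x)  # -> (1, 64, 64, 64)
```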
Segmentation Guided Image Inpainting
Deep learning based approaches have shown promising results for the task of image inpainting. These methods have been successful in generating semantically correct and plausible inpainted images. In the case of object removal, they require the input image to be masked roughly around the object region. Masking the input image causes a loss of useful information, as background pixels are also masked out by the rough mask. This loss makes inpainting networks highly dependent on the mask shape and size, and the quality of the inpainted image deteriorates as the mask size increases. In our work, we propose a segmentation guided inpainting network for object removal that does not depend on the mask shape and size. It learns to classify the foreground and background spatial locations within the mask region and uses them accordingly for image reconstruction. The network takes the complete image as input along with the mask as a separate channel and outputs the inpainted image with the object removed. We also generate a paired dataset of images with and without the object, which is required to train this fully supervised network.
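Below is a minimal sketch of the input construction described in the abstract, assuming an RGB image and a binary mask stacked as a 4-channel tensor; the small encoder-decoder is a placeholder of our own, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SegGuidedInpaintNet(nn.Module):
    """Toy encoder-decoder taking the complete image plus the mask as a separate channel."""
    def __init__(self):
        super().__init__()
        # 4 input channels: 3 RGB channels of the complete image + 1 mask channel
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # decoder reconstructs the full image with the object removed
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        # the unmasked image and the rough mask are fed together, so the network
        # can distinguish foreground (object) from background inside the mask region
        x = torch.cat([image, mask], dim=1)
        return self.decoder(self.encoder(x))

# image: (B, 3, H, W) in [0, 1]; mask: (B, 1, H, W), 1 inside the rough object mask
# out = SegGuidedInpaintNet()(image, mask)  # -> (B, 3, H, W) inpainted image
```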