A deep learning framework for quality assessment and restoration in video endoscopy
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. Artifacts such as motion blur, bubbles,
specular reflections, floating objects and pixel saturation impede the visual
interpretation and the automated analysis of endoscopy videos. Given the
widespread use of endoscopy in different clinical applications, we contend that
the robust and reliable identification of such artifacts and the automated
restoration of corrupted video frames is a fundamental medical imaging problem.
Existing state-of-the-art methods deal only with the detection and restoration
of selected artifacts. However, endoscopy videos typically contain numerous
artifacts, which motivates a comprehensive solution.
We propose a fully automatic framework that can: 1) detect and classify six
different primary artifacts, 2) provide a quality score for each frame and 3)
restore mildly corrupted frames. To detect the different artifacts, our
framework exploits a fast multi-scale, single-stage convolutional neural
network detector.
We introduce a quality metric to assess frame quality and predict image
restoration success. Generative adversarial networks with carefully chosen
regularization are finally used to restore corrupted frames.
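The per-frame quality score described above can be sketched as a weighted penalty over detected artifacts. The following is a minimal illustration, assuming a score that combines each artifact class's severity weight with the fraction of the frame it covers; the class names and weights are hypothetical, not the paper's actual values.

```python
# Hypothetical severity weights per artifact class (illustrative only).
ARTIFACT_WEIGHTS = {
    "motion_blur": 0.30,
    "specularity": 0.15,
    "saturation": 0.20,
    "bubbles": 0.10,
    "contrast": 0.15,
    "misc_artifact": 0.10,
}

def frame_quality(detections, frame_area):
    """detections: list of (class_name, box_area) pairs from the detector.

    Returns a score in [0, 1]; 1.0 means no artifacts were detected.
    Each detection contributes its class weight scaled by the fraction
    of the frame it covers.
    """
    penalty = 0.0
    for cls, area in detections:
        coverage = min(area / frame_area, 1.0)
        penalty += ARTIFACT_WEIGHTS.get(cls, 0.1) * coverage
    return max(0.0, 1.0 - penalty)

# Example: a small blur box and a bubble region on a 100k-pixel frame.
score = frame_quality([("motion_blur", 5000), ("bubbles", 2000)], 100_000)
# score ≈ 0.983 for this example
```

A score like this could then be thresholded to decide whether a frame is clean, mildly corrupted (worth restoring), or should be discarded.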
Our detector yields the highest mean average precision (mAP at 5% threshold)
of 49.0 and the lowest computational time of 88 ms allowing for accurate
real-time processing. Our restoration models for blind deblurring, saturation
correction and inpainting demonstrate significant improvements over previous
methods. On a set of 10 test videos we show that our approach preserves an
average of 68.7% of frames, 25% more than are retained from the raw videos.
RIBBONS: Rapid Inpainting Based on Browsing of Neighborhood Statistics
Image inpainting refers to filling missing regions of an image using
neighboring pixels, and has many applications in image processing. Most of
these applications enhance image quality through significant changes to, or
even elimination of, some existing pixels. Such changes incur considerable
computational complexity, which in turn leads to long processing times. In
this paper we propose a fast inpainting algorithm called RIBBONS, based on the
selection of patches around each missing pixel. This accelerates execution
and enables online frame inpainting in video. The applied cost function is a
combination of statistical and spatial features of the neighboring pixels. We
evaluate candidate patches using the proposed cost function and minimize it to
obtain the final patch.
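The candidate-selection step above can be sketched as follows. This is a minimal illustration of patch selection under a cost mixing a statistical term (difference in mean and standard deviation of the known pixels) with a spatial term (distance from the missing pixel); the specific cost function, weights `alpha`/`beta`, and helper names are assumptions, not the RIBBONS formulation itself.

```python
import math

def patch_stats(patch):
    """Mean and variance of a flat list of pixel values (None = missing)."""
    vals = [p for p in patch if p is not None]
    m = sum(vals) / len(vals)
    v = sum((x - m) ** 2 for x in vals) / len(vals)
    return m, v

def cost(candidate, target, dist, alpha=1.0, beta=0.1):
    """Hypothetical cost: statistical mismatch plus weighted spatial distance."""
    cm, cv = patch_stats(candidate)
    tm, tv = patch_stats(target)
    statistical = abs(cm - tm) + abs(math.sqrt(cv) - math.sqrt(tv))
    return alpha * statistical + beta * dist

def best_patch(candidates, target):
    """candidates: list of (patch, distance) pairs; returns the minimizer's index."""
    costs = [cost(p, d_) for p, d_ in [(c, d) for c, d in candidates]] if False else \
            [cost(p, target, d) for p, d in candidates]
    return costs.index(min(costs))

# A target patch with one missing pixel, and two candidate patches:
# one nearby but statistically dissimilar, one farther but well matched.
target = [120, None, 118, 121]
candidates = [([200, 40, 10, 250], 1.0),
              ([119, 122, 117, 120], 3.0)]
```

Here the second candidate wins despite its larger spatial distance, because its intensity statistics match the target's known pixels far better.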
Experimental results show that RIBBONS is faster than previous methods while
remaining comparable in terms of PSNR and SSIM on the images of the MISC
dataset.