Let's Enhance: A Deep Learning Approach to Extreme Deblurring of Text Images
This work presents a novel deep-learning-based pipeline for the inverse
problem of image deblurring, leveraging augmentation and pre-training with
synthetic data. Our results build on our winning submission to the recent
Helsinki Deblur Challenge 2021, whose goal was to explore the limits of
state-of-the-art deblurring algorithms in a real-world data setting. The task
of the challenge was to deblur out-of-focus images of random text, thereby
maximizing an optical-character-recognition-based score function in a
downstream task. A key step of our solution is the data-driven estimation of the
function. A key step of our solution is the data-driven estimation of the
physical forward model describing the blur process. This enables a stream of
synthetic data, generating pairs of ground-truth and blurry images on-the-fly,
which is used for an extensive augmentation of the small amount of challenge
data provided. The actual deblurring pipeline consists of an approximate
inversion of the radial lens distortion (determined by the estimated forward
model) and a U-Net architecture, which is trained end-to-end. Our algorithm was
the only one passing the hardest challenge level, achieving over 70%
character recognition accuracy. Our findings are well in line with the paradigm
of data-centric machine learning, and we demonstrate its effectiveness in the
context of inverse problems. Apart from a detailed presentation of our
methodology, we also analyze the importance of several design choices in a
series of ablation studies. The code of our challenge submission is available
under https://github.com/theophil-trippe/HDC_TUBerlin_version_1.
Comment: This article has been published in a revised form in Inverse Problems
and Imaging.
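The on-the-fly synthetic data stream described above can be sketched as follows. In the paper, the forward model is estimated from the challenge data; here a simple Gaussian point-spread function stands in for it, and a random binary pattern stands in for rendered text. All names and parameters are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    """Isotropic Gaussian kernel as a stand-in for the estimated forward model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # normalize so blurring preserves total intensity

def blur(img, psf):
    """Apply the forward model via FFT-based circular convolution."""
    pad = np.zeros_like(img)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    # center the kernel so the blur is not spatially shifted
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def synthetic_pair(rng, shape=(64, 64)):
    """Generate one (ground-truth, blurry) training pair on demand."""
    sharp = (rng.random(shape) > 0.9).astype(float)  # toy 'text' pattern
    blurry = blur(sharp, gaussian_psf())
    return sharp, blurry

rng = np.random.default_rng(0)
sharp, blurry = synthetic_pair(rng)
```

A training loop would draw fresh pairs from `synthetic_pair` each step, giving an effectively unlimited augmentation of the small amount of real challenge data.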
Defocus Blur Detection and Estimation from Imaging Sensors
Sparse representation has proven to be a very effective technique for various image restoration applications. In this paper, an improved sparse-representation-based method is proposed to detect and estimate defocus blur in imaging sensors. Since patterns usually vary remarkably across different images, or across different patches within a single image, sparse representation over a single over-complete dictionary is unstable and time-consuming. We propose an adaptive domain selection scheme that pre-learns a set of compact dictionaries and adaptively selects the optimal dictionary for each image patch. Then, using nonlocal structure similarity, the proposed method learns nonzero-mean coefficient distributions that are much closer to the real ones. More accurate sparse coefficients can thus be obtained, further improving the results. Experimental results validate that the proposed method outperforms existing defocus blur estimation approaches, both qualitatively and quantitatively.
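The adaptive dictionary-selection idea can be sketched minimally: a patch is coded against whichever pre-learned compact dictionary reconstructs it best. Here the dictionaries are random orthonormal bases and the sparse coder is simple hard thresholding; the paper's learned dictionaries and nonzero-mean coefficient model are more elaborate, so treat every name below as a hypothetical stand-in.

```python
import numpy as np

def sparse_code(patch, D, k=3):
    """Keep only the k largest-magnitude coefficients of patch in basis D."""
    c = D.T @ patch
    keep = np.argsort(np.abs(c))[-k:]
    s = np.zeros_like(c)
    s[keep] = c[keep]
    return s

def select_dictionary(patch, dictionaries, k=3):
    """Pick the dictionary yielding the smallest sparse reconstruction error."""
    errors = []
    for D in dictionaries:
        s = sparse_code(patch, D, k)
        errors.append(np.linalg.norm(patch - D @ s))
    best = int(np.argmin(errors))
    return best, errors[best]

rng = np.random.default_rng(1)
# four compact 'dictionaries': orthonormal bases from QR of random matrices
dicts = [np.linalg.qr(rng.standard_normal((16, 16)))[0] for _ in range(4)]
# a patch that is exactly 3-sparse in dictionary 2
patch = dicts[2] @ np.eye(16)[:, :3].sum(axis=1)
best, err = select_dictionary(patch, dicts)
```

Because the test patch is exactly sparse in the third dictionary, the scheme selects it with near-zero residual, illustrating why per-patch dictionary choice is more stable than a single over-complete dictionary.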