2,040 research outputs found
Motion Offset for Blur Modeling
Motion blur caused by relative movement between the camera and the subject is often an undesirable degradation of image quality. Most conventional deblurring methods estimate a blur kernel for image deconvolution. Because the problem is ill-posed, predefined priors are introduced to suppress the ill-posedness; however, such priors can only handle specific situations. To achieve better deblurring performance on dynamic scenes, deep-learning-based methods learn a mapping function that restores a sharp image from a blurry one, with the blur implicitly modelled in the feature extraction module. However, blur modelled from a paired dataset does not generalize well to some real-world scenes. In summary, an accurate and dynamic blur model that more closely approximates real-world blur is needed.
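The conventional formulation above, a blurry image produced by convolving the sharp image with a single blur kernel, can be sketched as follows. This is a minimal illustration (a hypothetical horizontal linear-motion kernel, nearest-edge padding), not any specific paper's implementation:

```python
import numpy as np

def linear_motion_kernel(length=5):
    """Horizontal linear-motion blur kernel: a uniform average along a line."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0 / length
    return k

def blur(image, kernel):
    """Uniform blur model y = k * x: every pixel shares one kernel."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A single bright pixel is smeared into a horizontal streak.
sharp = np.zeros((7, 7))
sharp[3, 3] = 1.0
blurry = blur(sharp, linear_motion_kernel(5))
```

Because one kernel is shared by all pixels, this model cannot represent the spatially-varying blur of dynamic scenes, which is the limitation the motion-offset formulation addresses.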
By revisiting the principle of camera exposure, we can model blur with the displacements between sharp pixels and the exposed pixel, namely motion offsets. Given specific physical constraints, motion offsets can form different exposure trajectories (e.g., linear or quadratic). Compared with a conventional blur kernel, the proposed motion offsets are a more rigorous approximation of real-world blur, since they can constitute a non-linear and non-uniform motion field. By learning from a dynamic scene dataset, an accurate and spatially-variant motion offset field is obtained.
With accurate motion information and a compact blur modeling method, we explore ways of utilizing motion information to facilitate multiple blur-related tasks. By introducing recovered motion offsets, we build a motion-aware and spatially-variant convolution. For extracting a video clip from a blurry image, motion offsets provide an explicit (non-)linear motion trajectory for interpolation. We also work towards better image deblurring performance in real-world scenarios by improving the generalization ability of the deblurring model.
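The idea of a spatially-variant convolution driven by per-pixel motion offsets can be sketched as below. This is a toy forward model, with hypothetical offset shapes and nearest-neighbour sampling, not the paper's learned module:

```python
import numpy as np

def motion_offset_blur(image, offsets):
    """
    Spatially-variant blur from per-pixel motion offsets.
    offsets: (H, W, S, 2) array -- for each pixel, S sampled
    displacements (dy, dx) along its own exposure trajectory.
    Each blurry pixel averages the sharp pixels it 'visited'
    during exposure (nearest-neighbour sampling, clipped at borders).
    """
    H, W, S, _ = offsets.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for s in range(S):
                dy, dx = offsets[i, j, s]
                y = int(np.clip(round(i + dy), 0, H - 1))
                x = int(np.clip(round(j + dx), 0, W - 1))
                acc += image[y, x]
            out[i, j] = acc / S
    return out

# Linear trajectory example: every pixel moves 2 px to the right
# during exposure, sampled at S = 3 instants.
H = W = 5
S = 3
offsets = np.zeros((H, W, S, 2))
offsets[..., 1] = np.linspace(0, 2, S)  # dx = 0, 1, 2
sharp = np.zeros((H, W))
sharp[2, 2] = 1.0
blurry = motion_offset_blur(sharp, offsets)
```

Unlike a single shared kernel, the offset field can differ at every pixel, so non-uniform and non-linear motion (a quadratic trajectory, rotation, multiple moving objects) is expressible in the same structure.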
Continuous Facial Motion Deblurring
We introduce a novel framework for continuous facial motion deblurring that
restores the continuous sharp moment latent in a single motion-blurred face
image via a moment control factor. Although a motion-blurred image is the
accumulated signal of continuous sharp moments during the exposure time, most
existing single image deblurring approaches aim to restore a fixed number of
frames using multiple networks and training stages. To address this problem, we
propose a continuous facial motion deblurring network based on GAN (CFMD-GAN),
which is a novel framework for restoring the continuous moment latent in a
single motion-blurred face image with a single network and a single training
stage. To stabilize the network training, we train the generator to restore
continuous moments in the order determined by our facial motion-based
reordering process (FMR) utilizing domain-specific knowledge of the face.
Moreover, we propose an auxiliary regressor that helps our generator produce
more accurate images by estimating continuous sharp moments. Furthermore, we
introduce a control-adaptive (ContAda) block that performs spatially deformable
convolution and channel-wise attention as a function of the control factor.
Extensive experiments on the 300VW dataset demonstrate that the proposed
framework generates varying numbers of continuous output frames by varying
the moment control factor. Compared with recent single-to-single image
deblurring networks trained on the same 300VW training set, the proposed
method shows superior performance in restoring the central sharp frame in
terms of perceptual metrics, including LPIPS, FID, and ArcFace identity
distance. The proposed method also outperforms the existing single-to-video
deblurring method in both qualitative and quantitative comparisons.
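The control-conditioned channel attention in the ContAda block can be illustrated with a small sketch. Everything here (layer sizes, the tiny MLP, the omission of the deformable-convolution branch) is a hypothetical simplification for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class ControlAdaptiveAttention:
    """
    Toy sketch of control-adaptive channel attention: per-channel
    gates are predicted from the moment control factor t in [0, 1]
    by a tiny 2-layer MLP, then applied to the feature map.
    (The real ContAda block also performs control-conditioned
    spatially deformable convolution, omitted here.)
    """
    def __init__(self, channels, hidden=8):
        self.w1 = rng.normal(0, 0.5, (hidden, 1))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.5, (channels, hidden))
        self.b2 = np.zeros(channels)

    def __call__(self, features, t):
        # features: (C, H, W); t: scalar moment control factor
        h = np.tanh(self.w1 @ np.array([t]) + self.b1)
        gate = 1.0 / (1.0 + np.exp(-(self.w2 @ h + self.b2)))  # sigmoid
        return features * gate[:, None, None]

feats = np.ones((4, 3, 3))
block = ControlAdaptiveAttention(channels=4)
out_early = block(feats, t=0.0)  # restore an early moment
out_late = block(feats, t=1.0)   # restore a late moment
```

The point of the design is that one network, modulated by a continuous scalar, can produce a continuum of outputs; changing t changes the gating, so different moments are restored without retraining or extra networks.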
Alignment parameter calibration for IMU using the Taguchi method for image deblurring
Inertial measurement units (IMUs) in smartphones can detect camera motion during exposure, in order to improve image quality degraded by blur from long hand-held exposure. Based on the captured camera motion, blur in images can be removed when an appropriate deblurring filter is used. However, two research issues have not been addressed: (a) the calibration of alignment parameters for the IMU. When inappropriate alignment parameters are used, the camera motion is not captured accurately and the deblurring effectiveness is degraded. (b) The selection of an appropriate deblurring filter correlated with image quality. Without an appropriate deblurring filter, the image quality cannot be optimal. This paper applies the Taguchi method, a robust and systematic approach for designing reliable and high-precision devices, to perform alignment parameter calibration for the IMU and filter selection. The Taguchi method conducts a small number of systematic experiments based on orthogonal arrays, studying the impact of the alignment parameters and the deblurring filter in order to achieve effective deblurring. Several widely adopted image quality metrics are used to evaluate the deblurred images generated by the proposed method. Experimental results show that the quality of deblurred images achieved by the proposed Taguchi method is better than that obtained by deblurring methods that do not involve alignment parameter calibration and filter selection. Moreover, the Taguchi method requires much less computational effort than commonly used optimization methods for determining the alignment parameters and deblurring filter.
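The orthogonal-array experimentation at the core of the Taguchi method can be sketched in a few lines. The quality function below is a hypothetical stand-in for an image-quality score; the factor names are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors so that
# every pair of factor levels appears equally often (levels coded 0/1).
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def quality(levels):
    """Hypothetical stand-in for a deblurred-image quality score
    (e.g. PSNR) as a function of the chosen levels. Illustrative
    factor roles: 0 = one IMU alignment parameter, 1 = another
    alignment parameter, 2 = deblurring filter choice."""
    a, b, c = levels
    return 20.0 + 3.0 * a - 1.0 * b + 2.0 * c

scores = np.array([quality(run) for run in L4])
# Larger-is-better signal-to-noise ratio per run.
snr = -10.0 * np.log10(1.0 / scores**2)

# For each factor, average the S/N at each level and keep the better level.
best = [int(np.argmax([snr[L4[:, f] == lvl].mean() for lvl in (0, 1)]))
        for f in range(3)]
```

With only 4 runs instead of the 8 a full 2^3 factorial would need, the per-factor S/N averages still identify the better level of every factor, which is why the approach needs much less computation than iterative optimization.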
Blurring and deblurring digital images using the dihedral group
A new method of blurring and deblurring digital images is presented. The approach is based on new filters generated from the average filter and H-filters using the action of the dihedral group. These filters, called HB-filters, are used to cause motion blur and then to deblur the affected images. Enhancing images with HB-filters is also presented and compared with other methods such as Average, Gaussian, and Motion. Results and analysis show that HB-filters perform better in terms of peak signal-to-noise ratio (PSNR) and RMSE.
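The dihedral-group action that the abstract relies on can be made concrete: acting on a 2-D filter with D4 (the symmetries of a square) yields up to eight kernel variants. The base filter below is an illustrative choice, not the paper's H-filter construction:

```python
import numpy as np

def dihedral_orbit(kernel):
    """The eight images of a 2-D filter under the dihedral group D4:
    rotations by 0/90/180/270 degrees, each optionally mirrored.
    This is the kind of group action used to derive filter families
    from a single base filter."""
    variants = []
    k = kernel
    for _ in range(4):
        variants.append(k)
        variants.append(np.fliplr(k))
        k = np.rot90(k)
    return variants

# Horizontal motion-blur-like base filter (one row of an average filter).
base = np.array([[0, 0, 0],
                 [1, 1, 1],
                 [0, 0, 0]]) / 3.0
orbit = dihedral_orbit(base)
```

For a symmetric base like this, several of the eight group elements coincide (here the orbit contains only horizontal and vertical bars); an asymmetric base filter yields eight distinct kernels.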
Application of Ghost-DeblurGAN to Fiducial Marker Detection
Feature extraction or localization based on fiducial markers can fail due
to motion blur in real-world robotic applications. To solve this problem, a
lightweight generative adversarial network, named Ghost-DeblurGAN, for
real-time motion deblurring is developed in this paper. Furthermore, because
no deblurring benchmark exists for this task, a new large-scale dataset,
YorkTag, is proposed that provides pairs of sharp/blurred images containing
fiducial markers. With the proposed model trained and tested on YorkTag, it
is demonstrated that when applied alongside fiducial marker systems to
motion-blurred images, Ghost-DeblurGAN significantly improves marker
detection. The datasets and code used in this paper are available at:
https://github.com/York-SDCNLab/Ghost-DeblurGAN.