
    Learning to Deblur Images with Exemplars

    Human faces are an interesting object class with numerous applications. While significant progress has been made on the generic deblurring problem, existing methods are less effective for blurry face images. The success of state-of-the-art image deblurring algorithms stems mainly from the implicit or explicit restoration of salient edges for kernel estimation; however, only a few such edges can be restored from blurry face images. In this paper, we address the problem of deblurring face images by exploiting facial structures. We propose a deblurring algorithm based on an exemplar dataset, without using coarse-to-fine strategies or heuristic edge selection. In addition, we develop a convolutional neural network to restore sharp edges from blurry images for deblurring. Extensive experiments against state-of-the-art methods demonstrate the effectiveness of the proposed algorithms for deblurring face images. We also show that the proposed algorithms can be applied to image deblurring for other object classes.
    Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence 201
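
    A hedged sketch of the exemplar idea, assuming a set of aligned, same-size sharp exemplar faces: pick the exemplar whose gradient structure correlates best with the blurred input and use its salient edges to guide kernel estimation. The matching criterion and threshold below are illustrative assumptions, and the paper's CNN edge-restoration step is not reproduced.

```python
import numpy as np

def select_exemplar_edges(blurred, exemplars, thresh=0.1):
    """Return salient-edge maps (gx, gy) from the best-matching exemplar face.
    blurred   : 2-D grayscale blurred face
    exemplars : list of 2-D sharp exemplar faces, aligned and same size (assumed)
    """
    def normalized_grad_mag(im):
        gy, gx = np.gradient(im.astype(np.float64))
        m = np.hypot(gx, gy)
        return (m - m.mean()) / (m.std() + 1e-12)

    b = normalized_grad_mag(blurred)
    # Correlation of gradient magnitudes as a crude structural similarity score.
    scores = [float(np.mean(b * normalized_grad_mag(e))) for e in exemplars]
    best = exemplars[int(np.argmax(scores))]

    gy, gx = np.gradient(best.astype(np.float64))
    mask = np.hypot(gx, gy) > thresh          # keep only salient edges
    return gx * mask, gy * mask
```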

    Blur Removal via Blurred-Noisy Image Pair

    Complex blur, such as a mixture of space-variant and space-invariant blur, is hard to model mathematically and widely exists in real images. In this paper, we propose a novel image deblurring method that does not need to estimate blur kernels. We utilize a pair of images that can be easily acquired in low-light situations: (1) a blurred image taken with a slow shutter speed and low ISO noise, and (2) a noisy image captured with a fast shutter speed and high ISO noise. Slicing the blurred image into patches, we extend the Gaussian mixture model (GMM) to model the underlying intensity distribution of each patch using the corresponding patches in the noisy image. We compute patch correspondences by analyzing the optical flow between the two images. The Expectation-Maximization (EM) algorithm is used to estimate the parameters of the GMM. To preserve sharp features, we add a bilateral term to the objective function in the M-step. We finally add a detail layer to the deblurred image for refinement. Extensive experiments on both synthetic and real-world data demonstrate that our method outperforms state-of-the-art techniques in terms of robustness, visual quality, and quantitative metrics.
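
    A minimal sketch of the patch-level modelling step, assuming patch correspondences have already been found via optical flow: a standard EM-fitted GMM (scikit-learn) stands in for the paper's extended GMM, and the bilateral term added to the M-step is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_patch_gmm(noisy_patch, n_components=3):
    """Model the underlying intensity distribution of a blurred patch using the
    pixel intensities of its corresponding noisy patch. Plain EM via
    scikit-learn; the bilateral-regularized M-step of the paper is omitted."""
    samples = noisy_patch.astype(np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(samples)
    return gmm  # gmm.means_, gmm.covariances_, gmm.weights_ are the EM estimates
```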

    Kernel Estimation from Salient Structure for Robust Motion Deblurring

    Blind image deblurring algorithms have been improving steadily over the past years. Most state-of-the-art algorithms, however, still do not perform well in challenging cases, especially in the large-blur setting. In this paper, we focus on how to estimate a good blur kernel from a single blurred image based on the image structure. We found that image details corrupted by blurring can adversely affect kernel estimation, especially when the blur kernel is large. One effective way to eliminate these details is to apply an image denoising model based on Total Variation (TV). First, we develop a novel method for computing image structures based on the TV model, such that structures undermining kernel estimation are removed. Second, to mitigate the possible adverse effect of salient edges and improve the robustness of kernel estimation, we apply a gradient selection method. Third, we propose a novel kernel estimation method that preserves the continuity and sparsity of the kernel and reduces noise. Finally, we develop an adaptive weighted spatial prior to preserve sharp edges in latent image restoration. The effectiveness of our method is demonstrated by experiments on various kinds of challenging examples.
    Comment: This work has been accepted by Signal Processing: Image Communication, 201
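
    A rough Python sketch of the first two steps, using skimage's Chambolle TV denoiser as a stand-in for the paper's TV model: TV smoothing removes blur-corrupted fine details, and a gradient threshold then selects the structures used for kernel estimation. The weight and threshold values are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def extract_salient_structure(blurred, tv_weight=0.1, grad_thresh=0.02):
    """TV-smooth the blurred image to suppress detrimental fine details, then
    keep only gradients above a threshold as the structure for kernel estimation."""
    structure = denoise_tv_chambolle(blurred.astype(np.float64), weight=tv_weight)
    gy, gx = np.gradient(structure)
    mask = np.hypot(gx, gy) > grad_thresh     # gradient selection step
    return structure, gx * mask, gy * mask
```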

    Blur Robust Optical Flow using Motion Channel

    It is hard to estimate optical flow from a real-world video sequence with camera shake and other motion blur. In this paper, we first investigate a blur parameterization for video footage using near-linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera in order to film footage of interest together with the camera motion. We illustrate that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and a warping-based optical flow scheme. Our method yields improved accuracy over three other state-of-the-art baselines on our proposed ground-truth blurry sequences and on several other real-world sequences filmed by our imaging system.
    Comment: Preprint of our paper accepted by Neurocomputing
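
    A toy Python sketch of the interleaving idea, with several stand-ins: a linear-motion PSF derived from the current median flow replaces the pose-sensor trajectory, non-blind Wiener filtering (skimage) replaces blind deconvolution, and OpenCV's Farneback flow replaces the warping-based energy. All names and parameter values are illustrative assumptions.

```python
import numpy as np
import cv2
from skimage.restoration import wiener

def linear_motion_psf(dx, dy, size=15):
    """Line PSF tracing the displacement (dx, dy), a crude motion-blur model."""
    psf = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-0.5, 0.5, 8 * size):
        r = int(np.clip(np.rint(c + t * dy), 0, size - 1))
        q = int(np.clip(np.rint(c + t * dx), 0, size - 1))
        psf[r, q] += 1.0
    return psf / psf.sum()

def interleaved_flow_deblur(f1, f2, n_outer=3, balance=0.1):
    """Alternate a deconvolution pass and an optical-flow pass on two frames."""
    g1 = f1.astype(np.float64) / 255.0
    g2 = f2.astype(np.float64) / 255.0
    flow = np.zeros(g1.shape + (2,), dtype=np.float32)
    for _ in range(n_outer):
        psf = linear_motion_psf(np.median(flow[..., 0]), np.median(flow[..., 1]))
        s1, s2 = wiener(g1, psf, balance), wiener(g2, psf, balance)
        flow = cv2.calcOpticalFlowFarneback(
            (255 * np.clip(s1, 0, 1)).astype(np.uint8),
            (255 * np.clip(s2, 0, 1)).astype(np.uint8),
            None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow, s1, s2
```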

    Deep Algorithm Unrolling for Blind Image Deblurring

    Blind image deblurring remains a topic of enduring interest. Learning-based approaches, especially those that employ neural networks, have emerged to complement traditional model-based methods and in many cases achieve vastly enhanced performance. That said, neural network approaches are generally designed empirically, and the underlying structures are difficult to interpret. In recent years, a promising technique called algorithm unrolling has been developed that helps connect iterative algorithms, such as those for sparse coding, to neural network architectures. However, such connections have not yet been made for blind image deblurring. In this paper, we propose a neural network architecture based on this idea. We first present an iterative algorithm that may be considered a generalization of the traditional total-variation regularization method in the gradient domain. We then unroll the algorithm to construct a neural network for image deblurring, which we refer to as Deep Unrolling for Blind Deblurring (DUBLID). Key algorithm parameters are learned with the help of training images. Our proposed deep network DUBLID achieves significant practical performance gains while enjoying interpretability at the same time. Extensive experimental results show that DUBLID outperforms many state-of-the-art methods and is, in addition, computationally faster.
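
    To make the unrolling idea concrete, here is a toy PyTorch sketch: a fixed number of ISTA-style iterations for recovering a sparse gradient map from its blurred version become network layers, with the per-iteration step size and shrinkage threshold learned from data. This illustrates algorithm unrolling in general, not the DUBLID architecture, and it assumes the blur kernel is given.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledGradientDeblur(nn.Module):
    """Unrolled ISTA for min_g 0.5*||k * g - g_b||^2 + theta*||g||_1, where g is a
    gradient map of the latent image and g_b the gradient map of the blurred one."""

    def __init__(self, n_iters=5):
        super().__init__()
        # One learnable step size and shrinkage threshold per unrolled iteration.
        self.steps = nn.Parameter(torch.full((n_iters,), 0.5))
        self.thetas = nn.Parameter(torch.full((n_iters,), 0.01))

    def forward(self, grad_blurred, kernel):
        # grad_blurred: (B, 1, H, W); kernel: (1, 1, kh, kw), assumed known here.
        pad = (kernel.shape[-2] // 2, kernel.shape[-1] // 2)
        k_flip = torch.flip(kernel, dims=[-2, -1])   # adjoint of the blur (approx.)
        g = grad_blurred.clone()
        for step, theta in zip(self.steps, self.thetas):
            residual = F.conv2d(g, kernel, padding=pad) - grad_blurred
            g = g - step * F.conv2d(residual, k_flip, padding=pad)      # data-fit step
            g = torch.sign(g) * torch.clamp(g.abs() - theta, min=0.0)   # learned shrinkage
        return g
```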

    Learn to Model Motion from Blurry Footages

    It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper, we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) and a traditional optical flow energy. We first construct a CNN architecture with a novel learnable directional filtering layer. Such a layer encodes the angle and distance similarity matrix between blur and camera motion, which is able to enhance the blur features of camera-shake footage. The proposed CNNs are then integrated into an iterative optical flow framework, which makes it possible to model and solve both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.
    Comment: Preprint of our paper accepted by Pattern Recognition
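
    One way to picture the directional filtering layer is as a bank of oriented line filters whose responses highlight blur along particular directions. The numpy sketch below hand-builds such a bank as a possible initialization; the learnable layer would treat these as trainable weights, and the sizes, angles, and lengths here are assumptions.

```python
import numpy as np

def directional_filter_bank(size=15, n_angles=8, lengths=(3, 7, 11)):
    """Bank of normalized line filters over a grid of orientations and lengths."""
    bank = []
    c = size // 2
    for length in lengths:
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            k = np.zeros((size, size))
            # Trace a line of the given length through the filter centre.
            for t in np.linspace(-(length - 1) / 2.0, (length - 1) / 2.0, 4 * length):
                r = int(np.rint(c + t * np.sin(theta)))
                q = int(np.rint(c + t * np.cos(theta)))
                k[r, q] = 1.0
            bank.append(k / k.sum())
    return np.stack(bank)   # shape: (len(lengths) * n_angles, size, size)
```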

    Removing Camera Shake via Weighted Fourier Burst Accumulation

    Numerous recent approaches attempt to remove image blur due to camera shake, either from one or multiple input images, by explicitly solving an inverse and inherently ill-posed deconvolution problem. If the photographer takes a burst of images, a modality available in virtually all modern digital cameras, we show that it is possible to combine them to obtain a clean, sharp version. This is done without explicitly solving any blur estimation and subsequent inverse problem. The proposed algorithm is strikingly simple: it performs a weighted average in the Fourier domain, with weights depending on the Fourier spectrum magnitude. The method can be seen as a generalization of the align-and-average procedure, with a weighted average, motivated by hand-shake physiology and theoretically supported, taking place in the Fourier domain. The method's rationale is that camera shake has a random nature, and therefore each image in the burst is generally blurred differently. Experiments with real camera data, and extensive comparisons, show that the proposed Fourier Burst Accumulation (FBA) algorithm achieves state-of-the-art results an order of magnitude faster, and is simple enough for on-board implementation on camera phones. Finally, we also present experiments in real high dynamic range (HDR) scenes, showing how the method can be straightforwardly extended to HDR photography.
    Comment: Errata with respect to the published version: Algorithm 1, lines 9 and 10: w_i is replaced by w^p_i (as was correctly stated in Eq. (9)).
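
    A minimal numpy sketch of the fusion step, assuming the burst frames are already registered, grayscale, and of equal size; the choice of exponent p and the weight smoothing of the published algorithm are simplified here.

```python
import numpy as np

def fourier_burst_accumulation(burst, p=11):
    """Weighted average of a burst in the Fourier domain: each frequency of each
    frame is weighted by its Fourier magnitude raised to the power p, so the
    least-attenuated (least-blurred) frequencies dominate the fused result."""
    spectra = np.stack([np.fft.fft2(frame.astype(np.float64)) for frame in burst])
    weights = np.abs(spectra) ** p
    fused = (weights * spectra).sum(axis=0) / (weights.sum(axis=0) + 1e-12)
    return np.real(np.fft.ifft2(fused))
```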

    Blind Deconvolution with Non-local Sparsity Reweighting

    Blind deconvolution has made significant progress in the past decade. Most successful algorithms are classified either as variational or Maximum a Posteriori (MAP). In spite of the superior theoretical justification of variational techniques, carefully constructed MAP algorithms have proven equally effective in practice. In this paper, we show that all successful MAP and variational algorithms share a common framework, relying on the following key principles: sparsity promotion in the gradient domain, l_2 regularization for kernel estimation, and the use of convex (often quadratic) cost functions. Our observations lead to a unified understanding of the principles required for successful blind deconvolution. We incorporate these principles into a novel algorithm that improves significantly upon the state of the art.
    Comment: 19 pages
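
    As an illustration of one shared ingredient named above, the sketch below computes an l_2-regularized kernel estimate in the gradient domain in closed form per frequency. It is a generic formulation under the stated principles, not any particular paper's algorithm; practical methods add non-negativity, normalization, and support constraints.

```python
import numpy as np

def estimate_kernel_l2(blurred, latent, ksize=31, lam=1e-2):
    """Per-frequency solution of min_k ||grad(b) - k * grad(x)||^2 + lam*||k||^2."""
    gy_b, gx_b = np.gradient(blurred.astype(np.float64))
    gy_l, gx_l = np.gradient(latent.astype(np.float64))
    Bx, By = np.fft.fft2(gx_b), np.fft.fft2(gy_b)
    Xx, Xy = np.fft.fft2(gx_l), np.fft.fft2(gy_l)
    K = (np.conj(Xx) * Bx + np.conj(Xy) * By) / (np.abs(Xx) ** 2 + np.abs(Xy) ** 2 + lam)
    k_full = np.fft.fftshift(np.real(np.fft.ifft2(K)))
    cy, cx = k_full.shape[0] // 2, k_full.shape[1] // 2
    k = k_full[cy - ksize // 2: cy + ksize // 2 + 1, cx - ksize // 2: cx + ksize // 2 + 1]
    k = np.clip(k, 0.0, None)            # project onto non-negative, unit-sum kernels
    return k / (k.sum() + 1e-12)
```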

    Blurred Image Classification based on Adaptive Dictionary

    Two frameworks for blurred image classification based on an adaptive dictionary are proposed. Given a blurred image, instead of deblurring it, the semantic category of the image is determined from blur-insensitive sparse coefficients computed over an adaptive dictionary. The dictionary is adapted to the Point Spread Function (PSF) estimated from the input blurred image. The PSF is assumed to be space-invariant; it is either inferred separately in one framework or updated jointly with the sparse coefficient calculation in an alternating, iterative algorithm in the other. The experiments evaluate three types of blur, namely defocus blur, simple motion blur, and camera-shake blur. The experimental results confirm the effectiveness of the proposed frameworks.
    Comment: 10 pages, 2 figures
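
    A hedged sketch of the separately-inferred-PSF variant: every base dictionary atom is convolved with the estimated PSF before sparse coding, so the codes computed for a blurred patch approximate those a sharp patch would yield over the original dictionary. OMP from scikit-learn is used here as a generic sparse coder; names and parameters are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.linear_model import OrthogonalMatchingPursuit

def blur_adapted_codes(blurred_patch, base_atoms, psf, n_nonzero=10):
    """Sparse-code a blurred patch over a PSF-adapted dictionary.
    base_atoms: list of 2-D atoms with the same shape as blurred_patch (assumed)."""
    D = np.stack([fftconvolve(a, psf, mode="same").ravel() for a in base_atoms], axis=1)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, blurred_patch.ravel())
    return omp.coef_                     # blur-insensitive coefficients for a classifier
```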

    Sparse Representation of a Blur Kernel for Blind Image Restoration

    Blind image restoration is a non-convex problem that involves restoring an image degraded by an unknown blur kernel. The factors affecting the performance of this restoration are how much prior information about the image and the blur kernel is provided and what algorithm is used to perform the restoration task. Prior information on images is often employed to restore the sharpness of edges. By contrast, there is still no consensus on what prior information to use for the blur kernel, owing to the complexity of real image blurring processes. In this paper, we propose modelling a blur kernel as a sparse linear combination of basic 2-D patterns. Our approach has a competitive edge over existing blur kernel modelling methods because it has the flexibility to customize the dictionary design, which makes it well suited to a variety of applications. As a demonstration, we construct a dictionary formed by basic patterns derived from the Kronecker product of Gaussian sequences. We also compare our results with those of other state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR).
    Comment: 11 pages, 37 figures
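
    A small numpy sketch of the demonstrated construction: dictionary atoms are formed as Kronecker (outer) products of 1-D Gaussian sequences. The kernel size and sigma grid below are illustrative assumptions.

```python
import numpy as np

def gaussian_sequence(n, sigma):
    """Unit-norm 1-D Gaussian sequence of length n."""
    x = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / np.linalg.norm(g)

def kron_gaussian_dictionary(ksize=15, sigmas=(0.5, 1.0, 2.0, 4.0, 8.0)):
    """Dictionary whose columns are vectorized Kronecker products of Gaussian
    sequences; a blur kernel is then approximated as a sparse combination of them."""
    atoms = [np.kron(gaussian_sequence(ksize, s1), gaussian_sequence(ksize, s2))
             for s1 in sigmas for s2 in sigmas]
    return np.stack(atoms, axis=1)       # shape: (ksize * ksize, len(sigmas) ** 2)
```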