
    Convolutional Deblurring for Natural Imaging

    Full text link
    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. Optical blurring is a common degradation in imaging applications that suffer from optical imperfections. Although numerous deconvolution methods blindly estimate blurring in either inclusive or exclusive forms, they are of limited practical use due to high computational cost and low image reconstruction quality. High accuracy and high speed are both prerequisites for high-throughput imaging platforms in digital archiving, where deblurring is required after image acquisition and before images are stored, previewed, or processed for high-level interpretation. On-the-fly correction of such images is therefore important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for the Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages; for publication in IEEE Transactions on Image Processing.
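    The kernel-synthesis idea admits a compact sketch. The Python snippet below is a minimal illustration of it under our own assumptions (the series truncation, stencil choices, and function names are ours, not the authors' implementation): it builds an approximate inverse-Gaussian FIR kernel as a linear combination of even-derivative stencils and convolves it directly with the blurry image, separably along rows and columns.

```python
# Minimal sketch (our construction, not the paper's exact synthesis): an
# approximate inverse-Gaussian FIR kernel built from even-derivative stencils.
import numpy as np
from math import factorial
from scipy.ndimage import convolve1d

# Central-difference stencils for the 0th, 2nd, and 4th derivatives.
STENCILS = [np.array([1.0]),
            np.array([1.0, -2.0, 1.0]),
            np.array([1.0, -4.0, 6.0, -4.0, 1.0])]

def inverse_gaussian_fir(sigma, terms=3):
    """Truncate exp(sigma^2 w^2 / 2) = sum_n (sigma^2/2)^n w^(2n) / n!,
    mapping the symbol w^(2n) to (-1)^n times the 2n-th derivative stencil."""
    size = len(STENCILS[terms - 1])
    k, c = np.zeros(size), size // 2
    for n in range(terms):
        s = STENCILS[n]
        coef = (-1.0) ** n * (sigma ** 2 / 2.0) ** n / factorial(n)
        k[c - len(s) // 2: c + len(s) // 2 + 1] += coef * s
    return k

def one_shot_deblur(image, sigma):
    """Directly convolve a blurry image with the synthesized kernel."""
    k = inverse_gaussian_fir(sigma)
    out = convolve1d(np.asarray(image, float), k, axis=0, mode='reflect')
    return convolve1d(out, k, axis=1, mode='reflect')
```

    The paper additionally uses a Gaussian low-pass stage to keep noise amplification in check; that step is omitted here for brevity.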

    Superresolution imaging: A survey of current techniques

    Full text link
    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., "Superresolution imaging: A survey of current techniques", Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008.
    Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited sensor size) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same frame. Differences between images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for calculating the blurs. In this paper, after reviewing current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs, establishing a unified way to estimate both simultaneously. By estimating the blurs we automatically estimate the shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods.
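    To make the variational idea concrete, here is a toy 1-D sketch, entirely our own construction and not the paper's algorithm (the operators, regularizers, and weights are illustrative assumptions): it alternates regularized least-squares updates of the high-resolution signal and the per-frame blurs. Because each blur is a shifted kernel, subpixel shifts are estimated implicitly along with the blurs, as the abstract notes.

```python
# Toy 1-D sketch (ours, not the paper's method): alternate least-squares
# updates of HR signal u and per-frame blurs h_k for g_k = S * conv(u, h_k).
import numpy as np

def conv_mat(v, n_rows, n_cols, center):
    """Matrix M with (M @ x)[i] = sum_j x[j] * v[i - j + center]."""
    M = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            t = i - j + center
            if 0 <= t < len(v):
                M[i, j] = v[t]
    return M

rng = np.random.default_rng(0)
N, L, s, K = 64, 7, 2, 4                    # HR length, blur size, scale, frames
u_true = np.cumsum(rng.standard_normal(N))  # an arbitrary smooth-ish signal
S = np.eye(N)[::s]                          # decimation operator
obs = []
for k in range(K):                          # shifted Gaussian blur per frame
    h = np.exp(-0.5 * (np.arange(L) - L // 2 - 0.3 * k) ** 2)
    h /= h.sum()
    obs.append(S @ conv_mat(h, N, N, L // 2) @ u_true
               + 0.01 * rng.standard_normal(N // s))

D = np.diff(np.eye(N), axis=0)              # first-difference regularizer
u, hs = np.zeros(N), [np.full(L, 1.0 / L) for _ in range(K)]
lam, mu = 0.1, 0.01
for _ in range(20):
    # u-step: regularized least squares over all frames.
    A = np.vstack([S @ conv_mat(h, N, N, L // 2) for h in hs]
                  + [np.sqrt(lam) * D])
    b = np.concatenate(obs + [np.zeros(N - 1)])
    u = np.linalg.lstsq(A, b, rcond=None)[0]
    # h-step: per-frame blur update, then project to a valid kernel.
    for k in range(K):
        B = np.vstack([S @ conv_mat(u, N, L, L // 2), np.sqrt(mu) * np.eye(L)])
        h = np.linalg.lstsq(B, np.concatenate([obs[k], np.zeros(L)]),
                            rcond=None)[0]
        hs[k] = np.clip(h, 0, None) + 1e-12
        hs[k] /= hs[k].sum()
```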

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    Get PDF
    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and the lag introduced by excessive processing time all affect the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed, so a simulated target path is generated using Bezier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze its impact on near-realtime processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential size, weight, and power requirements of realistic implementation approaches.
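    Two of the simulation components lend themselves to a short sketch. The snippet below is an illustration under our own assumptions (function names, parameters, and the least-squares formulation are ours, not the dissertation's implementation): a cubic Bezier target path, and triangulation of the target as the least-squares point closest to the line-of-sight rays from two sensors.

```python
# Illustrative sketch (not the dissertation's code): a cubic Bezier target
# path and least-squares triangulation of the target from sensor rays.
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Cubic Bezier point(s) for parameter t in [0, 1]."""
    t = np.asarray(t)[..., None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def triangulate(origins, directions):
    """Point minimizing sum_i ||(I - d_i d_i^T)(x - o_i)||^2 over rays."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Target sampled at one instant of its path, viewed by two sensors.
p = [np.array(v, float) for v in ([0, 0, 0], [2, 5, 1], [6, 4, 0], [9, 0, 2])]
target = bezier(*p, 0.4)
sensors = [np.array([0., 0., 500.]), np.array([400., 100., 450.])]
rays = [target - s for s in sensors]     # ideal (noise-free) lines of sight
print(triangulate(sensors, rays))        # ~ recovers the target position
```

    In the full simulation, the rays would come from noisy, blurred, post-processed imagery rather than from the true target position, which is where the accuracy analysis enters.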

    Subspace Representations for Robust Face and Facial Expression Recognition

    Get PDF
    Analyzing human faces and modeling their variations have always been of interest to the computer vision community. Face analysis based on 2D intensity images is a challenging problem, complicated by variations in pose, lighting, blur, and non-rigid facial deformations due to facial expressions. Among the different sources of variation, facial expressions are of interest as important channels of non-verbal communication. Facial expression analysis is also affected by changes in view-point and by inter-subject variations in performing different expressions. This dissertation attempts to address some of the challenges involved in developing robust algorithms for face and facial expression recognition by exploiting the idea of proper subspace representations for data.
    Variations in the visual appearance of an object arise mostly from changes in illumination and pose. We therefore first present a video-based sequential algorithm for estimating the face albedo as an illumination-insensitive signature for face recognition. We show that, by knowing or estimating the pose of the face at each frame of a sequence, the albedo can be efficiently estimated using a Kalman filter. We then extend this to the case of unknown pose by simultaneously tracking the pose and updating the albedo through an efficient Bayesian inference method performed with a Rao-Blackwellized particle filter.
    Since understanding the effects of blur, especially motion blur, is an important problem in unconstrained visual analysis, we next propose a blur-robust recognition algorithm for faces with spatially varying blur. We model a blurred face as a weighted average of geometrically transformed instances of its clean face. We then build a matrix, for each gallery face, whose column space spans the space of all motion-blurred images obtained from the clean face. This matrix representation is then used to define a suitable objective function and perform blur-robust face recognition.
    To develop robust and generalizable models for expression analysis, one needs to break the dependence of the models on the choice of the coordinate frame of the camera. To this end, we build models for expressions on the affine shape-space (Grassmann manifold), as an approximation to the projective shape-space, using a Riemannian interpretation of the deformations that facial expressions cause on different parts of the face. This representation enables us to perform various expression analysis and recognition algorithms without pose normalization as a preprocessing step.
    There is a large degree of inter-subject variation in performing various expressions, which poses an important challenge to developing robust facial expression recognition algorithms. To address this challenge, we propose a dictionary-based approach for facial expression analysis that decomposes expressions in terms of action units (AUs). First, we construct an AU dictionary using domain experts' knowledge of AUs. To incorporate high-level knowledge about expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over the AU-dictionary atoms as well as over the columns of the test image matrix. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Most existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem.
We propose joint face and facial expression recognition using a dictionary-based component separation (DCS) algorithm. In this approach, a given expressive face is viewed as the superposition of a neutral face component and a facial expression component that is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm that benefits from the ideas of sparsity and morphological diversity. The DCS algorithm uses data-driven dictionaries to decompose an expressive test face into its constituent components, and the resulting sparse codes are used for joint face and expression recognition.
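    The blur-robust matrix idea above can be sketched briefly. In the toy Python below, which is our own simplification (the motion-kernel set and the plain, unconstrained least-squares objective are assumptions; the dissertation's actual objective may differ), each gallery identity is represented by the span of its blurred instances, and a probe is assigned to the identity whose span reconstructs it with the smallest residual.

```python
# Hedged sketch of blur-robust recognition: span each gallery identity by
# blurred copies of its clean image; classify a probe by minimal residual.
import numpy as np
from scipy.ndimage import convolve

def motion_kernel(length, angle_deg, size=9):
    """Simple linear-motion PSF rasterized on a size x size grid."""
    k = np.zeros((size, size))
    c, a = size // 2, np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        x, y = int(round(c + t * np.cos(a))), int(round(c + t * np.sin(a)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()

def blur_span(face, lengths=(1, 3, 5, 7), angles=(0, 45, 90, 135)):
    """Columns = vectorized blurred instances of one clean gallery face."""
    cols = [convolve(face, motion_kernel(L, a)).ravel()
            for L in lengths for a in angles]
    return np.stack(cols, axis=1)

def identify(probe, gallery):
    """Index of the gallery face whose blur span best fits the probe."""
    residuals = []
    for face in gallery:
        B = blur_span(face)
        coef, *_ = np.linalg.lstsq(B, probe.ravel(), rcond=None)
        residuals.append(np.linalg.norm(B @ coef - probe.ravel()))
    return int(np.argmin(residuals))
```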

    Bayesian Optimization for Image Segmentation, Texture Flow Estimation and Image Deblurring

    Get PDF
    Ph.D. (Doctor of Philosophy)

    A Study on New Models, Algorithms, and Analyses for Dynamic Environment Deblurring

    Get PDF
    Doctoral dissertation, Department of Electrical and Computer Engineering, Graduate School of Seoul National University, August 2016. Advisor: Kyoung Mu Lee.
    Blurring artifacts are the most common flaws in photographs. To remove these artifacts, many deblurring methods that restore sharp images from blurry ones have been studied in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scene is static, so a great deal remains to be done. In particular, these conventional methods fail on blurry images captured in dynamic environments, which exhibit spatially varying blur caused by sources such as camera shake (including out-of-plane motion), moving objects, and depth variation; the deblurring problem thus becomes far more challenging for dynamic scenes. This dissertation therefore addresses the deblurring problem of general dynamic scenes and introduces new solutions that remove spatially varying blur, unlike conventional methods built on the static-scene assumption. Three kinds of dynamic scene deblurring methods are proposed to achieve this goal, based on: (1) segmentation, (2) sharp exemplars, and (3) kernel-parametrization. The proposed approaches progress from segment-wise to pixel-wise, ultimately handling general pixel-wise varying blur. First, the segmentation-based deblurring method estimates the latent image, multiple different kernels, and the associated segments jointly. With the aid of this joint approach, the segmentation-based method achieves an accurate blur kernel within each segment, removes segment-wise varying blur, and reduces the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which utilizes a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, which cannot handle small or texture-less segments. Lastly, the deblurring method using kernel-parametrization approximates the locally varying kernel as linear, parametrized by motion flows; it is therefore generally applicable to removing pixel-wise varying blur and estimates the latent image and the motion flow at the same time. (A toy sketch of this parametrization follows the table of contents below.)
    With the proposed methods, significantly improved deblurring quality is achieved, and extensive experimental evaluations demonstrate the superiority of the proposed methods on dynamic scenes where state-of-the-art methods fail.
    Contents:
    Chapter 1. Introduction
    Chapter 2. Image Deblurring with Segmentation: 2.1 Introduction and Related Work; 2.2 Segmentation-based Dynamic Scene Deblurring Model (2.2.1 Adaptive blur model selection; 2.2.2 Regularization); 2.3 Optimization (2.3.1 Sharp image restoration; 2.3.2 Weight estimation; 2.3.3 Kernel estimation; 2.3.4 Overall procedure); 2.4 Experiments; 2.5 Summary
    Chapter 3. Image Deblurring with Exemplar: 3.1 Introduction and Related Work; 3.2 Method Overview; 3.3 Stage I: Exemplar Acquisition (3.3.1 Sharp image acquisition and preprocessing; 3.3.2 Exemplar from blur-aware optical flow estimation); 3.4 Stage II: Exemplar-based Deblurring (3.4.1 Exemplar-based latent image restoration; 3.4.2 Motion-aware segmentation; 3.4.3 Robust kernel estimation; 3.4.4 Unified energy model and optimization); 3.5 Stage III: Post-processing and Refinement; 3.6 Experiments; 3.7 Summary
    Chapter 4. Image Deblurring with Kernel-Parametrization: 4.1 Introduction and Related Work; 4.2 Preliminary; 4.3 Proposed Method (4.3.1 Image-statistics-guided motion; 4.3.2 Adaptive variational deblurring model); 4.4 Optimization (4.4.1 Motion estimation; 4.4.2 Latent image restoration; 4.4.3 Kernel re-initialization); 4.5 Experiments; 4.6 Summary
    Chapter 5. Video Deblurring with Kernel-Parametrization: 5.1 Introduction and Related Work; 5.2 Generalized Video Deblurring (5.2.1 A new data model based on kernel-parametrization; 5.2.2 A new optical flow constraint and temporal regularization; 5.2.3 Spatial regularization); 5.3 Optimization Framework (5.3.1 Sharp video restoration; 5.3.2 Optical flow estimation; 5.3.3 Defocus blur map estimation); 5.4 Implementation Details (5.4.1 Initialization and duty cycle estimation; 5.4.2 Occlusion detection and refinement); 5.5 Motion Blur Dataset (5.5.1 Dataset generation); 5.6 Experiments; 5.7 Summary
    Chapter 6. Conclusion; Bibliography; Abstract (in Korean)
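    As referenced above, here is a toy sketch of the kernel-parametrization forward model, our own illustration rather than the dissertation's code (the sampling count, the symmetric exposure window, and bilinear interpolation are assumptions): the per-pixel kernel is a line segment along the motion flow, so blurring amounts to averaging the latent image along each pixel's motion path.

```python
# Sketch of the kernel-parametrization forward model: per-pixel linear
# motion blur as an average of the latent image along each motion path.
import numpy as np
from scipy.ndimage import map_coordinates

def flow_blur(latent, flow, taps=15):
    """latent: (H, W) image; flow: (H, W, 2) per-pixel motion (dy, dx)
    accumulated over the exposure; taps: samples along each path."""
    H, W = latent.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    acc = np.zeros_like(latent)
    for s in np.linspace(-0.5, 0.5, taps):   # symmetric exposure window
        coords = np.array([yy + s * flow[..., 0], xx + s * flow[..., 1]])
        acc += map_coordinates(latent, coords, order=1, mode='nearest')
    return acc / taps

# Example: blur grows linearly to the right, as for a panning object.
img = np.random.default_rng(1).random((64, 64))
fl = np.zeros((64, 64, 2))
fl[..., 1] = np.linspace(0, 8, 64)[None, :]
blurred = flow_blur(img, fl)
```

    Deblurring then inverts this model by alternating between estimating the motion flow and restoring the latent image, as the abstract describes.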

    Sparse Modeling for Image and Vision Processing

    Get PDF
    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model from a large collection of candidates. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have subsequently been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.
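    As a concrete instance of sparse coding, the minimal sketch below (ours, not from the monograph) computes a sparse code x for a signal y over a fixed dictionary D by solving the lasso problem min_x 0.5||Dx - y||^2 + lam*||x||_1 with ISTA; dictionary learning, the monograph's focus of adapting D to data, is omitted.

```python
# Minimal sparse-coding sketch: lasso via iterative shrinkage-thresholding.
import numpy as np

def ista(D, y, lam=0.1, iters=200):
    """Iterative shrinkage-thresholding for the lasso sparse code."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)           # gradient of the smooth data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(D, y)                      # sparse code close to x_true
```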

    Underwater image restoration: super-resolution and deblurring via sparse representation and denoising by means of marine snow removal

    Get PDF
    Underwater imaging is widely used as a tool in many fields; however, a major issue is the quality of the resulting images and videos. Because of light's interaction with water and its constituents, acquired underwater images and videos often suffer from a significant amount of scatter (blur, haze) and noise. In light of these issues, this thesis considers the problems of low-resolution, blurred, and noisy underwater images and proposes several approaches to improve the quality of such images and video frames. Quantitative and qualitative experiments validate the success of the proposed algorithms.
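    For flavor, here is a common baseline for the marine-snow denoising step, offered only as a hedged sketch (the thesis's actual algorithm may differ substantially; the window size and threshold are assumptions): detect small bright outliers against the local median and replace only those pixels, leaving other detail untouched.

```python
# Hedged marine-snow baseline: median-based outlier detection and repair.
import numpy as np
from scipy.ndimage import median_filter

def remove_marine_snow(img, size=5, thresh=0.15):
    """img: float image in [0, 1]; size: median window; thresh: outlier gap."""
    med = median_filter(img, size=size)
    snow = (img - med) > thresh         # bright, particle-like deviations
    out = img.copy()
    out[snow] = med[snow]               # replace suspected snow pixels only
    return out
```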