
    Bringing Blurry Images Alive: High-Quality Image Restoration and Video Reconstruction

    Consumer-level cameras are affordable and easy to use, but their images and videos often suffer from motion blur, especially under low-light conditions. Moreover, the hardware limitations of conventional RGB sensors make it difficult to capture high frame-rate video. This thesis therefore focuses on restoring high-quality (sharp, high frame-rate) images and videos from low-quality (blurred, low frame-rate) ones. We first address how to restore a sharp image from a blurred stereo video sequence, a blurred RGB-D image, or a single blurred image. Then, exploiting the faithful motion information encoded in the blur itself, we reconstruct sharp, high frame-rate videos with the help of an event camera, bringing blurry frames alive.

    Stereo camera systems provide motion information that helps remove the complex, spatially varying motion blur found in dynamic scenes. Given consecutive blurred stereo video frames, we simultaneously recover the latent images, estimate the 3D scene flow, and segment the moving objects. We represent dynamic scenes with a piecewise planar model, which exploits the local structure of the scene and can express a wide variety of dynamic scenes. Under this model, the three tasks are naturally connected and expressed as parameter estimation of the 3D scene structure and camera motion (structure and motion for dynamic scenes).

    To tackle the challenging minimal case, single-image deblurring, we first focus on blur caused by camera shake during the exposure. Taking a single blurred RGB-D image as input, we jointly estimate the 6-DoF camera motion and remove the non-uniform blur by exploiting their underlying geometric relationship. We formulate the joint deblurring and 6-DoF camera motion estimation as an energy minimization problem solved in an alternating manner. For the general case, we study single-image deblurring in the frequency domain. We show that the autocorrelation of the absolute phase-only image (the image reconstructed from only the phase of the blurry image's spectrum) provides faithful information about the motion that caused the blur (e.g., its direction and magnitude), leading to a new and efficient blur kernel estimation approach (sketched below).

    Event cameras are gaining attention because they measure intensity changes (called 'events') with microsecond accuracy. Such cameras can also output intensity frames, but these are captured at a relatively low frame rate and often suffer from motion blur. A blurred image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between those latent images. We therefore model the blur-generation process by associating the event data with a latent image. We propose a simple and effective approach, the EDI model, to reconstruct a high frame-rate, sharp video (>1000 fps) from a single blurry frame and its event data; the video generation reduces to solving a simple non-convex optimization problem in a single scalar variable (sketched below). We then extend the EDI model to multiple images and their events to handle flickering effects and noise in the generated video, and we provide a more efficient solver for the proposed energy model.
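The following is a minimal sketch of the phase-only autocorrelation idea, not the thesis implementation; the function name is illustrative. For an image blurred by linear motion, the autocorrelation of the absolute phase-only image exhibits strong off-center peaks whose offset from the center indicates the blur direction and magnitude.

```python
# Sketch: phase-only image and its autocorrelation for blur analysis.
import numpy as np

def phase_only_autocorrelation(blurred: np.ndarray) -> np.ndarray:
    """blurred: 2-D grayscale image, float values in [0, 1]."""
    F = np.fft.fft2(blurred)
    # Keep only the phase of the spectrum (epsilon avoids division by zero).
    phase_only = F / (np.abs(F) + 1e-8)
    # Absolute phase-only image.
    p = np.abs(np.fft.ifft2(phase_only))
    p = p - p.mean()
    # Autocorrelation via the Wiener-Khinchin theorem:
    # inverse FFT of the power spectrum.
    ac = np.fft.ifft2(np.abs(np.fft.fft2(p)) ** 2).real
    return np.fft.fftshift(ac)  # move the zero-lag peak to the center
```

The location of the strongest off-center peaks then provides a candidate motion direction and length from which a blur kernel estimate can be initialized.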
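And here is a minimal sketch of the EDI relation itself, assuming events have already been binned into per-pixel cumulative polarity sums over N samples of the exposure; the contrast threshold c is the single scalar the thesis optimizes, but it is treated as given here.

```python
# Sketch of the EDI (Event-based Double Integral) relation:
# B = L_ref * mean_i exp(c * E(t_i)), so the sharp latent image is
# recovered by dividing the blurry frame by the double integral term.
import numpy as np

def edi_latent(blurred: np.ndarray, E: np.ndarray, c: float) -> np.ndarray:
    """Recover the sharp latent image at the reference time.

    blurred: (H, W) blurry frame, linear intensity.
    E:       (N, H, W) cumulative event polarity sums E(t_i) per pixel.
    c:       event contrast threshold (the scalar being optimized).
    """
    denom = np.exp(c * E).mean(axis=0)
    return blurred / np.maximum(denom, 1e-8)

def edi_video(blurred: np.ndarray, E: np.ndarray, c: float) -> np.ndarray:
    """Reconstruct N sharp frames spanning the exposure:
    L(t_i) = L_ref * exp(c * E(t_i))."""
    latent_ref = edi_latent(blurred, E, c)
    return latent_ref[None] * np.exp(c * E)
```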
    Finally, the blurred image and its events also contribute to optical flow estimation: we propose an optical flow estimation approach based on a single image and its events, unlocking further applications. In summary, this thesis addresses how to recover sharp images from blurred ones and how to reconstruct a high temporal-resolution video from a single image and its events. Extensive experimental results demonstrate that our proposed methods outperform the state of the art.

    Dynamic Body VSLAM with Semantic Constraints

    Image-based reconstruction of urban environments is a challenging problem: it requires optimizing a large number of variables and faces several sources of error, such as the presence of dynamic objects. Since most large-scale approaches assume static scenes, dynamic objects are relegated to the noise model. This is convenient, since the RANSAC-based frameworks used to compute most multi-view geometric quantities for static scenes naturally confine dynamic objects to the class of outlier measurements. However, reconstructing dynamic objects along with the static environment gives a complete picture of an urban scene, and that understanding supports robotic tasks such as path planning for autonomous navigation and obstacle tracking and avoidance. In this paper, we propose a system for robust SLAM that works in both static and dynamic environments. To overcome the challenge of dynamic objects in the scene, we propose a new model that incorporates semantic constraints into the reconstruction algorithm. Some of these constraints are based on multi-layered dense CRFs trained over appearance as well as motion cues; others can be expressed as additional terms in the bundle adjustment optimization, which iteratively refines the 3D structure and the camera and object motion trajectories (a toy sketch follows the abstract). We show results on the challenging KITTI urban dataset for the accuracy of motion segmentation and for the reconstruction of the trajectory and shape of moving objects relative to ground truth. For moving-object trajectory reconstruction, we achieve a significant reduction in average relative error compared with state-of-the-art methods such as VISO 2 and with standard bundle adjustment algorithms.
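As a toy illustration of the bundle-adjustment part only (not the paper's CRF-based energy), the sketch below treats a rigid object motion as extra parameters alongside the camera motion: points flagged by a semantic/motion mask contribute reprojection residuals through both transforms. All names, the simple two-frame setup, and the synthetic data are assumptions for illustration.

```python
# Toy sketch: bundle adjustment with a dynamic-object term. Points
# flagged by a semantic/motion mask move rigidly with their own motion
# before the camera motion is applied; both motions are refined jointly.
import numpy as np
from scipy.optimize import least_squares

def rot(w):
    """Rodrigues formula: axis-angle vector (3,) -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    S = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * S + (1 - np.cos(th)) * S @ S

def residuals(p, K, X, uv2, dyn):
    """p = [camera axis-angle, camera t, object axis-angle, object t]."""
    Rc, tc = rot(p[0:3]), p[3:6]
    Ro, to = rot(p[6:9]), p[9:12]
    X2 = X.copy()
    X2[dyn] = X[dyn] @ Ro.T + to       # object motion for dynamic points
    X2 = X2 @ Rc.T + tc                # camera motion for all points
    proj = X2 @ K.T
    proj = proj[:, :2] / proj[:, 2:3]  # perspective division
    return (proj - uv2).ravel()        # frame-2 reprojection residuals

# Synthetic check: 10 known 3-D points, half of them on a moving object.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(10, 3))
dyn = np.arange(10) >= 5
p_true = np.array([0.02, -0.01, 0.03, 0.1, 0.0, 0.05,   # camera motion
                   0.0, 0.05, 0.0, 0.3, 0.0, 0.0])      # object motion
uv2 = residuals(p_true, K, X, np.zeros((10, 2)), dyn).reshape(10, 2)
fit = least_squares(residuals, np.zeros(12), args=(K, X, uv2, dyn))
print("max parameter error:", np.abs(fit.x - p_true).max())
```

In the paper, such object trajectories are further constrained by the CRF-derived semantic and motion labels; here the mask is simply given.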

    Joint Blind Motion Deblurring and Depth Estimation of Light Field

    Removing camera motion blur from a single light field is a challenging task since it is a highly ill-posed inverse problem. The problem becomes even harder when the blur kernel varies spatially due to scene depth variation and higher-order camera motion. In this paper, we propose a novel algorithm that jointly estimates all blur-model variables, including the latent sub-aperture image, the camera motion, and the scene depth, from the blurred 4D light field. Exploiting the multi-view nature of the light field relieves the ill-posedness of the optimization by providing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth (the kind of blur formation model being inverted is sketched after the abstract). Extensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms state-of-the-art light field deblurring and depth estimation methods.
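As background for what such a joint method inverts, here is a minimal sketch of a generic depth-dependent blur formation model: the blurry view is the average of the sharp image warped by the camera pose at each sampled instant of the exposure, with the warp depending on per-pixel depth. This is a sketch under stated assumptions, not necessarily the paper's exact model; all names and the pose sampling are illustrative.

```python
# Sketch: synthesize spatially varying motion blur for one view, given
# per-pixel depth and camera poses sampled over the exposure interval.
import numpy as np
from scipy.ndimage import map_coordinates

def synthesize_blur(latent, depth, K, poses):
    """latent: (H, W) sharp image; depth: (H, W) metric depth;
    K: (3, 3) intrinsics; poses: list of (R, t) exposure samples."""
    H, W = latent.shape
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    P = rays * depth.ravel()           # 3-D points in the reference camera
    acc = np.zeros_like(latent)
    for R, t in poses:
        Q = R @ P + t[:, None]         # apply the pose at this instant
        q = K @ Q
        uu, vv = q[0] / q[2], q[1] / q[2]
        # Sample the latent image along each pixel's motion trajectory.
        acc += map_coordinates(latent, [vv.reshape(H, W), uu.reshape(H, W)],
                               order=1, mode='nearest')
    return acc / len(poses)
```

A joint estimator inverts this operator for the latent image, the depth, and the pose samples simultaneously, with the remaining sub-aperture views supplying the depth cues that make the inversion tractable.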