135 research outputs found

    Learning to Extract a Video Sequence from a Single Motion-Blurred Image

    Full text link
    We present a method to extract a video sequence from a single motion-blurred image. Motion-blurred images are the result of an averaging process, in which instant frames are accumulated over time during the exposure of the sensor. Unfortunately, reversing this process is nontrivial. Firstly, averaging destroys the temporal ordering of the frames. Secondly, the recovery of a single frame is a blind deconvolution task, which is highly ill-posed. We present a deep learning scheme that gradually reconstructs a temporal ordering by sequentially extracting pairs of frames. Our main contribution is to introduce loss functions invariant to the temporal order. This lets a neural network choose during training which frame to output among the possible combinations. We also address the ill-posedness of deblurring by designing a network with a large receptive field, implemented via resampling for higher computational efficiency. Our proposed method can successfully retrieve sharp image sequences from a single motion-blurred image and generalizes well to synthetic and real datasets captured with different cameras.
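    The order-invariant loss idea above can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: the function name and the use of a plain L2 distance are assumptions. The loss compares both assignments of a predicted frame pair to the ground-truth pair and keeps the cheaper one, so the network is free to output the pair in either temporal order.

    ```python
    import numpy as np

    def order_invariant_pair_loss(pred_a, pred_b, gt_a, gt_b):
        """Loss for a predicted frame pair, invariant to temporal order.

        Evaluates both ways of matching predictions to ground-truth frames
        and returns the cheaper assignment, so swapping the two predicted
        frames does not change the loss.
        """
        l2 = lambda x, y: float(np.mean((x - y) ** 2))
        direct = l2(pred_a, gt_a) + l2(pred_b, gt_b)
        swapped = l2(pred_a, gt_b) + l2(pred_b, gt_a)
        return min(direct, swapped)
    ```

    With this loss, predicting the ground-truth pair in reversed order incurs zero penalty, which is what lets training proceed without a fixed temporal labeling.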

    Pix2HDR -- A pixel-wise acquisition and deep learning-based synthesis approach for high-speed HDR videos

    Full text link
    Accurately capturing dynamic scenes with wide-ranging motion and light intensity is crucial for many vision applications. However, acquiring high-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range. Existing methods sacrifice speed to acquire multi-exposure frames, yet misaligned motion in these frames can still complicate HDR fusion algorithms, resulting in artifacts. Instead of frame-based exposures, we sample the videos using individual pixels at varying exposures and phase offsets. Implemented on a pixel-wise programmable image sensor, our sampling pattern simultaneously captures fast motion at a high dynamic range. We then transform pixel-wise outputs into an HDR video using end-to-end learned weights from deep neural networks, achieving high spatiotemporal resolution with minimized motion blurring. We demonstrate aliasing-free HDR video acquisition at 1000 FPS, resolving fast motion under low-light conditions and against bright backgrounds, both challenging conditions for conventional cameras. By combining the versatility of pixel-wise sampling patterns with the strength of deep neural networks at decoding complex scenes, our method greatly enhances the vision system's adaptability and performance in dynamic conditions. Comment: 14 pages, 14 figures
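    The idea of per-pixel exposures and phase offsets can be sketched as below. This is a loose illustration under assumed parameters (the tile size, exposure ladder, and phase scheme are inventions for the sketch, not the sensor's actual configuration): each pixel in a repeating tile gets its own exposure length and a staggered start time.

    ```python
    import numpy as np

    def pixelwise_exposure_pattern(height, width, exposures=(1, 4, 16, 64), tile=2):
        """Hypothetical per-pixel exposure map in the spirit of pixel-wise
        HDR sampling.

        Assigns each pixel one of several exposure lengths (in sensor clock
        ticks) based on its position within a repeating tile, plus a phase
        offset so that neighbouring pixels start their exposures at
        staggered times.
        """
        exp_map = np.empty((height, width), dtype=int)
        phase_map = np.empty((height, width), dtype=int)
        for y in range(height):
            for x in range(width):
                idx = (y % tile) * tile + (x % tile)
                exp_map[y, x] = exposures[idx % len(exposures)]
                phase_map[y, x] = idx  # stagger exposure start across the tile
        return exp_map, phase_map
    ```

    Every local neighbourhood then contains both short exposures (preserving fast motion and highlights) and long exposures (preserving shadow detail), which is what a learned decoder can fuse into HDR frames.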

    Image Restoration for Remote Sensing: Overview and Toolbox

    Full text link
    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances of image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) wishing to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS

    Text Image Deblurring Using Kernel Sparsity Prior

    Get PDF
    Previous methods for text image motion deblurring seldom consider the sparse characteristics of the blur kernel. This paper proposes a new text image motion deblurring method that exploits the sparse properties of both the text image itself and the kernel. It incorporates an L₀-norm for regularizing the blur kernel in the deblurring model, in addition to the L₀ sparse priors on the text image and its gradient. This L₀-norm-based model is efficiently optimized by half-quadratic splitting coupled with a fast conjugate descent method. To further improve the quality of the recovered kernel, a structure-preserving kernel denoising method is also developed to filter out noisy pixels, yielding a clean kernel curve. Experimental results show the superiority of the proposed method. The source code and results are available at: https://github.com/shenjianbing/text-image-deblur
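    The half-quadratic splitting mentioned above alternates between easy subproblems, and the subproblem for an L₀-regularised auxiliary variable has a well-known closed form: hard thresholding. The sketch below shows only that generic proximal step (the function name and parameters are illustrative; this is not the paper's full solver).

    ```python
    import numpy as np

    def l0_hard_threshold(v, lam, beta):
        """One auxiliary-variable update in half-quadratic splitting of an
        L0-regularised objective:

            argmin_u  lam * ||u||_0 + (beta / 2) * ||u - v||^2

        Closed form: keep v where v^2 > 2 * lam / beta, zero elsewhere.
        """
        u = v.copy()
        u[v ** 2 <= 2.0 * lam / beta] = 0.0
        return u
    ```

    In a full deblurring loop this step would be interleaved with a quadratic update of the image (or kernel) given the thresholded auxiliary variable, with beta increased across iterations.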

    Bringing Blurry Images Alive: High-Quality Image Restoration and Video Reconstruction

    Get PDF
    Consumer-level cameras are affordable and easy to use, but their images and videos are likely to suffer from motion blur, especially under low-light conditions. Moreover, it is difficult to capture high frame-rate videos due to the hardware limitations of conventional RGB sensors. This thesis therefore focuses on restoring high-quality (sharp, high frame-rate) images and videos from low-quality (blurred, low frame-rate) ones for better practical applications. We mainly address the problem of restoring a sharp image from a blurred stereo video sequence, a blurred RGB-D image, or a single blurred image. Then, by exploiting the faithful motion information encoded in the blur, we reconstruct high frame-rate, sharp videos with the help of an event camera, bringing blurry frames alive. Stereo camera systems provide motion information that helps remove complex, spatially-varying motion blur in dynamic scenes. Given consecutive blurred stereo video frames, we recover the latent images, estimate the 3D scene flow, and segment the multiple moving objects simultaneously. We represent dynamic scenes with a piecewise planar model, which exploits the local structure of the scene and can express a variety of dynamic scenes. These three tasks are naturally connected under our model and expressed as the parameter estimation of 3D scene structure and camera motion (structure and motion for dynamic scenes). To tackle the challenging minimal case of image deblurring, namely single-image deblurring, we first focus on blur caused by camera shake during the exposure time. We propose to jointly estimate the 6-DoF camera motion and remove the non-uniform blur by exploiting their underlying geometric relationships, with a single blurred RGB-D image as input.
We formulate joint deblurring and 6-DoF camera motion estimation as an energy minimization problem solved in an alternating manner. For the general case, we solve the single-image deblurring task by studying the problem in the frequency domain. We show that the auto-correlation of the absolute phase-only image (i.e., the image reconstructed only from the phase information of the blurry image) provides faithful information about the motion that caused the blur (e.g., its direction and magnitude), leading to a new and efficient blur kernel estimation approach. Event cameras are gaining attention because they measure intensity changes (called `events') with microsecond accuracy; they can also output intensity frames simultaneously. However, these frames are captured at a relatively low frame rate and often suffer from motion blur. A blurred image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between the latent images. Therefore, we model the blur-generation process by associating event data with a latent image. We propose a simple and effective approach, the EDI model, to reconstruct a high frame-rate, sharp video (>1000 fps) from a single blurry frame and its event data. The video generation reduces to solving a simple non-convex optimization problem in a single scalar variable. We then improve the EDI model by using multiple images and their events to handle flickering effects and noise in the generated video, and provide a more efficient solver to minimize the proposed energy model. Last, the blurred image and events also contribute to optical flow estimation: we propose an optical flow estimation approach based on a single image and its events to unlock their potential applications. In summary, this thesis addresses how to recover sharp images from blurred ones and how to reconstruct a high temporal resolution video from a single image and its events. Our extensive experimental results demonstrate that our proposed methods outperform the state of the art.
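    The blur-as-integral formation model described above can be sketched in closed form. Assuming the event relation L(t) = L(0) · exp(c · E_t), where E_t is the cumulative event count since exposure start and c is the contrast threshold (the single scalar the EDI model optimises; treated as known here), the blurry frame B = mean_t L(t) directly yields the sharp start-of-exposure frame. Function and variable names are illustrative, not the thesis's actual code.

    ```python
    import numpy as np

    def edi_latent_frame(blurred, event_sum, c):
        """Recover the sharp start-of-exposure frame from a blurry frame
        and its events, following the EDI blur-formation model.

        blurred:   blurry frame B (H x W), the temporal average of latent frames
        event_sum: cumulative event count E_t per pixel, shape (T, H, W)
        c:         event contrast threshold (assumed known in this sketch)

        Since L(t) = L(0) * exp(c * E_t), the blur model
            B = mean_t L(t) = L(0) * mean_t exp(c * E_t)
        gives L(0) = B / mean_t exp(c * E_t) in closed form.
        """
        return blurred / np.mean(np.exp(c * event_sum), axis=0)
    ```

    Evaluating the same relation at every t (not just t = 0) is what turns one blurry frame plus its events into a high frame-rate video; the remaining difficulty, which the thesis's optimization addresses, is that c is unknown in practice.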