
    Full Frame Video Stabilization Using Motion Inpainting

    Get PDF
    The amount of video data has increased dramatically with the advent of digital imaging. Most video captured these days originates from mobile phones and handheld video cameras, and such videos are shaky compared to videos shot with a tripod-mounted camera. Removing this shake in software is called digital video stabilization, and it results in a stable, visually pleasant video. To digitally stabilize a video, we need to (1) estimate the camera motion, (2) regenerate the camera motion without the undesirable artifacts, and (3) synthesize new video frames. This dissertation is targeted at improving the last two steps. Most previous video stabilization techniques produce a lower-resolution stabilized output and clip portions of frames to remove the empty areas formed by transforming the video frames. We use a Gaussian averaging filter to smooth the global motion in the video. The frames are then transformed using new transformation matrices obtained by subtracting the original transformation chain from the modified transformation chain. For the last step of synthesizing new video frames, we introduce an improved completion technique that can produce full-frame video by using pixel information from nearby frames to estimate the intensity of the missing pixels. This technique uses motion inpainting to ensure that both the static and dynamic areas of each frame are filled with the same consistency. Additionally, video quality is improved by a deblurring algorithm that further improves the smoothness of the video by eliminating undesirable motion blur. We do not estimate the PSF; instead, we transfer and interpolate sharper pixels from nearby frames to sharpen and deblur the current frame. Completing the video with the motion inpainting and deblurring techniques allows us to construct a full-frame video stabilization system with good image quality. This is verified by applying the technique to different video sequences.
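    The smoothing and correction steps can be illustrated with a short sketch. The Python snippet below assumes the per-frame global motion has already been estimated (step 1) and is represented as 2-D translations; the function name, the restriction to pure translations, and the sigma value are assumptions for illustration, not the dissertation's actual code.

        # Minimal sketch: Gaussian smoothing of the accumulated camera
        # trajectory, then per-frame corrections as the difference between
        # the smoothed and original transformation chains.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def stabilizing_corrections(dx, dy, sigma=15.0):
            traj_x = np.cumsum(dx)   # original camera trajectory (x)
            traj_y = np.cumsum(dy)   # original camera trajectory (y)
            smooth_x = gaussian_filter1d(traj_x, sigma)  # Gaussian-averaged chain
            smooth_y = gaussian_filter1d(traj_y, sigma)
            # Warping frame i by (corr_x[i], corr_y[i]) cancels the shake.
            return smooth_x - traj_x, smooth_y - traj_y

    Each frame would then be warped by its correction, leaving empty border regions that the completion step fills via motion inpainting.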

    High-speed Video from Asynchronous Camera Array

    Get PDF
    This paper presents a method for capturing high-speed video using an asynchronous camera array. Our method sequentially fires each sensor in a camera array with a small time offset and assembles the captured frames into a high-speed video according to their time stamps. The resulting video, however, suffers from parallax jittering caused by the viewpoint differences among sensors in the camera array. To address this problem, we develop a dedicated novel view synthesis algorithm that transforms the video frames as if they were captured by a single reference sensor. Specifically, for any frame from a non-reference sensor, we find the two temporally neighboring frames captured by the reference sensor. Using these three frames, we render a new frame with the same time stamp as the non-reference frame but from the viewpoint of the reference sensor. To do so, we segment these frames into super-pixels and then apply local content-preserving warping to warp them to form the new frame. We employ a multi-label Markov Random Field method to blend these warped frames. Our experiments show that our method can produce high-quality, high-speed video of a wide variety of scenes with large parallax, scene dynamics, and camera motion, and that it outperforms several baseline and state-of-the-art approaches. Comment: 10 pages, 82 figures, Published at IEEE WACV 201
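    As a rough illustration of the assembly step, the sketch below merges per-sensor frame streams by time stamp; the Frame record and function names are assumptions, and the merged result would still exhibit the parallax jittering that the paper's view synthesis algorithm removes.

        # Minimal sketch: interleave frames from an asynchronous camera
        # array into one high-speed sequence ordered by time stamp.
        from dataclasses import dataclass
        from typing import List
        import numpy as np

        @dataclass
        class Frame:
            t: float           # capture time stamp
            sensor: int        # index of the capturing sensor
            image: np.ndarray  # pixel data

        def assemble_high_speed(streams: List[List[Frame]]) -> List[Frame]:
            merged = [f for stream in streams for f in stream]
            merged.sort(key=lambda f: f.t)  # order purely by time stamp
            return merged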

    Image enhancement from a stabilised video sequence

    Get PDF
    The aim of video stabilisation is to create a new video sequence where the motions (i.e. rotations, translations) and scale differences between frames (or parts of a frame) have effectively been removed. These stabilisation effects can be obtained via digital video processing techniques that use information extracted from the video sequence itself, with no need for additional hardware or knowledge about the camera's physical motion. A video sequence usually contains a large overlap between successive frames, and regions of the same scene are sampled at different positions. In this paper, this multiple sampling is combined to achieve images with a higher spatial resolution. Higher-resolution imagery plays an important role in assisting the identification of people, vehicles, structures or objects of interest captured by surveillance cameras or by video cameras used in face recognition, traffic monitoring, traffic law enforcement, driver assistance and automatic vehicle guidance systems.
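    One common way to exploit this multiple sampling is shift-and-add super-resolution. The sketch below assumes the frames have already been registered with sub-pixel shifts by the stabilisation stage; the scale factor and nearest-neighbour binning are simplifying assumptions rather than the paper's exact method.

        # Minimal sketch: accumulate registered low-resolution frames onto
        # an upsampled grid and average the overlapping samples.
        import numpy as np

        def shift_and_add(frames, shifts, scale=2):
            H, W = frames[0].shape
            acc = np.zeros((H * scale, W * scale))
            cnt = np.zeros_like(acc)
            for img, (dy, dx) in zip(frames, shifts):
                # Map each low-res sample to its sub-pixel position on the
                # high-res grid (indices clipped to stay inside the image).
                ys = np.clip(np.rint((np.arange(H) + dy) * scale), 0, H * scale - 1).astype(int)
                xs = np.clip(np.rint((np.arange(W) + dx) * scale), 0, W * scale - 1).astype(int)
                np.add.at(acc, np.ix_(ys, xs), img)
                np.add.at(cnt, np.ix_(ys, xs), 1)
            return acc / np.maximum(cnt, 1)  # average where samples overlap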

    Fast Full-frame Video Stabilization with Iterative Optimization

    Full text link
    Video stabilization refers to the problem of transforming a shaky video into a visually pleasing one. How to strike a good trade-off between visual quality and computational speed remains one of the open challenges in video stabilization. Inspired by the analogy between wobbly frames and jigsaw puzzles, we propose an iterative optimization-based learning approach that uses synthetic datasets for video stabilization and consists of two interacting submodules: motion trajectory smoothing and full-frame outpainting. First, we develop a two-level (coarse-to-fine) stabilizing algorithm based on a probabilistic flow field. The confidence map associated with the estimated optical flow is exploited to guide the search for shared regions through backpropagation. Second, we take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views. An important new insight brought about by our iterative optimization approach is that the target video can be interpreted as the fixed point of a nonlinear mapping for video stabilization. We formulate video stabilization as the problem of minimizing the amount of jerkiness in motion trajectories, which guarantees convergence with the help of fixed-point theory. Extensive experimental results demonstrate the superiority of the proposed approach in terms of computational speed and visual quality. The code will be available on GitHub. Comment: Accepted by ICCV202
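    The fixed-point view can be made concrete with a toy example. The sketch below iterates a smoothing map on a 1-D motion trajectory until the trajectory stops changing, i.e. until it reaches an approximate fixed point; the energy (a data term plus squared second differences as a proxy for jerkiness) and the step size are assumptions for illustration, not the paper's formulation.

        # Minimal sketch: gradient-descent smoothing map F whose fixed
        # point balances fidelity to the input trajectory against jerkiness.
        import numpy as np

        def second_diff(p):
            return p[:-2] - 2.0 * p[1:-1] + p[2:]

        def second_diff_T(r, n):
            g = np.zeros(n)
            g[:-2] += r
            g[1:-1] -= 2.0 * r
            g[2:] += r
            return g

        def stabilize_fixed_point(p0, lam=10.0, step=0.01, tol=1e-8, max_iter=10000):
            p = p0.astype(float).copy()
            for _ in range(max_iter):
                grad = (p - p0) + lam * second_diff_T(second_diff(p), p.size)
                p_next = p - step * grad      # one application of the map F
                if np.max(np.abs(p_next - p)) < tol:
                    break                     # (approximate) fixed point reached
                p = p_next
            return p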