
    Real Time Turbulent Video Perfecting by Image Stabilization and Super-Resolution

    Image and video quality in Long Range Observation Systems (LOROS) suffers from atmospheric turbulence, which causes small neighbourhoods in image frames to move chaotically in different directions and substantially hampers visual analysis of such image and video sequences. The paper presents a real-time algorithm for perfecting turbulence-degraded videos by means of stabilization and resolution enhancement, the latter achieved by exploiting the turbulent motion itself. The algorithm involves generation of a reference frame; estimation, for each incoming video frame, of a local image displacement map with respect to the reference frame; segmentation of the displacement map into two classes, stationary and moving objects; and resolution enhancement of stationary objects while preserving real motion. Experiments with synthetic and real-life sequences have shown that the enhanced videos, generated in real time, exhibit substantially better resolution and complete stabilization of stationary objects while retaining real motion. Comment: Submitted to The Seventh IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2007), August 2007, Palma de Mallorca, Spain.
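
    The abstract does not specify the displacement estimator; as a rough, hypothetical sketch of the displacement-map and segmentation steps, the following Python/OpenCV fragment uses dense optical flow as a stand-in, with the motion_thresh parameter an assumption rather than the authors' method.

        # Illustrative sketch only: dense displacement w.r.t. a reference
        # frame, thresholded into stationary vs. moving pixels.
        import cv2
        import numpy as np

        def displacement_classes(reference, frame, motion_thresh=2.0):
            # Farneback dense optical flow stands in for the paper's
            # (unspecified) local displacement estimator.
            flow = cv2.calcOpticalFlowFarneback(reference, frame, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude = np.linalg.norm(flow, axis=2)
            moving = magnitude > motion_thresh  # assumed pixel threshold
            return flow, moving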

    Embedded video stabilization system on field programmable gate array for unmanned aerial vehicle

    Unmanned Aerial Vehicles (UAVs) equipped with lightweight, low-cost cameras have grown in popularity and enable new applications of UAV technology. However, video retrieved from small UAVs is usually of low quality due to high-frequency jitter. This thesis presents the development of a video stabilization algorithm implemented on a Field Programmable Gate Array (FPGA). The algorithm consists of three main processes that together minimize the jitter: motion estimation, motion stabilization, and motion compensation. Motion estimation uses block matching and Random Sample Consensus (RANSAC) to estimate the affine matrix that describes the motion between two consecutive frames. Then parameter extraction, motion smoothing, and motion vector correction, which make up the motion stabilization stage, remove unwanted camera movement. Finally, motion compensation stabilizes the two consecutive frames based on the filtered motion vectors. To facilitate ground station mobility, the algorithm must run on board the UAV in real time. Video stabilization is naturally parallelizable, which makes an FPGA well suited to achieving this real-time capability. The system is implemented on an Altera DE2-115 FPGA board. Fully dedicated hardware cores, without a Nios II processor, are designed in a stream-oriented architecture to accelerate the computation. Furthermore, a parallelized architecture consisting of block matching and a highly parameterizable RANSAC processor module achieves up to 30 frames per second and a stabilization improvement of up to 1.78 in Interframe Transformation Fidelity. It is therefore concluded that the proposed system is suitable for real-time video stabilization in UAV applications.
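
    As a software sketch of the estimation stage (the thesis implements it as dedicated FPGA cores and uses block matching rather than the feature tracking shown here), the affine-with-RANSAC idea might look like this hypothetical Python/OpenCV fragment:

        # Hypothetical software analogue of the motion estimation stage:
        # track features between consecutive frames, then fit an affine
        # model with RANSAC to reject outliers caused by local motion.
        import cv2

        def interframe_affine(prev_gray, curr_gray):
            pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                               qualityLevel=0.01, minDistance=10)
            pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                           pts_prev, None)
            good = status.ravel() == 1
            matrix, _ = cv2.estimateAffinePartial2D(pts_prev[good], pts_curr[good],
                                                    method=cv2.RANSAC)
            return matrix  # 2x3 affine: rotation, scale, translation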

    A Video Stabilization Method Based on Inter-frame Image Matching Score

    Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. In this paper, we propose a robust and efficient video stabilization algorithm based on an inter-frame image matching score. First, image matching is performed by a method combining the Maximally Stable Extremal Regions (MSER) detection algorithm and the Features from Accelerated Segment Test (FAST) corner detection algorithm, which yields the matching score and the motion parameters of the frame image. The matching score sequence is then low-pass filtered to remove its high-frequency component while keeping the low-frequency component. Motion compensation is applied to the current frame according to the ratio of the matching score before and after filtering, retaining the global motion and removing local jitter. Various classical corner detection operators and region matching operators are compared in experiments. The experimental results illustrate that the proposed method effectively stabilizes translational, rotational, and zooming jitter, is robust to local motions, and offers state-of-the-art processing speed that meets the needs of real-time equipment.
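
    The abstract names the detectors but not the filter; a minimal sketch, assuming OpenCV's MSER and FAST implementations and a moving-average low-pass filter (both assumptions, not the paper's exact choices), could be:

        # Sketch only: MSER + FAST detection as named in the paper, plus an
        # assumed moving-average filter for the matching-score sequence.
        import cv2
        import numpy as np

        mser = cv2.MSER_create()
        fast = cv2.FastFeatureDetector_create(threshold=25)  # assumed threshold

        def frame_features(gray):
            regions, _ = mser.detectRegions(gray)
            corners = fast.detect(gray, None)
            return regions, corners

        def low_pass(scores, window=15):
            # Keep the low-frequency component of the per-frame score sequence.
            kernel = np.ones(window) / window
            return np.convolve(scores, kernel, mode="same")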

    Thermal infrared video stabilization for aerial monitoring of active wildfires

    Measuring wildland fire behavior is essential for fire science and fire management. Aerial thermal infrared (TIR) imaging provides outstanding opportunities to acquire such information remotely. Variables such as fire rate of spread (ROS), fire radiative power (FRP), and fireline intensity may be measured explicitly in both time and space, providing the necessary data to study the response of fire behavior to weather, vegetation, topography, and firefighting efforts. However, raw TIR imagery acquired by unmanned aerial vehicles (UAVs) requires stabilization and georeferencing before any other processing can be performed. Aerial video usually suffers from instabilities produced by sensor movement, a problem that is especially acute near an active wildfire due to fire-generated turbulence. Furthermore, the nature of fire TIR video presents some specific challenges that hinder robust interframe registration. Therefore, this article presents a software-based video stabilization algorithm specifically designed for TIR imagery of forest fires. After a comparative analysis of existing image registration algorithms, the KAZE feature-matching method was selected and accompanied by pre- and postprocessing modules. These include foreground histogram equalization and a multireference framework designed to increase the algorithm's robustness in the presence of missing or faulty frames. The performance of the proposed algorithm was validated on a total of nine video sequences acquired during field fire experiments. The proposed algorithm yielded a registration accuracy between 10x and 1000x higher than the other tested methods, returned 10x more meaningful feature matches, and proved robust in the presence of faulty video frames. The ability to automatically cancel camera movement for every frame in a video sequence solves a key limitation in data processing pipelines and opens the door to a number of systematic fire behavior experimental analyses. Moreover, a completely automated process supports the development of decision support tools that can operate in real time during an emergency.
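
    A hedged sketch of the registration core (KAZE matching after histogram equalization, then a homography warp): the paper's foreground-only equalization and multireference logic are omitted here, and the ratio-test and RANSAC thresholds are assumptions.

        # Illustrative registration step: KAZE features matched between a
        # reference frame and the current TIR frame, then a homography warp.
        import cv2
        import numpy as np

        kaze = cv2.KAZE_create()
        matcher = cv2.BFMatcher(cv2.NORM_L2)

        def register_to_reference(reference_gray, frame_gray):
            ref_eq = cv2.equalizeHist(reference_gray)  # plain equalization here;
            frm_eq = cv2.equalizeHist(frame_gray)      # the paper equalizes foreground only
            kp_r, des_r = kaze.detectAndCompute(ref_eq, None)
            kp_f, des_f = kaze.detectAndCompute(frm_eq, None)
            matches = matcher.knnMatch(des_f, des_r, k=2)
            good = [m for m, n in matches if m.distance < 0.7 * n.distance]
            src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = reference_gray.shape
            return cv2.warpPerspective(frame_gray, H, (w, h))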

    Real-time model-based video stabilization for microaerial vehicles

    The emerging branch of micro aerial vehicles (MAVs) has attracted great interest for its indoor navigation capabilities, but these vehicles require high-quality video for tele-operated or autonomous tasks. A common problem with on-board video is the effect of undesired movements, which different approaches address with either mechanical stabilizers or video stabilization software. Very few video stabilization algorithms in the literature can be applied in real time, and those that can do not discriminate between intentional movements of the tele-operator and undesired ones. In this paper, a novel technique is introduced for real-time video stabilization with low computational cost, without generating false movements or degrading the performance of the stabilized video sequence. Our proposal combines geometric transformations and outlier rejection to obtain a robust inter-frame motion estimation, and a Kalman filter based on an ANN-learned model of the MAV that includes the control action for motion intention estimation.
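
    The ANN-learned MAV model and the control input are beyond a short sketch, but Kalman smoothing of a single inter-frame motion parameter might look like this minimal fragment, where the constant-velocity model and the noise values q and r are assumptions:

        # Minimal sketch of the motion-intention idea: a constant-velocity
        # Kalman filter smoothing one inter-frame motion parameter.
        import numpy as np

        def kalman_smooth(measurements, q=1e-3, r=1e-1):
            x = np.zeros(2)                          # state: [position, velocity]
            P = np.eye(2)
            F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model
            H = np.array([[1.0, 0.0]])
            out = []
            for z in measurements:
                x = F @ x                            # predict
                P = F @ P @ F.T + q * np.eye(2)
                S = H @ P @ H.T + r                  # update
                K = P @ H.T / S
                x = x + (K * (z - H @ x)).ravel()
                P = (np.eye(2) - K @ H) @ P
                out.append(x[0])                     # smoothed trajectory
            return np.array(out)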

    Fast Video Stabilization Algorithms

    A fast and robust electronic video stabilization algorithm is presented in this thesis. It is based on a two-dimensional feature-based motion estimation technique. The method tracks a small set of features and estimates the movement of the camera between consecutive frames, characterizing the motion accurately, including camera rotations, between two imaging instants. An affine motion model is used to determine the translation and rotation parameters between images. The determined affine transformation is then exploited to compensate for abrupt temporal discontinuities in the input image sequences. In addition, a frequency-domain approach is developed to estimate translations between two consecutive frames in a video sequence. Finally, a jitter detection technique is presented that isolates the vibration-affected subsequences of an image sequence. Experimental results using both simulated and real images have demonstrated the applicability of the proposed techniques. In particular, the emphasis has been on developing real-time implementable algorithms suitable for unmanned vehicles with severe payload constraints.
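
    The frequency-domain translation estimate mentioned above is commonly realized with phase correlation; a minimal sketch under that assumption (not necessarily the thesis's exact formulation):

        # Phase correlation as a stand-in for the thesis's frequency-domain
        # translation estimator between consecutive frames.
        import cv2
        import numpy as np

        def translation_phase_corr(prev_gray, curr_gray):
            a, b = np.float32(prev_gray), np.float32(curr_gray)
            (dx, dy), response = cv2.phaseCorrelate(a, b)
            return dx, dy  # inter-frame shift; response gauges confidence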

    Implementation and Validation of Video Stabilization using Simulink

    A fast video stabilization technique based on Gray-coded bit-plane (GCBP) matching for translational motion is implemented and tested using various image sequences. The technique performs motion estimation on the Gray-coded bit planes of the image sequence, which greatly reduces the computational load. To further improve computational efficiency, a three-step search (TSS) is used along with GCBP matching to perform an efficient search during the correlation measure calculation. The entire technique has been implemented in Simulink to run in real time.
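
    A hedged NumPy sketch of the two ideas follows; the bit-plane index, block size, and search step are assumptions, and the caller must keep the search window inside the frame.

        # Gray-code each 8-bit pixel, keep one bit plane, and match blocks
        # with a three-step search that halves the step each iteration.
        import numpy as np

        def gray_bit_plane(img, k=4):
            gray_code = img ^ (img >> 1)   # binary-to-Gray conversion
            return (gray_code >> k) & 1    # assumed plane index k

        def tss_offset(ref_plane, cur_plane, cx, cy, block=32, step=4):
            ref = ref_plane[cy:cy + block, cx:cx + block]
            best = (0, 0)
            while step >= 1:
                candidates = [(best[0] + dx, best[1] + dy)
                              for dx in (-step, 0, step) for dy in (-step, 0, step)]
                best = min(candidates, key=lambda o: np.count_nonzero(
                    ref ^ cur_plane[cy + o[1]:cy + o[1] + block,
                                    cx + o[0]:cx + o[0] + block]))
                step //= 2
            return best  # (dx, dy) with the fewest mismatching bits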

    Mapping Wide Row Crops with Video Sequences Acquired from a Tractor Moving at Treatment Speed

    This paper presents a mapping method for wide row crop fields. The resulting map shows the crop rows and the weeds present in the inter-row spacing. Because the field videos are acquired with a camera mounted on top of an agricultural vehicle, a method for image sequence stabilization was needed and was consequently designed and developed. The proposed stabilization method uses the centers of some crop rows in the image sequence as features to be tracked, which compensates for the lateral movement (sway) of the camera and leaves the pitch unchanged. A region of interest is selected using the tracked features, and an inverse perspective technique transforms the selected region into a bird's-eye view that is centered on the image and enables map generation. The algorithm has been tested on several video sequences from different fields recorded at different times and under different lighting conditions, with good initial results. Indeed, lateral displacements of up to 66% of the inter-row spacing were suppressed through the stabilization process, and the crop rows in the resulting maps appear straight.
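
    The inverse perspective step can be sketched as a homography warp; in this hypothetical fragment the four source points would come from the tracked crop rows, and the output size is a placeholder.

        # Warp a quadrilateral region of interest into a bird's-eye view.
        import cv2
        import numpy as np

        def birds_eye(image, src_pts, out_size=(400, 600)):
            # src_pts: four ROI corners in the image, ordered TL, TR, BR, BL.
            w, h = out_size
            dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
            return cv2.warpPerspective(image, H, out_size)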