Implementation and Validation of Video Stabilization using Simulink
A fast video stabilization technique based on Gray-coded bit-plane (GCBP) matching for translational motion is implemented and tested on various image sequences. The technique performs motion estimation on the GCBP of the image sequences, which greatly reduces the computational load. To further improve computational efficiency, the three-step search (TSS) is combined with GCBP matching to perform an efficient search during the correlation-measure calculation. The entire technique has been implemented in Simulink to run in real time.
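The combination of GCBP matching and TSS described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bit index, search range, and margin are assumed values, and the correlation measure is the fraction of mismatched bits between bit-plane blocks.

```python
import numpy as np

def gray_bit_plane(img, bit=4):
    """Gray-code an 8-bit image (g = b XOR (b >> 1)) and keep one bit plane."""
    g = img ^ (img >> 1)
    return (g >> bit) & 1

def bp_mismatch(ref_bp, cur_bp, dx, dy, margin=8):
    """Binary correlation measure: fraction of mismatched bits between the
    central block of the reference plane and the (dx, dy)-shifted block of
    the current plane. Assumes |dx|, |dy| < margin."""
    h, w = ref_bp.shape
    r = ref_bp[margin:h - margin, margin:w - margin]
    c = cur_bp[margin + dy:h - margin + dy, margin + dx:w - margin + dx]
    return np.count_nonzero(r ^ c) / r.size

def three_step_search(ref_bp, cur_bp, step=4):
    """Classic TSS: probe a 3x3 grid of offsets, recentre on the best one,
    halve the step; three rounds cover displacements up to +/-7 pixels while
    evaluating far fewer candidates than a full search."""
    best = (0, 0)
    while step >= 1:
        cands = [(best[0] + sx * step, best[1] + sy * step)
                 for sx in (-1, 0, 1) for sy in (-1, 0, 1)]
        best = min(cands, key=lambda d: bp_mismatch(ref_bp, cur_bp, d[0], d[1]))
        step //= 2
    return best  # estimated (dx, dy) translation of cur relative to ref
```

On a frame pair shifted by a pure translation, `three_step_search` recovers the inter-frame displacement, which the stabilizer would then negate to compensate the shake.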
Automatic Feature-Based Stabilization of Video with Intentional Motion through a Particle Filter
Video sequences acquired by a camera mounted on a hand-held device or a mobile platform are affected by unwanted shakes and jitter. In this situation, the performance of video applications such as motion segmentation and tracking can degrade dramatically. Several digital video stabilization approaches have been proposed to overcome this problem. However, they are mainly based on motion estimation techniques that are prone to errors, which in turn degrade the stabilization performance. Moreover, these techniques can achieve successful stabilization only if the intentional camera motion is smooth, since they incorrectly filter out abrupt changes in the intentional motion. In this paper a novel video stabilization technique that overcomes the aforementioned problems is presented. The motion is estimated by means of a feature-based technique that is robust to the errors that could bias the estimation. The unwanted camera motion is filtered out, while the intentional motion is preserved thanks to a Particle Filter framework able to deal with abrupt changes in the intentional motion. The obtained results confirm the effectiveness of the proposed algorithm.
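The key idea of the particle-filter step can be sketched in one dimension: treat the intentional camera position as the hidden state, the measured global motion as a noisy observation of it, and use a heavy-tailed (mixture) transition model so an abrupt intentional move is tracked rather than smoothed away. This is an illustrative bootstrap filter, not the paper's algorithm, and all parameter values are assumptions.

```python
import numpy as np

def estimate_intentional_motion(measured, n=500, q=0.5, r=3.0,
                                jump_p=0.05, jump_q=10.0, seed=0):
    """Bootstrap particle filter separating intentional camera motion from
    jitter along one translation axis. The transition is a Gaussian mixture:
    mostly small steps (std q) with an occasional large step (std jump_q),
    so abrupt intentional moves are not incorrectly filtered out."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(measured[0], r, n)        # particles: intentional position
    est = []
    for z in measured:
        # propagate with the mixture transition (handles abrupt motion)
        jumps = rng.random(n) < jump_p
        parts = parts + np.where(jumps,
                                 rng.normal(0.0, jump_q, n),
                                 rng.normal(0.0, q, n))
        # weight by the likelihood of the measured global motion
        w = np.exp(-0.5 * ((z - parts) / r) ** 2) + 1e-300
        w /= w.sum()
        est.append(np.sum(w * parts))            # posterior-mean estimate
        parts = rng.choice(parts, size=n, p=w)   # multinomial resampling
    return np.array(est)
```

The stabilizing correction for frame t is then `est[t] - measured[t]`: the jitter is removed while the estimated intentional trajectory, including its abrupt changes, is preserved.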
EyeRIS: A General-Purpose System for Eye Movement Contingent Display Control
In experimental studies of visual performance, the need often emerges to modify the stimulus according to the eye movements performed by the subject. The methodology of Eye Movement-Contingent Display (EMCD) enables accurate control of the position and motion of the stimulus on the retina. EMCD procedures have been used successfully in many areas of vision science, including studies of visual attention, eye movements, and physiological characterization of neuronal response properties. Unfortunately, the difficulty of real-time programming and the unavailability of flexible and economical systems that can be easily adapted to the diversity of experimental needs and laboratory setups have prevented the widespread use of EMCD control. This paper describes EyeRIS, a general-purpose system for performing EMCD experiments on a Windows computer. Based on a digital signal processor with analog and digital interfaces, this integrated hardware and software system is responsible for sampling and processing oculomotor signals and subject responses and for modifying the stimulus displayed on a CRT according to the gaze-contingent procedure specified by the experimenter. EyeRIS is designed to update the stimulus within a delay of 10 ms. To thoroughly evaluate EyeRIS's performance, this study (a) examines the response of the system in a number of EMCD procedures and computational benchmarking tests, (b) compares the accuracy of implementation of one particular EMCD procedure, retinal stabilization, to that produced by a standard tool used for this task, and (c) examines EyeRIS's performance in one of the many EMCD procedures that cannot be executed by means of any other currently available device.
National Institutes of Health (EY15732-01)
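The retinal-stabilization procedure mentioned above amounts to a simple per-refresh update, which a system like EyeRIS must complete within its 10 ms budget. The helper below is a hypothetical sketch of that update rule, not EyeRIS's actual API: the stimulus is translated by the eye's displacement from a reference gaze position so that its retinal position stays fixed.

```python
def stabilized_position(stim_base, gaze, gaze_ref, gain=1.0):
    """Retinal stabilization update: move the stimulus with the eye so it
    stays fixed on the retina. gain=1.0 gives full stabilization; gain=0.0
    reduces to normal, unstabilized viewing; intermediate gains attenuate
    the retinal consequences of eye movements."""
    dx = gain * (gaze[0] - gaze_ref[0])
    dy = gain * (gaze[1] - gaze_ref[1])
    return (stim_base[0] + dx, stim_base[1] + dy)
```

In a gaze-contingent loop this runs once per sampled eye position, and the returned coordinates are handed to the renderer before the next display refresh.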
Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation
The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images these challenges have been successfully addressed, and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames to the PRNU estimation process at the macroblock level. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by up to more than a factor of five over the conventional estimation method tailored for photos.
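A macroblock-level weighting scheme of the kind described can be sketched as a weighted maximum-likelihood aggregation of noise residuals, where each macroblock's weight is derived from its quantization parameter (QP): heavily quantized blocks retain less PRNU and so contribute less. This is an illustrative sketch, not the paper's method; in particular the mapping `exp(-QP/10)` is an assumed weighting rule, and the noise residuals (normally frame minus its denoised version) are taken as given inputs.

```python
import numpy as np

def weighted_prnu(frames, residuals, qp_maps, mb=16):
    """Estimate the PRNU fingerprint from video frames with per-macroblock
    weights. frames/residuals are same-size 2-D arrays; qp_maps holds one QP
    value per mb x mb macroblock. Uses the standard ML-style aggregation
    K = sum(w * W * I) / sum(w * I^2) with w derived from QP."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for I, W, qp in zip(frames, residuals, qp_maps):
        w = np.exp(-qp / 10.0)                    # assumed rule: low QP -> weight near 1
        w_full = np.kron(w, np.ones((mb, mb)))    # expand per-macroblock weights to pixels
        num += w_full * W * I
        den += w_full * I * I
    return num / np.maximum(den, 1e-8)
```

The returned array is the fingerprint estimate; matching against a query video's residual would then use a normalized correlation (e.g., PCE), which is the metric the reported five-fold improvement refers to.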