Automatic Bootstrapping and Tracking of Object Contours
This work introduces a new fully automatic object tracking and segmentation framework. The framework consists of a motion-based bootstrapping algorithm running concurrently with a shape-based active contour. The shape-based active contour uses a finite shape memory that is automatically and continuously built from both the bootstrap process and the active-contour object tracker. A scheme is proposed to ensure that the finite shape memory is continuously updated while forgetting unnecessary information. Two new ways of automatically extracting shape information from image data, given a region of interest, are also proposed. Results demonstrate that the bootstrapping stage provides important motion and shape information to the object tracker.
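The abstract does not specify how the finite shape memory decides what to keep and what to forget; a minimal sketch of one plausible mechanism, in which a bounded buffer evicts the oldest shape while skipping near-duplicate entries (the function name, the distance callback, and the thresholds are all illustrative assumptions, not the paper's method):

```python
from collections import deque

def update_shape_memory(memory, shape, min_dist, distance):
    # Hypothetical finite shape memory: append a candidate shape only if
    # it differs enough from every stored shape; a deque with maxlen set
    # "forgets" by evicting the oldest entry when capacity is reached.
    if all(distance(shape, s) >= min_dist for s in memory):
        memory.append(shape)
    return memory

# Usage with scalar shape descriptors and absolute difference as distance:
mem = deque(maxlen=3)
for s in [0.0, 0.05, 1.0, 2.0, 3.0]:
    update_shape_memory(mem, s, min_dist=0.1, distance=lambda a, b: abs(a - b))
```

After the loop, 0.05 has been skipped as a near-duplicate of 0.0, and 0.0 itself has been evicted to make room, leaving the three most recent distinct shapes.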
Towards Benchmarking Scene Background Initialization
Given a set of images of a scene taken at different times, the availability of an initial background model that describes the scene without foreground objects is the prerequisite for a wide range of applications, ranging from video surveillance to computational photography. Even though several methods have been proposed for scene background initialization, the lack of a common ground-truthed dataset and of a common set of metrics makes it difficult to compare their performance. To take the first steps towards an easy and fair comparison of these methods, we assembled a dataset of sequences frequently adopted for background initialization, selected or created ground truths for quantitative evaluation through a selected suite of metrics, and compared results obtained by some existing methods, making all the material publicly available.
Comment: 6 pages, SBI dataset, SBMI2015 Workshop
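One metric frequently used to score background-initialization results against a ground truth is the average gray-level error (AGE), the mean absolute per-pixel intensity difference. A minimal sketch, assuming both images are flattened into equal-length lists of gray values:

```python
def average_gray_error(estimate, ground_truth):
    # Mean absolute per-pixel intensity difference (AGE) between an
    # estimated background image and the ground-truth background image,
    # both flattened into lists of gray values; lower is better.
    assert len(estimate) == len(ground_truth)
    return sum(abs(e - g) for e, g in zip(estimate, ground_truth)) / len(estimate)
```

A perfect estimate scores 0; the benchmark's full suite also includes other metrics not sketched here.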
Moving Object Detection using Adaptive Blind Update and RGB-D Camera
A novel background subtraction approach using an RGB-D camera and an adaptive blind updating policy is introduced. During initialization, the method builds a model that stores background pixels; each pixel of a new frame is then compared with the model at the same location to identify background pixels. The background-model update presented in this paper combines a regular and a blind update, with criteria that differ from existing methods. In particular, the blind update rate changes with the background changes and the speed of the moving objects. This allows the scene model to adapt to changes in the background, detecting objects that become stationary and reducing the ghost phenomenon. In addition, the proposed bootstrapping segmentation and shadow detection are added to the system to improve the accuracy of the algorithm in shadow and depth-camouflage scenarios. The proposed method is compared with the original method and other state-of-the-art algorithms. Experimental results show significant improvement on videos in which stationary objects appear, and the benchmark results also indicate strong and stable performance compared with the other state-of-the-art algorithms.
Funder: Brunel University London (10.13039/501100007914)
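The regular-versus-blind update distinction can be sketched with a simple running-average background model: pixels classified as background are updated at the regular rate, while foreground pixels are still absorbed at a slower blind rate, so a stalled object eventually fades into the model. The function name and learning rates below are illustrative assumptions, not the paper's values or its adaptive rate rule:

```python
def update_background(bg, frame, mask, alpha=0.05, blind_alpha=0.01):
    # Per-pixel running-average update. bg and frame are flat lists of
    # intensities; mask is True where the pixel was classified foreground.
    # Background pixels use the regular rate alpha; foreground pixels use
    # the slower blind rate, so even misclassified or stationary regions
    # are eventually absorbed into the model.
    out = []
    for b, f, fg in zip(bg, frame, mask):
        rate = blind_alpha if fg else alpha
        out.append((1 - rate) * b + rate * f)
    return out
```

In the paper the blind rate is itself adapted to scene dynamics; here it is a fixed constant for clarity.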
Cooperative multitarget tracking with efficient split and merge handling
Copyright © 2006 IEEE. For applications such as behavior recognition it is important to maintain the identity of multiple targets while tracking them in the presence of splits and merges, or occlusion of the targets by background obstacles. Here we propose an algorithm to handle multiple splits and merges of objects based on dynamic programming and a new geometric shape matching measure. We then cooperatively combine Kalman filter-based motion and shape tracking with the efficient and novel geometric shape matching algorithm. The system is fully automatic and requires no manual input of any kind for initialization of tracking. The target track initialization problem is formulated as the computation of shortest paths in a directed and attributed graph using Dijkstra's shortest path algorithm. This scheme correctly initializes multiple target tracks even in the presence of clutter and segmentation errors that may occur in detecting a target. We present results on a large number of real-world image sequences, where up to 17 objects have been tracked simultaneously in real time, despite clutter, splits, and merges in measurements of objects. The complete tracking system, including segmentation of moving objects, works at 25 Hz on 352×288-pixel color image sequences on a 2.8-GHz Pentium-4 workstation.
Pankaj Kumar, Surendra Ranganath, Kuntal Sengupta, and Huang Weimi
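The track-initialization step relies on Dijkstra's shortest-path algorithm over a directed, attributed graph of candidate detections. A self-contained sketch of the generic algorithm (the graph encoding of detections and edge costs is the paper's contribution and is not reproduced here; the adjacency-dict representation below is an assumption for illustration):

```python
import heapq

def dijkstra(graph, source):
    # Shortest-path costs from `source` in a directed weighted graph
    # given as {node: [(neighbor, edge_cost), ...]}. Uses a binary heap;
    # stale heap entries are skipped when a shorter distance is known.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Usage: detections "a", "b", "c" with edge costs, e.g. shape dissimilarity.
graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)]}
costs = dijkstra(graph, "a")  # cheapest path a->b->c costs 3.0
```

In the tracking context, a low-cost path through consecutive frames corresponds to a consistent sequence of detections and thus a valid initial track.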
Background Subtraction in Video Surveillance
The aim of this thesis is the real-time detection of moving objects in unconstrained surveillance environments monitored with static cameras. This is achieved based on the results provided by background subtraction. For this task, Gaussian Mixture Models (GMMs) and kernel density estimation (KDE) are used. A thorough review of state-of-the-art formulations for the use of GMMs and KDE in background subtraction reveals some further development opportunities, which are tackled in a novel GMM-based approach incorporating a variance-controlling scheme. The proposed approach covers both parametric and non-parametric modeling and yields better background subtraction, with higher accuracy and easier parametrization of the models for different environments; it also converges to more accurate models of the scenes. The detection of moving objects is achieved by using the results of background subtraction. For the detection of new static objects, two background models learning at different rates are used. This allows a multi-class pixel classification that follows the temporality of the changes detected by means of background subtraction. In a first approach, the subtraction of background models is done with the parametric model and its results are shown; the second approach uses the non-parametric KDE model. Furthermore, we have done some video engineering, in which the background subtraction algorithm was employed so that the background from one video and the foreground from another video are merged to form a new video. In this way, more complex video engineering with multiple videos is also possible. Finally, the results provided by region analysis can be used to improve the quality of the background models, thereby considerably improving the detection results.
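At the core of GMM-based background subtraction is a per-pixel matching rule: an incoming value is background if it falls within a few standard deviations of some background mode, and the matched mode is then updated. A single-frame sketch of that rule (the learning rate, match threshold, and update order are illustrative; this omits the full mode-sorting and mode-replacement bookkeeping of the standard formulation):

```python
def match_and_update(modes, value, alpha=0.05, k=2.5):
    # modes: list of (weight, mean, variance) tuples for one pixel.
    # A value matches a mode if it lies within k standard deviations of
    # the mean. The first matching mode is pulled toward the value and
    # gains weight; all other modes decay. Returns (is_background, modes).
    updated, matched = [], False
    for weight, mean, var in modes:
        if not matched and abs(value - mean) <= k * var ** 0.5:
            matched = True
            mean = (1 - alpha) * mean + alpha * value
            var = (1 - alpha) * var + alpha * (value - mean) ** 2
            weight = (1 - alpha) * weight + alpha
        else:
            weight = (1 - alpha) * weight
        updated.append((weight, mean, var))
    return matched, updated
```

A pixel that matches no mode is declared foreground; in the full algorithm its weakest mode would then be replaced by a new one centered on the observed value.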
Rejection based multipath reconstruction for background estimation in video sequences with stationary objects
This is the author’s version of a work that was accepted for publication in Computer Vision and Image Understanding. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Vision and Image Understanding, Vol. 147 (2016), DOI 10.1016/j.cviu.2016.03.012.
Background estimation in video consists of extracting a foreground-free image from a set of training frames. Moving and stationary objects may affect the background visibility, thus invalidating the assumption, common in the related literature, that the background is the temporally dominant data. In this paper, we present a temporal-spatial block-level approach for background estimation in video that copes with moving and stationary objects. First, a Temporal Analysis module obtains a compact representation of the training data by motion filtering and dimensionality reduction; a threshold-free hierarchical clustering then determines a set of candidates to represent the background for each spatial location (block). Second, a Spatial Analysis module iteratively reconstructs the background using these candidates. For each spatial location, multiple reconstruction hypotheses (paths) through its neighboring locations are explored by enforcing inter-block similarity and intra-block homogeneity constraints in terms of color discontinuity, color dissimilarity, and variability. The experimental results show that the proposed approach outperforms the related state of the art on challenging video sequences in the presence of moving and stationary objects.
This work was partially supported by the Spanish Government (HAVideo, TEC2014-53176-R) and by the TEC department (Universidad Autónoma de Madrid).
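The per-block candidate-generation step groups a block's temporal samples into clusters, each cluster representing one hypothesis of the block's background appearance. A much-simplified stand-in is sketched below: it clusters scalar block descriptors by a fixed gap threshold, whereas the paper's hierarchical clustering is threshold-free and operates on higher-dimensional block representations (function name and parameter are illustrative assumptions):

```python
def cluster_block_samples(values, max_gap):
    # Group sorted scalar block descriptors into clusters, starting a
    # new cluster whenever consecutive values differ by more than
    # max_gap. Each resulting cluster is one background candidate for
    # the block; stable (background) appearances form dense clusters.
    clusters = []
    for v in sorted(values):
        if clusters and v - clusters[-1][-1] <= max_gap:
            clusters[-1].append(v)
        else:
            clusters.append([v])
    return clusters
```

For example, a block that alternates between a dark background and a bright passing object yields two clusters, and the spatial reconstruction stage would pick between them using neighboring-block constraints.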