33 research outputs found

    Multi-Views Tracking Within and Across Uncalibrated Camera Streams

    No full text
    This paper presents novel approaches for continuous detection and tracking of moving objects observed by multiple stationary or moving cameras. Stationary video streams are registered using a ground-plane homography, and the trajectories derived by a Tensor Voting formalism are integrated across cameras by a spatio-temporal homography. The Tensor Voting based tracking approach provides smooth, continuous trajectories and bounding boxes while ensuring minimal registration error. In the more general case of moving cameras, we present an approach for integrating object trajectories across cameras by simultaneous processing of the video streams. Detection of moving objects from a moving camera is performed by defining an adaptive background model that uses an affine camera-motion approximation. Relative motion between cameras is approximated by a combination of affine and perspective transforms, while object dynamics are modeled by a Kalman filter. Shape and appearance of the moving objects are also taken into account using a probabilistic framework, and maximizing the joint probability model allows tracking of moving objects across the cameras. We demonstrate the performance of the proposed approaches on several video surveillance sequences.
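    The ground-plane registration step can be illustrated with a short sketch: a 3x3 homography maps an image point observed in one camera into another camera's view. The function name and the matrix `H_shift` below are illustrative toys, not values from the paper.

```python
import numpy as np

def transfer_point(H, p):
    """Map an image point p = (x, y) through a 3x3 homography H."""
    q = H @ np.array([p[0], p[1], 1.0])   # lift to homogeneous coordinates
    return q[:2] / q[2]                   # back to inhomogeneous coordinates

# A toy homography that simply shifts the ground plane by (+5, -2) pixels.
H_shift = np.array([[1.0, 0.0,  5.0],
                    [0.0, 1.0, -2.0],
                    [0.0, 0.0,  1.0]])
```

    In practice the homography would be estimated from ground-plane correspondences between the two camera views rather than written down by hand.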

    Object reacquisition using invariant appearance model

    No full text
    We present an approach for the reacquisition of detected moving objects. We address the tracking problem by modeling the appearance of the moving region using stochastic models. The appearance of the object is described by multiple models representing the spatial distributions of the object's colors and edges. This representation is invariant to 2D rigid and scale transformations. It provides a good description of the object being tracked and produces an efficient blob similarity measure for tracking. Three different similarity measures are proposed and compared to show the performance of each model. The proposed appearance model allows tracking of a large number of moving people through partial and total occlusions, and permits reacquisition of objects that have been previously tracked. We demonstrate the performance of the system on several real video surveillance sequences.
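    A minimal sketch of the kind of histogram-based similarity such color appearance models rely on, using the Bhattacharyya coefficient (a common choice for comparing color distributions; the paper's three exact measures may differ):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two color histograms; 1.0 means identical distributions."""
    h1 = h1 / h1.sum()                    # normalize to probability distributions
    h2 = h2 / h2.sum()
    return float(np.sum(np.sqrt(h1 * h2)))
```

    Because the histogram discards pixel positions, the score is unchanged by 2D rotation, translation, and (approximately) scaling of the blob, which is what makes it useful for reacquisition.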

    Soccer Player Tracking across Uncalibrated Camera Streams

    No full text
    This paper presents a novel approach for continuous detection and tracking of moving objects observed by multiple stationary cameras. We address the tracking problem by simultaneously modeling the motion and appearance of the moving objects. The objects' appearance is represented using a color distribution model invariant to 2D rigid and scale transformations, which provides an efficient blob similarity measure for tracking. The motion models are obtained using a Kalman Filter (KF) process, which predicts the position of the moving object in 2D and 3D. Tracking is performed by maximizing a joint probability model reflecting the objects' motion and appearance. The novelty of our approach consists in integrating multiple cues and multiple views in a JPDAF for tracking a large number of moving people with partial and total occlusions. We demonstrate the performance of the proposed method on a soccer game captured by two stationary cameras.
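    A constant-velocity Kalman filter of the kind used for such motion models can be sketched as follows; the state layout, noise covariances, and helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def make_cv_model(dt=1.0):
    """Constant-velocity model with state [x, y, vx, vy]; dt is the frame interval."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                           # position integrates velocity
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0    # we observe position only
    return F, H

def kf_predict(x, P, F, Q):
    """Propagate state estimate and covariance one frame forward."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the prediction with an observed blob position z."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    The predicted position from `kf_predict` is what gets matched against detected blobs; `kf_update` then folds the chosen detection back into the track.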

    Tracking People in Crowded Scenes across Multiple Cameras (Asian Conference on Computer Vision (ACCV), January 2004, Jeju Island, Korea)

    No full text
    We present a novel approach for continuous detection and tracking of moving objects observed by multiple stationary cameras. We address the tracking problem by simultaneously modeling the motion and appearance of the moving objects. The object's appearance is represented using a color distribution model invariant to 2D rigid and scale transformations, which provides an efficient blob similarity measure for tracking. The motion models are obtained using a Kalman Filter process, which predicts the position of the moving object in both 2D and 3D. Tracking is performed by maximizing a joint probability model reflecting the objects' motion and appearance. The novelty of our approach consists in integrating multiple cues and multiple views in a Joint Probability Data Association Filter for tracking a large number of moving people with partial and total occlusions. We demonstrate the performance of the proposed method on a soccer game captured by two stationary cameras.
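    The joint motion-and-appearance maximization can be sketched for a single track as a greedy choice over candidate blobs. This is a deliberate simplification of the JPDAF (which reasons over all tracks jointly), and the Gaussian motion likelihood and function names are assumptions:

```python
import numpy as np

def motion_likelihood(pred, obs, sigma=10.0):
    """Gaussian likelihood of an observed blob position given the predicted position."""
    d2 = np.sum((np.asarray(pred, float) - np.asarray(obs, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def associate(pred_pos, candidate_positions, appearance_scores):
    """Pick the candidate blob maximizing joint motion x appearance probability."""
    joint = [motion_likelihood(pred_pos, c) * a
             for c, a in zip(candidate_positions, appearance_scores)]
    return int(np.argmax(joint))
```

    Combining both cues is what resolves occlusions: a nearby blob with the wrong colors, or a well-matching blob far from the prediction, both score lower than the true continuation of the track.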

    Detection and Tracking of Moving Objects from a Moving Platform in Presence of Strong Parallax

    No full text
    We present a novel approach to detect and track independently moving regions in a 3D scene observed by a moving camera in the presence of strong parallax. Detected moving pixels are classified into independently moving regions or parallax regions by analyzing two geometric constraints: the commonly used epipolar constraint, and a structure consistency constraint. The second constraint is implemented within a "Plane+Parallax" framework and represented by a bilinear relationship that relates image points to their relative depths. This newly derived relationship is related to the trilinear tensor but can be enforced over more than three frames. It does not assume a constant reference plane in the scene and therefore eliminates the need for manual selection of a reference plane. A robust parallax filtering scheme is then proposed to accumulate the geometric constraint errors within a sliding window and estimate a likelihood map for pixel classification. The likelihood map is integrated into our tracking framework based on the spatio-temporal Joint Probability Data Association Filter (JPDAF). This tracking approach infers the trajectory and bounding box of each moving object by searching for the optimal path with maximum joint probability within a fixed-size buffer. We demonstrate the performance of the proposed approach on real video sequences where parallax effects are significant.
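    The epipolar constraint error for a pixel correspondence is commonly measured by the Sampson distance; the sketch below assumes that standard metric (the paper does not specify its exact error measure here). A static 3D point, however deep, yields a near-zero residual, while an independently moving point generally does not:

```python
import numpy as np

def sampson_distance(F, p1, p2):
    """First-order geometric error of correspondence (p1, p2) under fundamental matrix F."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    Fx1 = F @ x1                          # epipolar line of p1 in image 2
    Ftx2 = F.T @ x2                       # epipolar line of p2 in image 1
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den
```

    Note the degenerate case the paper targets: a point translating along its own epipolar line also scores near zero, which is exactly why the structure consistency constraint is needed as a second test.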

    Detecting motion regions in presence of strong parallax from a moving camera by multi-view geometric constraints

    No full text
    Abstract — We present a method for detecting motion regions in video sequences observed by a moving camera, in the presence of strong parallax due to static 3D structures. The proposed method classifies each image pixel into planar background, parallax, or motion regions by sequentially applying 2D planar homographies, the epipolar constraint, and a novel geometric constraint called the "structure consistency constraint". The structure consistency constraint is the main contribution of this paper; it is derived from the relative camera poses in three consecutive frames and is implemented within the "Plane+Parallax" framework. Unlike previous planar-parallax constraints proposed in the literature, the structure consistency constraint does not require the reference plane to be constant across multiple views. It directly measures the inconsistency between the projective structures of the same point under camera motion and reference-plane change. The structure consistency constraint is capable of detecting moving objects followed by a moving camera in the same direction, a so-called degenerate configuration where the epipolar constraint fails. We demonstrate the effectiveness and robustness of our method with experimental results on real-world video sequences. Index Terms — Motion detection, multiple view geometry, epipolar constraint, plane plus parallax.
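    The sequential classification described above can be sketched as a residual cascade. The scalar residuals, thresholds, and labels below are illustrative assumptions; in particular, the actual structure consistency test is a geometric constraint over three frames, not a single precomputed number:

```python
def classify_pixel(hom_err, epi_err, struct_err,
                   t_hom=1.0, t_epi=1.0, t_struct=1.0):
    """Sequential residual test in the spirit of the paper's pixel classification.

    hom_err:    transfer error under the 2D planar homography
    epi_err:    epipolar constraint error (e.g. a point-to-line distance)
    struct_err: structure consistency error across three frames
    """
    if hom_err < t_hom:
        return "planar background"   # well explained by the reference plane
    if epi_err > t_epi:
        return "motion"              # violates the epipolar constraint
    if struct_err > t_struct:
        return "motion"              # degenerate motion caught by structure consistency
    return "parallax"                # static 3D structure off the reference plane
```

    The ordering matters: the cheap homography test removes most pixels first, and the structure consistency test only has to disambiguate the pixels the epipolar constraint cannot.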