14 research outputs found

    Long-Term Time-Sensitive Costs for CRF-Based Tracking by Detection

    We present a Conditional Random Field (CRF) approach to tracking-by-detection in which we model pairwise factors linking pairs of detections and their hidden labels, as well as higher-order potentials defined in terms of label costs. Our method considers long-term connectivity between pairs of detections and models cue similarities as well as dissimilarities between them using time-interval-sensitive models. In addition to position, color, and visual motion cues, we investigate the use of SURF features as a structure representation. We take advantage of the MOTChallenge 2016 benchmark to refine our tracking models, evaluate our system, and study the impact of the different parameters of our tracking system on performance.
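
    As a rough illustration of the kind of objective such a tracking-by-detection CRF can minimize, the sketch below combines time-interval-sensitive pairwise terms with label costs; the notation is illustrative and not taken from the paper.

        E(\mathbf{y}) \;=\; \sum_{(i,j)} \beta_{\Delta t_{ij}}\, \varphi(x_i, x_j, y_i, y_j) \;+\; \sum_{\ell \in \mathcal{L}(\mathbf{y})} C_\ell

    Here \Delta t_{ij} is the time gap between detections i and j, \beta_{\Delta t} are time-interval-sensitive weights on the cue-based (dis)similarity \varphi, and C_\ell is the label cost paid for each label \ell used by the labeling \mathbf{y}.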

    Exploiting Long-Term Connectivity and Visual Motion in CRF-based Multi-Person Tracking

    We present a Conditional Random Field (CRF) approach to tracking-by-detection in which we model pairwise factors linking pairs of detections and their hidden labels, as well as higher-order potentials defined in terms of label costs. In contrast to previous works, our method considers long-term connectivity between pairs of detections and models similarities as well as dissimilarities between them, based on position, color and, as a novelty, visual motion cues. We introduce a set of feature-specific confidence scores which aim at weighting feature contributions according to their reliability. Pairwise potential parameters are then learned in an unsupervised way from detections or from tracklets. Label costs are defined so as to penalize the complexity of the labeling, based on prior knowledge about the scene, e.g. about the location of entry/exit zones. Experiments on the PETS 2009, TUD and CAVIAR datasets show the validity of our approach, with similar or better performance than recent state-of-the-art algorithms.
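
    A minimal Python sketch of how per-cue similarities could be combined into a single pairwise association cost using feature-specific confidence scores; the function name, weighting scheme and example values are assumptions for illustration, not the paper's implementation.

        import math

        # Hypothetical sketch: fuse position, color and visual-motion similarities
        # into one pairwise association cost, weighting each cue by its confidence.
        def pairwise_cost(cue_scores, cue_confidences):
            # cue_scores[c]: similarity of the two detections under cue c, in (0, 1]
            # cue_confidences[c]: reliability of cue c for this detection pair, in [0, 1]
            num = sum(cue_confidences[c] * cue_scores[c] for c in cue_scores)
            den = sum(cue_confidences.values()) or 1.0
            fused = num / den
            # A negative log turns the fused similarity into an additive CRF cost.
            return -math.log(max(fused, 1e-6))

        # Example: the motion cue gets a lower confidence when optical flow is unreliable.
        cost = pairwise_cost({'position': 0.9, 'color': 0.7, 'motion': 0.8},
                             {'position': 1.0, 'color': 0.6, 'motion': 0.3})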

    Efficient and Accurate Tracking for Face Diarization via Periodical Detection

    Face diarization, i.e. face tracking and clustering within video documents, is useful and important for video indexing and fast browsing, but it is also a difficult and time-consuming task. In this paper, we address the tracking aspect and propose a novel algorithm with two main contributions. First, we propose an approach that leverages a state-of-the-art deformable part-based model (DPM) face detector within a multi-cue discriminative tracking-by-detection framework that relies on automatically learned, long-term, time-interval-sensitive association costs specific to each document type. Second, to improve performance, we propose an explicit false alarm removal step at the track level to efficiently filter out wrong detections (and the resulting tracks). Altogether, the method is able to skip frames, i.e. process only 3 to 4 frames per second, thus cutting down computational cost, while performing better than state-of-the-art methods, as evaluated on three public benchmarks from different contexts, including movie and broadcast data.
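
    The sketch below illustrates, in Python, the overall flow one might use for periodic detection followed by track-level false alarm removal; the detect, associate and track_score callables and the default parameters are illustrative stand-ins, not the components described in the paper.

        # Hypothetical sketch of tracking with periodic detection and track-level
        # false alarm removal.
        def track_with_periodic_detection(frames, detect, associate, track_score,
                                          detect_every=8, min_track_score=0.5):
            tracks = []                          # each track: list of (frame_idx, detection)
            for idx, frame in enumerate(frames):
                if idx % detect_every:           # skip most frames to cut computation
                    continue
                for det in detect(frame):        # e.g. a face detector run on this frame
                    track = associate(det, tracks)     # pick a track via association costs
                    if track is None:                  # no plausible track: start a new one
                        track = []
                        tracks.append(track)
                    track.append((idx, det))
            # Explicit false alarm removal at the track level.
            return [t for t in tracks if track_score(t) >= min_track_score]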

    Combined estimation of location and body pose in surveillance video

    In surveillance videos, cues such as head or body pose provide important information for analyzing people's behavior and interactions. In this paper we propose an approach that jointly estimates body location and body pose in monocular surveillance video. Our approach is based on tracks derived from multi-object tracking. First, body pose classification is conducted using a sparse representation technique on each frame of the tracks, generating (noisy) observations of body pose. Then, both location and body pose in 3D space are estimated jointly in a particle filtering framework by utilizing a soft coupling of body pose with the movement. The experiments show that the proposed system successfully tracks body position and pose simultaneously in many scenarios. The output of the system can be used to perform further analysis of behaviors and interactions.
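
    A compact Python sketch of one particle filtering update with a soft coupling between body pose and movement direction; the state layout, noise levels and coupling term are illustrative assumptions, not the system described above.

        import math, random

        # Hypothetical particle state: (x, y, body_pose_angle).
        def pf_step(particles, pose_likelihood, coupling_strength=0.5):
            candidates, weights = [], []
            for x, y, pose in particles:
                dx, dy = random.gauss(0, 0.1), random.gauss(0, 0.1)   # position diffusion
                new_pose = pose + random.gauss(0, 0.2)                # pose diffusion
                move_dir = math.atan2(dy, dx)
                # Soft coupling: favor particles whose pose agrees with their motion.
                coupling = math.exp(-coupling_strength * (1.0 - math.cos(new_pose - move_dir)))
                w = pose_likelihood(x + dx, y + dy, new_pose) * coupling
                candidates.append((x + dx, y + dy, new_pose))
                weights.append(w)
            total = sum(weights)
            if total <= 0:                       # degenerate case: keep candidates as-is
                return candidates
            # Resample proportionally to the weights to complete the filtering step.
            return random.choices(candidates, weights=weights, k=len(particles))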

    A Joint Estimation of Head and Body Orientation Cues in Surveillance Video

    The automatic analysis and understanding of behavior and interactions is a crucial task in the design of socially intelligent video surveillance systems. Such analysis often relies on the extraction of people's behavioral cues, amongst which body pose and head pose are probably the most important. In this paper, we propose an approach that jointly estimates these two cues from surveillance video. Given a human track, our algorithm works in two steps. First, a per-frame analysis is conducted, in which the head is localized, head and body features are extracted, and their likelihoods under different poses are evaluated. These likelihoods are then fused within a temporal filtering framework that jointly estimates the body position, body pose and head pose by taking advantage of the soft couplings between body position (movement direction), body pose and head pose. Quantitative as well as qualitative experiments show the benefit of several aspects of our approach, and in particular the benefit of the joint estimation framework for tracking the behavioral cues. Further analysis of behaviors and interactions could then be conducted based on the output of our system.
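
    As a rough illustration of how such per-frame likelihoods and soft couplings could combine into a single filtering weight, consider the factorization below; the symbols and the exact form of the coupling factors are assumptions, not the paper's model.

        w_t \;\propto\; p(z^{\mathrm{head}}_t \mid \theta^{\mathrm{head}}_t)\; p(z^{\mathrm{body}}_t \mid \theta^{\mathrm{body}}_t)\; p(\theta^{\mathrm{head}}_t \mid \theta^{\mathrm{body}}_t)\; p(\theta^{\mathrm{body}}_t \mid \dot{\mathbf{x}}_t)

    Here z^head and z^body are the per-frame head and body features, \theta^head and \theta^body the head and body orientations, \dot{\mathbf{x}}_t the movement direction, and the last two factors encode the soft couplings between the cues.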