    Staple: Complementary Learners for Real-Time Tracking

    Correlation Filter-based trackers have recently achieved excellent performance, showing great robustness to challenging situations exhibiting motion blur and illumination changes. However, since the model they learn depends strongly on the spatial layout of the tracked object, they are notoriously sensitive to deformation. Models based on colour statistics have complementary traits: they cope well with variation in shape, but suffer when illumination is not consistent throughout a sequence. Moreover, colour distributions alone can be insufficiently discriminative. In this paper, we show that a simple tracker combining complementary cues in a ridge regression framework can operate faster than 80 FPS and outperform not only all entries in the popular VOT14 competition, but also recent and far more sophisticated trackers according to multiple benchmarks. Comment: To appear in CVPR 2016.
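
    As a concrete illustration of the complementary-cue idea described in this abstract, the sketch below merges a correlation-filter (template) response map with a colour-histogram (foreground-probability) response by a fixed convex combination. The function names and the `merge_factor` value are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fuse_responses(template_response, histogram_response, merge_factor=0.3):
    """Convex combination of the two complementary per-pixel scores.

    template_response  -- dense correlation-filter output (robust to blur/illumination)
    histogram_response -- colour-histogram foreground score (robust to deformation)
    merge_factor       -- weight of the histogram cue (hypothetical value)
    """
    return (1.0 - merge_factor) * template_response + merge_factor * histogram_response

def locate_target(fused_response):
    """Pick the new target centre as the argmax of the fused response map."""
    return np.unravel_index(np.argmax(fused_response), fused_response.shape)
```

    In the paper's formulation both per-cue models are learned online (the template score via ridge regression) and merged with a fixed weight; the constant used here is simply a hypothetical placeholder.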

    An improved dynamic graph tracking algorithm

    We propose several improvements to an existing baseline short-term visual tracking algorithm. The baseline tracker applies a dynamic graph representation to track the target: local parts of the target serve as nodes in the graph, while connections between neighboring parts form the graph edges. This flexible model of the target structure proves useful in the presence of extensive visual changes of the target throughout the sequence. A recent benchmark has shown that the tracker compares favorably in performance with other state-of-the-art trackers, with a notable weakness on input sequences with high variance in scene and object lighting. We have performed an in-depth analysis of the tracker and propose a list of improvements. For an unstable component in the tracker's foreground/background image segmentation, we propose an improvement that boosts accuracy under rapid illumination change of the target. We also propose a dynamic adjustment of that segmentation with respect to the size of the resulting foreground, which improves tracking reliability and reduces the number of tracking failures; a rough sketch of this idea follows below. The implemented improvements are analyzed on the VOT2015 benchmark. Fixing the unstable component yields improvements under rapid illumination change and reduces the failure rate, while the dynamic segmentation adjustment improves tracking accuracy and robustness in the vast majority of cases, barring rapid illumination change.
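
    The dynamic segmentation adjustment could look roughly like the sketch below, which re-thresholds a foreground-probability map until the foreground area is plausible relative to the current target size. The heuristic, parameter names, and default values are all assumptions, not the published implementation.

```python
import numpy as np

def adjust_segmentation(fg_probability, target_area, min_ratio=0.5, max_ratio=2.0,
                        threshold=0.5, step=0.05, max_iters=10):
    """Re-threshold a foreground-probability map until the foreground area is
    plausible relative to the current target area (hypothetical heuristic).
    Stabilises the segmentation when lighting shifts the probability values."""
    mask = fg_probability > threshold
    for _ in range(max_iters):
        area = mask.sum()
        if area > max_ratio * target_area:
            threshold += step      # foreground too large: be stricter
        elif area < min_ratio * target_area:
            threshold -= step      # foreground too small: be more permissive
        else:
            break                  # foreground size is plausible
        threshold = float(np.clip(threshold, 0.05, 0.95))
        mask = fg_probability > threshold
    return mask, threshold
```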

    Deformable Object Tracking with Gated Fusion

    The tracking-by-detection framework has received growing attention through its integration with Convolutional Neural Networks (CNNs). Existing tracking-by-detection methods, however, fail to track objects with severe appearance variations. This is because the traditional convolution operation is performed on fixed grids, and thus may not find the correct response while the object is changing pose or under varying environmental conditions. In this paper, we propose a deformable convolution layer to enrich the target appearance representations in the tracking-by-detection framework. We aim to capture the target appearance variations via deformable convolution, which adaptively enhances the original features. In addition, we propose a gated fusion scheme to control how the variations captured by the deformable convolution affect the original appearance. The enriched feature representation through deformable convolution facilitates the discrimination of the CNN classifier between the target object and the background. Extensive experiments on standard benchmarks show that the proposed tracker performs favorably against state-of-the-art methods.
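
    A minimal sketch of the gated fusion idea, assuming torchvision's DeformConv2d for the deformable branch: a learned per-pixel gate blends deformable-convolution features with the original features. The module structure and names are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class GatedDeformableFusion(nn.Module):
    """Hypothetical gated fusion block: blend deformed and original features."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Sampling offsets for the deformable kernel are predicted from the input.
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)
        # The gate decides, per location and channel, how much the deformed
        # features should override the original appearance.
        self.gate_conv = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        offsets = self.offset_conv(x)
        deformed = self.deform_conv(x, offsets)
        gate = torch.sigmoid(self.gate_conv(x))
        return gate * deformed + (1.0 - gate) * x   # gated residual blend
```

    The sigmoid gate keeps the block a convex blend of the two feature maps, so it can fall back to the original appearance when the deformable branch is unreliable.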

    Deformable Object Tracking Using Clustering and Particle Filter

    Visual tracking of a deformable object is a challenging problem, as the target object frequently changes attributes such as shape, posture, and color. In this work, we propose a model-free tracker that uses clustering to track a target object undergoing deformations and rotations. Clustering segments the tracked object into several independent components, and the discriminative parts are tracked to locate the object. The proposed technique segments the target object into independent components using data clustering and then tracks it by finding corresponding clusters. A particle filter is incorporated to improve the accuracy of the proposed technique. Experiments are carried out on several standard data sets, and the results demonstrate performance comparable to state-of-the-art visual tracking methods.
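
    A rough sketch of the pipeline this abstract describes, assuming k-means for the part segmentation and a basic bootstrap particle filter per part centre; all concrete choices (feature layout, noise scales, number of parts) are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_into_parts(pixel_features, n_parts=4):
    """Cluster target pixels (e.g. rows of [x, y, R, G, B]) into independent
    parts; returns the spatial centre of each part. n_parts is a guess."""
    km = KMeans(n_clusters=n_parts, n_init=10).fit(pixel_features)
    return km.cluster_centers_[:, :2]          # keep only spatial coordinates

def particle_filter_step(particles, weights, measurement,
                         motion_std=3.0, meas_std=5.0):
    """One predict/update/resample cycle for a single 2-D part centre."""
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: reweight by a Gaussian likelihood of the matched cluster centre.
    dists = np.linalg.norm(particles - measurement, axis=1)
    weights = weights * np.exp(-0.5 * (dists / meas_std) ** 2) + 1e-12
    weights /= weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```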

    RGB-D Tracking and Optimal Perception of Deformable Objects

    Addressing the perception problem of texture-less objects that undergo large deformations and movements, this article presents a novel learning-free RGB-D deformable object tracker combined with a camera-position optimisation system for optimal deformable object perception. The approach is based on discretisation of the object's visible area through the generation of a supervoxel graph that allows weighting of new supervoxel candidates between object states over time. Once a deformation state of the object is determined, the supervoxels of its associated graph serve as input to the camera-position optimisation problem. Satisfactory results have been obtained in real time with a variety of objects that present different deformation characteristics.
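
    As a loose approximation of the supervoxel weighting described above (true supervoxel segmentation, e.g. as in PCL, is considerably more involved, and the graph edges are omitted here), the sketch below hashes an RGB-D point cloud into coarse voxels and weights new candidates by proximity to the previous object state. The grid hashing and Gaussian weighting are stand-ins, not the paper's method.

```python
import numpy as np

def voxelize(points, voxel_size=0.02):
    """Group a point cloud (N x 3) into coarse 'supervoxels' by grid hashing,
    returning one centroid per occupied voxel (a crude simplification)."""
    keys = np.floor(points / voxel_size).astype(int)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    return np.array([points[inverse == i].mean(axis=0) for i in range(len(uniq))])

def weight_candidates(prev_centroids, new_centroids, sigma=0.05):
    """Weight each new supervoxel candidate by its proximity to the previous
    object state (hypothetical Gaussian weighting); a higher weight suggests
    a more likely correspondence between states over time."""
    d = np.linalg.norm(new_centroids[:, None, :] - prev_centroids[None, :, :], axis=2)
    return np.exp(-0.5 * (d.min(axis=1) / sigma) ** 2)
```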