
    Tracker-Independent Drift Detection and Correction Using Segmented Objects and Features

    Object tracking has been an active research topic in the field of video processing. However, automated object tracking under uncontrolled environments is still difficult to achieve and encounters various challenges that cause the tracker to drift away from the target object. Object tracking methods with fixed models, predefined prior to the tracking task, normally fail because of inevitable appearance changes that can be either object- or environment-related. To handle such challenges effectively, recent powerful tracking approaches are learning-based, meaning they learn object appearance changes while tracking online. The output of such trackers is, however, limited to a bounding box representation whose center is taken as the estimated object location. Such a bounding box may not provide accurate foreground/background discrimination and may not handle highly non-rigid objects. Moreover, the bounding box may not surround the object completely, or may not be centered around it, which affects the accuracy of the overall tracking process. Our main objective in this work is to reduce the drift of state-of-the-art tracking algorithms (trackers) using object segmentation, so as to produce a more accurate bounding box. To enhance the quality of state-of-the-art trackers, this work investigates two main avenues: first, tracker-independent drift detection and correction using object features; and second, selection of the best-performing parameters of Graph Cut object segmentation and of support vector machines using an artificial immune system. In addition, this work proposes a framework for the evaluation and ranking of different trackers using easily interpretable performance measures, in a way that accounts for the presence of outliers.

    For tracker-independent drift detection, we use saliency features or objectness measures. With saliency, the ratio of the salient region corresponding to the target object to the area of the estimated bounding box indicates the occurrence of tracking drift, with no prior information about the target model. With objectness measures, we use both the relative area and the score of the detected candidate boxes to indicate the occurrence of tracking drift. For drift correction, we investigate applying object segmentation to the estimated bounding box to re-locate it around the target object, as sketched below. Due to its ability to reach a near-optimal global solution, we use the Graph Cut object segmentation method. We modify the Graph Cut model to incorporate an automatic seed selection module based on interest points, in addition to a template mask, to initialize the segmentation automatically across frames. However, integrating segmentation in the tracking loop carries a computational burden, and segmentation quality can be affected by tracking challenges such as motion blur and occlusion. Accordingly, object segmentation is applied only when a drift is detected. Simulation results show that the proposed approach improves the tracking quality of five recent trackers.

    Researchers often use long and tedious trial-and-error approaches to determine the best-performing parameter configuration of a video-processing algorithm, particularly given the diverse nature of video sequences. However, such a configuration does not guarantee the best performance, and little research attention has been given to studying an algorithm's sensitivity to its parameters.
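
    As a rough illustration of the detect-then-correct loop described above (not the thesis's exact implementation), the sketch below uses OpenCV's spectral-residual saliency as the saliency measure and GrabCut as a stand-in for the seeded Graph Cut segmentation; the 0.3 drift threshold and other constants are illustrative assumptions.

```python
# Hedged sketch of saliency-ratio drift detection with a segmentation-based correction step.
# Assumptions: OpenCV's spectral-residual saliency (opencv-contrib-python) and GrabCut stand in
# for the thesis's saliency measure and interest-point-seeded Graph Cut; thresholds are illustrative.
import cv2
import numpy as np

def salient_ratio(frame, bbox):
    """Fraction of the estimated bounding box covered by salient pixels."""
    x, y, w, h = bbox
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(frame)
    if not ok:
        return 1.0  # be conservative: assume no drift if saliency computation fails
    sal_bin = (sal_map > sal_map.mean()).astype(np.uint8)
    roi = sal_bin[y:y + h, x:x + w]
    return float(roi.sum()) / max(roi.size, 1)

def correct_bbox(frame, bbox, iters=5):
    """Re-locate the box around the segmented object (GrabCut as a Graph Cut stand-in)."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, tuple(bbox), bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    if fg.any():
        return cv2.boundingRect(fg)  # tighter box around the segmented foreground
    return bbox

def check_and_correct(frame, tracker_bbox, drift_thresh=0.3):
    # Segmentation is applied only when a drift is suspected, keeping the extra cost occasional.
    if salient_ratio(frame, tracker_bbox) < drift_thresh:
        return correct_bbox(frame, tracker_bbox)
    return tracker_bbox
```
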
    The artificial immune system is an emergent, biologically motivated computing paradigm with the ability to reach optimal or near-optimal solutions through cloning and mutation. This work proposes using an artificial immune system to select the best-performing parameters of two video-processing algorithms: support vector machines for object tracking and Graph Cut based object segmentation. An increasing number of trackers are being developed, and when a new tracker is introduced it is important to facilitate its evaluation and ranking relative to others using easy-to-interpret performance measures. Recent studies have shown that some measures are correlated and cannot reflect the different aspects of tracking performance when used individually. In addition, they do not incorporate robust statistics to account for the presence of outliers, which might lead to insignificant results. This work proposes a framework for effective scoring and ranking of different trackers using less correlated quality metrics, coupled with an estimator that is robust against dispersion. In addition, a unified performance index is proposed to facilitate the evaluation process.
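
    The following is a minimal sketch of how a clonal-selection-style artificial immune system could drive the parameter search described above, here tuning an SVM's C and gamma through scikit-learn cross-validation. The population size, cloning/mutation schedule, and search ranges are illustrative assumptions, not the thesis's settings.

```python
# Hedged sketch: CLONALG-style search over SVM hyper-parameters.
# Fitness = mean cross-validation accuracy; better antibodies get more clones and smaller mutations.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    C, gamma = params
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def clonalg_svm(X, y, pop_size=10, generations=20, n_clones=5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Antibodies are (log10 C, log10 gamma) pairs drawn from an illustrative range.
    pop = rng.uniform([-2.0, -4.0], [3.0, 1.0], size=(pop_size, 2))
    for _ in range(generations):
        fits = np.array([fitness(10.0 ** p, X, y) for p in pop])
        order = np.argsort(fits)[::-1]
        survivors = pop[order[: pop_size // 2]]
        clones = []
        for rank, antibody in enumerate(survivors):
            # Higher-affinity antibodies: more clones, gentler mutation.
            for _ in range(n_clones - rank if n_clones > rank else 1):
                clones.append(antibody + rng.normal(0.0, 0.1 * (rank + 1), size=2))
        pop = np.vstack([survivors, np.array(clones)])[:pop_size]
    best = max(pop, key=lambda p: fitness(10.0 ** p, X, y))
    return {"C": 10.0 ** best[0], "gamma": 10.0 ** best[1]}

# Usage (hypothetical labelled data): params = clonalg_svm(X_train, y_train)
```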

    ROAM: a Rich Object Appearance Model with Application to Rotoscoping

    Rotoscoping, the detailed delineation of scene elements through a video shot, is a painstaking task of tremendous importance in professional post-production pipelines. While pixel-wise segmentation techniques can help for this task, professional rotoscoping tools rely on parametric curves that offer the artists a much better interactive control on the definition, editing and manipulation of the segments of interest. Sticking to this prevalent rotoscoping paradigm, we propose a novel framework to capture and track the visual aspect of an arbitrary object in a scene, given a first closed outline of this object. This model combines a collection of local foreground/background appearance models spread along the outline, a global appearance model of the enclosed object and a set of distinctive foreground landmarks. The structure of this rich appearance model allows simple initialization, efficient iterative optimization with exact minimization at each step, and on-line adaptation in videos. We demonstrate qualitatively and quantitatively the merit of this framework through comparisons with tools based on either dynamic segmentation with a closed curve or pixel-wise binary labelling.
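
    As a loose illustration of the "collection of local foreground/background appearance models spread along the outline" component only (not the actual ROAM energy or its optimization), the sketch below keeps one pair of colour histograms per outline landmark, built from small patches sampled just inside and just outside the curve; patch size, bin count and the log-ratio score are assumptions.

```python
# Hedged sketch: local foreground/background colour models attached to outline landmarks.
import numpy as np

def sample_patch(image, center, radius=3):
    """Pixels in a small square window around `center` = (x, y), clipped to the image."""
    h, w = image.shape[:2]
    cx, cy = int(center[0]), int(center[1])
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
    return image[y0:y1, x0:x1].reshape(-1, 3)

class LocalFgBgModel:
    """One pair of RGB histograms attached to a single outline landmark."""
    def __init__(self, bins=8):
        self.bins = bins
        self.fg = np.full((bins,) * 3, 1e-3)  # Laplace-smoothed foreground histogram
        self.bg = np.full((bins,) * 3, 1e-3)  # Laplace-smoothed background histogram

    def _idx(self, pixels):
        q = (pixels.astype(int) * self.bins) // 256
        return q[:, 0], q[:, 1], q[:, 2]

    def update(self, fg_pixels, bg_pixels):
        np.add.at(self.fg, self._idx(fg_pixels), 1.0)
        np.add.at(self.bg, self._idx(bg_pixels), 1.0)

    def fg_log_ratio(self, pixels):
        """Positive where a pixel looks more like this landmark's local foreground."""
        fg_p = self.fg[self._idx(pixels)] / self.fg.sum()
        bg_p = self.bg[self._idx(pixels)] / self.bg.sum()
        return np.log(fg_p / bg_p)

def build_models(image, outline, normals, offset=5):
    """One local model per landmark, sampled just inside/outside the closed curve."""
    models = []
    for point, normal in zip(outline, normals):
        inside = sample_patch(image, point - offset * normal)
        outside = sample_patch(image, point + offset * normal)
        model = LocalFgBgModel()
        model.update(inside, outside)
        models.append(model)
    return models
```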

    Interaction between high-level and low-level image analysis for semantic video object extraction


    A Multi-cut Formulation for Joint Segmentation and Tracking of Multiple Objects

    Recently, Minimum Cost Multicut Formulations have been proposed and proven to be successful in both motion trajectory segmentation and multi-target tracking scenarios. Both tasks benefit from decomposing a graphical model into an optimal number of connected components based on attractive and repulsive pairwise terms. The two tasks are formulated at different levels of granularity and, accordingly, leverage mostly local information for motion segmentation and mostly high-level information for multi-target tracking. In this paper we argue that point trajectories and their local relationships can contribute to the high-level task of multi-target tracking, and also that high-level cues from object detection and tracking are helpful for solving motion segmentation. We propose a joint graphical model for point trajectories and object detections whose multicuts are solutions to motion segmentation and multi-target tracking problems at once. Results on the FBMS59 motion segmentation benchmark as well as on pedestrian tracking sequences from the 2D MOT 2015 benchmark demonstrate the promise of this joint approach.
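
    To make the multicut idea concrete: the toy sketch below builds a graph whose nodes stand for point trajectories and detections and whose signed edge costs encode attraction and repulsion, then greedily merges the most attractive pair of clusters (in the spirit of GAEC-style heuristics, not the paper's actual solver); the node set and edge costs are made up for illustration.

```python
# Hedged sketch: greedy agglomerative decomposition of a signed graph into components,
# approximating a minimum cost multicut; positive costs attract, negative costs repel.
import networkx as nx

def greedy_multicut(num_nodes, weighted_edges):
    """weighted_edges: iterable of (u, v, cost); cost > 0 attracts, cost < 0 repels."""
    g = nx.Graph()
    g.add_nodes_from(range(num_nodes))
    g.add_weighted_edges_from(weighted_edges)
    clusters = {n: {n} for n in g.nodes}              # representative node -> members
    while g.number_of_edges() > 0:
        u, v, w = max(g.edges(data="weight"), key=lambda e: e[2])
        if w <= 0:                                     # no attractive edge left: stop merging
            break
        # Merge cluster v into cluster u, summing parallel edge costs.
        for nbr in list(g.neighbors(v)):
            if nbr == u:
                continue
            new_w = g[v][nbr]["weight"] + (g[u][nbr]["weight"] if g.has_edge(u, nbr) else 0.0)
            g.add_edge(u, nbr, weight=new_w)
        clusters[u] |= clusters.pop(v)
        g.remove_node(v)
    return list(clusters.values())                     # one set of node ids per object/segment

# Toy example: nodes 0-2 (two trajectories and a detection) attract each other,
# node 3 (another detection) is repelled from the rest.
print(greedy_multicut(4, [(0, 1, 2.0), (1, 2, 1.5), (0, 3, -1.0), (2, 3, -0.5)]))
# -> [{0, 1, 2}, {3}]
```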