1,900 research outputs found

    Online Domain Adaptation for Multi-Object Tracking

    Full text link
    Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality, large-scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution is to re-use object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT. Comment: To appear at BMVC 2015.
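
    The paper's exact multi-task objective is its own; the core idea, though — on-line updates of per-target linear detectors regularized toward a shared, pre-trained model that is itself adapted — can be sketched roughly as follows. This is a minimal illustration with assumed names and an assumed hinge loss, not the authors' algorithm:

```python
import numpy as np

def adapt_detectors(w_pretrained, stream, lr=0.01, lam=0.1, mu=0.01):
    """Illustrative on-line multi-task adaptation of linear detectors.

    w_pretrained : (d,) weights of the generic pre-trained detector.
    stream       : iterable of (target_id, x, y) samples arriving over time,
                   x a (d,) feature vector (e.g. a mini-Fisher Vector),
                   y in {+1, -1} (target instance vs. background).
    """
    w0 = w_pretrained.copy()        # shared category model, adapted on-line
    targets = {}                    # one linear model per tracked instance

    for tid, x, y in stream:
        w = targets.setdefault(tid, w0.copy())  # new targets start from w0
        if y * w.dot(x) < 1.0:                  # hinge loss is active
            w = w + lr * y * x                  # loss gradient step
        w -= lr * lam * (w - w0)                # pull instance toward shared
        w0 -= lr * mu * (w0 - w)                # let shared model drift slowly
        targets[tid] = w
    return w0, targets
```

    The `lam` coupling is what limits drift in a scheme like this: each instance model can specialize to its target while staying anchored to the shared detector.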

    Siamese Instance Search for Tracking

    Get PDF
    In this paper we present a tracker which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, and no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch according to a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined the Siamese INstance search Tracker (SINT), which uses only the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show that the proposed tracker even allows for target re-identification after the target has been absent for a complete video shot. Comment: This paper is accepted to the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
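
    Stripped of the learned network, the tracking loop described above reduces to nearest-neighbour matching in an embedding space. In the loose sketch below, `embed` is a stand-in for the trained Siamese branch (the paper uses a deep network; here it is just a normalised raw patch):

```python
import cv2
import numpy as np

PATCH_SIZE = (64, 64)   # common size so all embeddings are comparable

def embed(patch):
    """Stand-in for the learned Siamese branch: a flattened,
    L2-normalised patch instead of deep features."""
    v = cv2.resize(patch, PATCH_SIZE).astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def track_frame(query_feat, frame, candidates):
    """Return the candidate box most similar to the initial target patch.

    query_feat : embedding of the target patch from frame 1 (never updated).
    candidates : list of (x, y, w, h) boxes sampled in the new frame.
    """
    best_box, best_sim = None, -np.inf
    for (x, y, w, h) in candidates:
        sim = float(query_feat.dot(embed(frame[y:y + h, x:x + w])))
        if sim > best_sim:
            best_box, best_sim = (x, y, w, h), sim
    return best_box
```

    Because the matching function is never updated, drift cannot accumulate by construction; all robustness must come from the learned embedding.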

    Hierarchical eyelid and face tracking

    Get PDF
    Most applications in Human-Computer Interaction (HCI) require extracting the movements of the user's face while avoiding high memory and time expenses. Moreover, HCI systems usually use low-cost cameras, while current face-tracking techniques strongly depend on the image resolution. In this paper, we tackle the problem of eyelid tracking by using Appearance-Based Models, thus achieving accurate estimations of the movements of the eyelids while avoiding cues that require high-resolution faces, such as edge detectors or colour information. Consequently, we can track the fast and spontaneous movements of the eyelids, a very hard task due to the small resolution of the eye regions. Subsequently, we combine the results of eyelid tracking with the estimations of other facial features, such as the eyebrows and the lips. As a result, a hierarchical tracking framework is obtained: we demonstrate that combining two appearance-based trackers yields accurate estimates for the eyelids, eyebrows, lips, and also the 3D head pose, using low-cost video cameras and in real time. Therefore, our approach is shown to be suitable for further facial-expression analysis. Peer Reviewed
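
    As a rough illustration of the appearance-based idea (not the authors' actual eyelid model), a PCA appearance model can score a candidate eye region by how well its learned subspace reconstructs the patch, needing no edges or colour:

```python
import numpy as np

def reconstruction_error(patch, mean, basis):
    """Distance of a flattened patch to a PCA appearance subspace.

    mean  : (d,) mean appearance learned from training patches.
    basis : (d, k) orthonormal PCA basis of appearance variation.
    """
    v = patch.astype(np.float32).ravel() - mean
    coeffs = basis.T @ v             # project into the subspace
    residual = v - basis @ coeffs    # the part the model cannot explain
    return float(np.linalg.norm(residual))

def best_candidate(frame, candidates, mean, basis, extract):
    """Pick the candidate region whose appearance the model explains best;
    `extract` crops and resamples a region to the model's resolution."""
    return min(candidates,
               key=lambda box: reconstruction_error(extract(frame, box),
                                                    mean, basis))
```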

    Visual Object Tracking in First Person Vision

    Get PDF
    The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms which follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used “off-the-shelf” or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis focuses on different aspects of the FPV setting, introduces new performance measures, and relates performance to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite these difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
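
    The paper introduces its own FPV-specific measures; the standard success measure such studies build on is the area under the IoU success plot, which a generic sketch (not the TREK-150 toolkit) computes as:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Average, over IoU thresholds, of the fraction of frames whose
    predicted box overlaps the ground truth above the threshold."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))
```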

    3D Face Tracking Using Stereo Cameras with Whole Body View

    Get PDF
    All visual tracking tasks associated with people tracking are in great demand for modern applications dedicated to making human life easier and safer. In this thesis, a special case of people tracking is explored: 3D face tracking in whole-body-view video. Whole-body view means that the tracked face typically occupies no more than 5-10% of the frame area. Currently there is no reliable tracker that can track a face in long-term whole-body-view videos with luminance cameras in 3D space. I followed a non-classical approach to designing a 3D tracker: first a 2D face-tracking algorithm was developed in one view and then extended into stereo tracking. I recorded and annotated my own extensive dataset specifically for 2D face tracking in whole-body-view video and evaluated 17 state-of-the-art 2D tracking algorithms. Based on the TLD tracker, I developed a face-adapted median-flow tracker that shows superior results compared to state-of-the-art generic trackers. I explored different ways of extending 2D tracking into 3D and developed a method that uses the epipolar constraint to check the consistency of 3D tracking results. This method allows tracking failures to be detected early and improves overall 3D tracking accuracy. I demonstrated how a Kinect-based method can be compared to visual tracking methods, and compared four different visual tracking methods running on low-resolution fisheye stereo video with the Kinect face-tracking application. My main contributions are:
    - I developed a face adaptation of generic trackers that improves tracking performance in long-term whole-body-view videos.
    - I designed a method of using the epipolar constraint to check the consistency of 3D tracking results.
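
    The epipolar consistency check rests on the standard constraint x'ᵀFx = 0: if the left- and right-view trackers report corresponding points, the right point must lie near the epipolar line induced by the left point. A hedged sketch (the pixel threshold is an assumption, not the thesis's value):

```python
import numpy as np

def epipolar_distance(F, pt_left, pt_right):
    """Pixel distance of the right-image point from the epipolar line
    induced by the left-image point under fundamental matrix F."""
    a, b, c = F @ np.array([pt_left[0], pt_left[1], 1.0])  # line ax+by+c=0
    return abs(a * pt_right[0] + b * pt_right[1] + c) / np.hypot(a, b)

def tracking_consistent(F, pt_left, pt_right, max_px=3.0):
    """Flag a likely tracking failure when the two views disagree."""
    return epipolar_distance(F, pt_left, pt_right) < max_px
```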

    A Study of Exploiting Objectness for Robust Online Object Tracking

    Get PDF
    Tracking is a fundamental problem in many computer vision applications. Despite the progress over the last decade, many challenges still exist, especially when the problem is posed in real-world scenarios (e.g., cluttered backgrounds, occluded objects). Among them, drifting has been widely observed to be a problem common to the class of online tracking algorithms: when challenges such as occlusion or nonlinear deformation of the object occur, the tracker might lose the target completely in subsequent frames of an image sequence. In this work, we propose to exploit objectness to partially alleviate the drifting problem in online object tracking, and we verify the effectiveness of this idea through extensive experimental results. More specifically, a recently developed objectness measure is incorporated into the Incremental Learning for Visual Tracking (IVT) algorithm in a principled way. We devise a strategy of reinitializing the training samples in the proposed approach to improve the robustness of online tracking. Experimental results show that using the objectness measure does help to alleviate drift toward the background for certain challenging sequences.
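
    The integration into IVT is specific to that tracker, but the gating idea generalises: before an online tracker updates its appearance model with the current estimate, check an objectness score, and rebuild the training samples instead when the estimate looks like background. A loose sketch, assuming a tracker object with update/reinitialize methods and treating objectness_score as a stand-in for the measure used in the paper:

```python
def update_or_reinit(tracker, frame, box, objectness_score, tau=0.5):
    """Gate the online model update on an objectness measure (sketch).

    tracker          : assumed to expose update() and reinitialize().
    objectness_score : callable (frame, box) -> [0, 1]; stands in for
                       the objectness measure incorporated in the paper.
    """
    if objectness_score(frame, box) >= tau:
        tracker.update(frame, box)        # estimate still looks object-like
    else:
        # likely drift onto background: rebuild the training samples
        # around the current estimate instead of reinforcing the drift
        tracker.reinitialize(frame, box)
```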

    Is First Person Vision Challenging for Object Tracking?

    Full text link
    Understanding human-object interactions is fundamental in First Person Vision (FPV). Tracking algorithms which follow the objects manipulated by the camera wearer can provide useful cues to effectively model such interactions. Visual tracking solutions available in the computer vision literature have significantly improved their performance in recent years for a large variety of target objects and tracking scenarios. However, despite a few previous attempts to exploit trackers in FPV applications, a methodical analysis of the performance of state-of-the-art trackers in this domain is still missing. In this paper, we fill the gap by presenting the first systematic study of object tracking in FPV. Our study extensively analyses the performance of recent visual trackers and baseline FPV trackers with respect to different aspects and considering a new performance measure. This is achieved through TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV is challenging, which suggests that more research effort should be devoted to this problem so that tracking could benefit FPV tasks. Comment: IEEE/CVF International Conference on Computer Vision (ICCV) 2021, Visual Object Tracking Challenge VOT2021 workshop. arXiv admin note: text overlap with arXiv:2011.1226
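
    Alongside overlap-based success, such studies commonly report centre-location-error precision; a generic sketch (not the paper's new measure) is:

```python
import numpy as np

def precision(pred_boxes, gt_boxes, threshold_px=20.0):
    """Fraction of frames whose predicted box centre lies within
    threshold_px pixels of the ground-truth box centre."""
    def center(b):
        return np.array([b[0] + b[2] / 2.0, b[1] + b[3] / 2.0])
    errors = np.array([np.linalg.norm(center(p) - center(g))
                       for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean(errors <= threshold_px))
```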

    Adaptive Real-Time Image Processing for Human-Computer Interaction

    Get PDF