
    A Universal Update-pacing Framework For Visual Tracking

    This paper proposes a novel framework to alleviate the model-drift problem in visual tracking, based on paced updates and trajectory selection. Given a base tracker, an ensemble of trackers is generated in which each tracker's update behavior is paced; each tracker then traces the target object forward and backward to generate a pair of trajectories over an interval. We then implicitly perform self-examination based on each tracker's trajectory pair and select the most robust tracker. The proposed framework effectively leverages the temporal context of sequential frames and avoids learning corrupted information. Extensive experiments on the standard benchmark suggest that the proposed framework achieves superior performance against state-of-the-art trackers. Comment: Submitted to ICIP 201
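    The forward/backward self-examination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of mean Euclidean distance between trajectory points, and the argmin selection rule are all assumptions.

```python
import numpy as np

def forward_backward_error(forward_traj, backward_traj):
    """Mean Euclidean distance between a forward trajectory and its
    time-reversed backward counterpart; a small error indicates a
    self-consistent (robust) tracker over the interval."""
    fwd = np.asarray(forward_traj, dtype=float)
    bwd = np.asarray(backward_traj, dtype=float)[::-1]  # align time axes
    return float(np.mean(np.linalg.norm(fwd - bwd, axis=1)))

def select_most_robust(trajectory_pairs):
    """Pick the index of the tracker whose forward/backward pair agrees best."""
    errors = [forward_backward_error(f, b) for f, b in trajectory_pairs]
    return int(np.argmin(errors))
```

    A tracker that has drifted tends to fail to retrace its own path backward, so its forward/backward disagreement grows and it loses the selection.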

    Robust Visual Tracking by Exploiting the Historical Tracker Snapshots

    © 2015 IEEE. Variations in target appearance due to illumination changes, heavy occlusions, and abrupt motions are the major factors behind tracking failures. In this paper, we show that these failures can be handled effectively by exploiting the trajectory consistency between the current tracker and its historical trained snapshots. We propose a Scale-adaptive Multi-Expert (SME) tracker, which combines the current tracker and its historical trained snapshots into a multi-expert ensemble. The best expert in the ensemble is then selected according to an accumulated trajectory-consistency criterion. The base tracker estimates translation accurately with a regression-based correlation filter, and an effective scale-adaptive scheme is introduced to handle scale changes on the fly. SME is extensively evaluated on the 51-sequence tracking benchmark and the VOT2015 dataset. The experimental results demonstrate the excellent performance of the proposed approach against state-of-the-art methods at real-time speed.
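    One way to picture the accumulated trajectory-consistency selection is the toy sketch below. It is an assumption-laden illustration, not SME's actual criterion: it scores each expert (current tracker plus snapshots) by its accumulated deviation from the ensemble's median path over a window, and the function names are invented.

```python
import numpy as np

def accumulated_consistency(expert_trajs):
    """Accumulated disagreement of each expert's recent trajectory
    (target centres over a window) with the ensemble median path.
    Lower accumulated disagreement = more consistent expert."""
    trajs = np.asarray(expert_trajs, dtype=float)   # (n_experts, T, 2)
    median = np.median(trajs, axis=0)               # ensemble consensus path
    return np.linalg.norm(trajs - median, axis=2).sum(axis=1)

def select_best_expert(expert_trajs):
    """Return the index of the expert with the best accumulated consistency."""
    return int(np.argmin(accumulated_consistency(expert_trajs)))
```

    The historical snapshots act as a safety net: if the current tracker corrupts its model and diverges from the consensus, an older, uncorrupted snapshot wins the selection.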

    Map-enhanced visual taxiway extraction for autonomous taxiing of UAVs

    In this paper, a map-enhanced method is proposed for vision-based taxiway centreline extraction, a prerequisite of autonomous visual navigation systems for unmanned aerial vehicles. Compared with other sensors, cameras provide richer information. Consequently, vision-based navigation has been studied intensively over the past two decades, and computer vision techniques have been shown to be capable of dealing with various problems in applications. However, these techniques have significant drawbacks: their accuracy and robustness may not meet the required standard in some application scenarios. In this paper, a taxiway map is incorporated into the analysis as prior knowledge to improve vehicle localisation and vision-based centreline extraction. We develop a map-updating algorithm so that the traditional map can adapt to a dynamic environment via Bayesian learning. The developed method is illustrated in a simulation study.
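    The Bayesian map-update idea can be sketched with a single-cell belief update. This is a generic Bayes-rule illustration under assumed sensor likelihoods, not the paper's algorithm: the function name, the binary hit/miss detection model, and the likelihood values are all assumptions.

```python
def bayes_update(prior, likelihood_hit, likelihood_miss, observed_hit):
    """One Bayesian update of a map cell's probability of containing the
    taxiway centreline, given a noisy visual detection.

    likelihood_hit  = P(detection | centreline present)
    likelihood_miss = P(detection | centreline absent)  (false-alarm rate)
    """
    if observed_hit:
        num = likelihood_hit * prior
        den = num + likelihood_miss * (1.0 - prior)
    else:
        num = (1.0 - likelihood_hit) * prior
        den = num + (1.0 - likelihood_miss) * (1.0 - prior)
    return num / den
```

    Applying this update frame after frame lets the prior map drift toward what the camera actually observes, which is how a static map can track a dynamic environment.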

    Emerging New Trends in Hybrid Vehicle Localization Systems


    Multi-target pig tracking algorithm based on joint probability data association and particle filter

    In order to evaluate the health status of pigs in time, accurately monitor disease dynamics in live pigs, and reduce morbidity and mortality under the existing large-scale farming model, pig detection and tracking technology based on machine vision is used to monitor pig behavior. However, it is challenging to detect and track pigs efficiently under the noise caused by occlusion and interaction between targets. In view of the actual breeding conditions of pigs and the limitations of existing behavior-monitoring technology for individual pigs, this study proposed a multi-target tracking algorithm based on joint probability data association and particle filtering, using color features, the target centroid, and the length-width ratio of the minimum circumscribed rectangle as features. Experimental results show the proposed algorithm can quickly and accurately track pigs in video, cope with partial occlusions, and recover tracks after temporary loss.
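    The particle-filter side of such a scheme can be illustrated by a weighting step that combines several features per particle. This is a generic sketch, not the study's algorithm: the three-component feature vector (color-histogram distance, centroid offset, aspect-ratio difference), the independent-Gaussian likelihood, and the function name are assumptions, and the joint probability data association step is omitted.

```python
import numpy as np

def particle_weights(particle_features, measured_features, feature_scales):
    """Weight each particle by the joint Gaussian similarity of its feature
    vector to the measurement; features are scaled so that, e.g., a color
    distance and a pixel offset contribute on comparable terms."""
    diffs = np.abs(np.asarray(particle_features, dtype=float)
                   - np.asarray(measured_features, dtype=float))
    scaled = diffs / np.asarray(feature_scales, dtype=float)
    # independent Gaussian likelihood per feature, multiplied across features
    w = np.exp(-0.5 * np.sum(scaled ** 2, axis=1))
    return w / w.sum()   # normalise so the weights form a distribution
```

    In a full tracker, a data-association stage would first decide which measurement belongs to which pig before these weights drive the resampling step.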

    Adaptive Framework for Robust Visual Tracking

    Visual tracking is a difficult and challenging problem for numerous reasons, such as small object size, pose-angle variations, occlusion, and camera motion. Object tracking has many real-world applications, such as surveillance systems, moving organs in medical imaging, and robotics. Traditional tracking methods lack a recovery mechanism for situations in which the tracked objects drift away from the ground truth. In this paper, we propose a novel framework for tracking moving objects based on a composite framework and a reporter mechanism. The composite framework tracks moving objects using different trackers and produces pairs of forward/backward tracklets. A robustness score is then calculated for each tracker from its forward/backward tracklet pair to find the most reliable moving-object trajectory. The reporter serves as the recovery mechanism, correcting the moving-object trajectory when the robustness score is very low, mainly using a combination of a particle filter and template matching. The proposed framework can handle partial and heavy occlusions; moreover, its structure enables the integration of other user-specific trackers. Extensive experiments on recent benchmarks show that the proposed framework outperforms current state-of-the-art trackers thanks to its powerful trajectory analysis and recovery mechanism; the framework improved the area under the curve from 68% to 70.8% on the OTB-100 benchmark.
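    The robustness score and the reporter's trigger can be sketched as follows. This is an assumed formulation, not the paper's: mapping the forward/backward tracklet disagreement through an exponential to a (0, 1] score, and the particular scale and threshold values, are illustrative choices.

```python
import numpy as np

def robustness_score(forward_tracklet, backward_tracklet, scale=20.0):
    """Map forward/backward tracklet disagreement (mean point distance,
    in pixels) to a score in (0, 1]; 1.0 means perfect self-consistency."""
    fwd = np.asarray(forward_tracklet, dtype=float)
    bwd = np.asarray(backward_tracklet, dtype=float)[::-1]  # align time axes
    err = np.mean(np.linalg.norm(fwd - bwd, axis=1))
    return float(np.exp(-err / scale))

def needs_recovery(score, threshold=0.3):
    """The reporter fires (e.g. particle filter + template matching in the
    paper's design) when the best available score drops below the threshold."""
    return score < threshold
```

    Separating scoring from recovery is what lets the framework plug in other user-specific trackers: any tracker that yields forward/backward tracklets can be scored the same way.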

    Dependent multiple cue integration for robust tracking

    We propose a new technique for fusing multiple cues to robustly segment an object from its background in video sequences that suffer from abrupt changes in both the illumination and the position of the target. Robustness is achieved by integrating appearance and geometric object features and by estimating them with Bayesian filters, such as Kalman or particle filters. In particular, each filter estimates the state of a specific object feature, conditionally dependent on another feature estimated by a distinct filter. This dependence yields improved target representations, allowing us to segment the target from the background even in non-stationary sequences. Considering that the procedure of Bayesian filters may be described by a "hypotheses generation-hypotheses correction" strategy, the major novelty of our methodology compared with previous approaches is that the mutual dependence between filters is considered during feature observation, that is, in the "hypotheses correction" stage, instead of when generating the hypotheses. This proves to be much more effective in terms of accuracy and reliability. The proposed method is analytically justified and applied to develop a robust tracking system that simultaneously adapts online the color space in which the image points are represented, the color distributions, the contour of the object, and its bounding box. Results with synthetic data and real video sequences demonstrate the robustness and versatility of our method.
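    A toy version of "dependence in the correction stage" is sketched below with two scalar Kalman filters. This is a deliberately simplified illustration, not the paper's formulation: the identity dynamics, the specific way filter B's observation noise is inflated by its disagreement with filter A's freshly corrected estimate, and all names are assumptions.

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman correction step with identity dynamics/observation."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

def dependent_correction(xa, Pa, za, Ra, xb, Pb, zb, Rb, coupling=0.5):
    """Correct feature A first, then condition feature B's correction on A:
    B's observation noise is inflated when its measurement disagrees with
    A's corrected estimate, so an unreliable cue cannot drag the joint
    state away. With coupling=0.0 the two filters are independent."""
    xa, Pa = kalman_update(xa, Pa, za, Ra)
    Rb_eff = Rb * (1.0 + coupling * abs(zb - xa))
    xb, Pb = kalman_update(xb, Pb, zb, Rb_eff)
    return (xa, Pa), (xb, Pb)
```

    The contrast with prior work is where the dependence lives: here it modifies the correction (observation) step, whereas coupling the filters at hypothesis generation would instead bias which states get proposed.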