
    Robust Correlation Tracking for UAV with Feature Integration and Response Map Enhancement

    Recently, correlation filter (CF)-based tracking algorithms have attracted extensive interest in the field of unmanned aerial vehicle (UAV) tracking. Nonetheless, existing trackers still struggle to select suitable features and to alleviate model drift during online UAV tracking. In this paper, a robust CF-based tracker with feature integration and response map enhancement is proposed. Concretely, we develop a novel feature integration method that describes the target comprehensively by leveraging auxiliary gradient information extracted from the binary representation. Subsequently, the integrated features are used to learn a background-aware correlation filter (BACF) that generates a response map indicating the target location. To mitigate the risk of model drift, we introduce saliency awareness into the BACF framework and further propose an adaptive response fusion strategy to enhance the discriminative capability of the response map. Moreover, a dynamic model update mechanism is designed to prevent filter contamination and maintain tracking stability. Experiments on three public benchmarks verify that the proposed tracker outperforms several state-of-the-art algorithms while running at real-time speed, making it well suited to UAV tracking scenarios.
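    The response-map pipeline described above can be sketched in a few lines: a correlation filter is applied in the frequency domain, two response maps are fused, and the peak of the fused map gives the target location. This is only an illustrative sketch with a fixed fusion weight and random stand-in data, not the paper's BACF learning, saliency-aware term, or adaptive fusion rule.

    # Minimal sketch of correlation-filter tracking with response-map fusion.
    # Not the paper's BACF implementation; features, saliency awareness and
    # the filter learning step are simplified for illustration.
    import numpy as np

    def cf_response(patch, filt):
        """Correlation response map computed in the frequency domain."""
        return np.real(np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(filt))))

    def fuse_responses(r_main, r_aux, weight=0.5):
        """Fuse two response maps; the fixed weight is an assumption, the paper
        derives it adaptively from response quality."""
        return weight * r_main + (1.0 - weight) * r_aux

    def locate_target(response):
        """The peak of the fused response map gives the estimated target shift."""
        return np.unravel_index(np.argmax(response), response.shape)

    # Usage with random stand-in data (real trackers use hand-crafted features).
    patch = np.random.rand(64, 64)
    aux = np.random.rand(64, 64)
    filt = np.random.rand(64, 64)
    resp = fuse_responses(cf_response(patch, filt), cf_response(aux, filt))
    print(locate_target(resp))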

    Predictive Visual Tracking: A New Benchmark and Baseline Approach

    As a crucial robotic perception capability, visual tracking has been intensively studied in recent years. In real-world scenarios, the onboard processing time of the image stream inevitably leads to a discrepancy between the tracking results and the real-world states. However, existing visual tracking benchmarks commonly run trackers offline and ignore this latency in the evaluation. In this work, we address the more realistic problem of latency-aware tracking. State-of-the-art trackers are evaluated in aerial scenarios with new metrics that jointly assess tracking accuracy and efficiency. Moreover, a new predictive visual tracking baseline is developed to compensate for the latency stemming from onboard computation. Our latency-aware benchmark provides a more realistic evaluation of trackers for robotic applications, and exhaustive experiments demonstrate the effectiveness of the proposed predictive visual tracking baseline. Comment: 7 pages, 5 figures.
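    The latency-aware evaluation idea can be illustrated with a small simulation: a tracker's output for a frame only becomes usable after its processing delay, so each evaluation frame is matched against the latest result that has finished by the time that frame arrives. The function name and timing values below are illustrative assumptions, not the benchmark's actual metrics.

    # Sketch of latency-aware evaluation: raw_results[i] is the tracker output
    # for frame i, usable only after frame_times[i] + proc_times[i].
    def latency_aware_results(frame_times, proc_times, raw_results):
        ready_at = [t + d for t, d in zip(frame_times, proc_times)]
        out, last = [], None
        for t in frame_times:
            # latest result whose computation finished before this frame arrived
            for i, r in enumerate(ready_at):
                if r <= t:
                    last = raw_results[i]
            out.append(last)  # stays None until the first result is ready
        return out

    # Usage: a 30 FPS stream with a 50 ms per-frame tracker latency.
    frames = [i / 30.0 for i in range(5)]
    print(latency_aware_results(frames, [0.05] * 5, list(range(5))))
    # Each evaluation frame is scored against a stale (earlier) result.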

    PVT++: A Simple End-to-End Latency-Aware Visual Tracking Framework

    Visual object tracking is an essential capability of intelligent robots. Most existing approaches ignore the online latency that can cause severe performance degradation during real-world processing. Especially for unmanned aerial vehicles, where robust tracking is more challenging and onboard computation is limited, the latency issue can be fatal. In this work, we present a simple framework for end-to-end latency-aware tracking, i.e., end-to-end predictive visual tracking (PVT++). PVT++ can turn most leading-edge trackers into predictive trackers by appending an online predictor. Unlike existing solutions that rely on model-based approaches, our framework is learnable, so it can take not only motion information as input but also visual cues, or a combination of both. Moreover, since PVT++ is end-to-end optimizable, joint training can further boost latency-aware tracking performance. Additionally, this work presents an extended latency-aware evaluation benchmark for assessing any-speed trackers in the online setting. Empirical results on a robotic platform from an aerial perspective show that PVT++ achieves up to a 60% performance gain on various trackers and exhibits better robustness than prior model-based solutions, largely mitigating the degradation caused by latency. Code and models will be made public. Comment: 18 pages, 10 figures.
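    The core idea of appending an online predictor can be sketched as follows: because the tracker's output lags behind the current frame by the processing latency, a predictor extrapolates the box forward to the frame being displayed. PVT++ itself learns this mapping from motion and visual cues; the constant-velocity extrapolation below is only a model-based stand-in, and the box values are hypothetical.

    # Sketch of the "append an online predictor" idea: extrapolate the tracker's
    # stale (x, y, w, h) output forward by the latency, here with a simple
    # constant-velocity model rather than PVT++'s learned predictor.
    def predict_box(prev_box, curr_box, latency_frames):
        """Extrapolate each box coordinate forward by latency_frames frames."""
        return tuple(c + latency_frames * (c - p) for p, c in zip(prev_box, curr_box))

    # Usage: results lag by 2 frames; the target moves +5 px per frame in x.
    prev = (100.0, 50.0, 30.0, 60.0)
    curr = (105.0, 50.0, 30.0, 60.0)
    print(predict_box(prev, curr, 2))  # (115.0, 50.0, 30.0, 60.0)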

    Online Vehicle Subgroup Scheduling for Feature-Level Cooperative Classification

    In future transportation systems, autonomous vehicles (AVs) are expected to operate in complex driving environments without human intervention. Their ability to continuously perceive and understand the surrounding environment is realized through a range of environment perception tasks, including object classification, object detection, and object tracking. In this thesis, we investigate an autonomous driving scenario in which a group of AVs is tasked with tracking a common object over a period of time. To address the inefficient utilization of global sensing data and the excessive consumption of computing resources in conventional standalone tracking, a cooperative classification scheme is proposed to replace the non-cooperative classification scheme conventionally employed in the tracking process. Taking into account the dynamic nature of operating environments and the varying availability of computing resources, a subgroup scheduling problem is studied to optimize the performance of the cooperative classification scheme. Our objective is to determine the composition and role assignment of each scheduled subgroup so as to minimize the total computation demand required by the group for cooperative classification, while satisfying classification accuracy and delay requirements. A learning-based solution is developed that integrates multi-armed bandit (MAB) theory with distance- and line-of-sight-based subgroup selection criteria to guide subgroup scheduling decisions in the presence of randomness and uncertainty. Simulation results confirm the effectiveness of the proposed scheme and algorithm, which outperform the baseline schemes and algorithms by delivering higher classification accuracy and lower classification delay with less computation demand.
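    The MAB component of the scheduling solution can be illustrated with a generic epsilon-greedy sketch over candidate subgroups. The thesis couples MAB with distance- and line-of-sight-based selection and a reward reflecting accuracy, delay, and computation demand; the subgroup names and reward function below are stand-ins for illustration only.

    # Generic epsilon-greedy multi-armed bandit over candidate vehicle subgroups.
    # The reward function is a stub; the thesis' reward reflects classification
    # accuracy, delay, and computation demand.
    import random

    def epsilon_greedy_schedule(subgroups, reward_fn, rounds=100, eps=0.1):
        counts = {g: 0 for g in subgroups}
        values = {g: 0.0 for g in subgroups}
        for _ in range(rounds):
            if random.random() < eps:
                g = random.choice(subgroups)                  # explore
            else:
                g = max(subgroups, key=lambda s: values[s])   # exploit best estimate
            r = reward_fn(g)                                  # observed reward
            counts[g] += 1
            values[g] += (r - values[g]) / counts[g]          # running-mean update
        return max(subgroups, key=lambda s: values[s])

    # Usage with a toy reward that favors subgroup "B".
    best = epsilon_greedy_schedule(
        ["A", "B", "C"],
        lambda g: random.gauss(1.0 if g == "B" else 0.5, 0.1))
    print(best)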