
    Thermo-visual feature fusion for object tracking using multiple spatiogram trackers

    In this paper, we propose a framework that can efficiently combine features for robust tracking based on fusing the outputs of multiple spatiogram trackers. This is achieved without the exponential increase in storage and processing that other multimodal tracking approaches suffer from. The framework allows the features to be split arbitrarily between the trackers, as well as providing the flexibility to add, remove or dynamically weight features. We derive a mean-shift type algorithm for the framework that allows efficient object tracking with very low computational overhead. We especially target the fusion of thermal infrared and visible spectrum features as the most useful features for automated surveillance applications. Results are shown on multimodal video sequences, clearly illustrating the benefits of combining multiple features using our framework.
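
    A minimal sketch of the fusion idea described above (not the paper's exact algorithm): each spatiogram tracker, driven by its own feature subset (e.g. thermal vs. visible), proposes a mean-shift displacement, and the proposals are blended with per-tracker reliability weights. The weight values and the averaging rule are illustrative assumptions.

```python
import numpy as np

def fused_mean_shift_step(shifts, weights):
    """Combine per-tracker mean-shift vectors into a single displacement.

    shifts  : list of (dx, dy) proposals, one per spatiogram tracker
    weights : per-tracker reliability weights (can be adapted online)
    """
    shifts = np.asarray(shifts, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalise so the weights sum to 1
    return (w[:, None] * shifts).sum(axis=0)

# Example: a thermal tracker and a visible-spectrum tracker disagree slightly;
# the fused update is their reliability-weighted average.
print(fused_mean_shift_step([(2.0, 1.0), (3.0, -1.0)], weights=[0.6, 0.4]))
```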

    Object Tracking using Statistic-based Feature Fusion Technique

    Object tracking in wide area motion imagery (WAMI) is challenging because of many factors, including small target sizes, viewpoint changes, object rotation, occlusions, and shadows. One way to overcome these challenges is to fuse multiple features of different types to obtain a robust understanding of the object. For multi-feature fusion based tracking, the weighting of the features strongly affects the outcome. While a constant weighting scheme can be obtained from training sequences, an adaptive method may make better use of the features. An adaptive weighting scheme should favor the features that were most discriminative in the previous frames. A known way to determine a feature's ability to distinguish the target from the background is statistical analysis. We propose to use a statistics-based fusion method to better exploit rotation-invariant features for object tracking. The effectiveness of the fusion method is compared to a constant weighting scheme on eight sequences from two WAMI datasets.
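
    A hedged sketch of one possible statistics-based weighting, in the spirit of variance-ratio feature selection: features whose foreground/background log-likelihood ratio separates the two regions well receive larger weights. The abstract does not specify the statistic, so this particular choice, the bin count, and the normalisation are illustrative assumptions.

```python
import numpy as np

def _weighted_var(L, a):
    a = a / a.sum()
    m = (a * L).sum()
    return (a * L * L).sum() - m * m

def variance_ratio(fg_samples, bg_samples, bins=16, eps=1e-6):
    """Discriminability of one scalar feature from foreground/background samples."""
    lo = min(fg_samples.min(), bg_samples.min())
    hi = max(fg_samples.max(), bg_samples.max())
    p, _ = np.histogram(fg_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(bg_samples, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    L = np.log(p / q)                           # per-bin log-likelihood ratio
    return _weighted_var(L, (p + q) / 2) / (_weighted_var(L, p) + _weighted_var(L, q))

def adaptive_weights(ratios):
    """Turn per-feature variance ratios from the previous frame into fusion weights."""
    r = np.asarray(ratios, dtype=float)
    return r / r.sum()
```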

    Multiple video object tracking using variational inference

    In this article, a Bayesian filter approximation is proposed for simultaneous multiple-target detection and tracking, and is then applied to object detection in video from a moving camera. The inference uses evidence lower bound optimisation for Gaussian mixtures. The proposed filter is capable of real-time data processing and may be used as a basis for data fusion. The method was tested on video with a dynamic background, where velocity with respect to the background is used to discriminate the objects. The framework does not depend on the feature space, meaning that different feature spaces can be used freely while preserving the structure of the filter.
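
    As a compact stand-in for the variational (ELBO-based) Gaussian-mixture inference mentioned above, the sketch below uses scikit-learn's BayesianGaussianMixture, which optimises the evidence lower bound for a mixture model. The feature choice (residual velocity after ego-motion compensation) and the speed threshold are assumptions for illustration; they are not taken from the article.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(0.0, 0.3, size=(300, 2))   # near-zero residual velocity
objects    = rng.normal(4.0, 0.5, size=(40, 2))    # fast-moving targets
velocities = np.vstack([background, objects])

# Variational GMM fitted by maximising the evidence lower bound.
gmm = BayesianGaussianMixture(n_components=5, covariance_type="full",
                              weight_concentration_prior=0.1, random_state=0)
labels = gmm.fit_predict(velocities)

# Components with a large mean speed are treated as tracked objects.
speeds = np.linalg.norm(gmm.means_, axis=1)
object_components = np.where(speeds > 1.0)[0]
print("object components:", object_components)
```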

    Robust Correlation Tracking for UAV Videos via Feature Fusion and Saliency Proposals

    With the growing availability of low-cost, commercially available unmanned aerial vehicles (UAVs), more and more research effort has focused on object tracking in videos recorded from UAVs. However, tracking from UAV videos poses many challenges due to platform motion, including background clutter, occlusion, and illumination variation. This paper tackles these challenges by proposing a correlation filter-based tracker with feature fusion and saliency proposals. First, we integrate multiple feature types, such as dimensionality-reduced color names (CN) and histograms of oriented gradients (HOG), to improve the performance of correlation filters for UAV videos. Yet a fused feature acting as a multivector descriptor cannot be used directly in prior correlation filters. Therefore, a fused-feature correlation filter is proposed that can convolve directly with a multivector descriptor to obtain a single-channel response indicating the location of the object. Furthermore, we introduce saliency proposals as a re-detector to reduce background interference caused by occlusion or other distractors. Finally, an adaptive template-update strategy based on saliency information is used to alleviate possible model drift. Systematic comparative evaluations on two popular UAV datasets show the effectiveness of the proposed approach.
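
    A minimal multi-channel correlation-filter sketch of the general idea (not the paper's exact formulation): HOG and colour-name channels are stacked, a filter is learned per channel in the Fourier domain with a shared denominator, and the per-channel responses are summed into a single response map whose peak gives the target location. The regularisation weight and the Gaussian label are illustrative assumptions.

```python
import numpy as np

def train_filter(feat, label, lam=1e-2):
    """feat: (C, H, W) feature channels; label: (H, W) desired Gaussian response."""
    F = np.fft.fft2(feat, axes=(-2, -1))
    Y = np.fft.fft2(label)
    denom = (F * np.conj(F)).sum(axis=0).real + lam   # energy shared across channels
    return Y[None] * np.conj(F) / denom[None]         # per-channel conjugate filter

def detect(h_conj, feat):
    """Sum the per-channel responses and return the location of the peak."""
    Z = np.fft.fft2(feat, axes=(-2, -1))
    response = np.real(np.fft.ifft2((h_conj * Z).sum(axis=0)))
    return np.unravel_index(response.argmax(), response.shape)
```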

    Efficient multi-view multi-target tracking using a distributed camera network

    In this paper, we propose a multi-target tracking method using a distributed camera network that can effectively handle the occlusion and re-identification problems by combining advanced deep learning with distributed information fusion. The targets are first detected using a fast deep-learning-based object detector. We then combine deep visual feature information and spatial trajectory information in the Hungarian algorithm for robust target association. The deep visual features are extracted from a convolutional neural network pre-trained on a large-scale person re-identification dataset. The spatial trajectories of multiple targets in our framework are derived from a multiple-view information fusion method, which employs an information-weighted consensus filter for fusion and tracking. In addition, we propose an efficient track processing method for ID assignment using multiple-view information. Experiments on public datasets show that the proposed method is robust to the occlusion and re-identification problems and achieves superior performance compared to state-of-the-art methods.
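
    A hedged sketch of the association step described above: a cost matrix mixes appearance distance (from re-identification embeddings) with spatial distance (from the fused trajectory prediction), and the Hungarian algorithm yields the assignment. The mixing weight alpha and the distance choices are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(track_embs, det_embs, track_pos, det_pos, alpha=0.7):
    """Match existing tracks to new detections with a fused cost matrix."""
    appearance = cdist(track_embs, det_embs, metric="cosine")   # ReID feature distance
    spatial = cdist(track_pos, det_pos, metric="euclidean")
    spatial = spatial / (spatial.max() + 1e-6)                  # bring to a comparable scale
    cost = alpha * appearance + (1.0 - alpha) * spatial
    rows, cols = linear_sum_assignment(cost)                    # Hungarian assignment
    return list(zip(rows, cols)), cost[rows, cols]
```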

    Deformable Object Tracking with Gated Fusion

    The tracking-by-detection framework has received growing attention through its integration with Convolutional Neural Networks (CNNs). Existing tracking-by-detection methods, however, fail to track objects with severe appearance variations. This is because the traditional convolution operation is performed on fixed grids and thus may not find the correct response while the object is changing pose or under varying environmental conditions. In this paper, we propose a deformable convolution layer to enrich the target appearance representations in the tracking-by-detection framework. We aim to capture target appearance variations via deformable convolution, which adaptively enhances the original features. In addition, we propose a gated fusion scheme to control how the variations captured by the deformable convolution affect the original appearance. The enriched feature representation obtained through deformable convolution facilitates the discrimination of the target object from the background by the CNN classifier. Extensive experiments on standard benchmarks show that the proposed tracker performs favorably against state-of-the-art methods.
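
    A hedged PyTorch sketch of the idea, not the authors' code: a deformable convolution enriches the backbone features, and a learned gate decides, per position, how much of the deformed features to blend with the original ones. The channel sizes, offset prediction, and gate design are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class GatedDeformBlock(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets predicted from the input features (2 offsets per kernel tap).
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
        # Gate in [0, 1] controlling how much of the deformed features is used.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        deformed = self.deform(x, self.offset(x))
        g = self.gate(torch.cat([x, deformed], dim=1))
        return g * deformed + (1.0 - g) * x       # gated fusion of the two streams

feat = torch.randn(1, 64, 31, 31)                 # toy backbone feature map
print(GatedDeformBlock(64)(feat).shape)
```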

    Revisiting Color-Event based Tracking: A Unified Network, Dataset, and Metric

    Combining color and event cameras (also called Dynamic Vision Sensors, DVS) for robust object tracking is a newly emerging research topic. Existing color-event tracking frameworks usually contain multiple scattered modules, including feature extraction, fusion, matching, and interactive learning, which can lead to low efficiency and high computational complexity. In this paper, we propose a single-stage backbone network for Color-Event Unified Tracking (CEUTrack), which achieves the above functions simultaneously. Given the event points and RGB frames, we first transform the points into voxels and crop the template and search regions for both modalities. These regions are then projected into tokens and fed in parallel into the unified Transformer backbone network. The output features are fed into a tracking head for target object localization. Our proposed CEUTrack is simple, effective, and efficient, achieving over 75 FPS and new state-of-the-art performance. To better validate the effectiveness of our model and address the data deficiency of this task, we also propose a generic, large-scale benchmark dataset for color-event tracking, termed COESOT, which contains 90 categories and 1354 video sequences. Additionally, a new evaluation metric named BOC is introduced in our evaluation toolkit to measure prominence over the baseline methods. We hope the newly proposed method, dataset, and evaluation metric provide a better platform for color-event-based tracking. The dataset, toolkit, and source code will be released at: https://github.com/Event-AHU/COESOT
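
    A hedged sketch of the pre-processing step the abstract mentions: asynchronous event points (x, y, t, polarity) are binned into a voxel grid with a fixed number of temporal slices before being cropped and tokenised. The number of bins and the polarity accumulation are illustrative assumptions, not CEUTrack's exact recipe.

```python
import numpy as np

def events_to_voxels(events, height, width, num_bins=5):
    """events: (N, 4) array of (x, y, t, p) with polarity p in {-1, +1}."""
    x, y, t, p = events.T
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)   # normalise time to [0, 1]
    b = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    voxels = np.zeros((num_bins, height, width), dtype=np.float32)
    np.add.at(voxels, (b, y.astype(int), x.astype(int)), p)  # accumulate polarity per cell
    return voxels
```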