
    Object detection and tracking benchmark in industry based on improved correlation filter

    Full text link
    Real-time object detection and tracking have been shown to be the basis of intelligent production for Industry 4.0 applications. It is a challenging task because of the varied and distorted data found in complex industrial settings. The correlation filter (CF) has been used to trade off low computational cost against high performance. However, the traditional CF training strategy cannot achieve satisfactory performance on such varied industrial data, because simple sampling (bagging) during the training process does not find exact solutions in a data space with large diversity. In this paper, we propose Dijkstra-distance based correlation filters (DBCF), which establish a new learning framework that embeds distribution-related constraints into multi-channel correlation filters (MCCF). DBCF is able to handle the large variations present in industrial data by refining those constraints based on the shortest path among all solutions. To evaluate DBCF, we build a new dataset as a benchmark for Industry 4.0 applications. Extensive experiments demonstrate that DBCF achieves high performance and exceeds the state-of-the-art methods. The dataset and source code can be found at https://github.com/bczhangbczhan
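
    The abstract builds on the multi-channel correlation filter (MCCF) baseline. As a point of reference, below is a minimal numpy sketch of a common simplified linear MCCF training and detection step in the Fourier domain, in which channels are tied only through a shared energy term. The Dijkstra-distance based constraints that define DBCF are not reproduced here; variable names and the regularization weight are illustrative.

```python
import numpy as np

def train_mccf(features, target_response, lam=1e-2):
    """Simplified multi-channel correlation filter in the Fourier domain.

    features: (H, W, C) real-valued feature channels from the target patch.
    target_response: (H, W) desired Gaussian-shaped response.
    Returns the per-channel filter in the Fourier domain, shape (H, W, C).
    """
    X = np.fft.fft2(features, axes=(0, 1))          # per-channel spectra
    Y = np.fft.fft2(target_response)                # desired response spectrum
    denom = np.sum(X * np.conj(X), axis=2) + lam    # shared energy term + ridge
    return np.conj(X) * Y[..., None] / denom[..., None]

def detect(H, features):
    """Correlate the learned filter with a new patch and return the response map."""
    X = np.fft.fft2(features, axes=(0, 1))
    return np.fft.ifft2(np.sum(H * X, axis=2)).real
```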

    Object Tracking in Hyperspectral Videos with Convolutional Features and Kernelized Correlation Filter

    Full text link
    Target tracking in hyperspectral videos is a new research topic. In this paper, a novel method based on a convolutional network and the Kernelized Correlation Filter (KCF) framework is presented for tracking objects of interest in hyperspectral videos. We extract a set of normalized three-dimensional cubes from the target region and use them as fixed convolution filters that contain spectral information surrounding the target. The feature maps generated by the convolutional operations are combined to form a three-dimensional representation of the object, thereby providing an effective encoding of local spectral-spatial information. We show that a simple two-layer convolutional network is sufficient to learn robust representations without the need for offline training on a large dataset. In the tracking step, KCF is adopted to distinguish targets from the neighboring environment. Experimental results demonstrate that the proposed method performs well on sample hyperspectral videos, and outperforms several state-of-the-art methods tested on grayscale and color videos of the same scenes. Comment: Accepted by ICSM 201
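
    To illustrate the cube-based feature extraction described above, here is a hedged numpy/scipy sketch: 3-D cubes are sampled from the target region, normalized, and used as fixed convolution kernels to produce spectral-spatial feature maps. The random sampling locations, the exact normalization, and the omitted second layer are assumptions; the paper's details may differ.

```python
import numpy as np
from scipy.ndimage import convolve

def extract_cube_filters(target_patch, cube_size=(3, 3, 3), num_filters=8, seed=0):
    """Sample normalized 3-D cubes (space x space x bands) from the target region.

    target_patch: (H, W, B) hyperspectral patch around the target.
    Returns a list of zero-mean, unit-norm cubes used as fixed convolution kernels.
    """
    rng = np.random.default_rng(seed)
    H, W, B = target_patch.shape
    ch, cw, cb = cube_size
    filters = []
    for _ in range(num_filters):
        y = rng.integers(0, H - ch + 1)
        x = rng.integers(0, W - cw + 1)
        b = rng.integers(0, B - cb + 1)
        cube = target_patch[y:y+ch, x:x+cw, b:b+cb].astype(float)
        cube -= cube.mean()
        cube /= (np.linalg.norm(cube) + 1e-8)
        filters.append(cube)
    return filters

def convolve_features(patch, filters):
    """Convolve the patch with each fixed cube filter and stack the responses."""
    maps = [convolve(patch.astype(float), f, mode='nearest') for f in filters]
    return np.stack(maps, axis=-1)   # (H, W, B, num_filters) spectral-spatial encoding
```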

    Self-Selective Correlation Ship Tracking Method for Smart Ocean System

    Full text link
    In recent years, with the development of the marine industry, the navigation environment has become more complicated. Artificial intelligence technologies such as computer vision can recognize, track and count sailing ships to ensure maritime security and facilitate management within a Smart Ocean System. Aiming at the scaling problem and the boundary effect problem of traditional correlation filtering methods, we propose a self-selective correlation filtering method based on box regression (BRCF). The proposed method mainly includes: 1) a self-selective model with negative sample mining, which effectively reduces the boundary effect while strengthening the classification ability of the classifier; 2) a bounding box regression method combined with a key point matching method for scale prediction, leading to fast and efficient calculation. The experimental results show that the proposed method can effectively deal with ship size changes and background interference. The success rates and precisions were higher than Discriminative Scale Space Tracking (DSST) by over 8 percentage points on the marine traffic dataset of our laboratory. In terms of processing speed, the proposed method is faster than DSST by nearly 22 frames per second (FPS).
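
    As a rough illustration of key-point-based scale prediction (one of the two components named in the abstract), the sketch below estimates the scale change of a target from the median ratio of pairwise distances between matched key points. This is a common key-point scale estimate, not the paper's exact formulation, and the bounding box regression component is omitted.

```python
import numpy as np

def scale_from_keypoints(prev_pts, curr_pts):
    """Estimate the target's scale change from matched key points.

    prev_pts, curr_pts: (N, 2) arrays of matched key-point locations in the
    previous and current frames. Returns the median ratio of pairwise
    distances, with 1.0 as the fallback when too few points are matched.
    """
    ratios = []
    n = len(prev_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_prev = np.linalg.norm(prev_pts[i] - prev_pts[j])
            d_curr = np.linalg.norm(curr_pts[i] - curr_pts[j])
            if d_prev > 1e-6:
                ratios.append(d_curr / d_prev)
    return float(np.median(ratios)) if ratios else 1.0
```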

    Object Tracking with Correlation Filters using Selective Single Background Patch

    Full text link
    Correlation filters play a major role in improving tracking performance over existing trackers. The tracker uses the adaptive correlation response to predict the location of the target. Many varieties of correlation trackers have been proposed recently with high accuracy and frame rates. This paper proposes a method to select a single background patch in order to obtain better tracking performance. The paper also contributes a variant of the correlation filter by modifying the filter with image restoration filters. The approach is validated on Object Tracking Benchmark sequences. Comment: 5 pages, 3 figures

    CREST: Convolutional Residual Learning for Visual Tracking

    Full text link
    Discriminative correlation filters (DCFs) have been shown to perform very well in visual tracking. They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. Such DCF trackers hardly benefit from end-to-end training. In this paper, we propose the CREST algorithm, which reformulates DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation and model update into the neural network for end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. Comment: ICCV 2017. Project page: http://www.cs.cityu.edu.hk/~yibisong/iccv17/index.htm
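
    The core idea of the abstract, a DCF expressed as a single convolution layer plus a residual branch, can be sketched in PyTorch as below. Kernel sizes, channel counts, and the module name are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRESTLikeDCF(nn.Module):
    """Hedged sketch: a DCF reformulated as a one-layer conv plus a residual branch."""
    def __init__(self, feat_channels=64, template_size=31):
        super().__init__()
        pad = template_size // 2
        # Base branch: a single conv layer acting as the correlation filter.
        self.base = nn.Conv2d(feat_channels, 1, template_size, padding=pad, bias=False)
        # Residual branch: refines the base response to absorb appearance changes.
        self.residual = nn.Sequential(
            nn.Conv2d(feat_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, feat):
        return self.base(feat) + self.residual(feat)

# Training would minimize an L2 loss between the predicted response map and a
# Gaussian label centred on the target, using only samples from the initial frame.
```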

    Visual tracking with online assessment and improved sampling strategy

    Get PDF
    The kernelized correlation filter (KCF) is one of the most successful trackers in computer vision today. However, its performance may be significantly degraded under a wide range of challenging conditions such as occlusion and out-of-view targets. For many applications, particularly safety-critical ones (e.g. autonomous driving), it is of profound importance to have consistent and reliable performance under all operating conditions. This paper addresses this issue of KCF-based trackers by introducing two novel modules, namely online assessment of the response map, and a strategy combining cyclically shifted sampling with random sampling in a deep feature space. The online assessment method evaluates the tracking performance by constructing a 2-D Gaussian estimation model of the response map. When the assessment judges the tracking to be unreliable, the strategy of combining cyclically shifted sampling with random sampling in the deep feature space is applied to improve the tracking performance; the online assessment module can therefore be regarded as the trigger for the second module. Experiments verify that the tracking performance is significantly improved, particularly in challenging conditions, as demonstrated by both quantitative and qualitative comparisons of the proposed tracking algorithm with state-of-the-art tracking algorithms on the OTB-2013 and OTB-2015 datasets.
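
    A simple way to picture the trigger mechanism described above is a reliability score on the response map. The sketch below uses a peak-to-sidelobe-ratio style score as a stand-in for the paper's 2-D Gaussian estimation model: a sharp unimodal peak scores high, while a flat or multi-modal map (occlusion, out of view) scores low and triggers the fallback sampling strategy. The threshold and the exclusion window are assumptions.

```python
import numpy as np

def response_reliability(response, exclude=5):
    """Peak-to-sidelobe-ratio style reliability score for a KCF response map."""
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    y, x = peak_idx
    mask[max(0, y-exclude):y+exclude+1, max(0, x-exclude):x+exclude+1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def needs_resampling(response, threshold=6.0):
    """Trigger the combined cyclic-shift / random sampling fallback when unreliable."""
    return response_reliability(response) < threshold
```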

    High Speed Tracking With A Fourier Domain Kernelized Correlation Filter

    Full text link
    It is challenging to design a high-speed tracking approach using the l1-norm due to its non-differentiability. In this paper, a new kernelized correlation filter is introduced that leverages the sparsity-inducing property of l1-norm regularization to design a high-speed tracker. We combine l1-norm and l2-norm regularization in one Huber-type loss function, and then formulate an optimization problem in the Fourier domain for fast computation, which enables the tracker to adaptively ignore the noisy features produced by occlusion and illumination variation while keeping the advantages of l2-norm regression. This is possible because, by the convolution theorem, correlation in the spatial domain corresponds to an element-wise product in the Fourier domain, so the l1-norm optimization problem can be decomposed into multiple element-wise sub-problems in the Fourier domain. The optimization variables in the Fourier domain are complex, however, which would make the l1-norm unusable if the real and imaginary parts of the variables could not be separated. Our optimization problem is formulated in such a way that the real and imaginary parts are indeed well separated, so it can be solved efficiently, with independent closed-form solutions for each part. Extensive experiments on two large benchmark datasets demonstrate that the proposed tracking algorithm significantly improves the tracking accuracy of the original kernelized correlation filter (KCF) with little sacrifice in tracking speed. Moreover, it outperforms state-of-the-art approaches in terms of accuracy, efficiency, and robustness.
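
    The mechanism the abstract relies on, element-wise decomposition in the Fourier domain with the l1 part handled separately on real and imaginary components, can be illustrated with the sketch below. It combines the usual per-element ridge solution with soft-thresholding (the closed-form prox of the l1 norm) applied to the real and imaginary parts independently. This is a hedged illustration of the idea, not the paper's exact Huber-type formulation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form prox of the l1 norm, applied element-wise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fourier_l1_l2_update(X_hat, Y_hat, lam_l2=1e-2, lam_l1=1e-3):
    """Illustrative element-wise filter update in the Fourier domain.

    The l2 (ridge) part has the usual per-element closed form; the l1 part is
    handled by soft-thresholding the real and imaginary parts separately,
    which is only possible because the problem decomposes element-wise.
    """
    ridge = np.conj(X_hat) * Y_hat / (np.abs(X_hat) ** 2 + lam_l2)
    real = soft_threshold(ridge.real, lam_l1)
    imag = soft_threshold(ridge.imag, lam_l1)
    return real + 1j * imag
```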

    Robust Visual Tracking using Multi-Frame Multi-Feature Joint Modeling

    Full text link
    It remains a major challenge to design effective and efficient trackers under complex scenarios, including occlusions, illumination changes and pose variations. A promising solution is to integrate the temporal consistency across consecutive frames and multiple feature cues in a unified model. Motivated by this idea, we propose a novel correlation filter-based tracker in which the temporal relatedness is reconciled under a multi-task learning framework and the multiple feature cues are modeled using a multi-view learning approach. We demonstrate that the resulting regression model can be efficiently learned by exploiting its blockwise diagonal matrix structure, and we develop a fast blockwise diagonal matrix inversion algorithm for efficient online tracking. Meanwhile, we incorporate an adaptive scale estimation mechanism to strengthen the stability of tracking under scale variation. We implement our tracker using two types of features and test it on two benchmark datasets. Experimental results demonstrate the superiority of our proposed approach compared with other state-of-the-art trackers. Project homepage: http://bmal.hust.edu.cn/project/KMF2JMTtracking.html Comment: This paper has been accepted by IEEE Transactions on Circuits and Systems for Video Technology. The MATLAB code of our method is available from our project homepage http://bmal.hust.edu.cn/project/KMF2JMTtracking.htm
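
    The efficiency claim above rests on the fact that a block-diagonal matrix can be inverted block by block. The sketch below shows this generic property; the specific blocks arising from the paper's multi-task/multi-view formulation are not reproduced.

```python
import numpy as np
from scipy.linalg import block_diag

def invert_block_diagonal(blocks):
    """Invert a block-diagonal matrix by inverting each block independently.

    For m blocks of size b, this costs O(m * b^3) instead of O((m*b)^3) for a
    dense inverse -- the structure a fast blockwise inversion exploits.
    """
    return [np.linalg.inv(B) for B in blocks]

# The inverse of block_diag(B1, B2) equals block_diag(inv(B1), inv(B2)).
B1 = np.array([[2.0, 0.1], [0.1, 2.0]])
B2 = np.array([[3.0, 0.2], [0.2, 3.0]])
full_inv = np.linalg.inv(block_diag(B1, B2))
blockwise = block_diag(*invert_block_diagonal([B1, B2]))
assert np.allclose(full_inv, blockwise)
```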

    Long-Term Visual Object Tracking Benchmark

    Full text link
    We propose a new long-video dataset (called Track Long and Prosper - TLP) and benchmark for single object tracking. The dataset consists of 50 HD videos from real-world scenarios, encompassing a duration of over 400 minutes (676K frames), making it more than 20 times larger in average duration per sequence and more than 8 times larger in total covered duration than existing generic datasets for visual tracking. The proposed dataset paves the way to suitably assess long-term tracking performance and to train better deep learning architectures (avoiding or reducing augmentation, which may not reflect real-world behaviour). We benchmark 17 state-of-the-art trackers on the dataset and rank them according to tracking accuracy and run-time speed. We further present a thorough qualitative and quantitative evaluation highlighting the importance of the long-term aspect of tracking. Our most interesting observations are (a) existing short-sequence benchmarks fail to bring out the inherent differences between tracking algorithms, which widen when tracking over long sequences, and (b) the accuracy of trackers drops abruptly on challenging long sequences, suggesting the need for research efforts in the direction of long-term tracking. Comment: ACCV 2018 (Oral)

    Rate-Adaptive Neural Networks for Spatial Multiplexers

    Full text link
    In resource-constrained environments, one can employ spatial multiplexing cameras to acquire a small number of measurements of a scene, and perform effective reconstruction or high-level inference using purely data-driven neural networks. However, once trained, the measurement matrix and the network are valid only for a single measurement rate (MR) chosen at training time. To overcome this drawback, we answer the following question: how can we jointly design the measurement operator and the reconstruction/inference network so that the system can operate over a range of MRs? To this end, we present a novel training algorithm for learning rate-adaptive networks. Using standard datasets, we demonstrate that, when tested over a range of MRs, a rate-adaptive network can provide high-quality reconstruction over the entire range, resulting in up to about 15 dB improvement over previous methods, where the network is valid for only one MR. We demonstrate the effectiveness of our approach for sample-efficient object tracking where video frames are acquired at dynamically varying MRs. We also extend this algorithm to learn the measurement operator in conjunction with image recognition networks. Experiments on MNIST and CIFAR-10 confirm the applicability of our algorithm to different tasks.
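
    One plausible reading of the joint design described above is a single learned measurement matrix whose first k rows are used at measurement rate k, with the rate sampled randomly during training so the shared operator and decoder remain valid over the whole range. The PyTorch sketch below follows that reading; the module name, decoder, and training schedule are assumptions rather than the paper's algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateAdaptiveCS(nn.Module):
    """Hedged sketch of a rate-adaptive spatial-multiplexing pipeline."""
    def __init__(self, n_pixels=1024, max_measurements=256, hidden=512):
        super().__init__()
        # Shared learned measurement operator; row prefixes give lower rates.
        self.phi = nn.Parameter(torch.randn(max_measurements, n_pixels) / n_pixels ** 0.5)
        self.decoder = nn.Sequential(
            nn.Linear(max_measurements, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, n_pixels))

    def forward(self, x, k):
        # Measure with the first k rows, zero-pad so the decoder input size is fixed.
        y = x @ self.phi[:k].t()
        y = F.pad(y, (0, self.phi.shape[0] - k))
        return self.decoder(y)

# Training-loop sketch: sample a random rate each batch, e.g.
#   k = torch.randint(16, 257, (1,)).item()
#   loss = F.mse_loss(model(x, k), x)
```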