
    Feature Distilled Tracking

    Feature extraction and representation is one of the most important components of fast, accurate, and robust visual tracking. Very deep convolutional neural networks (CNNs) provide effective tools for feature extraction with good generalization ability. However, extracting features with very deep CNN models requires high-performance hardware due to their high computational complexity, which prevents their use in real-time applications. To alleviate this problem, we aim to obtain small, fast-to-execute shallow models for visual tracking through model compression. Specifically, we propose a small feature distilled network (FDN) for tracking that imitates the intermediate representations of a much deeper network. The FDN extracts rich visual features at a higher speed than the original deeper network. For a further speed-up, we introduce a shift-and-stitch method that reduces the arithmetic operations while keeping the spatial resolution of the distilled feature maps unchanged. Finally, a scale-adaptive discriminative correlation filter is learned on the distilled features to handle scale variation of the target. Comprehensive experimental results on object tracking benchmark datasets show that the proposed approach achieves a 5x speed-up with performance competitive with state-of-the-art deep trackers.
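The core idea of "imitating the intermediate representations of a much deeper network" can be sketched as a feature-level distillation loss. This is a minimal NumPy illustration; the function name `imitation_loss`, the plain mean-squared-error objective, and the toy feature shapes are assumptions for the sketch, and the paper's actual distillation objective and channel-alignment layers may differ.

```python
import numpy as np

def imitation_loss(student_feat, teacher_feat):
    # Mean-squared error between student and teacher intermediate
    # feature maps; a 1x1 projection would normally align channel
    # dimensions, but matching shapes are assumed here for brevity.
    diff = student_feat - teacher_feat
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((64, 7, 7))                 # deep teacher feature map
student = teacher + 0.1 * rng.standard_normal((64, 7, 7)) # shallow student, imperfect copy
print(imitation_loss(student, teacher))                   # small but nonzero loss
```

Minimizing such a loss during training pushes the shallow student's features toward the teacher's, so the student can later be used alone at inference time.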

    DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System

    This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. To further enhance robustness, we employ deep learning-based feature extraction and description for estimation. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of the VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior robustness and accuracy in adverse conditions compared to state-of-the-art methods. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM
    Comment: Submitted for publication in IEEE RA-
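One building block of any event-frame fusion is temporal association: selecting the event measurements that belong with a given camera frame. The sketch below is a deliberately simplified stand-in for the paper's spatio-temporal synchronization; the function name `events_for_frame`, the fixed symmetric time window, and the toy timestamps are all assumptions, not the DH-PTAM implementation.

```python
import numpy as np

def events_for_frame(frame_ts, event_ts, half_window=0.005):
    """Indices of events within +/- half_window seconds of a frame
    timestamp. A real system would also align the stereo streams
    spatially; only the temporal gating is shown here."""
    event_ts = np.asarray(event_ts)
    return np.nonzero(np.abs(event_ts - frame_ts) <= half_window)[0]

# toy event timestamps in seconds
event_ts = np.array([0.001, 0.004, 0.012, 0.019, 0.021])
print(events_for_frame(0.020, event_ts))   # events near the 20 ms frame
```

The selected event slice can then be accumulated into an event image and processed in the same reference frame as the conventional frame.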

    Efficient multi-view multi-target tracking using a distributed camera network

    In this paper, we propose a multi-target tracking method using a distributed camera network, which can effectively handle the occlusion and reidentification problems by combining advanced deep learning and distributed information fusion. The targets are first detected using a fast object detection method based on deep learning. We then combine the deep visual feature information and spatial trajectory information in the Hungarian algorithm for robust target association. The deep visual feature information is extracted from a convolutional neural network, which is pre-trained on a large-scale person reidentification dataset. The spatial trajectories of multiple targets in our framework are derived from a multiple-view information fusion method, which employs an information-weighted consensus filter for fusion and tracking. In addition, we propose an efficient track processing method for ID assignment using multiple-view information. Experiments on public datasets show that the proposed method robustly handles the occlusion and reidentification problems, and achieves superior performance compared to state-of-the-art methods.
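The association step described above, combining appearance and trajectory cues in a single assignment problem, can be illustrated in a few lines. This is a toy sketch: the weighting `fuse_costs`, the weight `alpha`, and the brute-force `assign` (which reproduces the Hungarian algorithm's result only at this tiny scale) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from itertools import permutations

def fuse_costs(appearance, spatial, alpha=0.6):
    # Hypothetical linear fusion of appearance (re-ID feature) and
    # spatial (trajectory) distance matrices into one cost matrix.
    return alpha * appearance + (1 - alpha) * spatial

def assign(cost):
    """Minimum-cost one-to-one assignment. Brute force over
    permutations; a real system would use the Hungarian algorithm."""
    n = cost.shape[0]
    best_cost, best = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best_cost, best = c, perm
    return list(best)

appearance = np.array([[0.1, 0.9], [0.8, 0.2]])  # detection-vs-track re-ID distances
spatial = np.array([[0.2, 0.7], [0.9, 0.1]])     # detection-vs-track position distances
print(assign(fuse_costs(appearance, spatial)))   # detection i -> track assign[i]
```

Because both cues agree here, each detection is matched to the track with the lowest fused cost; when the cues conflict (e.g. during occlusion), the appearance term helps recover the correct identity.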

    Generative Adversarial Networks for Online Visual Object Tracking Systems

    Object tracking is one of the essential tasks in the computer vision domain, with numerous applications in fields such as human-computer interaction, video surveillance, augmented reality, and robotics. Object tracking refers to the process of detecting and locating a target object in a series of frames in a video. The state-of-the-art tracking-by-detection framework typically tracks the target object in two steps. The first step draws multiple samples near the target region of the previous frame. The second step classifies each sample as either the target object or the background. Visual object tracking remains one of the most challenging tasks due to variations in visual data such as target occlusion, background clutter, illumination changes, and scale changes, as well as challenges stemming from the tracking problem itself, including fast motion, out-of-view targets, motion blur, deformation, and in-plane and out-of-plane rotation. These challenges continue to be tackled by researchers as they investigate more effective algorithms able to track any object under changing conditions. To keep the research community motivated, several annual tracker benchmarking competitions are organized to consolidate performance measures and evaluation protocols in different tracking subfields, such as the Visual Object Tracking (VOT) challenges and the Multiple Object Tracking (MOT) challenges [1, 2]. Despite the excellent performance achieved with deep learning, modern deep tracking methods are still limited in several aspects. The variety of appearance changes over time remains a problem for deep trackers, owing to spatial overlap between positive samples. Furthermore, existing methods require a high computational load and suffer from slow running speed.
Recently, Generative Adversarial Networks (GANs) have shown excellent results in solving a variety of computer vision problems, making it attractive to investigate their potential in other computer vision applications, namely visual object tracking. In this thesis, we explore the impact of using the Residual Network (ResNet) as an alternative feature extractor to the Visual Geometry Group (VGG) network commonly used in the literature. Furthermore, we attempt to address the limitations of object tracking by exploiting ongoing advances in generative adversarial networks. We describe a generative adversarial network intended to improve the tracker's classifier during the online training phase. The network generates adaptive masks to augment the positive samples detected by the convolutional layers of the tracker's model, making the samples more difficult and thereby improving the model's classifier. We then integrate this network with the Multi-Domain Convolutional Neural Network (MDNet) tracker and present the results. Furthermore, we introduce a novel tracker, MDResNet, by substituting the convolutional layers of MDNet, originally taken from the Visual Geometry Group (VGG-M) network, with layers taken from a Residual Deep Network (ResNet-50), and compare the results. We also introduce a new tracker, Region of Interest with Adversarial Learning (ROIAL), by integrating the generative adversarial network with the Real-Time Multi-Domain Convolutional Network (RT-MDNet) tracker. Finally, we integrate the GAN with MDResNet and MDNet and compare the results with ROIAL.
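The effect of the adaptive masks, hiding the most informative parts of a positive sample so the classifier must learn from harder evidence, can be imitated with a hand-crafted saliency-based mask. This sketch is only a stand-in for the learned GAN generator described above: the function name `adversarial_mask`, the channel-mean saliency heuristic, and the `drop_frac` parameter are assumptions for illustration.

```python
import numpy as np

def adversarial_mask(feat, drop_frac=0.25):
    """Zero the highest-activation spatial cells of a positive
    sample's feature map. In the thesis, a GAN generator learns
    these masks; here a saliency heuristic stands in for it."""
    saliency = feat.mean(axis=0).ravel()   # per-cell activation strength
    k = max(1, int(drop_frac * saliency.size))
    drop = np.argsort(saliency)[-k:]       # the most discriminative cells
    mask = np.ones_like(saliency)
    mask[drop] = 0.0
    return feat * mask.reshape(feat.shape[1:])

rng = np.random.default_rng(1)
feat = rng.random((8, 3, 3))               # toy (channels, H, W) feature map
hard = adversarial_mask(feat)
print(hard.shape)                          # same shape, salient cells zeroed
```

Training the classifier on such masked positives discourages it from over-relying on a few dominant regions, which is the intuition behind the adversarial augmentation used in ROIAL.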