5,063 research outputs found

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Non-overlapping multi-camera detection and tracking of vehicles in tunnel surveillance

    We propose a real-time multi-camera tracking approach to follow vehicles in a tunnel surveillance environment with multiple non-overlapping cameras. In such a system, vehicles have to be tracked in each camera and handed over correctly from one camera to the next through the tunnel. This task becomes extremely difficult when intra-camera errors accumulate. The most typical issues in tunnel scenes arise from low image quality, poor illumination, and lighting from the vehicles themselves. Vehicle detection is performed with an AdaBoost detector, sped up by using separate cascades for cars and trucks, which also improves overall detection accuracy. A Kalman filter with two observations, given by the vehicle detector and an averaged optical flow vector, is used for single-camera tracking. Information from the collected tracks feeds the inter-camera matching algorithm, which measures the correlation of Radon transform-like projections between vehicle images. Our main contribution is a novel method to reduce the false positive rate induced by the detection stage: we favor recall over precision in the detector and identify false-positive patterns, which are then handled in a subsequent high-level decision-making step. Results are presented for three cameras placed consecutively in an inter-city tunnel. We demonstrate the increased tracking performance of our method compared to existing Bayesian filtering techniques for vehicle tracking in tunnel surveillance.
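
    As a rough illustration of the single-camera tracking step described in this abstract, the sketch below fuses two position measurements (a detector output and an averaged optical-flow estimate) in one Kalman update by stacking them into a single measurement vector. The state layout, noise covariances, and names (kf_step, z_detector, z_flow) are illustrative assumptions, not the authors' implementation.

        import numpy as np

        # State: [x, y, vx, vy] with a constant-velocity model (assumed).
        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        # Two stacked position observations: detector (rows 0-1) and averaged optical flow (rows 2-3).
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 0.01                      # process noise (assumed)
        R = np.diag([4.0, 4.0, 9.0, 9.0])         # detector vs. flow measurement noise (assumed)

        def kf_step(x, P, z_detector, z_flow):
            """One predict/update cycle fusing both observation sources."""
            x = F @ x                              # predict state
            P = F @ P @ F.T + Q                    # predict covariance
            z = np.concatenate([z_detector, z_flow])
            y = z - H @ x                          # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = np.array([100.0, 50.0, 0.0, 0.0]), np.eye(4) * 10.0
        x, P = kf_step(x, P, np.array([102.0, 51.0]), np.array([103.0, 50.5]))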

    Spatial Pyramid Context-Aware Moving Object Detection and Tracking for Full Motion Video and Wide Aerial Motion Imagery

    A robust and fast automatic moving object detection and tracking system is essential to characterize a target object and extract spatial and temporal information for functionalities such as video surveillance, urban traffic monitoring and navigation, and robotics. In this dissertation, I present a collaborative spatial pyramid context-aware moving object detection and tracking (SPCT) system. The proposed visual tracker is composed of one master tracker, which usually relies on visual object features, and two auxiliary trackers, based on object temporal motion information, that are called dynamically to assist the master tracker. SPCT utilizes image spatial context at different levels to make the tracking system resistant to occlusion and background noise and to improve target localization accuracy and robustness. We chose a pre-selected set of seven complementary feature channels, including RGB color, intensity, and a spatial pyramid of HoG, to encode object color, shape, and spatial layout information. We exploit the integral histogram as a building block to meet real-time performance demands. A novel fast algorithm is presented to accurately evaluate spatially weighted local histograms in constant time using an extension of the integral histogram method. Different techniques are explored to efficiently compute the integral histogram on GPU architectures and are applied to fast spatio-temporal median computations and 3D face reconstruction texturing. We also propose a multi-component framework based on semantic fusion of motion information with a projected building footprint map to significantly reduce the false alarm rate in urban scenes with many tall structures. Experiments on the extensive VOTC2016 benchmark dataset and on aerial video confirm that combining complementary tracking cues in an intelligent fusion framework enables persistent tracking for Full Motion Video and Wide Aerial Motion Imagery.
    Comment: PhD Dissertation (162 pages)
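
    A small sketch of the integral-histogram building block referred to above, in its standard per-bin cumulative-sum form (not the spatially weighted or GPU variants developed in the dissertation); the bin count and function names are assumptions.

        import numpy as np

        def build_integral_histogram(channel, n_bins=16):
            """Per-bin 2D cumulative sums over a single-channel uint8 image."""
            h, w = channel.shape
            bins = np.minimum(channel.astype(int) * n_bins // 256, n_bins - 1)
            one_hot = np.zeros((h, w, n_bins), dtype=np.int64)
            ys, xs = np.indices((h, w))
            one_hot[ys, xs, bins] = 1
            # Cumulative sums along rows and columns form the integral histogram.
            return one_hot.cumsum(axis=0).cumsum(axis=1)

        def region_histogram(iihist, y0, x0, y1, x1):
            """Histogram of the rectangle [y0:y1, x0:x1) from four lookups per bin (constant time)."""
            total = iihist[y1 - 1, x1 - 1].copy()
            if y0 > 0:
                total -= iihist[y0 - 1, x1 - 1]
            if x0 > 0:
                total -= iihist[y1 - 1, x0 - 1]
            if y0 > 0 and x0 > 0:
                total += iihist[y0 - 1, x0 - 1]
            return total

        img = (np.random.rand(120, 160) * 255).astype(np.uint8)
        iih = build_integral_histogram(img)
        hist = region_histogram(iih, 10, 20, 60, 80)   # histogram of a 50x60 window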

    Field Programmable Gate Array Based Real Time Object Tracking Using Partial Least Square Analysis

    In this paper, we propose a real-time implementation of a moving object tracking algorithm on a field-programmable gate array (FPGA). Object tracking is treated as a binary classification problem, and one approach is to extract appropriate features from the object's appearance using partial least squares (PLS) analysis, a dimensionality-reduction technique that projects the data into a low-dimensional subspace. In this method, an adaptive appearance model integrated with PLS analysis is used to continuously update the model as the target's appearance changes over time. For robust and efficient tracking, particle filtering is applied between every two consecutive frames of the video. The design has been implemented using the Cadence Virtuoso integrated environment together with MATLAB. Experimental results on challenging video sequences demonstrate the real-time performance of the proposed FPGA-based tracking algorithm.
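
    The sketch below illustrates the general idea of PLS-based appearance modelling for tracking-as-classification: patch features labelled object/background are projected into a low-dimensional PLS subspace, and candidate patches (e.g., particle-filter samples) are scored there. The feature dimensions, synthetic data, and names are made up for illustration; this is not the paper's FPGA design.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        # Hypothetical appearance features: object patches labelled 1, background patches 0.
        X_obj = rng.normal(1.0, 0.5, size=(50, 256))
        X_bg = rng.normal(0.0, 0.5, size=(200, 256))
        X = np.vstack([X_obj, X_bg])
        y = np.concatenate([np.ones(50), np.zeros(200)])

        # Learn a low-dimensional PLS subspace that separates object from background.
        pls = PLSRegression(n_components=8)
        pls.fit(X, y)

        def score_candidates(patches):
            """Project candidate patches into the PLS subspace and return classification scores."""
            return pls.predict(patches).ravel()

        candidates = rng.normal(0.8, 0.5, size=(100, 256))   # e.g., particle-filter samples
        best_patch = candidates[np.argmax(score_candidates(candidates))]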

    Deeply-Integrated Feature Tracking for Embedded Navigation

    The Air Force Institute of Technology (AFIT) is investigating techniques to improve aircraft navigation using low-cost imaging and inertial sensors. Stationary features tracked within the image are used to improve the inertial navigation estimate. These features are tracked using a correspondence search between frames. Previous research investigated aiding these correspondence searches using inertial measurements (i.e., stochastic projection). While this research demonstrated the benefits of further sensor integration, it still relied on robust feature descriptors (e.g., SIFT or SURF) to obtain a reliable correspondence match in the presence of rotation and scale changes. Unfortunately, these robust feature extraction algorithms are computationally intensive and require significant resources for real-time operation. Simpler feature extraction algorithms are much more efficient, but their feature descriptors are not invariant to scale, rotation, or affine warping, which limits matching performance during arbitrary motion. This research uses inertial measurements to predict not only the location of the feature in the next image but also the feature descriptor, resulting in robust correspondence matching with low computational overhead. This novel technique, called deeply-integrated feature tracking, is exercised using real imagery. The term deep integration is derived from the fact that inertial information is used to aid the image processing. The navigation experiments presented demonstrate the performance of the new algorithm in relation to the previous work. Further experiments also investigate a monocular camera setup necessary for actual flight testing. Results show that the new algorithm is 12 times faster than its predecessor while still producing an accurate trajectory. Thirty percent more features were initialized using the new tracker than with the previous algorithm. However, low-level aiding techniques successfully reduced the number of features initialized, indicating a more robust tracking solution through deep integration.
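
    A minimal sketch of the core idea (predicting both a feature's location and its descriptor from inertial data before a local correspondence search), assuming known camera intrinsics, a pure in-plane rotation predicted by the IMU, and a raw-patch descriptor. The intrinsics, angles, and function names are placeholders, not the thesis' algorithm.

        import numpy as np
        import cv2

        # Assumed camera intrinsics and an IMU-predicted rotation between the two frames.
        K = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])
        roll_deg = 5.0                                   # in-plane rotation predicted by the IMU
        R_pred, _ = cv2.Rodrigues(np.array([0.0, 0.0, np.deg2rad(roll_deg)]))

        def track_feature(prev_gray, next_gray, kp_xy, patch=15, search=40):
            """Predict a feature's location and its (patch) descriptor with inertial data, then match locally."""
            # 1. Predict the new pixel location via the infinite homography K * R * K^-1.
            H = K @ R_pred @ np.linalg.inv(K)
            p = H @ np.array([kp_xy[0], kp_xy[1], 1.0])
            px, py = int(round(p[0] / p[2])), int(round(p[1] / p[2]))

            # 2. Predict the descriptor: rotate the old patch by the IMU-predicted in-plane angle
            #    (sign convention depends on the camera frame), so a simple template stays valid
            #    without a rotation-invariant descriptor.
            x, y = int(kp_xy[0]), int(kp_xy[1])
            template = prev_gray[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(np.float32)
            rot = cv2.getRotationMatrix2D((patch, patch), roll_deg, 1.0)
            template = cv2.warpAffine(template, rot, template.shape[::-1])

            # 3. Correspondence search only in a small window around the predicted location.
            roi = next_gray[py - search:py + search + 1, px - search:px + search + 1].astype(np.float32)
            res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
            dy, dx = np.unravel_index(np.argmax(res), res.shape)
            return px - search + dx + patch, py - search + dy + patch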

    Self-Selective Correlation Ship Tracking Method for Smart Ocean System

    In recent years, with the development of the marine industry, the navigation environment has become more complicated. Artificial intelligence technologies such as computer vision can recognize, track, and count sailing ships to ensure maritime security and facilitate management in a Smart Ocean System. To address the scale-estimation and boundary-effect problems of traditional correlation filtering methods, we propose a self-selective correlation filtering method based on box regression (BRCF). The proposed method mainly includes: 1) a self-selective model with negative-sample mining, which effectively reduces the boundary effect while strengthening the classifier's discriminative ability; and 2) a bounding-box regression method combined with key-point matching for scale prediction, leading to fast and efficient computation. The experimental results show that the proposed method can effectively deal with ship size changes and background interference. Success rates and precisions were higher than those of Discriminative Scale Space Tracking (DSST) by over 8 percentage points on our laboratory's marine traffic dataset. In terms of processing speed, the proposed method is faster than DSST by nearly 22 frames per second (FPS).
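
    For context, the sketch below shows a plain single-channel correlation filter of the MOSSE/DSST family: train a filter against a Gaussian target response in the Fourier domain, then locate the peak of the response on a new search patch. This is a generic baseline for orientation only, not the proposed BRCF method, and the parameter values are assumptions.

        import numpy as np

        def train_filter(patch, sigma=2.0, lam=1e-2):
            """MOSSE-style filter: H* = (G . conj(F)) / (F . conj(F) + lam), element-wise in the Fourier domain."""
            h, w = patch.shape
            ys, xs = np.indices((h, w))
            # Desired response: a Gaussian peak at the patch centre.
            g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
            F = np.fft.fft2(patch)
            G = np.fft.fft2(g)
            return (G * np.conj(F)) / (F * np.conj(F) + lam)

        def detect(filt, search_patch):
            """Correlate in the Fourier domain and return the target shift relative to the patch centre."""
            response = np.real(np.fft.ifft2(filt * np.fft.fft2(search_patch)))
            dy, dx = np.unravel_index(np.argmax(response), response.shape)
            h, w = search_patch.shape
            return dy - h // 2, dx - w // 2        # assumes the shift is well within the patch

        rng = np.random.default_rng(1)
        target = rng.random((64, 64))
        filt = train_filter(target)
        shift = detect(filt, np.roll(target, (3, 5), axis=(0, 1)))   # roughly recovers (3, 5)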

    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature-extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve absolute visual localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies.
    Comment: 32 pages, 15 figures
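
    As a concrete taste of the RVL/VO pipeline surveyed here, the sketch below tracks corners with pyramidal Lucas-Kanade optical flow and recovers the up-to-scale relative camera pose from the essential matrix using OpenCV. The intrinsics and parameter values are placeholders, and this is one common VO recipe rather than any specific method reviewed in the paper.

        import numpy as np
        import cv2

        K = np.array([[700.0, 0.0, 320.0],     # assumed camera intrinsics
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def relative_pose(prev_gray, next_gray):
            """Estimate the up-to-scale camera motion between two frames (one optical-flow VO step)."""
            # 1. Detect corners in the previous frame and track them with pyramidal Lucas-Kanade flow.
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
            p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
            good = status.ravel() == 1
            p0, p1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

            # 2. Robustly estimate the essential matrix and decompose it into a rotation and a
            #    unit-norm translation direction (scale is unobservable from a monocular pair).
            E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
            return R, t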