
    All Weather Perception: Joint Data Association, Tracking, and Classification for Autonomous Ground Vehicles

    A novel probabilistic perception algorithm is presented as a real-time joint solution to data association, object tracking, and object classification for an autonomous ground vehicle in all-weather conditions. The algorithm extends a Rao-Blackwellized Particle Filter originally built with a particle filter for data association and a Kalman filter for multi-object tracking (Miller et al. 2011a) to also include multiple-model tracking for classification. Additionally, a state-of-the-art vision detection algorithm that includes heading information for autonomous ground vehicle (AGV) applications was implemented. Cornell's AGV from the DARPA Urban Challenge was upgraded and used to experimentally examine whether and how state-of-the-art vision algorithms can complement or replace lidar and radar sensors. Sensor and algorithm performance in adverse weather and lighting conditions is tested. Experimental evaluation demonstrates robust all-weather data association, tracking, and classification in which camera, lidar, and radar sensors complement each other inside the joint probabilistic perception algorithm.
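    A minimal sketch of the joint structure described above, assuming a simplified setup: each particle carries a sampled data-association hypothesis and, conditioned on it, the associated track is updated with a Kalman filter whose measurement likelihood re-weights the particle. The uniform association proposal, the state layout, and the names (kf_update, rbpf_step) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update for one tracked object."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    innov = z - H @ x
    x_new = x + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P
    # Gaussian likelihood of the measurement under this association
    lik = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / np.sqrt(
        np.linalg.det(2 * np.pi * S))
    return x_new, P_new, lik

def rbpf_step(particles, z, H, R):
    """One Rao-Blackwellized step: sample an association per particle,
    run the conditional Kalman update, and re-weight the particle."""
    for p in particles:
        j = rng.integers(len(p["tracks"]))   # sampled association hypothesis
        x, P = p["tracks"][j]
        x, P, lik = kf_update(x, P, z, H, R)
        p["tracks"][j] = (x, P)
        p["weight"] *= lik
    total = sum(p["weight"] for p in particles)
    for p in particles:
        p["weight"] /= total                 # normalize particle weights
    return particles
```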

    Deep Person Re-identification for Probabilistic Data Association in Multiple Pedestrian Tracking

    We present a data association method for vision-based multiple pedestrian tracking, using deep convolutional features to distinguish between different people based on their appearance. These re-identification (re-ID) features are learned such that they are invariant to transformations such as rotation, translation, and changes in the background, allowing consistent identification of a pedestrian moving through a scene. We incorporate re-ID features into a general data association likelihood model for multiple person tracking, experimentally validate this model by using it to perform tracking in two evaluation video sequences, and examine the performance improvements gained compared to several baseline approaches. Our results demonstrate that using deep person re-ID for data association greatly improves tracking robustness to challenges such as occlusions and path crossings.
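    A minimal sketch of how a re-ID appearance cue can enter a data association likelihood, assuming a simple convex blend of cosine similarity between embeddings and a Gaussian motion term; the blending weight alpha and the functional form are illustrative assumptions rather than the paper's exact likelihood model.

```python
import numpy as np

def association_likelihood(track_feat, det_feat, track_pred, det_pos, S, alpha=0.7):
    """Score a (track, detection) pair by blending an appearance term
    (cosine similarity of re-ID embeddings) with a motion term (Gaussian
    likelihood of the detection under the track's predicted position,
    with innovation covariance S)."""
    appearance = float(track_feat @ det_feat /
                       (np.linalg.norm(track_feat) * np.linalg.norm(det_feat)))
    innov = det_pos - track_pred
    motion = float(np.exp(-0.5 * innov @ np.linalg.solve(S, innov)))
    return alpha * appearance + (1.0 - alpha) * motion
```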

    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothes, and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot's perception and recognition of humans, providing a useful contribution toward the future application of service robotics.
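    A minimal sketch of the multi-hypothesis idea, assuming the bank keeps one filter per identity hypothesis (including 'unknown') and re-weights a posterior over hypotheses whenever a recognition cue reports per-identity likelihoods; the class name, identity labels, and cue values below are hypothetical.

```python
import numpy as np

class IdentityBank:
    """One tracking filter per identity hypothesis; the bank maintains a
    posterior probability over hypotheses that is re-weighted by each
    recognition cue (height, clothes, face)."""

    def __init__(self, identities):
        self.identities = identities
        self.prob = np.full(len(identities), 1.0 / len(identities))  # uniform prior
        self.filters = [None] * len(identities)  # placeholder per-hypothesis filter state

    def reweight(self, likelihoods):
        """likelihoods[i] = p(observation | identity i) from any cue."""
        self.prob *= np.asarray(likelihoods, dtype=float)
        self.prob /= self.prob.sum()
        return dict(zip(self.identities, self.prob))

bank = IdentityBank(["person_a", "person_b", "unknown"])
print(bank.reweight([0.8, 0.1, 0.1]))  # a face cue strongly favoring 'person_a'
```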

    Multiple Object Tracking: A Literature Review

    Multiple Object Tracking (MOT) is an important computer vision problem which has gained increasing attention due to its academic and commercial potential. Although different kinds of approaches have been proposed to tackle this problem, it still remains challenging due to factors like abrupt appearance changes and severe object occlusions. In this work, we contribute the first comprehensive and most recent review on this problem. We inspect the recent advances in various aspects and propose some interesting directions for future research. To the best of our knowledge, there has not been any extensive review on this topic in the community. We endeavor to provide a thorough review of the development of this problem in recent decades. The main contributions of this review are fourfold: 1) key aspects of a multiple object tracking system, including formulation, categorization, key principles, and evaluation of an MOT system, are discussed; 2) instead of enumerating individual works, we discuss existing approaches according to various aspects, in each of which methods are divided into different groups and each group is discussed in detail for its principles, advances, and drawbacks; 3) we examine the experiments of existing publications and summarize results on popular datasets to provide quantitative comparisons, and we also point to some interesting discoveries by analyzing these results; 4) we provide a discussion of open issues in MOT research, as well as some interesting directions which could become potential research efforts in the future.

    A Random Finite Set Approach for Dynamic Occupancy Grid Maps with Real-Time Application

    Grid mapping is a well established approach for environment perception in robotic and automotive applications. Early work suggests estimating the occupancy state of each grid cell in a robot's environment using a Bayesian filter to recursively combine new measurements with the current posterior state estimate of each grid cell. This filter is often referred to as the binary Bayes filter (BBF). A basic assumption of classical occupancy grid maps is a stationary environment. Recent publications describe bottom-up approaches using particles to represent the dynamic state of a grid cell and outline prediction-update recursions in a heuristic manner. This paper defines the state of multiple grid cells as a random finite set, which makes it possible to model the environment as a stochastic, dynamic system with multiple obstacles, observed by a stochastic measurement system. It motivates an original filter, called the probability hypothesis density / multi-instance Bernoulli (PHD/MIB) filter, in a top-down manner. The paper presents a real-time application serving as a fusion layer for laser and radar sensor data and describes in detail a highly efficient parallel particle filter implementation. A quantitative evaluation shows that parameters of the stochastic process model affect the filter results as theoretically expected and that appropriate process and observation models provide consistent state estimation results.
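    For reference, a minimal sketch of the classical binary Bayes filter (BBF) recursion mentioned above, in log-odds form; it illustrates only the static-environment baseline that the PHD/MIB filter goes beyond, and the inverse sensor model values in the example are made up.

```python
import numpy as np

def bbf_update(logodds, p_occ_given_z, p_prior=0.5):
    """Binary Bayes filter update for one grid cell in log-odds form:
    l_t = l_{t-1} + log(p(occ|z) / (1 - p(occ|z))) - log(p0 / (1 - p0))."""
    l_meas = np.log(p_occ_given_z / (1.0 - p_occ_given_z))
    l_prior = np.log(p_prior / (1.0 - p_prior))
    return logodds + l_meas - l_prior

l = 0.0                           # unknown cell (occupancy probability 0.5)
for p_z in (0.7, 0.7, 0.4):       # two 'occupied' readings, one 'free' reading
    l = bbf_update(l, p_z)
print(1.0 / (1.0 + np.exp(-l)))   # posterior occupancy probability
```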

    Visual end-effector tracking using a 3D model-aided particle filter for humanoid robot platforms

    This paper addresses recursive markerless estimation of a robot's end-effector pose using visual observations from its cameras. The problem is formulated within the Bayesian framework and addressed using Sequential Monte Carlo (SMC) filtering. We use a 3D rendering engine and Computer Aided Design (CAD) schematics of the robot to virtually create images from the robot's camera viewpoints. These images are then used to extract information and estimate the pose of the end-effector. To this aim, we developed a particle filter for estimating the position and orientation of the robot's end-effector, using Histogram of Oriented Gradients (HOG) descriptors to capture robust characteristic shape features in both the camera and the rendered images. We implemented the algorithm on the iCub humanoid robot and employed it in a closed-loop reaching scenario. We demonstrate that the tracking is robust to clutter, compensates for errors in the robot kinematics, and enables servoing the arm in closed loop using vision.
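    A minimal sketch of the likelihood idea, assuming each pose particle is scored by comparing HOG descriptors of the real camera image and of an image rendered from the particle's pose (the rendered image would come from the CAD-based engine described above and is simply passed in as an array here); the Gaussian kernel and sigma are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog  # HOG descriptors for image comparison

def particle_weight(camera_img, rendered_img, sigma=0.1):
    """Weight a pose particle by the distance between HOG descriptors of the
    camera image and of the image rendered from the particle's pose.
    Both grayscale images must have the same shape."""
    h_cam = hog(camera_img, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    h_ren = hog(rendered_img, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    d = np.linalg.norm(h_cam - h_ren)
    return float(np.exp(-d ** 2 / (2.0 * sigma ** 2)))
```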

    PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking

    Tracking the 6D poses of objects from videos provides rich information to a robot performing different tasks such as manipulation and navigation. In this work, we formulate the 6D object pose tracking problem in the Rao-Blackwellized particle filtering framework, where the 3D rotation and the 3D translation of an object are decoupled. This factorization allows our approach, called PoseRBPF, to efficiently estimate the 3D translation of an object along with the full distribution over the 3D rotation. This is achieved by discretizing the rotation space in a fine-grained manner and training an auto-encoder network to construct a codebook of feature embeddings for the discretized rotations. As a result, PoseRBPF can track objects with arbitrary symmetries while still maintaining adequate posterior distributions. Our approach achieves state-of-the-art results on two 6D pose estimation benchmarks. A video showing the experiments can be found at https://youtu.be/lE5gjzRKWuA
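    A minimal sketch of the rotation-codebook lookup, assuming the auto-encoder embedding of an image crop is compared against a codebook of embeddings for the discretized rotations and turned into a distribution over rotation bins; the cosine-similarity softmax and the temperature tau are illustrative assumptions, not the paper's exact scoring.

```python
import numpy as np

def rotation_posterior(crop_embedding, codebook, tau=0.1):
    """Return a distribution over discretized rotations given the embedding
    of an object crop and a codebook of shape (num_rotations, embed_dim)."""
    sims = codebook @ crop_embedding / (
        np.linalg.norm(codebook, axis=1) * np.linalg.norm(crop_embedding))
    logits = sims / tau
    logits -= logits.max()        # numerical stability before the softmax
    p = np.exp(logits)
    return p / p.sum()
```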

    SPF-CellTracker: Tracking multiple cells with strongly-correlated moves using a spatial particle filter

    Tracking many cells in time-lapse 3D image sequences is an important and challenging task in bioimage informatics. Motivated by a study of brain-wide 4D imaging of neural activity in C. elegans, we present a new method of multi-cell tracking. The data types to which the method is applicable are characterized as follows: (i) cells are imaged as globular-like objects, (ii) it is difficult to distinguish cells based only on shape and size, (iii) the number of imaged cells is in the several hundreds, (iv) the moves of nearby cells are strongly correlated, and (v) cells do not divide. We developed a tracking software suite which we call SPF-CellTracker. Incorporating the dependency among cells' moves into the prediction model is the key to reducing the tracking errors: cell switching and coalescence of tracked positions. We model the target cells' correlated moves as a Markov random field and derive a fast computation algorithm, which we call the spatial particle filter. With live-imaging data of the nuclei of C. elegans neurons, in which approximately 120 nuclei are imaged, we demonstrate the advantage of the proposed method over the standard particle filter and the method developed by Tokunaga et al. (2014).
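    A minimal sketch of a prediction step with spatially correlated moves, assuming each cell's proposed displacement mixes the mean displacement of its spatial neighbors with its own random component; the mixing weight rho, noise scale sigma, and neighbor lists are illustrative assumptions, not the paper's Markov random field model.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_correlated_moves(positions, neighbors, rho=0.8, sigma=0.5):
    """Propose new cell positions whose moves are correlated across nearby
    cells: each displacement is rho * (mean neighbor displacement) plus
    (1 - rho) * (its own independent Gaussian displacement)."""
    base = rng.normal(scale=sigma, size=positions.shape)  # independent moves
    moves = np.empty_like(base)
    for i, nbrs in enumerate(neighbors):
        shared = base[nbrs].mean(axis=0) if len(nbrs) else 0.0
        moves[i] = rho * shared + (1.0 - rho) * base[i]
    return positions + moves
```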

    Statistical Information Fusion for Multiple-View Sensor Data in Multi-Object Tracking

    This paper presents a novel statistical information fusion method to integrate multiple-view sensor data in multi-object tracking applications. The proposed method overcomes the drawbacks of the commonly used Generalized Covariance Intersection method, which uses constant weights allocated to the sensors. Our method enhances Generalized Covariance Intersection with adaptive weights that are automatically tuned based on the amount of information carried by the measurements from each sensor. To quantify information content, the Cauchy-Schwarz divergence is used. Another distinguishing characteristic of our method is the use of the Labeled Multi-Bernoulli filter for multi-object tracking, in which the weight of each sensor can be separately adapted for each Bernoulli component of the filter. The results of numerical experiments show that the proposed method can successfully integrate information provided by multiple sensors with different fields of view. In such scenarios, our method significantly outperforms the state of the art in terms of inclusion of all existing objects and tracking accuracy.
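    A minimal sketch of the underlying Generalized Covariance Intersection rule for the special case of Gaussian densities, with the weights passed in (in the paper they are adapted per sensor and per Bernoulli component via the Cauchy-Schwarz divergence); the function name and the Gaussian specialization are illustrative.

```python
import numpy as np

def gci_fuse_gaussians(means, covs, weights):
    """Generalized Covariance Intersection of Gaussian densities: the fused
    information matrix and information vector are the weighted sums of the
    sensors' information forms, with weights normalized to sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # GCI requires sum(w) = 1
    infos = [np.linalg.inv(P) for P in covs]
    info_fused = sum(wi * I for wi, I in zip(w, infos))
    vec_fused = sum(wi * I @ m for wi, I, m in zip(w, infos, means))
    P_fused = np.linalg.inv(info_fused)
    return P_fused @ vec_fused, P_fused           # fused mean and covariance
```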

    Self-Driving Cars: A Survey

    We survey research on self-driving cars published in the literature, focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacle mapping, moving obstacle detection and tracking, road mapping, and traffic signalization detection and recognition, among others. The decision-making system is likewise commonly partitioned into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the self-driving car developed at the Universidade Federal do Espírito Santo (UFES), named Intelligent Autonomous Robotics Automobile (IARA). Finally, we list prominent self-driving car research platforms developed by academia and technology companies and reported in the media.
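    A schematic sketch of the typical autonomy-system decomposition described above, written as plain data structures; the subsystem names follow the abstract, while the class names themselves are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PerceptionSystem:
    subsystems: tuple = ("localization", "static_obstacle_mapping",
                         "moving_obstacle_detection_and_tracking",
                         "road_mapping",
                         "traffic_signalization_detection_and_recognition")

@dataclass
class DecisionMakingSystem:
    subsystems: tuple = ("route_planning", "path_planning",
                         "behavior_selection", "motion_planning", "control")

@dataclass
class AutonomySystem:
    perception: PerceptionSystem = field(default_factory=PerceptionSystem)
    decision_making: DecisionMakingSystem = field(default_factory=DecisionMakingSystem)
```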