Track, then Decide: Category-Agnostic Vision-based Multi-Object Tracking
The most common paradigm for vision-based multi-object tracking is
tracking-by-detection, due to the availability of reliable detectors for
several important object categories such as cars and pedestrians. However,
future mobile systems will need a capability to cope with rich human-made
environments, in which obtaining detectors for every possible object category
would be infeasible. In this paper, we propose a model-free multi-object
tracking approach that uses a category-agnostic image segmentation method to
track objects. We present an efficient segmentation mask-based tracker which
associates pixel-precise masks reported by the segmentation. Our approach can
utilize semantic information whenever it is available for classifying objects
at the track level, while retaining the capability to track generic unknown
objects in the absence of such information. We demonstrate experimentally that
our approach achieves performance comparable to state-of-the-art
tracking-by-detection methods for popular object categories such as cars and
pedestrians. Additionally, we show that the proposed method can discover and
robustly track a large variety of other objects.
Comment: ICRA'18 submission
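The category-agnostic tracker described above associates pixel-precise segmentation masks across frames. A minimal sketch of that association step, assuming greedy one-to-one matching by mask IoU (the names `mask_iou`, `associate_masks`, and the 0.5 threshold are illustrative choices, not the paper's actual implementation):

```python
# Illustrative sketch of mask-based track association: each track keeps its
# most recent mask, and new segmentation masks are matched to tracks greedily
# by mask IoU. Unmatched masks would spawn new (possibly unknown-category)
# tracks, matching the model-free spirit of the approach.
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / union if union > 0 else 0.0

def associate_masks(track_masks, new_masks, iou_thresh=0.5):
    """Greedy one-to-one matching of track masks to new masks by IoU.

    Returns (matches, unmatched_new): matches maps track index -> new-mask
    index; unmatched new masks would start fresh tracks.
    """
    pairs = []
    for ti, tm in enumerate(track_masks):
        for ni, nm in enumerate(new_masks):
            iou = mask_iou(tm, nm)
            if iou >= iou_thresh:
                pairs.append((iou, ti, ni))
    pairs.sort(reverse=True)  # consume the best-overlapping pairs first
    matches, used_t, used_n = {}, set(), set()
    for iou, ti, ni in pairs:
        if ti not in used_t and ni not in used_n:
            matches[ti] = ni
            used_t.add(ti)
            used_n.add(ni)
    unmatched_new = [ni for ni in range(len(new_masks)) if ni not in used_n]
    return matches, unmatched_new
```

Greedy IoU matching is a common baseline; an optimal assignment (e.g. Hungarian matching) is a drop-in replacement when tracks and masks are numerous.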
Lidar with Velocity: Correcting Moving Objects Point Cloud Distortion from Oscillating Scanning Lidars by Fusion with Camera
Lidar point cloud distortion caused by moving objects is an important problem
in autonomous driving, and it has recently become even more pressing with the
emergence of newer lidars that feature back-and-forth scanning patterns.
Accurately estimating a moving object's velocity would not only provide a
tracking capability but also allow correcting the point cloud distortion,
yielding a more accurate description of the moving object. Since lidar
measures time-of-flight distance at a sparse angular resolution, its
measurements are precise radially but imprecise angularly. A camera, on the
other hand, provides dense angular resolution. In this paper, Gaussian-based
lidar and camera fusion is proposed to estimate the full velocity and correct
the lidar distortion. A probabilistic Kalman-filter framework is provided to
track the moving objects, estimate their velocities, and simultaneously
correct the point cloud distortions. The framework is evaluated on real road
data, and the fusion method outperforms traditional ICP-based and
point-cloud-only methods. The complete working framework is open-sourced
(https://github.com/ISEE-Technology/lidar-with-velocity) to accelerate the
adoption of the emerging lidars.
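The core fusion idea above is that lidar is precise in range and camera is precise in angle, and a Kalman filter combines the two. A minimal constant-velocity sketch of that idea, where anisotropic measurement covariances (`R_LIDAR` tight along one axis, `R_CAMERA` tight along the other) stand in for radial versus angular accuracy; this is an illustrative linear simplification, not the paper's actual Gaussian fusion:

```python
# Minimal constant-velocity Kalman filter fusing two position sensors with
# complementary noise: one precise along x (standing in for lidar's radial
# accuracy), one precise along y (standing in for camera's angular accuracy).
import numpy as np

class FusionKF:
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)             # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0        # state covariance (uncertain start)
        self.F = np.eye(4)               # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 1e-3        # process noise
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])  # observe position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + R           # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Per-sensor noise: "lidar" precise along x, "camera" precise along y.
R_LIDAR = np.diag([0.01, 1.0])
R_CAMERA = np.diag([1.0, 0.01])
```

Running both updates each frame lets the filter recover the full velocity even though neither sensor alone constrains it well in both directions; the estimated velocity is what would then be used to de-skew the object's points.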
Technical Report: Anytime Computation and Control for Autonomous Systems
The correct and timely completion of the sensing and action loop is of utmost importance in safety-critical autonomous systems. A crucial part of the performance of this feedback control loop is the computation time and accuracy of the estimator which produces state estimates used by the controller. These state estimators, especially those used for localization, often use computationally expensive perception algorithms like visual object tracking. With on-board computers on autonomous robots being computationally limited, the computation time of a perception-based estimation algorithm can at times be high enough to result in poor control performance. In this work, we develop a framework for the co-design of anytime estimation and robust control algorithms while taking into account computation delays and estimation inaccuracies. This is achieved by constructing a perception-based anytime estimator from an off-the-shelf perception-based estimation algorithm, and in the process we obtain a trade-off curve for its computation time versus estimation error. This information is used in the design of a robust predictive control algorithm that at run-time decides a contract for the estimator, i.e., the estimator's mode of operation, in addition to trying to achieve its control objectives at a reduced computation energy cost. In cases where the estimation delay can result in possibly degraded control performance, we provide an optimal manner in which the controller can use this trade-off curve to reduce estimation delay at the cost of higher inaccuracy, all the while guaranteeing that control objectives are robustly satisfied. Through experiments on a hexrotor platform running a visual odometry algorithm for state estimation, we show how our method results in up to a 10% improvement in control performance while saving 5-6% in computation energy as compared to a method that does not leverage the co-design.
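The run-time contract decision described above can be sketched as a lookup on the measured trade-off curve: among the estimator's operating modes whose computation delay fits the current budget, pick the most accurate one. The function name, the fallback rule, and the example curve values below are all illustrative assumptions, not the paper's actual controller:

```python
# Hypothetical sketch of run-time contract selection for an anytime estimator.
# `tradeoff` is the measured curve: a list of (compute_time_s, est_error)
# operating modes, one per contract.

def choose_contract(tradeoff, delay_budget):
    """Return the index of the lowest-error mode whose computation time fits
    within delay_budget; if none fits, fall back to the fastest mode."""
    feasible = [(err, i) for i, (t, err) in enumerate(tradeoff)
                if t <= delay_budget]
    if feasible:
        return min(feasible)[1]  # smallest error among feasible modes
    # Nothing fits the budget: degrade gracefully to the fastest mode.
    return min(range(len(tradeoff)), key=lambda i: tradeoff[i][0])
```

A predictive controller would call this each control period, tightening the delay budget when the plant is near a constraint (accepting higher estimation error) and relaxing it otherwise to save computation energy.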