Learning to Fly by Crashing
How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid
obstacles? One approach is to use a small dataset collected by human experts;
however, high capacity learning algorithms tend to overfit when trained with
little data. An alternative is to use simulation, but the gap between
simulation and the real world remains large, especially for perception problems. The
reason most research avoids using large-scale real data is the fear of crashes!
In this paper, we propose to bite the bullet and collect a dataset of crashes
ourselves! We build a drone whose sole purpose is to crash into objects: it
samples naive trajectories and crashes into random objects. We crash our drone
11,500 times to create one of the biggest UAV crash datasets. This dataset
captures the different ways in which a UAV can crash. We use all this negative
flying data in conjunction with positive data sampled from the same
trajectories to learn a simple yet powerful policy for UAV navigation. We show
that this simple self-supervised model is quite effective in navigating the UAV
even in extremely cluttered environments with dynamic obstacles including
humans. For supplementary video see: https://youtu.be/u151hJaGKU
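The self-supervised recipe described above — label frames far from a crash as safe, label frames near the crash as unsafe, and fit a classifier with no human annotation — can be sketched in a few lines. Everything below is a toy stand-in: the synthetic feature vectors (with an invented "proximity cue") replace camera frames, and logistic regression replaces the paper's deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a perception front end: in the real system each
# sample is a camera frame; here it is a random feature vector whose first
# component grows as the drone approaches the obstacle it will hit.
def simulate_crash_trajectory(n_frames=40, dim=8, crash_window=10):
    feats = rng.normal(size=(n_frames, dim))
    feats[:, 0] += np.linspace(0.0, 4.0, n_frames)   # invented "proximity cue"
    # Self-supervised labels, no human annotation: frames far from the crash
    # are positive/safe (1), frames in the final crash window are negative (0).
    labels = (np.arange(n_frames) < n_frames - crash_window).astype(float)
    return feats, labels

# 200 simulated crash trajectories -> one flat dataset.
pairs = [simulate_crash_trajectory() for _ in range(200)]
X = np.vstack([f for f, _ in pairs])
y = np.concatenate([l for _, l in pairs])

# Logistic-regression "policy": P(safe to keep flying | frame features),
# trained by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(np.mean((p > 0.5) == (y > 0.5)))
print(f"train accuracy: {acc:.2f}")
```

The learned weight on the proximity cue comes out negative, i.e. frames that look "close to an obstacle" get a low probability of being safe — which is exactly the signal a navigation policy can steer away from.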
Did You Miss the Sign? A False Negative Alarm System for Traffic Sign Detectors
Object detection is an integral part of an autonomous vehicle for its
safety-critical and navigational purposes. Traffic signs as objects play a
vital role in guiding such systems. However, if the vehicle fails to locate a
critical sign, it might suffer a catastrophic failure. In this paper, we propose
an approach to identify traffic signs that have been mistakenly discarded by
the object detector. The proposed method raises an alarm when it discovers a
failure by the object detector to detect a traffic sign. This approach can be
useful to evaluate the performance of the detector during the deployment phase.
We trained a single shot multi-box object detector to detect traffic signs and
used its internal features to train a separate false negative detector (FND).
During deployment, the FND decides whether the traffic sign detector (TSD) has
missed a sign. We use precision and recall to measure the accuracy of the FND
on two datasets: at 80% recall, the FND achieves 89.9% precision on the Belgium
Traffic Sign Detection dataset and 90.8% precision on the German Traffic Sign
Recognition Benchmark dataset. To the best of
our knowledge, our method is the first to tackle this critical aspect of false
negative detection in robotic vision. Such a fail-safe mechanism for object
detection can improve the engagement of robotic vision systems in our daily
life.
Comment: Submitted to the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019).
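The evaluation protocol above — sweep the FND's alarm threshold and report precision at a fixed recall operating point — can be made concrete with a short sketch. The scores and labels here are synthetic (in the real system the FND scores come from the TSD's internal features); only the precision-at-recall computation itself is standard.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: label 1 = the TSD truly missed a sign, label 0 = it did
# not; `scores` are hypothetical FND alarm confidences in [0, 1].
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.6 * labels + rng.normal(0.2, 0.25, size=1000), 0.0, 1.0)

def precision_at_recall(scores, labels, target_recall=0.8):
    """Precision at the first operating point whose recall reaches target."""
    order = np.argsort(-scores)                   # sort by descending score
    hits = labels[order]
    tp = np.cumsum(hits)                          # true positives at each cutoff
    recall = tp / hits.sum()                      # nondecreasing in the cutoff
    precision = tp / np.arange(1, len(hits) + 1)
    idx = np.searchsorted(recall, target_recall)  # first cutoff meeting target
    return float(precision[idx])

p80 = precision_at_recall(scores, labels, 0.8)
print(f"precision at 80% recall: {p80:.3f}")
```

Reporting precision at a fixed recall (rather than a single accuracy number) matches the safety framing of the abstract: recall is pinned at the fraction of real misses the alarm must catch, and precision then measures how often an alarm is a false one.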