Adaptive Perception, State Estimation, and Navigation Methods for Mobile Robots
This cumulative habilitation collects publications focusing on robotic perception, self-localization, tracking, navigation, and human-machine interfaces. While some of the publications present research on vision and machine learning tasks with a PR2 household robot at the Robotics Learning Lab of the University of California, Berkeley, most present results from work at the AutoNOMOS-Labs of Freie Universität Berlin, with a focus on control, planning, and object tracking for the autonomous vehicles "MadeInGermany" and "e-Instein".
Efficient Autonomous Navigation for Planetary Rovers with Limited Resources
Rovers operating on Mars need increasingly autonomous capabilities to fulfill their challenging mission requirements. However, the inherent constraints of space systems make the implementation of complex algorithms an expensive and difficult task. In this paper we propose a control architecture for autonomous navigation. Efficient implementations of autonomous features are built on top of the current ExoMars navigation method, enhancing the safety and traversing capabilities of the rover. These features allow the rover to detect and avoid hazards and to perform long traverses by following a roughly safe path planned by operators on the ground. The control architecture implementing the proposed navigation mode was tested in a field campaign on a planetary-analogue terrain, where the rover autonomously completed two long traverses while avoiding hazards. The approach relies only on the optical Localization Cameras stereobench, a sensor found on all rovers launched so far, and potentially enables computationally inexpensive long-range autonomous navigation in terrains of medium difficulty.
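To make the described navigation mode concrete, here is a minimal, hypothetical sketch of hazard-aware path following: the rover tracks the waypoints of a roughly safe, ground-planned path and sidesteps locally when a hazard map flags the next step as unsafe. Everything here (the `HazardMap` grid, the sidestep rule, the step sizes) is an illustrative assumption, not the authors' ExoMars interfaces.

```python
import math

class HazardMap:
    """Toy traversability grid, standing in for stereo-derived hazard detection."""

    def __init__(self, blocked_cells):
        self.blocked = set(blocked_cells)

    def is_safe(self, x, y):
        return (round(x), round(y)) not in self.blocked


def follow_path(waypoints, hazard_map, step=0.5, detour=1.0, tol=0.25, max_steps=1000):
    """Track each waypoint in turn; sidestep perpendicular to the goal
    direction whenever the next step would enter a hazardous cell."""
    x, y = waypoints[0]
    trace = [(x, y)]
    for wx, wy in waypoints[1:]:
        for _ in range(max_steps):  # guard against oscillating forever
            dx, dy = wx - x, wy - y
            dist = math.hypot(dx, dy)
            if dist <= tol:
                break
            nx, ny = x + step * dx / dist, y + step * dy / dist
            if hazard_map.is_safe(nx, ny):
                x, y = nx, ny
            else:
                # Local avoidance: step sideways instead of into the hazard.
                x, y = x - detour * step * dy / dist, y + detour * step * dx / dist
            trace.append((x, y))
    return trace


if __name__ == "__main__":
    path = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]  # roughly safe path from ground operators
    hazards = HazardMap({(3, 0)})                # a rock the stereo bench would flag
    for px, py in follow_path(path, hazards):
        print(f"({px:.2f}, {py:.2f})")
```

In the paper's setting the hazard data would come from the Localization Cameras stereobench; here it is a toy grid so the sketch stays self-contained.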
Pseudo-labels for Supervised Learning on Dynamic Vision Sensor Data, Applied to Object Detection under Ego-motion
In recent years, dynamic vision sensors (DVS), also known as event-based cameras or neuromorphic sensors, have seen increased use due to several advantages over conventional frame-based cameras. Operating on principles inspired by the retina, a DVS offers high temporal resolution that overcomes motion blur, high dynamic range that copes with extreme illumination conditions, and low power consumption that makes it ideal for embedded systems on platforms such as drones and self-driving cars. However, event-based data sets are scarce, and labels are even rarer for tasks such as object detection. We transferred discriminative knowledge from a state-of-the-art frame-based convolutional neural network (CNN) to the event-based modality via intermediate pseudo-labels, which are used as targets for supervised learning. We show, for the first time, event-based car detection under ego-motion in a real environment at 100 frames per second, with a test average precision of 40.3% relative to our annotated ground truth. The event-based car detector handles motion blur and poor illumination conditions despite not being explicitly trained to do so, and even complements frame-based CNN detectors, suggesting that it has learnt generalized visual representations.
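The pseudo-labelling idea lends itself to a short sketch. The code below is an illustration under stated assumptions, not the authors' pipeline: torchvision's COCO-pretrained Faster R-CNN stands in for the paper's frame-based CNN, its confident car detections on conventional frames become supervised targets for temporally aligned event frames, and an event-based detector would then be trained on those pairs.

```python
import torch
import torchvision

# Frame-based "teacher": an assumption standing in for the paper's
# state-of-the-art frame-based CNN.
teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
teacher.eval()

CAR_CLASS = 3        # COCO category id for "car"
CONF_THRESH = 0.7    # keep only confident detections as pseudo-labels


@torch.no_grad()
def pseudo_labels(frame):
    """Detect cars on one RGB frame (3xHxW float tensor in [0, 1]) and
    return confident boxes to be used as pseudo ground truth."""
    out = teacher([frame])[0]
    keep = (out["labels"] == CAR_CLASS) & (out["scores"] >= CONF_THRESH)
    return out["boxes"][keep]


def training_pairs(frames, event_frames):
    """Pair each event frame (DVS events accumulated over a short window and
    rendered as an image tensor, synchronized with the conventional camera)
    with the teacher's pseudo-labels. An event-based student detector is
    then trained on these pairs with an ordinary supervised detection loss."""
    for frame, events in zip(frames, event_frames):
        boxes = pseudo_labels(frame)
        if len(boxes) > 0:
            yield events, boxes
```

Thresholding the teacher's confidence keeps pseudo-label noise down, at the cost of discarding harder examples; at test time the student sees only event data.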