A Framework for Interactive Teaching of Virtual Borders to Mobile Robots
The increasing number of robots in home environments leads to an emerging
coexistence between humans and robots. Robots undertake common tasks and
support the residents in their everyday life. People appreciate the presence of
robots in their environment as long as they retain control over them. One
important aspect is the control of a robot's workspace. Therefore, we introduce
virtual borders to precisely and flexibly define the workspace of mobile
robots. First, we propose a novel framework that allows a person to
interactively restrict a mobile robot's workspace. To show the validity of this
framework, we provide a concrete implementation based on visual markers.
Afterwards, the mobile robot is capable of performing its tasks while
respecting the new virtual borders. The approach is accurate, flexible and less
time-consuming than explicit robot programming. Hence, even non-experts are
able to teach virtual borders to their robots, which is especially interesting
for vacuuming and service robots in home environments.
Comment: 7 pages, 6 figures
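A virtual border of this kind can be represented as a polygon in the map frame, with the robot checking candidate goals against it. The following is a minimal stdlib-only sketch of that idea, assuming a polygonal border taught by the user; the function names and the ray-casting test are illustrative stand-ins, not the paper's framework.

```python
# Hypothetical sketch: enforcing a user-taught virtual border.
# The border is a polygon of (x, y) map coordinates; a navigation goal
# is only accepted if it lies inside the allowed workspace.

def inside_border(point, border):
    """Ray-casting point-in-polygon test.

    point  -- (x, y) position to test
    border -- list of (x, y) polygon vertices defining the virtual border
    """
    x, y = point
    inside = False
    n = len(border)
    for i in range(n):
        x1, y1 = border[i]
        x2, y2 = border[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) toward +infinity
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A square workspace taught by the user, e.g. excluding a no-go area
border = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(inside_border((2.0, 2.0), border))  # True: goal is inside the workspace
print(inside_border((5.0, 2.0), border))  # False: goal violates the border
```

In practice the border would be taught via the visual markers described above and checked inside the robot's path planner rather than against single goal points.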
End-to-end Driving via Conditional Imitation Learning
Deep networks trained on demonstrations of human driving have learned to
follow roads and avoid obstacles. However, driving policies trained via
imitation learning cannot be controlled at test time. A vehicle trained
end-to-end to imitate an expert cannot be guided to take a specific turn at an
upcoming intersection. This limits the utility of such systems. We propose to
condition imitation learning on high-level command input. At test time, the
learned driving policy functions as a chauffeur that handles sensorimotor
coordination but continues to respond to navigational commands. We evaluate
different architectures for conditional imitation learning in vision-based
driving. We conduct experiments in realistic three-dimensional simulations of
urban driving and on a 1/5 scale robotic truck that is trained to drive in a
residential area. Both systems drive based on visual input yet remain
responsive to high-level navigational commands. The supplementary video can be
viewed at https://youtu.be/cFtnflNe5fM
Comment: Published at the International Conference on Robotics and Automation (ICRA), 201
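The core idea, a shared perception stage feeding per-command output branches, where the high-level command gates which branch emits the action, can be sketched as follows. This is a toy stand-in in plain Python, not the paper's trained networks: the "encoder" and branch weights are illustrative placeholders.

```python
# Minimal sketch of command-conditioned control in the spirit of
# conditional imitation learning: one shared feature extractor, one
# output branch per navigational command, command selects the branch.

COMMANDS = ("follow_lane", "turn_left", "turn_right", "go_straight")

def shared_features(image):
    """Toy stand-in for a CNN encoder: reduce the 'image' to two numbers."""
    mean = sum(image) / len(image)
    spread = max(image) - min(image)
    return (mean, spread)

def make_branch(steer_bias):
    """Each branch maps shared features to (steering, throttle)."""
    def branch(features):
        mean, spread = features
        steering = steer_bias + 0.1 * (mean - 0.5)  # toy linear head
        throttle = max(0.0, 1.0 - spread)           # slow down on clutter
        return steering, throttle
    return branch

# One output head per high-level navigational command
branches = {
    "follow_lane": make_branch(0.0),
    "turn_left":   make_branch(-0.5),
    "turn_right":  make_branch(0.5),
    "go_straight": make_branch(0.0),
}

def policy(image, command):
    """The command gates which branch drives the vehicle."""
    return branches[command](shared_features(image))

image = [0.2, 0.5, 0.8]  # fake pixel intensities
left_steer, _ = policy(image, "turn_left")
right_steer, _ = policy(image, "turn_right")
print(left_steer < right_steer)  # True: same image, different commands
```

The point of the branched design is that the same visual input yields different actions depending on the navigational command, which is exactly the controllability that plain end-to-end imitation lacks.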
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which
areas of the environment they are able to pass in order to plan feasible paths.
We address traversability estimation as a heightmap classification problem: we
build a convolutional neural network that, given an image representing the
heightmap of a terrain patch, predicts whether the robot will be able to
traverse such patch from left to right. The classifier is trained for a
specific robot model (wheeled, tracked, legged, snake-like) using simulation
data on procedurally generated training terrains; the trained classifier can be
applied to unseen large heightmaps to yield oriented traversability maps, which
can then be used to plan traversable paths. We extensively evaluate the approach in simulation
on six real-world elevation datasets, and run a real-robot validation in one
indoor and one outdoor environment.
Comment: Webpage: http://romarcg.xyz/traversability_estimation
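The patch-wise formulation above can be illustrated with a short sketch: slide a fixed-size window over a heightmap and classify each patch. The paper trains a CNN per robot model; here a simple height-difference threshold stands in for the learned classifier, and the patch size and threshold are illustrative assumptions.

```python
# Toy sketch of heightmap-based traversability estimation: classify
# every PATCH x PATCH window of a heightmap. A relief threshold replaces
# the paper's trained CNN classifier (values are illustrative only).

PATCH = 3          # patch size in cells
MAX_STEP = 0.3     # assumed max height difference the robot can handle

def patch_traversable(patch):
    """Stand-in classifier: traversable if relief within the patch is small."""
    flat = [h for row in patch for h in row]
    return (max(flat) - min(flat)) <= MAX_STEP

def traversability_map(heightmap):
    """Classify every PATCH x PATCH window of the heightmap."""
    rows, cols = len(heightmap), len(heightmap[0])
    result = []
    for r in range(rows - PATCH + 1):
        out_row = []
        for c in range(cols - PATCH + 1):
            patch = [row[c:c + PATCH] for row in heightmap[r:r + PATCH]]
            out_row.append(patch_traversable(patch))
        result.append(out_row)
    return result

# Flat ground on the left, a steep step on the right
heightmap = [
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.1, 0.0, 1.0],
    [0.0, 0.0, 0.1, 1.0],
]
print(traversability_map(heightmap))  # [[True, False]]
```

A learned classifier additionally captures direction of travel (the "left to right" orientation in the abstract) and robot-specific locomotion, which a symmetric relief threshold like this cannot.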