Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM over the past fifteen years
has yielded workable systems that have found their way into various applications in
robotics and augmented reality. Although filter-based monocular SLAM systems
were once common, the more efficient keyframe-based solutions are
becoming the de facto methodology for building a monocular SLAM system. The
objective of this paper is threefold: first, the paper serves as a guideline
for people seeking to design their own monocular SLAM system according to specific
environmental constraints. Second, it presents a survey that covers the various
keyframe-based monocular SLAM systems in the literature, detailing the
components of their implementation, and critically assessing the specific
strategies adopted in each proposed solution. Third, the paper provides insight
into the direction of future research in this field, to address the major
limitations still facing monocular SLAM; namely, in the issues of illumination
changes, initialization, highly dynamic motion, poorly textured scenes,
repetitive textures, map maintenance, and failure recovery.
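
The keyframe-based methodology the survey centers on can be summarized by its
core decision rule. As an illustration only (the rule, names, and thresholds
below are generic defaults, not taken from the paper), a minimal sketch:

```python
def should_insert_keyframe(n_tracked, n_ref_features, frames_since_kf,
                           min_ratio=0.7, max_gap=20):
    """Generic keyframe test: spawn a new keyframe when tracking weakens
    relative to the reference keyframe or too many frames have elapsed.
    Thresholds are illustrative defaults, not values from the survey."""
    tracking_weak = n_tracked < min_ratio * n_ref_features
    too_old = frames_since_kf > max_gap
    return tracking_weak or too_old

# 150 of 240 reference features tracked after 5 frames -> weak tracking
print(should_insert_keyframe(150, 240, 5))  # True, since 150 < 0.7 * 240
```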
RT-SLAM: A Generic and Real-Time Visual SLAM Implementation
This article presents a new open-source C++ implementation to solve the SLAM
problem, which is focused on genericity, versatility and high execution speed.
It is based on an original object-oriented architecture that allows the
combination of numerous sensors and landmark types, and the integration of
various approaches proposed in the literature. The system's capabilities are
illustrated by the presentation of an inertial/vision SLAM approach, for which
several improvements over existing methods have been introduced, and that copes
with highly dynamic motions. Results with a hand-held camera are presented.
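
To make the described genericity concrete, here is a minimal Python sketch of
a sensor/landmark plug-in architecture of this kind; all class and method
names are hypothetical, not RT-SLAM's actual C++ API:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Hypothetical interface: each sensor type delivers raw measurements."""
    @abstractmethod
    def acquire(self): ...

class Landmark(ABC):
    """Hypothetical interface: each landmark parametrization can predict
    the measurement a sensor at a given pose should observe."""
    @abstractmethod
    def predict_measurement(self, sensor_pose): ...

class SlamFilter:
    """Accepts any mix of sensors and landmark types, in the spirit of the
    object-oriented combination the article describes."""
    def __init__(self, sensors, landmarks):
        self.sensors, self.landmarks = sensors, landmarks

    def step(self, sensor_pose):
        for sensor in self.sensors:
            z = sensor.acquire()
            for lm in self.landmarks:
                z_pred = lm.predict_measurement(sensor_pose)
                # the innovation z - z_pred would drive the filter update
```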
Towards Full Automated Drive in Urban Environments: A Demonstration in GoMentum Station, California
Each year, millions of motor vehicle traffic accidents all over the world
cause a large number of fatalities, injuries and significant material loss.
Automated Driving (AD) has the potential to drastically reduce such accidents. In
this work, we focus on the technical challenges that arise from AD in urban
environments. We present the overall architecture of an AD system and describe
in detail the perception and planning modules. The AD system, built on a
modified Acura RLX, was demonstrated in a course in GoMentum Station in
California. We demonstrated autonomous handling of 4 scenarios: traffic lights,
cross-traffic at intersections, construction zones and pedestrians. The AD
vehicle displayed safe behavior and performed consistently in repeated
demonstrations with slight variations in conditions. Overall, we completed 44
runs, encompassing 110 km of automated driving with only 3 cases where the
driver took over control of the vehicle, mostly due to errors in GPS
positioning. Our demonstration showed that robust and consistent behavior in
urban scenarios is possible, yet more investigation is necessary for full scale
roll-out on public roads.
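
The perception and planning modules described above imply the usual
sense-plan-act loop. As a schematic only (component names and interfaces are
placeholders, not the paper's software):

```python
def drive_loop(sensors, perception, planner, controller, is_active):
    """Schematic AD control loop; all components are placeholders."""
    while is_active():
        raw = sensors.read()            # cameras, lidar, radar, GPS/IMU
        world = perception.update(raw)  # lights, agents, lanes, obstacles
        traj = planner.plan(world)      # behavior and motion planning
        controller.execute(traj)       # steering and throttle commands
```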
Real-time Monocular Object SLAM
We present a real-time object-based SLAM system that leverages the largest
object database to date. Our approach comprises two main components: 1) a
monocular SLAM algorithm that exploits object rigidity constraints to improve
the map and find its real scale, and 2) a novel object recognition algorithm
based on bags of binary words, which provides live detections with a database
of 500 3D objects. The two components work together and benefit each other: the
SLAM algorithm accumulates information from the observations of the objects,
anchors object features to special map landmarks and sets constraints on the
optimization. At the same time, objects partially or fully located within the
map are used as a prior to guide the recognition algorithm, achieving higher
recall. We evaluate our proposal in five real environments, showing
improvements in map accuracy and efficiency with respect to other
state-of-the-art techniques.
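
The recognition component rests on comparing binary feature descriptors. A
minimal sketch of the underlying Hamming-distance matching follows; this is a
generic illustration, whereas the paper's system organizes descriptors into a
bag-of-binary-words vocabulary so the search stays fast:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between binary descriptors packed as uint8 arrays."""
    return int(np.unpackbits(a ^ b).sum())

def match(query, database):
    """Nearest object by descriptor distance. A real bag-of-binary-words
    index avoids this linear scan so 500 objects stay searchable live."""
    return min(database, key=lambda name: hamming(query, database[name]))

# Toy example with 32-byte (256-bit) ORB-like descriptors
rng = np.random.default_rng(0)
db = {n: rng.integers(0, 256, 32, dtype=np.uint8) for n in ("mug", "book")}
query = db["mug"].copy()
query[0] ^= 1                 # corrupt one bit of the 'mug' descriptor
print(match(query, db))       # 'mug' (distance 1 vs. ~128 to 'book')
```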
The LRU Rover for Autonomous Planetary Exploration and its Success in the SpaceBotCamp Challenge
The task of planetary exploration poses many challenges for a robot system,
from weight and size constraints to sensors and actuators suitable for
extraterrestrial environment conditions. As there is a significant
communication delay to other planets, the efficient operation of a robot
system requires a high level of autonomy. In this work, we present the Light
Weight Rover Unit (LRU), a small and agile rover prototype that we designed
for the challenges of planetary exploration. Its locomotion system with
individually steered wheels allows for high maneuverability in rough terrain,
and the use of stereo cameras as its main sensor ensures its applicability to
space missions. We implemented software components for self-localization in
GPS-denied environments, environment mapping, object search and localization,
and the autonomous pickup and assembly of objects with its arm. Additional
high-level mission control components facilitate both autonomous behavior and
remote monitoring of the system state over a delayed communication link. We
successfully demonstrated the autonomous capabilities of our LRU at the
SpaceBotCamp challenge, a national robotics contest with a focus on autonomous
planetary exploration. A robot had to autonomously explore a moon-like
rough-terrain environment, locate and collect two objects, and assemble them
after transport to a third object, which the LRU did on its first attempt, in
half of the allotted time and fully autonomously.
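
The high-level mission control mentioned above can be pictured as a simple
state sequencer. This is a toy sketch whose states merely echo the task
description; it is not the LRU's actual mission-control software:

```python
# Toy mission sequencer; states echo the SpaceBotCamp task, nothing more.
MISSION = ["EXPLORE", "LOCATE_OBJECTS", "PICK_OBJECT_1", "PICK_OBJECT_2",
           "TRANSPORT", "ASSEMBLE"]

def run_mission(execute):
    """execute(state) -> bool; advance only while each state succeeds."""
    for state in MISSION:
        if not execute(state):
            return False      # a real system would retry or replan here
    return True
```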
Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
This paper presents a robotic pick-and-place system that is capable of
grasping and recognizing both known and novel objects in cluttered
environments. The key new feature of the system is that it handles a wide range
of object categories without needing any task-specific training data for novel
objects. To achieve this, it first uses a category-agnostic affordance
prediction algorithm to select and execute among four different grasping
primitive behaviors. It then recognizes picked objects with a cross-domain
image classification framework that matches observed images to product images.
Since product images are readily available for a wide range of objects (e.g.,
from the web), the system works out-of-the-box for novel objects without
requiring any additional training data. Exhaustive experimental results
demonstrate that our multi-affordance grasping achieves high success rates for
a wide variety of objects in clutter, and our recognition algorithm achieves
high accuracy for both known and novel grasped objects. The approach was part
of the MIT-Princeton Team system that took 1st place in the stowing task at the
2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are
available online at http://arc.cs.princeton.edu
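
The cross-domain matching idea can be illustrated as nearest-neighbor search
in a shared embedding space. Below is a minimal sketch with random stand-in
vectors; the embedding network and similarity rule are placeholders, not the
MIT-Princeton system's actual model:

```python
import numpy as np

def recognize(obs_embedding, product_embeddings):
    """Label an observed object with the closest product image by cosine
    similarity. Placeholder logic; the actual system trains a network to
    embed camera and product images into a shared space."""
    def cosine(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(product_embeddings,
               key=lambda n: cosine(obs_embedding, product_embeddings[n]))

# Toy usage: random stand-ins for learned embeddings
rng = np.random.default_rng(1)
products = {"tape": rng.normal(size=128), "sponge": rng.normal(size=128)}
obs = products["tape"] + 0.1 * rng.normal(size=128)  # noisy view of 'tape'
print(recognize(obs, products))  # 'tape'
```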