Dynamic Objects Segmentation for Visual Localization in Urban Environments
Visual localization and mapping is a crucial capability to address many
challenges in mobile robotics. It constitutes a robust, accurate and
cost-effective approach for local and global pose estimation within prior maps.
Yet, in highly dynamic environments, like crowded city streets, problems arise
as major parts of the image can be covered by dynamic objects. Consequently,
visual odometry pipelines often diverge and the localization systems
malfunction as detected features are not consistent with the precomputed 3D
model. In this work, we present an approach to automatically detect dynamic
object instances to improve the robustness of vision-based localization and
mapping in crowded environments. By training a convolutional neural network
model with a combination of synthetic and real-world data, dynamic object
instance masks are learned in a semi-supervised way. The real-world data can be
collected with a standard camera and requires minimal further post-processing.
Our experiments show that a wide range of dynamic objects can be reliably
detected using the presented method. Promising performance is demonstrated on
our own dataset as well as on publicly available datasets, which also shows
the generalization capabilities of this approach.

Comment: 4 pages, submitted to the IROS 2018 Workshop "From Freezing to
Jostling Robots: Current Challenges and New Paradigms for Safe Robot
Navigation in Dense Crowds".
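Downstream, such instance masks are typically used to gate feature extraction
so that only static scene content reaches pose estimation. Below is a minimal
sketch of that use (not the authors' pipeline), where segment_dynamic is a
hypothetical stand-in for the trained CNN and OpenCV's ORB detector plays the
role of the feature front end:

import cv2
import numpy as np

def segment_dynamic(image):
    # Hypothetical placeholder for the paper's CNN: returns a uint8 mask
    # where 1 marks pixels belonging to dynamic-object instances.
    return np.zeros(image.shape[:2], dtype=np.uint8)

def static_features(image):
    # Detect ORB features only on pixels not covered by dynamic objects.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    dynamic = segment_dynamic(image)
    # OpenCV feature masks mark the valid region, so invert the dynamic mask.
    valid = np.where(dynamic > 0, 0, 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, valid)
    return keypoints, descriptors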
DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments
Simultaneous Localization and Mapping (SLAM) is considered a fundamental
capability for intelligent mobile robots. Over the past decades, many
impressive SLAM systems have been developed and have achieved good performance
under certain circumstances. However, some problems are still not well solved,
for example, how to handle moving objects in dynamic environments and how to
make robots truly understand their surroundings and accomplish advanced tasks.
In this paper, a robust semantic visual SLAM for dynamic environments, named
DS-SLAM, is proposed. Five threads run in parallel in DS-SLAM: tracking,
semantic segmentation, local mapping, loop closing, and dense semantic map
creation. DS-SLAM combines a semantic segmentation network with a moving
consistency check to reduce the impact of dynamic objects, which greatly
improves localization accuracy in dynamic environments. Meanwhile, a dense
semantic octo-tree map is produced, which can be employed for high-level
tasks. We conduct experiments both on the TUM RGB-D dataset and in real-world
environments. The results demonstrate that the absolute trajectory accuracy of
DS-SLAM improves by an order of magnitude compared with ORB-SLAM2, making it
one of the state-of-the-art SLAM systems for highly dynamic environments. The
code is available at https://github.com/ivipsourcecode/DS-SLAM.

Comment: 7 pages, accepted at the 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2018). The code is available at
https://github.com/ivipsourcecode/DS-SLAM.
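The moving consistency check at the heart of DS-SLAM rests on optical flow
and epipolar geometry: points tracked between frames that sit far from their
epipolar lines are likely to belong to moving objects. Below is a minimal
sketch of that idea with OpenCV (a reimplementation of the published idea,
not the authors' released code); prev_pts would come from, e.g.,
cv2.goodFeaturesToTrack as a float32 array of shape (N, 1, 2):

import cv2
import numpy as np

def moving_consistency_check(prev_gray, cur_gray, prev_pts, thresh=1.0):
    # Track points into the current frame with pyramidal Lucas-Kanade flow.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  prev_pts, None)
    ok = status.ravel() == 1
    p0 = prev_pts[ok].reshape(-1, 2)
    p1 = cur_pts[ok].reshape(-1, 2)
    # Robustly fit the epipolar geometry between the two frames.
    F, _ = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return p1, np.zeros(len(p1), dtype=bool)
    # Epipolar lines in the current image for the previous-frame points.
    lines = cv2.computeCorrespondEpilines(p0.reshape(-1, 1, 2), 1, F)
    lines = lines.reshape(-1, 3)
    # Point-to-line distance in pixels; large distances flag moving points.
    dist = np.abs(lines[:, 0] * p1[:, 0] + lines[:, 1] * p1[:, 1]
                  + lines[:, 2]) / np.hypot(lines[:, 0], lines[:, 1])
    return p1, dist > thresh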
The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detection,
object tracking, and path planning are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. Unfortunately, this leads to unnatural
blurring and stretching of objects at larger distances, owing to the limited
resolution of the camera, which restricts its applicability. In this paper, we
present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus revealing the underlying road layout in an improved fashion. We
demonstrate our framework using real-world data from the Oxford RobotCar
Dataset and show that scene understanding tasks directly benefit from our
boosted IPM approach.

Comment: equal contribution of first two authors, 8 full pages, 6 figures,
accepted at IV 2019.
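For reference, the classical IPM that this work boosts amounts to a single
planar homography under a flat-road assumption. A minimal sketch, where the
four source corners are hypothetical pixel coordinates of a rectangular
ground patch that would normally come from camera calibration:

import cv2
import numpy as np

def classical_ipm(image, src_pts, out_size=(400, 600)):
    # src_pts: four pixel corners of a flat road patch, ordered
    # top-left, top-right, bottom-right, bottom-left.
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, H, (w, h))

# Hypothetical corners for a 1280x960 front-facing camera:
# top_down = classical_ipm(frame, [(540, 600), (740, 600),
#                                  (1180, 950), (100, 950)])

This baseline exhibits exactly the degradation the abstract describes: pixels
far from the camera are spread over many output cells by the warp, producing
the blurring and stretching that no homography alone can remove.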
Towards Odor-Sensitive Mobile Robots
J. Monroy, J. Gonzalez-Jimenez, "Towards Odor-Sensitive Mobile Robots", Electronic Nose Technologies and Advances in Machine Olfaction, IGI Global, pp. 244--263, 2018, doi:10.4018/978-1-5225-3862-2.ch012
Preprint version, with permission of the publisher.

Out of all the components of a mobile robot, its sensorial system is
undoubtedly among the most critical ones when operating in real environments.
Until now, these sensorial systems have mostly relied on range sensors (laser
scanners, sonar, active triangulation) and cameras. While electronic noses
have barely been employed, they can provide complementary sensory information
that is vital for some applications, as it is for humans. This chapter
analyzes the motivation for providing a robot with gas-sensing capabilities
and reviews some of the hurdles that are preventing smell from achieving the
importance of other sensing modalities in robotics. The achievements made so
far are reviewed to illustrate the current status of the three main fields
within robot olfaction: the classification of volatile substances, the
spatial estimation of gas dispersion from sparse measurements, and the
localization of the gas source within a known environment.
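Of the three fields, the spatial estimation of gas dispersion from sparse
measurements is the most readily illustrated in code. A common family of
methods spreads each reading onto a grid with a Gaussian kernel, in the
spirit of Kernel DM+V; the sketch below follows that general recipe and is
not necessarily the method discussed in the chapter:

import numpy as np

def gas_map(positions, readings, grid_shape=(50, 50), cell=0.1, sigma=0.3):
    # positions: (N, 2) sensor locations in metres; readings: (N,) values.
    # Returns a grid of kernel-weighted mean concentration estimates.
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    centers = np.stack([xs * cell, ys * cell], axis=-1)  # (H, W, 2)
    weights = np.zeros(grid_shape)
    weighted = np.zeros(grid_shape)
    for p, r in zip(positions, readings):
        k = np.exp(-np.sum((centers - p) ** 2, axis=-1) / (2.0 * sigma ** 2))
        weights += k
        weighted += k * r
    return weighted / np.maximum(weights, 1e-9)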