DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments
Simultaneous Localization and Mapping (SLAM) is considered a
fundamental capability for intelligent mobile robots. Over the past decades,
many impressive SLAM systems have been developed, achieving good performance
under certain circumstances. However, some problems are still not well solved,
for example how to handle moving objects in dynamic environments and how
to make robots truly understand their surroundings and accomplish advanced
tasks. In this paper, a robust semantic visual SLAM for dynamic
environments, named DS-SLAM, is proposed. Five threads run in parallel in
DS-SLAM: tracking, semantic segmentation, local mapping, loop closing, and
dense semantic map creation. DS-SLAM combines a semantic segmentation network
with a moving-consistency check to reduce the impact of dynamic objects,
which greatly improves localization accuracy in dynamic environments.
Meanwhile, a dense semantic octree map is produced, which can be employed
for high-level tasks. We conduct experiments both on the TUM RGB-D dataset and in
a real-world environment. The results demonstrate that the absolute trajectory
accuracy of DS-SLAM improves by one order of magnitude compared with
ORB-SLAM2, making it one of the state-of-the-art SLAM systems in high-dynamic
environments. The code is available at https://github.com/ivipsourcecode/DS-SLAM
Comment: 7 pages, accepted at the 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2018)
Real-Time Panoramic Tracking for Event Cameras
Event cameras are a paradigm shift in camera technology. Instead of full
frames, the sensor captures a sparse set of events caused by intensity changes.
Since only the changes are transferred, these cameras are able to capture quick
movements of objects in the scene or of the camera itself. In this work we
propose a novel method to perform camera tracking of event cameras in a
panoramic setting with three degrees of freedom, using a direct camera
tracking formulation similar to state-of-the-art methods in visual odometry. We show
that the minimal information needed for simultaneous tracking and mapping is
the spatial position of events, without using the appearance of the imaged
scene point. We verify the robustness to fast camera movements and dynamic
objects in the scene on a recently proposed dataset and self-recorded
sequences.
Comment: Accepted to the International Conference on Computational Photography 201
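To make the geometry concrete, the following is a small Python sketch of the rotation-only projection such a tracker rests on, under an assumed equirectangular panorama model; the function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def events_to_panorama(events_xy, K_inv, R, pano_w, pano_h):
    """Back-project pixel events to rays, rotate them by the current 3-DoF
    camera rotation R, and map the rays to equirectangular panorama pixels.
    events_xy: (N, 2) event pixel coords; K_inv: inverse intrinsics (3x3)."""
    ones = np.ones((len(events_xy), 1))
    rays = (R @ (K_inv @ np.hstack([events_xy, ones]).T)).T
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    lon = np.arctan2(rays[:, 0], rays[:, 2])            # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(rays[:, 1], -1.0, 1.0))     # latitude in [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * pano_w).astype(int) % pano_w
    v = np.clip(((lat + np.pi / 2) / np.pi * pano_h).astype(int), 0, pano_h - 1)
    return u, v

def alignment_score(u, v, event_map):
    """Score a rotation hypothesis: reprojected events should land where the
    panoramic map has already accumulated events."""
    return float(event_map[v, u].sum())
```

Tracking then amounts to finding the rotation that best aligns incoming events with the map, for example by evaluating small rotation perturbations or gradient-based optimization, while mapping accumulates events at the winning panorama coordinates. Note that only event positions enter this objective, matching the abstract's claim that appearance is not needed.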
Multisensor Poisson Multi-Bernoulli Filter for Joint Target-Sensor State Tracking
In a typical multitarget tracking (MTT) scenario, the sensor state is either
assumed known, or tracking is performed in the sensor's (relative) coordinate
frame. This assumption does not hold when the sensor, e.g., an automotive
radar, is mounted on a vehicle, and the target state should be represented in a
global (absolute) coordinate frame. It is then important for MTT to account
for the uncertain location of the vehicle on which the sensor is mounted. In
this paper, we present a low-complexity multisensor Poisson multi-Bernoulli MTT
filter, which jointly tracks the uncertain vehicle state and the target states.
Measurements collected by different sensors mounted on multiple vehicles with
varying location uncertainty are incorporated sequentially based on the arrival
of new sensor measurements. In doing so, targets observed from a sensor mounted
on a well-localized vehicle reduce the state uncertainty of other poorly
localized vehicles, provided that a common non-empty subset of targets is
observed. A low-complexity filter is obtained through approximations of the joint
sensor-feature state density that minimize the Kullback-Leibler divergence (KLD).
Results from synthetic as well as experimental measurement data, collected in a
vehicle driving scenario, demonstrate the performance benefits of joint
vehicle-target state tracking.
Comment: 13 pages, 7 figures
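For Gaussian densities, the KLD-minimizing single-Gaussian approximation of a mixture is obtained by moment matching, which is the generic building block behind this kind of complexity reduction. Below is a minimal Python sketch of that step; it is illustrative and not the paper's full PMB recursion over the joint vehicle-target state.

```python
import numpy as np

def moment_match(weights, means, covs):
    """Collapse a Gaussian mixture to the single Gaussian minimizing
    KL(mixture || Gaussian): match the mixture's mean and covariance."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                     # normalize mixture weights
    means = np.asarray(means, dtype=float)
    mu = w @ means                                   # weighted mean, shape (D,)
    cov = np.zeros((means.shape[1], means.shape[1]))
    for wi, mi, Pi in zip(w, means, covs):
        d = (mi - mu)[:, None]
        cov += wi * (np.asarray(Pi) + d @ d.T)       # within- plus between-component spread
    return mu, cov
```

In a joint sensor-target filter, each sequential sensor update spawns a mixture of data-association hypotheses over the joint state; collapsing that mixture by moment matching after every update is what keeps the filter's complexity bounded.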