Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data
Localization is a key requirement for mobile robot autonomy and human-robot
interaction. Vision-based localization is accurate and flexible, however, it
incurs a high computational burden which limits its application on many
resource-constrained platforms. In this paper, we address the problem of
performing real-time localization in large-scale 3D point cloud maps of
ever-growing size. While most systems using multi-modal information reduce
localization time by employing side-channel information in a coarse manner (e.g.,
WiFi for a rough prior position estimate), we propose to inter-weave the map
with rich sensory data. This multi-modal approach achieves two key goals
simultaneously. First, it enables us to harness additional sensory data to
localise in real time against a map covering a vast area; second, it allows us
to roughly localise devices that are not equipped with a camera. The
key to our approach is a localization policy based on a sequential Monte Carlo
estimator. The localiser uses this policy to attempt point-matching only in
nodes where it is likely to succeed, significantly increasing the efficiency of
the localization process. The proposed multi-modal localization system is
evaluated extensively in a large museum building. The results show that our
multi-modal approach not only increases the localization accuracy but
significantly reduces computational time.
Presented at the IEEE-RAS International Conference on Humanoid Robots
(Humanoids) 201
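The localization policy described above can be sketched as a particle filter over map nodes: belief is propagated and reweighted using cheap side-channel data, and expensive point-matching is attempted only in nodes holding enough belief mass. This is an illustrative sketch under assumed interfaces (`node_graph`, `sensor_likelihood`, and the threshold are hypothetical, not taken from the paper):

```python
import random

def update_particles(particles, move_prob, node_graph):
    # Motion model: each particle hops to a random neighbouring map node
    # with probability move_prob, otherwise stays put.
    return [random.choice(node_graph[p]) if random.random() < move_prob else p
            for p in particles]

def reweight(particles, sensor_likelihood):
    # Weight each particle by how well side-channel data (e.g. WiFi
    # signal strength) fits its node, then normalise to a distribution.
    weights = [sensor_likelihood(p) for p in particles]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def nodes_to_match(particles, weights, threshold=0.2):
    # Attempt expensive visual point-matching only in nodes that hold
    # at least `threshold` of the total belief mass.
    mass = {}
    for p, w in zip(particles, weights):
        mass[p] = mass.get(p, 0.0) + w
    return [n for n, m in mass.items() if m >= threshold]
```

Concentrating the matcher on high-belief nodes is what turns an exhaustive map-wide search into a handful of candidate checks per frame.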
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
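The "de facto standard formulation" referred to above is maximum a posteriori estimation over a factor graph. As a standard sketch (notation assumed for illustration, not drawn from this survey):

```latex
X^{\star} \;=\; \operatorname*{arg\,max}_{X} \; p(X \mid Z)
          \;=\; \operatorname*{arg\,min}_{X} \; \sum_{k}
          \big\lVert h_k(X_k) - z_k \big\rVert^{2}_{\Omega_k}
```

Here each factor $k$ relates a subset of robot-pose and landmark variables $X_k$ to a measurement $z_k$ through a model $h_k$ with information matrix $\Omega_k$; under Gaussian noise, MAP inference reduces to the nonlinear least-squares problem solved by modern SLAM back ends.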
ViWiD: Leveraging WiFi for Robust and Resource-Efficient SLAM
Recent interest towards autonomous navigation and exploration robots for
indoor applications has spurred research into indoor Simultaneous Localization
and Mapping (SLAM) robot systems. While most of these SLAM systems use Visual
and LiDAR sensors in tandem with an odometry sensor, these odometry sensors
drift over time. To combat this drift, Visual SLAM systems deploy compute and
memory intensive search algorithms to detect `Loop Closures', which make the
trajectory estimate globally consistent. To circumvent these resource (compute
and memory) intensive algorithms, we present ViWiD, which integrates WiFi and
Visual sensors in a dual-layered system. This dual-layered approach separates
the tasks of local and global trajectory estimation, making ViWiD resource
efficient while achieving performance on par with or better than state-of-the-art
Visual SLAM. We demonstrate ViWiD's performance on four datasets, covering over
1500 m of traversed path, and show 4.3x and 4x reductions in compute and memory
consumption, respectively, compared to state-of-the-art Visual and LiDAR SLAM
systems with on-par SLAM performance.
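The resource saving described above comes from replacing an exhaustive visual loop-closure search with a cheap side-channel gate. One way such gating can work, sketched under assumed data structures (WiFi signatures as `{AP: RSSI}` dictionaries; the similarity measure and threshold are illustrative, not ViWiD's actual method):

```python
def wifi_similarity(sig_a, sig_b):
    # Similarity of two {AP id: RSSI dBm} signatures: penalise RSSI
    # disagreement on shared access points, scaled by AP overlap.
    shared = set(sig_a) & set(sig_b)
    if not shared:
        return 0.0
    mean_diff = sum(abs(sig_a[m] - sig_b[m]) for m in shared) / len(shared)
    rssi_score = max(0.0, 1.0 - mean_diff / 30.0)  # 30 dBm ~ "very different"
    overlap = len(shared) / max(len(sig_a), len(sig_b))
    return rssi_score * overlap

def loop_closure_candidates(current_sig, keyframes, min_sim=0.5):
    # Only keyframes whose WiFi signature resembles the current one are
    # handed to the expensive visual place-recognition stage.
    return [kid for kid, sig in keyframes.items()
            if wifi_similarity(current_sig, sig) >= min_sim]
```

Because the gate runs in time linear in the number of keyframes with tiny per-keyframe cost, the compute- and memory-heavy visual search touches only a few candidates instead of the whole map.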
Autonomous 3D mapping and surveillance of mines with MAVs
A dissertation Submitted to the Faculty of Science, University of the
Witwatersrand, Johannesburg, for the degree of Master of Science.
12 July 2017.
The mapping of mines, both operational and abandoned, is a long, difficult and occasionally
dangerous task, especially in the latter case. Recent developments in active and passive consumer
grade sensors, as well as quadcopter drones, present the opportunity to automate these
challenging tasks, providing cost and safety benefits. The goal of this research is to develop an
autonomous vision-based mapping system that employs quadrotor drones to explore and map
sections of mine tunnels. The system is equipped with inexpensive structured-light depth cameras
in place of traditional laser scanners, making the quadrotor setup more viable to produce in
bulk. A modified version of Microsoft's Kinect Fusion algorithm is used to construct 3D point
clouds in real time as the agents traverse the scene. Finally, the generated and merged point
clouds from the system are compared with those produced by current LiDAR scanners.
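Comparing a reconstructed cloud against a LiDAR reference is commonly done with a cloud-to-cloud nearest-neighbour error. A minimal brute-force sketch (the dissertation's actual evaluation metric is not specified here; real pipelines would use a k-d tree instead of the O(n·m) loop):

```python
import math

def cloud_to_cloud_error(generated, reference):
    # Mean distance from each generated point to its nearest neighbour
    # in the reference cloud (e.g. a LiDAR scan). Lower is better.
    def nearest(p):
        return min(math.dist(p, q) for q in reference)
    return sum(nearest(p) for p in generated) / len(generated)
```

A perfectly aligned reconstruction scores 0.0; systematic drift or scale error shows up directly as a larger mean distance.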
Low-Cost Multiple-MAV SLAM Using Open Source Software
We demonstrate a multiple micro aerial vehicle (MAV) system capable of supporting autonomous exploration and navigation in unknown environments using only a sensor commonly found in low-cost, commercially available MAVs—a front-facing monocular camera. We adapt a popular open source monocular SLAM library, ORB-SLAM, to support multiple inputs and present a system capable of effective cross-map alignment that can be theoretically generalized for use with other monocular SLAM libraries. Using our system, a single central ground control station is capable of supporting up to five MAVs simultaneously without a loss in mapping quality as compared to single-MAV ORB-SLAM. We conduct testing using both benchmark datasets and real-world trials to demonstrate the system's capability and real-time effectiveness.
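The cross-map alignment step mentioned above needs a transform that registers one MAV's map frame onto another's given matched landmarks. A simplified 2D rigid (rotation + translation) Procrustes alignment illustrates the idea; this is an assumed sketch, not the paper's method, and full monocular alignment must also estimate scale:

```python
import math

def align_maps(points_a, points_b):
    # Estimate theta, tx, ty mapping map A onto map B from corresponding
    # landmark pairs: b ~ R(theta) @ a + t (least-squares rigid fit).
    n = len(points_a)
    cax = sum(x for x, _ in points_a) / n
    cay = sum(y for _, y in points_a) / n
    cbx = sum(x for x, _ in points_b) / n
    cby = sum(y for _, y in points_b) / n
    # Optimal rotation from cross/dot sums of the centred correspondences.
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(points_a, points_b):
        ax -= cax; ay -= cay; bx -= cbx; by -= cby
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that carries map A's centroid onto map B's.
    tx = cbx - (c * cax - s * cay)
    ty = cby - (s * cax + c * cay)
    return theta, tx, ty
```

Once such a transform is found between two maps that observe common landmarks, every pose and map point from one MAV can be expressed in the other's frame, which is what lets a single ground station fuse the maps.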