MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers in line with the idea of cooperative positioning
for multiple robots [1]. To this end, we introduce a simple optimization scheme
that realizes visual mobile marker odometry via accurate fixed marker-based
camera positioning and analyse the characteristics of errors inherent to the
method compared to classical fixed marker-based navigation and visual odometry.
In addition, we provide a specific UAV-UGV configuration that allows for
continuous movement of the UAV without stopping, and a minimal
caterpillar-like configuration that works with a single UGV. Finally, we
present a real-world implementation and evaluation of the proposed UAV-UGV
configuration.
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between these ground robots in GPS-denied
environments. In this way, one MAV can support multiple ground robots on
demand. Furthermore, our system enables different visual detection regimes,
enhanced operability, and return-home functionality. We evaluate our system in
real-world SaR scenarios.
Comment: Accepted for publication in the 2018 IEEE International Symposium on
Safety, Security and Rescue Robotics (SSRR).
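The local visual servoing the system builds on can be illustrated with a bare-bones image-based proportional controller that keeps the tracked ground robot centered in the MAV's camera. This sketch is our own simplification; the gain, names, and velocity convention are assumptions, not the authors' controller:

```python
def servo_command(target_px, image_center, gain=0.002):
    """Image-based visual servoing, illustrative P-controller: command
    lateral/vertical velocity proportional to the pixel error between the
    tracked ground robot and the image center, so the MAV holds a
    third-person view without needing its own global localization."""
    ex = target_px[0] - image_center[0]
    ey = target_px[1] - image_center[1]
    # Negative feedback: move to drive the pixel error toward zero.
    return (-gain * ex, -gain * ey)

cmd = servo_command(target_px=(400, 300), image_center=(320, 240))
```

Transferring between ground robots then amounts to swapping which robot's detection feeds `target_px`, using the robots' shared global localization to fly between them.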
Data-Efficient Decentralized Visual SLAM
Decentralized visual simultaneous localization and mapping (SLAM) is a
powerful tool for multi-robot applications in environments where absolute
positioning systems are not available. Being visual, it relies on cameras,
which are cheap, lightweight, and versatile sensors, and being decentralized, it does not
rely on communication to a central ground station. In this work, we integrate
state-of-the-art decentralized SLAM components into a new, complete
decentralized visual SLAM system. To allow for data association and
co-optimization, existing decentralized visual SLAM systems regularly exchange
the full map data between all robots, incurring large data transfers at a
complexity that scales quadratically with the robot count. In contrast, our
method performs efficient data association in two stages: in the first stage a
compact full-image descriptor is deterministically sent to only one robot. In
the second stage, which is only executed if the first stage succeeded, the data
required for relative pose estimation is sent, again to only one robot. Thus,
data association scales linearly with the robot count and uses highly compact
place representations. For optimization, a state-of-the-art decentralized
pose-graph optimization method is used. It exchanges a minimum amount of data
which is linear with trajectory overlap. We characterize the resulting system
and identify bottlenecks in its components. The system is evaluated on publicly
available data and we provide open access to the code.
Comment: 8 pages, submitted to ICRA 201
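The key scaling idea, sending each compact descriptor deterministically to exactly one robot, can be sketched as follows. We use a hash-based assignment purely for illustration; the actual system's partitioning of descriptor space and its message formats are not described here, so all names are hypothetical:

```python
import hashlib

NUM_ROBOTS = 5

def responsible_robot(descriptor: bytes, num_robots: int = NUM_ROBOTS) -> int:
    """Stage 1 of data association: deterministically map a compact
    full-image descriptor to the single robot responsible for that part
    of descriptor space. Every robot computes the same assignment without
    any coordination, so each query travels to exactly one peer."""
    digest = hashlib.sha1(descriptor).digest()
    return int.from_bytes(digest[:4], "big") % num_robots

# Per-query traffic is one small message to one robot: O(n) total with
# n robots, instead of the O(n^2) all-to-all full-map exchange.
query = b"compact-full-image-descriptor"
target = responsible_robot(query)
```

Stage 2, sending the data for relative pose estimation, only fires when the responsible robot reports a place-recognition match, which keeps the expensive transfer rare and again point-to-point.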
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
IMPLEMENTATION OF A LOCALIZATION-ORIENTED HRI FOR WALKING ROBOTS IN THE ROBOCUP ENVIRONMENT
This paper presents the design and implementation of a human–robot interface capable of evaluating robot localization performance and maintaining full control of robot behaviors in the RoboCup domain. The system consists of legged robots, behavior modules, an overhead visual tracking system, and a graphic user interface. A human–robot communication framework is designed for executing cooperative and competitive processing tasks between users and robots, built on an object-oriented, modularized software architecture for operability and functionality. Some experimental results are presented to show the performance of the proposed system based on simulated and real-time information.
Dynamic graph-based search in unknown environments
A novel graph-based approach to search in unknown environments is presented. A virtual geometric structure is imposed on the environment, represented in computer memory by a graph. Algorithms use this representation to coordinate a team of robots (or entities). Local discovery of environmental features causes dynamic expansion of the graph, resulting in global exploration of the unknown environment. The algorithm is shown to have O(k·n_H) time
complexity, where n_H is the number of vertices of the discovered environment and 1 <= k <= n_H. A maximum bound on the length of the resulting walk is given.
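The dynamic graph expansion can be sketched as a frontier search in which visiting a vertex reveals its neighbors through local sensing. This is a minimal single-robot illustration with assumed names, not the paper's coordination algorithm:

```python
from collections import deque

def explore(start, discover_neighbors):
    """Search an environment known only through local discovery: each
    visit to a vertex reveals its neighbors, which are appended to the
    graph and the frontier (dynamic expansion), until the whole reachable
    environment has been explored."""
    graph = {start: set()}
    frontier = deque([start])
    order = []
    while frontier:
        v = frontier.popleft()
        order.append(v)
        for u in discover_neighbors(v):   # local sensing at vertex v
            graph[v].add(u)
            if u not in graph:            # dynamic graph expansion
                graph[u] = set()
                frontier.append(u)
    return graph, order

# Toy environment: a 4-vertex cycle, revealed one vertex at a time.
env = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
graph, order = explore(0, lambda v: env[v])
```

Each vertex of the discovered environment is processed once, which is where the n_H factor in the stated time complexity comes from; coordinating k robots over the shared graph contributes the remaining factor.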
Pushbroom Stereo for High-Speed Navigation in Cluttered Environments
We present a novel stereo vision algorithm that is capable of obstacle
detection on a mobile CPU at 120 frames per second. Our system
performs a subset of standard block-matching stereo processing, searching only
for obstacles at a single depth. By using an onboard IMU and state estimator,
we can recover the position of obstacles at all other depths, building and
updating a full depth-map at framerate.
Here, we describe both the algorithm and our implementation on a high-speed,
small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system
requires no external sensing or computation and is, to the best of our
knowledge, the first high-framerate stereo detection system running onboard a
small UAV.
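The core trick, block-matching at a single fixed disparity only, can be sketched as follows. This is a toy sketch with assumed parameter values; the real system adds texture checks, validity tests, and IMU-based propagation of past detections to other depths:

```python
import numpy as np

def single_depth_obstacles(left, right, disparity, block=5, thresh=10.0):
    """Flag obstacle blocks at ONE depth only: a left-image block is a hit
    when the right-image block shifted by `disparity` pixels matches it
    (low mean absolute difference). Searching a single disparity instead
    of the full range is what makes 120 fps feasible on a mobile CPU."""
    h, w = left.shape
    hits = []
    for y in range(0, h - block, block):
        for x in range(disparity, w - block, block):
            l = left[y:y + block, x:x + block].astype(np.float32)
            r = right[y:y + block,
                      x - disparity:x - disparity + block].astype(np.float32)
            if np.abs(l - r).mean() < thresh:
                hits.append((x, y))
    return hits

# Toy frames: the right image is the left image shifted by 4 pixels,
# so every block matches at disparity 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40))
right = np.roll(left, -4, axis=1)
hits = single_depth_obstacles(left, right, disparity=4)
```

Detections at this single depth would then be pushed into a world-frame map via the onboard state estimator, so that forward motion fills in the remaining depths over time.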