Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications, such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between these ground robots in GPS-denied
environments. In this way, one MAV can support multiple ground robots on
demand. Furthermore, our system enables different visual detection regimes,
enhanced operability, and return-home functionality. We evaluate our system in
real-world SaR scenarios. Comment: Accepted for publication in 2018 IEEE
International Symposium on Safety, Security and Rescue Robotics (SSRR).
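
The third-person-view behavior described above can be illustrated with a toy image-based servo law (our own sketch, not the authors' implementation; the function and its parameters are hypothetical):

```python
import numpy as np

def third_person_servo(target_px, image_center, focal_px, standoff_m, gain=0.5):
    """Proportional image-based servo command (hypothetical sketch).

    target_px:    (u, v) pixel position of the tracked ground robot
    image_center: (cx, cy) principal point of the MAV camera
    focal_px:     focal length in pixels
    standoff_m:   assumed depth of the robot at the desired standoff
    Returns a lateral/vertical velocity command (m/s) that drives the
    pixel error to zero at the assumed depth.
    """
    err = np.asarray(target_px, float) - np.asarray(image_center, float)
    # Back-project the pixel error to a metric error at the standoff
    # depth, then apply a proportional gain.
    metric_err = err * standoff_m / focal_px
    return -gain * metric_err

# Example: robot appears 20 px right and below the image center.
cmd = third_person_servo((340, 260), (320, 240), focal_px=400.0, standoff_m=4.0)
```

In practice such a law would be combined with the teammates' global localization, as the abstract describes, to hand the MAV off between ground robots.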
Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups
A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of representing the entire 3D formation as a convex hull projected along a desired path that the group has to follow. Such an approach provides a collision-free solution and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
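
The convex-hull representation of the formation can be sketched as follows (a hypothetical illustration, not the authors' code; `formation_clear` and its coarse vertex-distance clearance check are our simplifications):

```python
from math import dist

def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2D points (counter-clockwise)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def formation_clear(robot_xyz, obstacles_xy, min_clearance):
    """Coarse clearance check: project the 3D formation onto the ground
    plane, take its convex hull, and require every obstacle to keep at
    least min_clearance from all hull vertices (a vertex-distance
    approximation of the true hull-to-obstacle distance)."""
    hull = convex_hull([(x, y) for x, y, _ in robot_xyz])
    return all(min(dist(v, o) for v in hull) >= min_clearance
               for o in obstacles_xy)
```

In the paper's setting this hull would be swept along the desired path inside the MPC, rather than tested at a single pose as here.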
UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle
(UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a
flying sensor but also as a tether attachment device. Two robots are connected
with a tether, allowing the UAV to anchor the tether to a structure located at
the top of steep terrain that is impossible for UGVs to reach. This enhances
the poor traversability of the UGV, not only by providing a wider range of
scanning and mapping from the air, but also by allowing the UGV to climb steep
terrain by winding the tether. In addition, we present an autonomous framework
for the collaborative navigation and tether attachment in an unknown
environment. The UAV employs visual inertial navigation with 3D voxel mapping
and obstacle avoidance planning. The UGV makes use of the voxel map and
generates an elevation map to execute path planning based on a traversability
analysis. Furthermore, we compare the pros and cons of possible
tether-anchoring methods from multiple points of view. To increase the
probability of successful anchoring, we evaluate the anchoring strategy
experimentally. Finally, the feasibility and capability of our proposed system
are demonstrated by an autonomous mission experiment in the field with an
obstacle and a cliff. Comment: 7 pages, 8 figures, accepted to 2019
International Conference on Robotics & Automation. Video:
https://youtu.be/UzTT8Ckjz1
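
The traversability analysis on an elevation map mentioned above can be sketched with a simple slope-threshold rule (our own illustrative sketch; the authors' analysis may use different criteria):

```python
import numpy as np

def traversable(elev, cell_size, max_slope_deg):
    """Binary traversability mask from an elevation map (sketch).

    elev:          2D grid of terrain heights (meters)
    cell_size:     grid resolution (meters per cell)
    max_slope_deg: steepest slope the UGV can climb
    A cell is traversable when the steepest slope to any 4-neighbor
    stays below the threshold.
    """
    dz_x = np.abs(np.diff(elev, axis=1))  # height steps between x-neighbors
    dz_y = np.abs(np.diff(elev, axis=0))  # height steps between y-neighbors
    slope_x = np.zeros_like(elev)
    slope_y = np.zeros_like(elev)
    # Attribute each step to both cells that share the edge.
    slope_x[:, :-1] = np.maximum(slope_x[:, :-1], dz_x)
    slope_x[:, 1:] = np.maximum(slope_x[:, 1:], dz_x)
    slope_y[:-1, :] = np.maximum(slope_y[:-1, :], dz_y)
    slope_y[1:, :] = np.maximum(slope_y[1:, :], dz_y)
    steepest = np.degrees(np.arctan(np.maximum(slope_x, slope_y) / cell_size))
    return steepest <= max_slope_deg
```

A path planner would then search only over cells marked traversable, with the tether extending the reachable set across cells that fail the threshold.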
Navigation, localization and stabilization of formations of unmanned aerial and ground vehicles
A leader-follower formation driving algorithm developed for the control of heterogeneous groups of unmanned micro aerial and ground vehicles stabilized under top-view relative localization is presented in this paper. The core of the proposed method lies in a novel avoidance function, in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. Such a representation of the formation provides collision-free trajectories for the robots and respects the requirement of direct visibility between team members in environments with static as well as dynamic obstacles, which is crucial for the top-view localization. The algorithm is suited for use with a simple yet stable vision-based navigation of the group (referred to as GeNav), which, together with the onboard relative localization, enables the deployment of large teams of micro-scale robots in environments without any available global localization system. We formulate a novel Model Predictive Control (MPC) based concept that responds to the changing environment and provides a robust solution that tolerates failures of team members. The performance of the proposed method is verified by numerical and hardware experiments inspired by reconnaissance and surveillance missions.
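
The leader-follower idea can be illustrated with a minimal sketch that generates follower references at a fixed offset expressed in the leader's path frame (hypothetical code, not the GeNav implementation):

```python
import numpy as np

def follower_reference(leader_path, offset):
    """Reference trajectory for a follower holding a fixed offset in the
    leader's path frame (sketch).

    leader_path: (T, 2) array-like of leader positions along the path
    offset:      (dx, dy) desired offset in the leader frame
                 (x forward along the path, y to the left)
    """
    path = np.asarray(leader_path, float)
    refs = []
    for k in range(len(path)):
        # Leader heading from finite differences along the path.
        a = path[min(k + 1, len(path) - 1)] - path[max(k - 1, 0)]
        th = np.arctan2(a[1], a[0])
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        refs.append(path[k] + R @ np.asarray(offset, float))
    return np.array(refs)
```

An MPC tracker for each follower would then follow these references subject to the avoidance and visibility constraints the abstract describes.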
Active Collaborative Localization in Heterogeneous Robot Teams
Accurate and robust state estimation is critical for autonomous navigation of
robot teams. This task is especially challenging for large groups of size,
weight, and power (SWAP)-constrained aerial robots operating in
perceptually-degraded GPS-denied environments. We can, however, actively
increase the amount of perceptual information available to such robots by
augmenting them with a small number of more expensive, but less
resource-constrained, agents. Specifically, the latter can serve as sources of
perceptual information themselves. In this paper, we study the problem of
optimally positioning (and potentially navigating) a small number of more
capable agents to enhance the perceptual environment for their
lightweight, inexpensive teammates that only need to rely on cameras and IMUs.
We propose a numerically robust, computationally efficient approach to solve
this problem via nonlinear optimization. Our method outperforms the standard
approach based on the greedy algorithm, while matching the accuracy of a
heuristic evolutionary scheme for global optimization at a fraction of its
running time. Ultimately, we validate our solution in both photorealistic
simulations and real-world experiments. In these experiments, we use
lidar-based autonomous ground vehicles as the more capable agents, and
vision-based aerial robots as their SWAP-constrained teammates. Our method is
able to reduce drift in visual-inertial odometry by as much as 90%, and it
outperforms random positioning of lidar-equipped agents by a significant
margin. Furthermore, our method can be generalized to different types of robot
teams with heterogeneous perception capabilities. It has a wide range of
applications, such as surveying and mapping challenging dynamic environments,
and enabling resilience to large-scale perturbations that can be caused by
earthquakes or storms. Comment: To appear in Robotics: Science and Systems
(RSS) 202
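
The greedy baseline that the paper compares against can be sketched as follows (our own illustration; coverage within a sensing radius stands in for the paper's actual perceptual-information objective):

```python
from math import dist

def greedy_positions(candidates, aerial_robots, k, radius):
    """Greedy baseline: place k capable agents at candidate sites so as
    to maximize the number of SWaP-constrained robots within sensing
    radius (a coverage surrogate for the localization benefit; sketch).

    candidates:    list of (x, y) candidate sites for capable agents
    aerial_robots: list of (x, y) positions of constrained robots
    """
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, -1
        for c in candidates:
            if c in chosen:
                continue
            # Marginal gain: newly covered robots only.
            gain = sum(1 for i, r in enumerate(aerial_robots)
                       if i not in covered and dist(c, r) <= radius)
            if gain > best_gain:
                best, best_gain = c, gain
        chosen.append(best)
        covered |= {i for i, r in enumerate(aerial_robots)
                    if dist(best, r) <= radius}
    return chosen
```

The paper's contribution replaces this greedy selection with a nonlinear optimization over continuous positions, which the abstract reports is both faster than evolutionary search and more accurate than the greedy scheme.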
System for deployment of groups of unmanned micro aerial vehicles in GPS-denied environments using onboard visual relative localization
A complex system for the control of swarms of micro aerial vehicles (MAVs), in the literature also called unmanned aerial vehicles (UAVs) or unmanned aerial systems (UASs), stabilized via onboard visual relative localization is described in this paper. The main purpose of this work is to verify the possibility of self-stabilization of multi-MAV groups without an external global positioning system. This approach enables the deployment of MAV swarms outside laboratory conditions, and it may be considered an enabling technique for utilizing fleets of MAVs in real-world scenarios. The proposed vision-based stabilization approach is designed for numerous multi-UAV robotic applications (leader-follower UAV formation stabilization, UAV swarm stabilization and deployment in surveillance scenarios, and cooperative UAV sensory measurement). Deployment of the system in real-world scenarios truthfully verifies its operational constraints, given by the limited onboard sensing suite and processing capabilities. The performance of the presented approach (MAV control, motion planning, MAV stabilization, and trajectory planning) in multi-MAV applications has been validated by experimental results in indoor as well as in challenging outdoor environments (e.g., in windy conditions and in a former pit mine).
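
Onboard visual relative localization of a teammate carrying a marker of known size can be sketched with a pinhole-camera model (hypothetical parameters, not the authors' pipeline):

```python
def relative_position(u, v, pix_diameter, cx, cy, f_px, marker_diameter_m):
    """Relative 3D position of a teammate from a single onboard camera
    detection of a circular marker of known physical size (pinhole
    model; illustrative sketch).

    (u, v):           pixel coordinates of the marker center
    pix_diameter:     apparent marker diameter in pixels
    (cx, cy), f_px:   principal point and focal length of the camera
    marker_diameter_m: true marker diameter in meters
    """
    z = f_px * marker_diameter_m / pix_diameter  # depth from apparent size
    x = (u - cx) * z / f_px                      # lateral offset
    y = (v - cy) * z / f_px                      # vertical offset
    return x, y, z
```

Feeding such relative measurements into each MAV's controller is what allows the swarm to hold formation without any global positioning, as the abstract describes.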
UAV based group coordination of UGVs
Coordination of autonomous mobile robots has received significant attention during the last two decades with the emergence of small, lightweight, and low-power embedded systems. Coordinated motion of heterogeneous robots is important because the unique advantages of different robots can be combined to increase the overall task efficiency of the system. In this thesis, a new coordination framework is developed for a heterogeneous robot system, composed of multiple Unmanned Ground Vehicles (UGVs) and an Unmanned Aerial Vehicle (UAV), that operates in an environment where individual robots work collaboratively to accomplish a predefined goal. The UAV, a quadrotor, detects the target in the environment and provides a feasible trajectory from an initial configuration to a final target location. The UGVs, a group of nonholonomic wheeled mobile robots, follow a virtual leader which is created as the projection of the UAV's 3D position onto the horizontal plane. The UAV broadcasts its position at a certain frequency to all UGVs. Two different coordination models are developed. In the dynamic coordination model, reference trajectories for each robot are generated from the motion of nodal masses located at each UGV and connected by virtual springs and dampers. The springs have adaptable parameters that allow the desired formation to be achieved. In the kinematic coordination model, the position of the virtual leader and the distances from the two closest neighbors are directly used to create linear and angular velocity references for each UGV. Several coordinated tasks are presented, and the results are verified by simulations in which different numbers of UGVs are employed and certain amounts of communication delay between the vehicles are also considered. The simulation results are quite promising and form a basis for future experimental work on the topic.
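
The dynamic (spring-damper) coordination model can be sketched as a single integration step for one nodal mass (our own illustrative sketch; gains and the explicit Euler scheme are assumptions):

```python
def spring_damper_step(pos, vel, neighbors, rest, k, c, m, dt):
    """One explicit-Euler step of a nodal mass pulled toward its spring
    rest lengths with its neighbors (dynamic coordination model; sketch).

    pos, vel:  2D position and velocity of this UGV's nodal mass
    neighbors: list of neighbor nodal-mass positions
    rest:      desired inter-robot distance (spring rest length)
    k, c, m:   spring stiffness, damping coefficient, nodal mass
    """
    fx = fy = 0.0
    for nx, ny in neighbors:
        dx, dy = nx - pos[0], ny - pos[1]
        d = (dx * dx + dy * dy) ** 0.5
        if d > 1e-9:
            mag = k * (d - rest)      # spring force toward rest length
            fx += mag * dx / d
            fy += mag * dy / d
    fx -= c * vel[0]                  # viscous damping
    fy -= c * vel[1]
    ax, ay = fx / m, fy / m
    new_vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    new_pos = (pos[0] + new_vel[0] * dt, pos[1] + new_vel[1] * dt)
    return new_pos, new_vel
```

Iterating this step for every UGV, with the virtual leader included as a neighbor, yields the reference trajectories the thesis describes; adapting the spring parameters reshapes the formation.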