
    Omnipotent Virtual Giant for Remote Human–Swarm Interaction

    This paper proposes an intuitive human-swarm interaction framework inspired by a childhood experience: interacting with living ants by changing their positions and environments, as if we were omnipotent relative to the ants. Analogously, in virtual reality we can become a super-powered virtual giant who supervises a swarm of robots in a vast, remote environment by flying over or resizing the world, and coordinates them by picking and placing a robot or creating virtual walls. This work implements the idea using virtual reality together with Leap Motion hand tracking, and validates it through proof-of-concept experiments with real and virtual mobile robots in mixed reality. A usability analysis quantifies the effectiveness of the overall system as well as of the individual interfaces proposed in this work. The results show that the proposed method is intuitive and feasible for interaction with swarm robots, but may require appropriate training for the new end-user interface device.

    Voronoi-Based Multi-Robot Autonomous Exploration in Unknown Environments via Deep Reinforcement Learning

    Autonomous exploration is an important application of multi-vehicle systems, in which a team of networked robots is coordinated to explore an unknown environment collaboratively. This technique has earned significant research interest due to its usefulness in search and rescue, fault detection and monitoring, localization and mapping, etc. In this paper, a novel cooperative exploration strategy is proposed for multiple mobile robots, which reduces the overall task completion time and energy costs compared to conventional methods. To efficiently navigate the networked robots during collaborative tasks, a hierarchical control architecture is designed, containing a high-level decision-making layer and a low-level target-tracking layer. The proposed cooperative exploration approach is developed using dynamic Voronoi partitions, which minimize duplicated exploration by assigning different target locations to individual robots. To deal with sudden obstacles in the unknown environment, an integrated deep-reinforcement-learning-based collision avoidance algorithm is then proposed, which enables the control policy to learn from human demonstration data and thus improves learning speed and performance. Finally, simulation and experimental results are provided to demonstrate the effectiveness of the proposed scheme.
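The core idea of Voronoi-based target assignment can be sketched in a few lines: each frontier cell is assigned to its nearest robot (a discrete Voronoi partition), and each robot is sent toward the centroid of its own cell, so no two robots pursue the same region. This is a minimal illustrative sketch, not the paper's implementation; the function and variable names are assumptions.

```python
import math

def voronoi_assign(robots, frontiers):
    """Partition frontier points among robots by nearest-robot rule
    (a discrete Voronoi partition), then return each robot's target
    as the centroid of its own cell."""
    cells = {i: [] for i in range(len(robots))}
    for f in frontiers:
        nearest = min(range(len(robots)),
                      key=lambda i: math.dist(robots[i], f))
        cells[nearest].append(f)
    targets = {}
    for i, pts in cells.items():
        if pts:  # robots with an empty cell receive no target this round
            targets[i] = (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))
    return targets

robots = [(0.0, 0.0), (10.0, 0.0)]
frontiers = [(1.0, 1.0), (2.0, 0.0), (9.0, 1.0), (8.0, 0.0)]
print(voronoi_assign(robots, frontiers))  # each robot gets a disjoint region
```

Because the partition is recomputed as the robots move, the assignment stays dynamic, which is the property the abstract relies on to avoid duplicated exploration.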

    Fault-tolerant cooperative navigation of networked UAV swarms for forest fire monitoring

    Coordination of unmanned aerial vehicle (UAV) swarms has received significant attention due to its wide practical applications, including search and rescue, cooperative exploration, and target surveillance. Motivated by the flexibility of UAVs and recent advances in graph-based cooperative control strategies, this paper develops a fault-tolerant cooperation framework for networked UAVs with application to forest fire monitoring. First, a cooperative navigation strategy based on network graph theory is proposed to coordinate all connected UAVs in a swarm in the presence of unknown disturbances; the stability of the aerial swarm system is guaranteed using the Lyapunov approach. In case of damage to the actuators of some UAVs during the mission, a decentralized task reassignment algorithm is then applied, making the swarm more robust to uncertainties. Finally, a novel geometry-based collision avoidance approach using onboard sensory information is proposed to avoid potential collisions during the mission. The effectiveness and feasibility of the proposed framework are verified first in simulation and then in real-world flight tests in outdoor environments.
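A graph-based cooperative navigation law of the kind the abstract describes can be illustrated with a discrete consensus update: each UAV adjusts its position to reduce the formation error relative to its graph neighbours. This is a hedged sketch under simplifying assumptions (single-integrator dynamics, a fixed undirected graph, no disturbances); the names `consensus_step`, `offsets`, and `adjacency` are illustrative and not from the paper.

```python
def consensus_step(positions, offsets, adjacency, gain=0.2):
    """One discrete consensus update for formation keeping.
    positions[i]: current (x, y) of UAV i
    offsets[i]:   desired (x, y) of UAV i in the formation frame
    adjacency[i]: list of graph neighbours of UAV i
    Each UAV moves to cancel its formation error w.r.t. neighbours."""
    new_positions = []
    for i, p in enumerate(positions):
        ex = ey = 0.0
        for j in adjacency[i]:
            # relative formation error with respect to neighbour j
            ex += (positions[j][0] - offsets[j][0]) - (p[0] - offsets[i][0])
            ey += (positions[j][1] - offsets[j][1]) - (p[1] - offsets[i][1])
        new_positions.append((p[0] + gain * ex, p[1] + gain * ey))
    return new_positions

# Two UAVs connected by one edge converge to a 1 m lateral spacing.
pos = [(0.0, 0.0), (3.0, 0.0)]
offsets = [(0.0, 0.0), (1.0, 0.0)]
adj = {0: [1], 1: [0]}
for _ in range(60):
    pos = consensus_step(pos, offsets, adj)
print(pos[1][0] - pos[0][0])  # approaches the desired spacing of 1.0
```

The decentralized structure (each update uses only neighbour states) is what makes task reassignment after actuator faults possible: a failed UAV can simply be dropped from the neighbour lists.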

    Accelerated Sim-to-Real Deep Reinforcement Learning: Learning Collision Avoidance from Human Player

    This paper presents a sensor-level mapless collision avoidance algorithm for mobile robots that maps raw sensor data to linear and angular velocities and navigates an unknown environment without a map. An efficient training strategy is proposed that allows a robot to learn from both human experience data and self-exploratory data. A game-format simulation framework is designed in which a human player tele-operates the mobile robot to a goal, and the human actions are scored using the same reward function. Both human-player data and self-playing data are sampled using the prioritized experience replay algorithm. The proposed algorithm and training strategy were evaluated in two experimental configurations: Environment 1, a simulated cluttered environment, and Environment 2, a simulated corridor environment. The proposed method achieved the same level of reward using only 16% of the training steps required by the standard Deep Deterministic Policy Gradient (DDPG) method in Environment 1 and 20% of that in Environment 2. In an evaluation of 20 random missions, the proposed method achieved no collisions with less than 2 h and 2.5 h of training time in the two Gazebo environments, respectively. The method also generated smoother trajectories than DDPG. The proposed method was also implemented on a real robot in a real-world environment for performance evaluation. We confirm that the model trained in simulation can be applied directly to the real-world scenario without further fine-tuning, further demonstrating its greater robustness than DDPG. The video and code are available at: https://youtu.be/BmwxevgsdGc https://github.com/hanlinniu/turtlebot3_ddpg_collision_avoidanc
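The training strategy above hinges on one replay buffer that holds both human-demonstration and self-exploration transitions, sampled with priorities. A minimal sketch of proportional prioritized experience replay, assuming a priority of |TD error| plus a small constant, is shown below; the class and method names are illustrative, not taken from the paper's code.

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay.
    Human-demonstration and self-exploration transitions share one
    buffer; sampling probability is proportional to |TD error| + eps,
    so informative (typically human) transitions are replayed more."""
    def __init__(self, eps=0.01):
        self.data, self.priorities = [], []
        self.eps = eps

    def add(self, transition, td_error=1.0):
        # seed human demonstrations with a large td_error so they are
        # sampled often early in training
        self.data.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, k):
        # draw k transitions with probability proportional to priority
        return random.choices(self.data, weights=self.priorities, k=k)

buf = PrioritizedReplay()
buf.add(("human_demo", 0), td_error=5.0)   # high-priority human transition
buf.add(("self_play", 1), td_error=0.1)    # low-priority self-play transition
print(buf.sample(4))  # human_demo transitions dominate the batch
```

In practice the priorities would be refreshed with the critic's TD errors after each update; this sketch omits that bookkeeping to keep the sampling mechanism visible.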