
    Experiments in cooperative human multi-robot navigation

    In this paper, we consider the problem of a group of autonomous mobile robots and a human moving in a coordinated manner in a real-world implementation. The group moves through a dynamic and unstructured environment. The key problem to be solved is the inclusion of a human in a real multi-robot system and, consequently, the coordination of the robots' motion. We present a set of performance metrics (system efficiency and percentage of time in formation) and a novel flexible formation definition, on which a formation control strategy for a human multi-robot system is built and evaluated both in simulation and in real-world experiments. The proposed formation control is stable and effective by virtue of its uniform dispersion, cohesion and flexibility.
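    The abstract reports a "percentage of time in formation" metric without defining it here. The following is a minimal Python sketch, not the authors' code, of one plausible way to compute such a metric from logged agent positions, under the assumption that the group counts as "in formation" whenever all pairwise distances stay within a band [d_min, d_max]; the bounds and the example trajectory are illustrative.

    import numpy as np

    def percentage_in_formation(trajectory, d_min=0.5, d_max=3.0):
        """trajectory: array of shape (T, N, 2) holding N agent positions over T timesteps."""
        in_formation = 0
        for positions in trajectory:
            # Pairwise distances between all agents (robots and the human).
            diffs = positions[:, None, :] - positions[None, :, :]
            dists = np.linalg.norm(diffs, axis=-1)
            pairwise = dists[np.triu_indices(len(positions), k=1)]
            if np.all((pairwise >= d_min) & (pairwise <= d_max)):
                in_formation += 1
        return 100.0 * in_formation / len(trajectory)

    # Toy example: 100 timesteps, 4 agents jittering around a unit square.
    rng = np.random.default_rng(0)
    base = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
    traj = base[None, :, :] + 0.05 * rng.standard_normal((100, 4, 2))
    print(percentage_in_formation(traj, d_min=0.5, d_max=2.0))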

    Multi-robot team formation control in the GUARDIANS project

    Purpose: The GUARDIANS multi-robot team is to be deployed in a large, smoke-filled warehouse. The team is to assist firefighters in searching the warehouse in the event or danger of a fire. The large dimensions of the environment, together with the development of smoke that drastically reduces visibility, represent major challenges for search and rescue operations. The GUARDIANS robots guide and accompany the firefighters on site whilst indicating possible obstacles and locations of danger, and maintaining communication links. Design/methodology/approach: In order to fulfil these tasks, the robots need to exhibit certain behaviours. Among the basic behaviours are the capabilities to stay together as a group, that is, to generate a formation and to navigate while keeping this formation. The control model used to generate these behaviours is based on the so-called social potential field framework, which we adapt to the specific tasks required by the GUARDIANS scenario. All tasks can be achieved without central control, and some of the behaviours can be performed without explicit communication between the robots. Findings: The GUARDIANS environment requires flexible formations of the robot team: the formation has to adapt itself to the circumstances. The application has therefore forced us to redefine the concept of a formation. In graph-theoretic terms, a formation may be stretched out as a path or be compact as a star or wheel. We have implemented the developed behaviours in simulation environments as well as on real ERA-MOBI robots, commonly referred to as Erratics. We discuss advantages and shortcomings of our model, based on both the simulations and the implementation with a team of Erratics.
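    The control model above is described as a social potential field adapted to the GUARDIANS scenario. The Python sketch below illustrates the general idea only and is not the GUARDIANS implementation: each robot sums pairwise attractive and repulsive terms around a desired spacing d_des, so the team stays together without central control. The gains k_att and k_rep and the spacing d_des are assumptions chosen for illustration.

    import numpy as np

    def social_potential_step(positions, d_des=1.5, k_att=0.3, k_rep=0.6, dt=0.1):
        """positions: (N, 2) array of robot positions; returns the positions after one step."""
        new_positions = positions.copy()
        for i, p_i in enumerate(positions):
            force = np.zeros(2)
            for j, p_j in enumerate(positions):
                if i == j:
                    continue
                delta = p_j - p_i
                dist = np.linalg.norm(delta)
                if dist < 1e-9:
                    continue
                direction = delta / dist
                if dist > d_des:   # too far apart: attractive term pulls the robots together
                    force += k_att * (dist - d_des) * direction
                else:              # too close: repulsive term pushes them apart
                    force -= k_rep * (d_des - dist) * direction
            new_positions[i] = p_i + dt * force
        return new_positions

    positions = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
    for _ in range(200):
        positions = social_potential_step(positions)
    print(positions)   # the robots settle near the desired spacing d_des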

    A robot swarm assisting a human fire-fighter

    Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that provide the functionality by which the robots react to and follow humans while requiring no communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides the means to locate the robots and humans, so the robot swarm is able to provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to the human fire-fighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
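    The swarming algorithms are said to let robots react to and follow humans without requiring communication. As a rough illustration (an assumption about one simple form such a reactive behaviour could take, not the GUARDIANS algorithm), the snippet below commands a robot towards a sensed human position while keeping a follow distance, using only the robot's own observation.

    import math

    def follow_human(robot_xy, human_xy, follow_dist=1.0, gain=0.8):
        """Return a (vx, vy) velocity command that drives the robot towards a point
        follow_dist metres away from the human, using only the sensed relative position."""
        dx = human_xy[0] - robot_xy[0]
        dy = human_xy[1] - robot_xy[1]
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            return 0.0, 0.0
        # Positive error: too far, move closer.  Negative error: too close, back off.
        error = dist - follow_dist
        return gain * error * dx / dist, gain * error * dy / dist

    print(follow_human((0.0, 0.0), (3.0, 4.0)))   # heads towards the human, stopping 1 m short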

    Q-CP: Learning Action Values for Cooperative Planning

    Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviours is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g. hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
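    Q-CP is described as using learned action values to guide exploration within Monte-Carlo Tree Search. The sketch below is not the Q-CP algorithm itself but a minimal Python illustration of that idea: a UCB-style node-selection rule whose exploitation term mixes a learned Q-value prior with the tree's own return estimates. The mixing weight beta, the Q-table and the toy example are assumptions.

    import math

    def select_action(state, actions, q_table, visits, returns, c=1.4, beta=0.5):
        """Pick the action maximising a UCB-style score seeded with learned Q-values."""
        total_visits = sum(visits.get((state, a), 0) for a in actions) + 1
        best_action, best_score = None, -math.inf
        for a in actions:
            n = visits.get((state, a), 0)
            tree_value = returns.get((state, a), 0.0) / n if n > 0 else 0.0
            prior = q_table.get((state, a), 0.0)        # action value learned via Q-learning
            exploit = beta * prior + (1.0 - beta) * tree_value
            explore = c * math.sqrt(math.log(total_visits) / (n + 1))
            if exploit + explore > best_score:
                best_action, best_score = a, exploit + explore
        return best_action

    # Before any simulations, the learned Q-values already steer the search towards 'b'.
    q = {("s0", "a"): 0.1, ("s0", "b"): 0.9}
    print(select_action("s0", ["a", "b"], q, visits={}, returns={}))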

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed-marker-based camera positioning, and we analyse the characteristics of the errors inherent to the method compared to classical fixed-marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows for continuous movement of the UAV without stopping, and a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
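    The core mechanism, estimating a platform's ego-motion by chaining relative pose estimates obtained from markers, can be illustrated in a few lines. The Python sketch below is not the paper's implementation; it simply composes 2D homogeneous transforms to show how relative marker-based pose estimates accumulate into an odometry estimate (and how per-step errors would accumulate with them).

    import numpy as np

    def pose_to_matrix(x, y, theta):
        """2D pose (x, y, heading) as a 3x3 homogeneous transform."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

    def matrix_to_pose(T):
        return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

    # Start from a known global pose and chain relative pose estimates, e.g. the
    # camera-to-marker poses observed while the marker-carrying platform is stationary.
    T_global = pose_to_matrix(0.0, 0.0, 0.0)
    relative_observations = [
        pose_to_matrix(1.0, 0.0, 0.0),            # move 1 m forward
        pose_to_matrix(0.5, 0.0, np.pi / 2),      # move 0.5 m, then turn 90 degrees
        pose_to_matrix(1.0, 0.0, 0.0),            # move 1 m along the new heading
    ]
    for T_rel in relative_observations:
        T_global = T_global @ T_rel               # errors accumulate, as in any odometry
    print(matrix_to_pose(T_global))               # approximately (1.5, 1.0, pi / 2)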