
    3D multi-robot patrolling with a two-level coordination strategy

    Teams of UGVs patrolling harsh and complex 3D environments can experience interference and spatial conflicts with one another. Neglecting these events critically hinders both the soundness and the reliability of the patrolling process. This work presents a distributed multi-robot patrolling technique that uses a two-level coordination strategy to minimize and explicitly manage the occurrence of conflicts and interference. The first level guides the agents to select exclusive target nodes on a topological map. This target selection relies on a shared idleness representation and on a coordination mechanism that prevents topological conflicts. The second level hosts coordination strategies based on a metric representation of space and is supported by a 3D SLAM system. Here, each robot's path planner negotiates spatial conflicts by applying a multi-robot traversability function. Continuous interaction between the two levels ensures coordination and conflict resolution. Both simulations and real-world experiments are presented to validate the performance of the proposed patrolling strategy in 3D environments. Results show it is a promising solution for managing spatial conflicts and preventing deadlocks.
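
    As an illustration of the first (topological) coordination level described above, the following Python sketch shows how a robot might pick an exclusive patrol target from a shared idleness table while skipping nodes already claimed by teammates. The class and method names and the data structures are hypothetical stand-ins for the idea, not the authors' implementation.

        # Hypothetical sketch of the topological (first) coordination level:
        # each robot picks the most idle unclaimed node of the topological map,
        # so that no two robots commit to the same patrol target.
        import time

        class TopologicalCoordinator:
            def __init__(self, graph_nodes):
                self.last_visit = {n: 0.0 for n in graph_nodes}  # shared idleness data
                self.claimed = {}                                 # node id -> robot id

            def idleness(self, node, now):
                return now - self.last_visit[node]

            def select_target(self, robot_id, reachable_nodes):
                """Pick the reachable node with the highest idleness that is not
                already claimed by a teammate (topological conflict prevention)."""
                now = time.time()
                free = [n for n in reachable_nodes
                        if self.claimed.get(n) in (None, robot_id)]
                if not free:
                    return None
                target = max(free, key=lambda n: self.idleness(n, now))
                self.claimed[target] = robot_id  # exclusive claim shared with the team
                return target

            def report_visit(self, robot_id, node):
                """Update the shared idleness when a robot reaches its target."""
                self.last_visit[node] = time.time()
                if self.claimed.get(node) == robot_id:
                    del self.claimed[node]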

    SA-Net: Deep Neural Network for Robot Trajectory Recognition from RGB-D Streams

    Learning from demonstration (LfD) and imitation learning offer new paradigms for transferring task behavior to robots. A class of methods that enables such online learning requires the robot to observe the task being performed and to decompose the sensed streaming data into sequences of state-action pairs, which are then input to the methods. Recognizing state-action pairs correctly and quickly in sensed data is therefore a crucial prerequisite for these methods. We present SA-Net, a deep neural network architecture that recognizes state-action pairs from RGB-D data streams. SA-Net performed well in two diverse robotic applications of LfD -- one involving mobile ground robots and another involving a robotic manipulator -- which demonstrates that the architecture generalizes well to differing contexts. Comprehensive evaluations, including deployment on a physical robot, show that SA-Net significantly improves on the accuracy of the previous method, which relies on traditional image processing and segmentation.
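
    The PyTorch sketch below only illustrates the general idea of recognizing state-action pairs from an RGB-D stream: a per-frame convolutional encoder feeding a recurrent layer with separate state and action heads. The layer sizes, channel counts, and class names are assumptions made for the example; they do not reproduce the published SA-Net architecture.

        # Illustrative state-action recognizer over an RGB-D stream (assumed
        # architecture, not the published SA-Net).
        import torch
        import torch.nn as nn

        class StateActionNet(nn.Module):
            def __init__(self, num_states, num_actions, hidden=256):
                super().__init__()
                # 4 input channels: RGB + depth
                self.encoder = nn.Sequential(
                    nn.Conv2d(4, 32, kernel_size=5, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.temporal = nn.LSTM(64, hidden, batch_first=True)
                self.state_head = nn.Linear(hidden, num_states)
                self.action_head = nn.Linear(hidden, num_actions)

            def forward(self, frames):
                # frames: (batch, time, 4, H, W) RGB-D stream
                b, t, c, h, w = frames.shape
                feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
                out, _ = self.temporal(feats)
                return self.state_head(out), self.action_head(out)

        # Example: classify each frame of a short RGB-D clip into state/action labels.
        net = StateActionNet(num_states=10, num_actions=4)
        clip = torch.randn(1, 8, 4, 120, 160)
        state_logits, action_logits = net(clip)  # shapes (1, 8, 10) and (1, 8, 4)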

    COACHES Cooperative Autonomous Robots in Complex and Human Populated Environments

    Public spaces in large cities are increasingly becoming complex and unwelcoming environments. They progressively grow more hostile and unpleasant to use because of overcrowding and the complexity of the information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors, and safer for the growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots operating in dynamic, complex, and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that can help humans. Inspired by these challenges, the COACHES project addresses fundamental issues related to the design of a robust system of self-directed autonomous robots with high-level skills in environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust and safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment, (2) estimation of human activities and needs using Markov and Bayesian techniques, (3) distributed decision-making under uncertainty to collectively plan activities of assistance, guidance, and delivery using Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) with efficient algorithms that improve their scalability, and (4) multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated in real robots. COACHES is deployed in the city of Caen, in a mall called “Rive de l’orne”. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking, and detection of abnormal events (objects or behaviour). The robots combine this information with what they perceive through their own sensors to provide information through their multi-modal interfaces, guide people to their destinations, show tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with the mall visitors, shopkeepers, and mall managers. The project has enlisted an important end-user (Caen la mer) providing the scenarios where the COACHES robots and systems will be deployed, and it gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov decision processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
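
    Point (3) above refers to decentralized POMDP planning. The toy sketch below shows, purely as an illustrative assumption and not the COACHES implementation, how a single robot could execute its part of such a joint policy as a finite-state controller driven only by its local observations.

        # Toy per-agent execution of a Dec-POMDP-style policy represented as a
        # finite-state controller (illustrative assumption, not COACHES code).
        from dataclasses import dataclass
        from typing import Dict, Tuple

        @dataclass
        class FiniteStateController:
            """Per-agent policy: controller nodes map to actions, and
            (node, observation) pairs determine the next controller node."""
            action_of: Dict[int, str]
            transition: Dict[Tuple[int, str], int]
            node: int = 0

            def step(self, observation: str) -> str:
                self.node = self.transition[(self.node, observation)]
                return self.action_of[self.node]

        # Toy policy for a guidance robot: escort a visitor if one is detected,
        # otherwise keep patrolling the mall.
        policy = FiniteStateController(
            action_of={0: "patrol", 1: "guide_visitor"},
            transition={(0, "no_request"): 0, (0, "visitor_request"): 1,
                        (1, "no_request"): 0, (1, "visitor_request"): 1},
        )
        print(policy.step("visitor_request"))  # -> "guide_visitor"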

    3D Multi-Robot Exploration with a Two-Level Coordination Strategy and Prioritization

    This work presents a 3D multi-robot exploration framework for a team of UGVs moving on uneven terrain. The framework was designed by casting the two-level coordination strategy presented in [1] into the context of multi-robot exploration. The resulting distributed exploration technique minimizes and explicitly manages the occurrence of conflicts and interference in the robot team. Each robot selects where to scan next by using a receding-horizon next-best-view approach [2]. A sampling-based tree is expanded directly on the segmented traversable regions of the 3D terrain map to generate the candidate next viewpoints. During the exploration, users can assign higher priorities to locations on demand to steer the robot exploration toward areas of interest. The proposed framework can also be used to perform coverage tasks when a map of the environment is provided a priori as input. An open-source implementation is available online.
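
    The sketch below illustrates the receding-horizon next-best-view idea with user priorities: every branch of the sampled viewpoint tree is scored by a discounted, priority-weighted gain, only the first step of the best branch is executed, and the tree is then rebuilt. The gain model, tree representation, and function names are assumptions made for the example, not the paper's code.

        # Illustrative receding-horizon next-best-view selection with priorities
        # (assumed data structures, not the authors' implementation).
        import math

        def information_gain(viewpoint, explored):
            """Hypothetical gain: number of still-unseen map cells visible from here."""
            return len(viewpoint["visible_cells"] - explored)

        def best_branch(tree_root, explored, priority_of, lam=0.5):
            """Return the first node of the branch with the highest discounted,
            priority-weighted cumulative gain; only this first step is executed
            before the tree is resampled (receding horizon)."""
            def branch_gain(node, depth):
                g = (priority_of(node) * information_gain(node, explored)
                     * math.exp(-lam * depth))
                if not node["children"]:
                    return g
                return g + max(branch_gain(c, depth + 1) for c in node["children"])

            return max(tree_root["children"], key=lambda c: branch_gain(c, 1))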

    Safety-Guaranteed Task Planning for Bipedal Navigation in Partially Observable Environments

    Bipedal robots are becoming more capable as basic hardware and control challenges are overcome; however, reasoning about safety at the task and motion planning levels has been largely underexplored. In this thesis, I take key steps towards guaranteeing safe locomotion in cluttered environments, in the presence of humans or other dynamic obstacles, by designing a hierarchical task planning framework that incorporates safety guarantees at each level. This layered planning framework is composed of a coarse high-level symbolic navigation planner and a lower-level local action planner. A belief abstraction at the global navigation planning level enables belief estimation of the states of non-visible dynamic obstacles and guarantees navigation safety through collision avoidance. Both planning layers employ linear temporal logic (LTL) for a reactive game synthesis between the robot and its environment, while incorporating lower-level safe locomotion keyframe policies into the formal task specification design. The high-level symbolic navigation planner has been extended to leverage the capabilities of a heterogeneous multi-agent team to resolve environment assumption violations that appear at runtime. Modifications in the navigation planner, in conjunction with a coordination layer, allow each agent to guarantee immediate safety and eventual task completion in the presence of an assumption violation if another agent exists that can resolve said violation, e.g., a closed door that another, dexterous agent can open. The planning framework leverages the expressive nature and formal guarantees of LTL to generate provably correct controllers for complex robotic systems. The use of belief-space planning for dynamic-obstacle belief tracking, together with heterogeneous robot capabilities to assist one another when environment assumptions are violated, allows the planning framework to reduce the conservativeness traditionally associated with using formal methods for robot planning.
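
    As a toy illustration of the belief abstraction for non-visible dynamic obstacles, the sketch below propagates the set of regions an out-of-view obstacle could occupy over a coarse region graph and only allows moves into regions the obstacle cannot have reached. The region graph and function names are assumptions made for the example, not the thesis' formulation.

        # Coarse belief tracking of a non-visible dynamic obstacle over a region
        # graph (illustrative assumption, not the thesis' implementation).

        def propagate_belief(belief, adjacency):
            """One belief step: the obstacle may stay in a region or move to any
            adjacent region."""
            return {r for b in belief for r in ({b} | adjacency[b])}

        def safe_moves(robot_region, adjacency, belief):
            """Regions adjacent to the robot that the obstacle cannot occupy."""
            return adjacency[robot_region] - belief

        # Toy corridor with four regions; the obstacle was last seen in r3.
        adjacency = {"r0": {"r1"}, "r1": {"r0", "r2"}, "r2": {"r1", "r3"}, "r3": {"r2"}}
        belief = {"r3"}
        belief = propagate_belief(belief, adjacency)   # {"r2", "r3"}
        print(safe_moves("r1", adjacency, belief))     # {"r0"} -- r2 is now unsafe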

    Model-predictive target defense by team of unmanned surface vehicles operating in uncertain environments
