2,520 research outputs found

    Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning

    Developing a safe and efficient collision avoidance policy for multiple robots is challenging in decentralized scenarios, where each robot generates its paths without observing other robots' states and intents. While other distributed multi-robot collision avoidance systems exist, they often require extracting agent-level features to plan a local collision-free action, which can be computationally prohibitive and not robust. More importantly, in practice the performance of these methods is much lower than that of their centralized counterparts. We present a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario multi-stage training framework to find an optimal policy, trained over a large number of robots in rich, complex environments simultaneously using a policy-gradient-based reinforcement learning algorithm. We validate the learned sensor-level collision avoidance policy in a variety of simulated scenarios with thorough performance evaluations and show that the final learned policy is able to find time-efficient, collision-free paths for a large-scale robot system. We also demonstrate that the learned policy generalizes well to scenarios that do not appear at any point during training, including navigating a heterogeneous group of robots and a large-scale scenario with 100 robots. Videos are available at https://sites.google.com/view/drlmac
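
    The sensor-to-velocity mapping described above can be sketched as a small policy network. The architecture, layer sizes, and input layout below are illustrative assumptions, not the paper's actual network, and the weights are random rather than trained with the policy-gradient procedure it describes:

```python
import numpy as np

rng = np.random.default_rng(0)

class SensorLevelPolicy:
    """Illustrative MLP: raw laser scan + relative goal -> velocity command.

    In the paper the weights would be learned with a policy-gradient RL
    algorithm; here they are random, so only the interface is meaningful.
    """
    def __init__(self, n_beams=32, hidden=64):
        self.w1 = rng.normal(0.0, 0.1, (n_beams + 2, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))

    def act(self, scan, goal_rel):
        x = np.concatenate([scan, goal_rel])  # sensor-level input: no other agents' states
        h = np.tanh(x @ self.w1)
        v, w = np.tanh(h @ self.w2)           # bounded raw outputs in [-1, 1]
        return 0.5 * (v + 1.0), w             # linear vel in [0, 1], angular in [-1, 1]

policy = SensorLevelPolicy()
scan = np.full(32, 3.0)                       # 32 range readings of 3 m (open space)
lin, ang = policy.act(scan, np.array([1.0, 0.0]))
```

    Because the policy consumes only the robot's own sensor readings and relative goal, every robot can run an identical copy of it without communication, which is what makes the approach decentralized.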

    Highly efficient Localisation utilising Weightless neural systems

    Efficient localisation is a highly desirable property for an autonomous navigation system. Weightless neural networks offer a real-time approach to robotics applications by reducing hardware and software requirements for pattern recognition techniques. Such networks offer the potential for objects, structures, routes and locations to be easily identified and maps constructed from fused limited sensor data as information becomes available. We show that in the absence of concise and complex information, localisation can be obtained using simple algorithms from data with inherent uncertainties using a combination of Genetic Algorithm techniques applied to a Weightless Neural Architecture
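
    A weightless neural architecture of the kind referenced above can be illustrated with a minimal WiSARD-style discriminator, whose RAM nodes are trained by writing addresses rather than adjusting weights. The retina size, tuple size, and the idea of one discriminator per location are assumptions for illustration, not the authors' implementation:

```python
import random

class Discriminator:
    """One WiSARD-style discriminator: n-tuple RAM nodes over a binary retina."""
    def __init__(self, retina_size, tuple_size, seed=0):
        idx = list(range(retina_size))
        random.Random(seed).shuffle(idx)          # fixed pseudo-random input mapping
        self.tuples = [idx[i:i + tuple_size] for i in range(0, retina_size, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # each RAM stores the addresses it has seen

    def _addresses(self, pattern):
        for t, ram in zip(self.tuples, self.rams):
            yield tuple(pattern[i] for i in t), ram

    def train(self, pattern):
        for addr, ram in self._addresses(pattern):
            ram.add(addr)                          # write a 1 at this RAM address

    def score(self, pattern):
        return sum(addr in ram for addr, ram in self._addresses(pattern))

# One discriminator per candidate location; the highest score wins recognition
loc_a = Discriminator(retina_size=16, tuple_size=4)
loc_a.train([1] * 8 + [0] * 8)
```

    Training is a single pass of memory writes, which is why such networks suit real-time, low-resource robotic hardware; a genetic algorithm, as in the abstract, could then tune parameters such as the tuple size or input mapping.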

    Efficient exploration of unknown indoor environments using a team of mobile robots

    Whenever multiple robots have to solve a common task, they need to coordinate their actions to carry out the task efficiently and to avoid interferences between individual robots. This is especially the case when considering the problem of exploring an unknown environment with a team of mobile robots. To achieve efficient terrain coverage with the sensors of the robots, one first needs to identify unknown areas in the environment. Second, one has to assign target locations to the individual robots so that they gather new and relevant information about the environment with their sensors. This assignment should lead to a distribution of the robots over the environment in a way that they avoid redundant work and do not interfere with each other by, for example, blocking their paths. In this paper, we address the problem of efficiently coordinating a large team of mobile robots. To better distribute the robots over the environment and to avoid redundant work, we take into account the type of place a potential target is located in (e.g., a corridor or a room). This knowledge allows us to improve the distribution of robots over the environment compared to approaches lacking this capability. To autonomously determine the type of a place, we apply a classifier learned using the AdaBoost algorithm. The resulting classifier takes laser range data as input and is able to classify the current location with high accuracy. We additionally use a hidden Markov model to consider the spatial dependencies between nearby locations. Our approach to incorporate the information about the type of places in the assignment process has been implemented and tested in different environments. The experiments illustrate that our system effectively distributes the robots over the environment and allows them to accomplish their mission faster compared to approaches that ignore the place labels
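
    The assignment idea above, favouring targets by place label and spreading robots out to avoid redundant work, can be sketched as a greedy allocation. The utility values, corridor bonus, and discount factor below are hypothetical illustrations, not the paper's exact formulation:

```python
from itertools import product

def assign_targets(robots, targets, travel_cost, discount=0.5, corridor_bonus=2.0):
    """Greedy frontier assignment sketch (illustrative, not the paper's method).

    robots: list of robot ids
    targets: dict target_id -> (utility, place_label), label from a classifier
    travel_cost: dict (robot_id, target_id) -> path cost
    """
    # Place-label prior: corridors tend to lead to more unexplored area
    utility = {t: u + (corridor_bonus if label == "corridor" else 0.0)
               for t, (u, label) in targets.items()}
    assignment, free_robots, free_targets = {}, set(robots), set(targets)
    while free_robots and free_targets:
        # Pick the robot/target pair with the best utility-minus-cost trade-off
        r, t = max(product(free_robots, free_targets),
                   key=lambda rt: utility[rt[1]] - travel_cost[rt])
        assignment[r] = t
        free_robots.discard(r)
        free_targets.discard(t)
        for other in free_targets:      # discourage sending robots to nearby goals
            utility[other] *= discount
    return assignment
```

    In the paper the place label itself comes from an AdaBoost classifier over laser range data, smoothed by a hidden Markov model; here it is simply given as input.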

    Heterogeneous multi-robot system for mapping environmental variables of greenhouses

    The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. Their control and monitoring might require large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, aiming to provide flexibility and improve performance. The multi-robot sensory system measures temperature, humidity, luminosity and carbon dioxide concentration at ground level and at different heights. Nevertheless, these measurements can be complemented with others (e.g., the concentration of various gases or images of crops) without considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as mission planning and task allocation, the guidance, navigation and control of robots in greenhouses, and the coordination between ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and in field experiments.

    The research leading to these results has received funding from the RoboCity2030-III-CM project (Robótica aplicada a la mejora de la calidad de vida de los ciudadanos, fase III; S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and co-funded by Structural Funds of the EU, and from the DPI2014-56985-R project (Protección robotizada de infraestructuras críticas) funded by the Ministerio de Economía y Competitividad of Gobierno de España. This work is framed within the SAVIER (Situational Awareness Virtual EnviRonment) project, which is both supported and funded by Airbus Defence & Space. The experiments were performed in an educational greenhouse of the E.T.S.I. Agrónomos of the Technical University of Madrid. Peer Reviewed

    Brain-Computer Interface: comparison of two control modes to drive a virtual robot

    A Brain-Computer Interface (BCI) is a system that enables communication and control based not on muscular movements but on brain activity. Some of these systems are based on the discrimination of different mental tasks; usually they match the number of mental tasks to the number of control commands. Previous research at the University of Málaga (UMA-BCI) has proposed a BCI system to freely control an external device, letting subjects choose among several navigation commands using only one active mental task (versus any other mental activity). Although the navigation paradigm proposed in this system has proven useful for continuous movements, if the user wants to move medium or large distances, he/she needs to sustain the effort of the motor imagery (MI) task in order to keep the command active. The aim of this work was therefore to test a navigation paradigm based on a brain-switch mode for the 'forward' command. In this mode, subjects used the mental task to toggle their state on/off: they stopped if they were moving forward, and vice versa. Initially, twelve healthy and untrained subjects participated in this study, but due to a lack of control in previous sessions, only four subjects took part in the experiment, in which they had to control a virtual robot using two paradigms: one based on the continuous mode and another based on the switch mode. Preliminary results show that both paradigms can be used to navigate through virtual environments, although with the first one the times needed to complete a path were notably lower.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
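
    The difference between the two control modes can be sketched as two tiny decoders of a binary motor-imagery detection stream; this is an illustrative model, not the UMA-BCI implementation:

```python
class BrainSwitch:
    """Switch-mode sketch: a detected MI burst toggles the movement state,
    so the user need not sustain the mental task during long movements."""
    def __init__(self):
        self.moving = False
        self._prev = False

    def update(self, mi_detected):
        if mi_detected and not self._prev:  # rising edge counts as one "click"
            self.moving = not self.moving
        self._prev = mi_detected
        return self.moving

def continuous_mode(mi_detected):
    """Continuous-mode sketch: the robot moves only while the MI task is sustained."""
    return bool(mi_detected)

# A short MI detection stream: one burst starts the robot, a second burst stops it
sw = BrainSwitch()
states = [sw.update(d) for d in [0, 1, 1, 0, 0, 1, 0]]
```

    The trade-off reported in the abstract follows from this structure: continuous mode gives immediate stop-on-release (and, per the results, faster path completion), while switch mode trades that immediacy for freedom from sustained mental effort.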

    Autonomous control of underground mining vehicles using reactive navigation

    Describes how many of the navigation techniques developed by the robotics research community over the last decade may be applied to a class of underground mining vehicles (LHDs and haul trucks). We review the current state of the art in this area and conclude that essentially two basic methods of navigation are applicable. We describe an implementation of a reactive navigation system on a 30-tonne LHD which has achieved full-speed operation at a production mine
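
    A minimal example of a reactive navigation rule for a tunnel-following vehicle is a proportional centring controller on left/right wall clearance; the gain and limits below are illustrative assumptions, not parameters of the deployed LHD system:

```python
def reactive_steer(left_range, right_range, gain=0.5, max_steer=1.0):
    """Reactive tunnel-centring sketch: steer toward the side with more
    clearance, clamped to the vehicle's steering limits (illustrative values).

    left_range / right_range: wall distances in metres from onboard rangefinders
    returns: steering command in [-max_steer, max_steer] (positive = left)
    """
    err = left_range - right_range            # > 0 means more room on the left
    return max(-max_steer, min(max_steer, gain * err))
```

    Because the command depends only on instantaneous range readings, the rule needs no map or absolute localisation, which is what characterises the reactive method among the two classes of navigation the abstract identifies.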