7 research outputs found

    Distributed covering by ant-robots using evaporating traces


    Deep Reinforcement Learning for Complete Coverage Path Planning in Unknown Environments

    Mobile robots must operate autonomously, often in unknown and unstructured environments. To achieve this, a robot must be able to correctly perceive its environment, plan its path, and move around safely, without human supervision. Navigation from an initial position to a target location has long been a challenging problem in robotics. This work examines the particular navigation task of complete coverage planning in outdoor environments. A motion planner based on Deep Reinforcement Learning is proposed, in which a Deep Q-network is trained to learn a control policy that approximates the optimal strategy, using a dynamic map of the environment. In addition to this path-planning algorithm, a computer vision system is presented that captures the images of a stereo camera mounted on the robot, detects obstacles, and updates the workspace map. Simulation results show that the algorithm generalizes well to different types of environments. After multiple training sequences, the Reinforcement Learning agent enables the virtual mobile robot to cover the whole space with an average coverage rate of over 80%, starting from varying initial positions, while avoiding obstacles by relying on local sensory information. The experiments also demonstrate that the DQN agent was able to perform the coverage better when compared to a human.
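    As a rough illustration of the kind of learning loop this abstract describes, the sketch below shows an epsilon-greedy Deep Q-network update over a flattened coverage map. It is an assumption-laden sketch, not the thesis implementation: the observation size, the four-action motion set, the network shape, and the environment interface are all placeholders.

```python
# Minimal DQN-style sketch for grid coverage (illustrative; all sizes and the
# action set are assumptions, not the thesis's actual configuration).
import random
import torch
import torch.nn as nn

N_ACTIONS = 4                # up / down / left / right (assumed motion set)
OBS_DIM = 20 * 20            # flattened 20x20 dynamic map (assumed size)

q_net = nn.Sequential(       # small MLP approximating Q(s, a)
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

def select_action(obs: torch.Tensor) -> int:
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(obs).argmax().item())

def td_update(obs, action, reward, next_obs, done):
    """One-step TD update toward r + gamma * max_a' Q(s', a')."""
    q_pred = q_net(obs)[action]
    with torch.no_grad():
        target = reward + (0.0 if done else gamma * q_net(next_obs).max().item())
    loss = (q_pred - torch.tensor(target)) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```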

    Characterisation of a nuclear cave environment utilising an autonomous swarm of heterogeneous robots

    As nuclear facilities come to the end of their operational lifetime, safe decommissioning becomes a more prevalent issue. In many such facilities there exist ‘nuclear caves’. These caves constitute areas that may have been entered infrequently, or even not at all, since the construction of the facility. Because of this, the topography and the nature of the contents of these nuclear caves may be unknown in a number of critical aspects, such as the location of dangerous substances or of significant physical blockages to movement around the cave. To aid safe decommissioning, autonomous robotic systems capable of characterising nuclear cave environments are desired. The research put forward in this thesis seeks to answer the question: is it possible to utilise a heterogeneous swarm of autonomous robots for the remote characterisation of a nuclear cave environment? This is examined through the three key components comprising a heterogeneous swarm: sensing, locomotion and control. It will be shown that a heterogeneous swarm is not only capable of performing this task but is also preferable to a homogeneous swarm, owing to its increased sensory and locomotive capabilities coupled with its greater exploration efficiency.

    Multi-sensor data fusion for the detection and tracking of moving objects from an autonomous vehicle

    Perception is one of the key steps in the operation of an autonomous vehicle, or even of a vehicle providing only driver-assistance functions. The vehicle observes the external world using its sensors and builds an internal model of the outer environment, which it continuously updates with the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle within this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time.
    Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both. In this thesis we propose that by using a vision sensor (mono or stereo) along with a laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level within a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level and track-level fusion. For the low level we propose a grid-based fusion technique: by giving appropriate weights (depending on the sensor properties) to each sensor's grid, a fused grid is obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused, using a Bayesian fusion technique, into a list of fused objects that carry more information than their individual versions. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the tracks; fusion at this level helps remove false tracks. The second contribution of this research is a fast technique for finding road borders from noisy laser data and using this border information to remove false moving objects; we have observed that many false moving objects appear near the road borders due to sensor noise, and if they are not filtered out they produce false tracks close to the vehicle, causing it to brake or to issue spurious warning messages to the driver. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injuries and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca (Romania) and INRIA Paris to provide a complete perception and risk-assessment solution for the demonstrator.
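    The low-level, grid-based fusion described above can be pictured as a per-cell combination of the sensors' occupancy grids. The sketch below uses a simple confidence-weighted average as one possible combination rule; the thesis's actual weighting and grid model may differ, and the grids and weights here are made-up examples.

```python
# Illustrative weighted fusion of per-sensor occupancy grids (assumed rule).
import numpy as np

def fuse_grids(grids, weights):
    """grids: list of (H, W) occupancy-probability arrays in [0, 1];
    weights: per-sensor confidences. Returns the fused (H, W) grid."""
    stacked = np.stack(grids)                 # (n_sensors, H, W)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalise the confidences
    return np.tensordot(w, stacked, axes=1)   # per-cell weighted average

# Hypothetical example: trust the laser grid more than the stereo grid.
laser_grid = np.random.rand(50, 50)
stereo_grid = np.random.rand(50, 50)
fused = fuse_grids([laser_grid, stereo_grid], weights=[0.7, 0.3])
```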

    Distributed navigation of multi-robot systems for sensing coverage

    A team of coordinating mobile robots equipped with operation-specific sensors can perform different coverage tasks. When the required number of robots in the team is very large, a centralized control system becomes complex, and in some areas centralized communication is itself an issue, so a team of mobile robots performing coverage tasks should be capable of decentralized or distributed decision making. This thesis investigates decentralized control of mobile robots specifically for coverage problems. A decentralized control strategy is ideally based on local information and offers flexibility when mobile robots are added to or removed from the team. We perform a broad survey of the existing literature on coverage control problems, comparatively review the different decentralized approaches, and adopt the approach based on simple local coordination rules. These locally computed nearest-neighbour rules are used to develop decentralized control algorithms for coverage control problems. In the standard form of this widely used approach, a mobile robot gives equal importance to every neighbour within its communication range. We instead let some of the mobile robots play a more influential role than other members of the team, and develop a control algorithm based on nearest-neighbour rules with weighted average functions; the resulting strategy is efficient in achieving consensus on control inputs such as heading angle and velocity. Decentralized control of mobile robots can also exhibit cyclic behaviour under physical constraints such as a quantized orientation of the mobile robot; we further investigate the cyclic behaviour arising from quantized control under some conditions, and our nearest-neighbour rule-based approach offers a biased strategy when such behaviour appears in the team. We also consider a clustering technique within the team of mobile robots: our decentralized control strategy computes a similarity measure among the neighbours of a mobile robot, and this similarity-based approach achieves fast consensus on quantities such as heading angle or velocity. We perform a rigorous mathematical analysis of the developed approach and derive a condition with relaxed criteria for achieving consensus on the velocity or heading angle of the mobile robots. Our validation is based on mathematical arguments and extensive computer simulations.
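    To make the weighted nearest-neighbour rule concrete, the sketch below shows one synchronous consensus step in which each robot averages the headings of the neighbours inside its communication range, weighted by per-robot influence. This is a generic illustration under simplifying assumptions (headings in a narrow range, so no angle wrap-around handling), not the thesis's exact update law; the example team and weights are invented.

```python
# One weighted nearest-neighbour consensus step on heading angles (sketch).
import numpy as np

def consensus_step(headings, positions, influence, comm_range):
    """Each robot averages the headings of all neighbours within comm_range
    (itself included), weighted by the neighbours' influence values."""
    headings = np.asarray(headings, dtype=float)
    positions = np.asarray(positions, dtype=float)
    influence = np.asarray(influence, dtype=float)
    updated = np.empty_like(headings)
    for i in range(len(headings)):
        dist = np.linalg.norm(positions - positions[i], axis=1)
        nbrs = dist <= comm_range               # local information only
        updated[i] = np.average(headings[nbrs], weights=influence[nbrs])
    return updated

# Hypothetical team of four robots; robot 0 is the more influential one.
headings = [0.1, 0.5, 0.3, 0.7]
positions = [[0, 0], [1, 0], [0, 1], [1, 1]]
new_headings = consensus_step(headings, positions,
                              influence=[3.0, 1.0, 1.0, 1.0], comm_range=1.5)
```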

    Swarm Intelligence and Stigmergy: Robotic Implementation of Foraging Behavior

    Swarm intelligence in multi-robot systems has become an important area of research within collective robotics. Researchers have gained inspiration from biological systems and proposed a variety of industrial, commercial, and military robotics applications. In order to bridge the gap between theory and application, a strong focus is required on robotic implementation of swarm intelligence. To date, theoretical research and computer simulations in the field have dominated, with few successful demonstrations of swarm-intelligent robotic systems. In this thesis, a study of intelligent foraging behavior via indirect communication between simple individual agents is presented. Models of foraging are reviewed and analyzed with respect to the system dynamics and dependence on important parameters. Computer simulations are also conducted to gain an understanding of foraging behavior in systems with large populations. Finally, a novel robotic implementation is presented. The experiment successfully demonstrates cooperative group foraging behavior without direct communication. Trail-laying and trail-following are employed to produce the required stigmergic cooperation. Real robots are shown to achieve increased task efficiency, as a group, resulting from indirect interactions. Experimental results also confirm that trail-based group foraging systems can adapt to dynamic environments
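    The trail-laying and trail-following mechanism can be sketched as a shared grid of evaporating "pheromone". The code below shows only the deposit / evaporate / read cycle; the move rule (follow the strongest adjacent trail, with a random tie-break) stands in for the thesis's actual foraging rule, and all parameters are arbitrary.

```python
# Stigmergic trail grid: deposit, evaporate, and follow (illustrative sketch).
import numpy as np

DEPOSIT, EVAPORATION = 1.0, 0.05   # arbitrary example parameters

def step(trail, robot_xy, rng):
    """Deposit at the robot's cell, evaporate the whole grid, then return the
    in-bounds neighbouring cell with the strongest trail (random tie-break)."""
    x, y = robot_xy
    trail[x, y] += DEPOSIT
    trail *= (1.0 - EVAPORATION)
    candidates = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < trail.shape[0] and 0 <= ny < trail.shape[1]:
            candidates.append((trail[nx, ny] + 1e-6 * rng.random(), (nx, ny)))
    return max(candidates)[1]

trail = np.zeros((30, 30))
pos, rng = (15, 15), np.random.default_rng(0)
for _ in range(100):
    pos = step(trail, pos, rng)
```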

    Mapping by Cooperative Mobile Robots.

    Constructing a system of intelligent robotic mapping agents that can function in an unstructured and unknown environment is a challenging task. With the exploration of our solar system as well as our own planet requiring more robust mapping agents, and with the drastic drop in the price of technology relative to the gains in performance, robotic mapping is becoming a focus of research like never before. Efforts are underway to send mobile robots to map bodies within our solar system. While much of the research in robotic map construction has focused on building maps used by the robotic agents themselves, very little has been done on building maps usable by humans, and yet it is the human that drives the need for mapping solutions. We propose a computational framework for building mobile robotic mapping systems to be deployed in unknown environments. This is the first work known to address the general problem of mapping in unknown terrain, under the effect of errors in readings, operations and systems, using more than a single robot. The system draws upon research in various robotics-related areas by selecting those components and ideas that show promise when applied to mapping for human use via a distributed network of heterogeneous mobile robots. This application of multiple mobile robots, and its orientation toward human end-users, is a new direction in robotics research. We also propose and develop a new paradigm for storing mapping-agent-generated data in a way that allows rapid map construction and correction to compensate for detected errors. We experimentally test the paradigm in a simulated robotic environment, analyze the results, and show that there is a definite gain from correction, particularly in error-rich environments. We also develop methods for applying corrections to the map and test their effectiveness. Finally, we propose some extensions to this work and suggest research in areas not completely covered by our discussion.