
    A distributed architecture for unmanned aerial systems based on publish/subscribe messaging and simultaneous localisation and mapping (SLAM) testbed

    A dissertation submitted in fulfilment for the degree of Master of Science. School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg, South Africa, November 2017. The increased capabilities and lower cost of Micro Aerial Vehicles (MAVs) open up significant opportunities for a rapidly growing number of civilian and commercial applications. Some missions require direct control using a receiver in a point-to-point connection, involving one or very few MAVs. An alternative class of mission is remotely controlled, with the control of the drone automated to a certain extent using mission planning software and autopilot systems. For most emerging missions, there is a need for more autonomous, cooperative control of MAVs, as well as more complex data processing from sensors such as cameras and laser scanners. In the last decade, this has given rise to extensive research from both academia and industry, applying robotics and computer vision concepts to Unmanned Aerial Systems (UASs). However, UASs are often designed for specific hardware and software, thus providing limited integration, interoperability and re-usability across different missions. In addition, there are numerous open issues related to UAS command, control and communication (C3), and to multi-MAV operation. We argue and elaborate throughout this dissertation that some of the recent standards-based publish/subscribe communication protocols can solve many of these challenges and meet the non-functional requirements of MAV robotics applications. This dissertation assesses the MQTT, DDS and TCPROS protocols in a distributed architecture of a UAS control system and Ground Control Station software. While TCPROS has been the leading robotics communication transport for ROS applications, MQTT and DDS are lightweight enough to be used for data exchange between distributed systems of aerial robots. Furthermore, MQTT and DDS are based on industry standards that foster communication interoperability of "things". Both protocols have been widely promoted as addressing many of today's needs in networks based on the Internet of Things (IoT). For example, MQTT has been used to exchange data with space probes, whereas DDS has been employed in aerospace defence and smart-city applications. We designed and implemented a distributed UAS architecture based on each of the publish/subscribe protocols TCPROS, MQTT and DDS. The proposed communication systems were tested with a vision-based Simultaneous Localisation and Mapping (SLAM) system involving three Parrot AR.Drone 2.0 MAVs. Within the context of this study, the MQTT and DDS messaging frameworks serve the purpose of abstracting UAS complexity and heterogeneity. Additionally, these protocols are expected to provide low-latency communication and to scale up to meet the requirements of real-time remote sensing applications. The most important contribution of this work is the implementation of a complete distributed communication architecture for multi-MAV systems. Furthermore, we assess the viability of this architecture and benchmark the performance of the protocols in relation to an autonomous quadcopter navigation testbed composed of a SLAM algorithm, an extended Kalman filter and a PID controller.
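    As a minimal illustration of the publish/subscribe pattern the architecture relies on, the sketch below shows an MQTT exchange of SLAM pose estimates between a MAV and a ground control station using the paho-mqtt Python client. The broker address, topic layout and payload fields are assumptions for the example, not the dissertation's actual schema or code.

```python
# Minimal MQTT publish/subscribe sketch: one MAV publishes SLAM pose estimates,
# the ground control station subscribes to them. Illustrative only; topic names
# and payload fields are assumptions, not the dissertation's schema.
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x constructor style shown below

BROKER = "localhost"              # assumed local broker (e.g. mosquitto)
POSE_TOPIC = "uas/mav1/pose"      # hypothetical topic layout: uas/<vehicle>/<stream>

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is acknowledged.
    client.subscribe(POSE_TOPIC, qos=1)   # QoS 1: at-least-once delivery

def on_message(client, userdata, msg):
    # Ground-station side: decode a pose estimate published by the MAV.
    pose = json.loads(msg.payload)
    print(f"{msg.topic}: x={pose['x']:.2f} y={pose['y']:.2f} z={pose['z']:.2f}")

gcs = mqtt.Client("gcs")          # ground control station subscriber
gcs.on_connect = on_connect
gcs.on_message = on_message
gcs.connect(BROKER, 1883)
gcs.loop_start()

mav = mqtt.Client("mav1")         # would normally run on the vehicle's companion computer
mav.connect(BROKER, 1883)
mav.loop_start()
for t in range(5):
    pose = {"x": 0.1 * t, "y": 0.0, "z": 1.5, "stamp": time.time()}
    mav.publish(POSE_TOPIC, json.dumps(pose), qos=1)
    time.sleep(0.2)
```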

    Mobile robot transportation in laboratory automation

    In this dissertation, a new mobile robot transportation system is developed for modern laboratory automation, connecting distributed automated systems and workbenches. Within the system, a series of scientific and technical issues for indoor robots are presented and solved, including the multi-robot control strategy, indoor transportation path planning, hybrid indoor robot localization, recharging optimization, the robot-automated door interface, and blind robot-arm grasping and placing. The experiments show that the proposed system and methods are effective and efficient.
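    Indoor transportation path planning is one of the issues addressed above. As a generic illustration only (not the dissertation's planner), the sketch below runs grid-based A* between two positions on a made-up occupancy grid of a laboratory corridor.

```python
# Generic grid-based A* path planner for indoor robot transportation.
# A sketch only: the grid, start and goal below are made-up examples.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free cell, 1 = obstacle; start/goal: (row, col)."""
    def h(p):                                # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (cost + 1 + h((r, c)), cost + 1, (r, c), path + [(r, c)]))
    return None                              # no path found

corridor = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
print(astar(corridor, (0, 0), (2, 0)))       # routes around the blocked cells
```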

    AGNI: an API for the control of autonomous service robots

    With the continuing growth of Internet-connected devices, the scalability of the protocols used for communication between them faces a new set of challenges. In robotics, these communication protocols are an essential element and must be able to deliver the desired communication. In the context of a multi-agent platform, the main types of Internet communication protocols used in robotics, together with mission planning and task allocation problems, are reviewed. We define how to represent a message and how to handle its transport between devices in a distributed environment, reviewing all the layers of the messaging process. A review of the ROS platform is also presented, with the intent of integrating the existing communication protocols with ServRobot, a mobile autonomous robot, and the DVA, a distributed autonomous surveillance system. This is done with the objective of assigning missions to ServRobot in a security context.
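    As an illustration of the message-representation and transport questions raised above, the sketch below defines a hypothetical mission-assignment message with a transport-agnostic JSON encoding. The class name, fields and values are assumptions, not the AGNI API's actual schema.

```python
# Illustrative sketch of a mission-assignment message and its wire encoding.
# The field names and values are assumptions for the example.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class MissionMessage:
    mission_id: str
    robot_id: str                     # e.g. "ServRobot"
    task: str                         # e.g. "patrol", "inspect"
    waypoints: List[dict] = field(default_factory=list)
    priority: int = 0

    def encode(self) -> bytes:
        # JSON keeps the message readable and transport-agnostic
        # (it could be carried over MQTT, HTTP or a ROS topic).
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def decode(payload: bytes) -> "MissionMessage":
        return MissionMessage(**json.loads(payload))

msg = MissionMessage(
    mission_id="m-042",
    robot_id="ServRobot",
    task="patrol",
    waypoints=[{"x": 1.0, "y": 2.0}, {"x": 4.5, "y": 2.0}],
    priority=1,
)
wire = msg.encode()
assert MissionMessage.decode(wire) == msg   # round-trips through the wire format
```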

    Supervisory Autonomous Control of Homogeneous Teams of Unmanned Ground Vehicles, with Application to the Multi-Autonomous Ground-Robotic International Challenge

    There are many proposed methods for supervisory control of semi-autonomous robots. There have also been numerous software simulations to determine how many robots can be successfully supervised by a single operator, a problem known as fan-out, but only a few studies have been conducted using actual robots. As evidenced by the MAGIC 2010 competition, there is increasing interest in amplifying human capacity by allowing one or a few operators to supervise a team of robotic agents. This interest motivates a more in-depth evaluation of how many autonomous or semi-autonomous robots an operator can successfully supervise. The MAGIC competition allowed two human operators to supervise a team of robots in a complex search-and-mapping operation, and provided the best opportunity to date to study, through practice, the actual fan-out with multiple semi-autonomous robots. The current research provides a step forward in determining fan-out by offering an initial framework for testing multi-robot teams under supervisory control. One conclusion of this research is that the proposed framework is not complex or complete enough to provide conclusive data for determining fan-out. Initial testing using operators with limited training suggests that there is no obvious pattern to operator interaction time with robots based on the number of robots and the complexity of the tasks. The initial hypothesis, that for a given task and robot there exists an optimal robot-to-operator efficiency ratio, could not be confirmed. Rather, the data suggest that the ability of the operator is a dominant factor in studies involving operators with limited training supervising small teams of robots. It is possible that, with more extensive training, operator times would become more closely related to the number of agents and the complexity of the tasks. The work described in this thesis provides an experimental framework and a preliminary data set for other researchers to critique and build upon. As the demand increases for agent-to-operator ratios greater than one, the need for research in this area will continue to grow.
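    For context, one commonly cited estimate of fan-out (due to Olsen and Goodrich) is neglect time divided by interaction time, plus one. The sketch below computes it for made-up timing values; the thesis's own framework and data are not reproduced here.

```python
# One commonly used fan-out estimate (Olsen & Goodrich): the number of robots a
# single operator can service is roughly neglect time / interaction time + 1.
# The timing values below are made-up illustrations, not data from the thesis.

def fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
    """Estimate how many robots one operator can supervise.

    neglect_time_s: how long a robot makes useful progress without attention.
    interaction_time_s: how long the operator needs to service one robot.
    """
    return neglect_time_s / interaction_time_s + 1.0

# Example: a robot that stays productive for 90 s on its own and needs
# 20 s of operator attention per servicing episode.
print(f"estimated fan-out: {fan_out(90.0, 20.0):.1f}")   # ~5.5 robots
```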

    Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping

    International Mention in the doctoral degree. Mobile robots are increasingly used in applications that move through indoor and outdoor environments, passing from teleoperated applications to autonomous ones such as exploration or navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. These sensors allow the robot to observe, depending on the sensor data type. Cameras mostly give information in two dimensions, with colors and pixels representing an image. Range sensors give distances from the robot to obstacles. Depth cameras mix both technologies to expand their information to three dimensions. Light Detection and Ranging (LiDAR) provides information about the distance to the sensor, and extends its range to planes and three dimensions with greater precision. Mobile robots therefore use these sensors to scan the scenario while moving. If the robot already has a map, the sensors measure, and the robot finds features that correspond to features on the map in order to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and they remain an important piece of everyday information. Maps are used to navigate from one place to another, to localize something inside some boundaries, or as a form of documentation of essential features. Naturally, an intuitive way of making an autonomous mobile robot is to implement maps of geometric information to represent the environment. On the other hand, if the robot does not have a previous map, it has to build one while moving around. To achieve this task, the robot combines the range-sensor information with the odometry information. However, sensors have their own flaws due to precision, calibration or accuracy. Furthermore, moving a robot has its physical constraints, and faults may occur randomly, such as wheel drift or mechanical miscalibration, which can make the odometers fail in the measurement and cause misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome the uncertainty of the sensors while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while the robot's position is corrected based on the information of two or more consecutive scans matched together, i.e. by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant for innovation, modification and adaptation, owing to advances in new sensors and the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation or movement between two robot observations, by finding the best possible value of an equation that scores a good transformation; that is, it searches for a solution in an optimal way. Typically, this optimization process has been solved using classical optimization algorithms, such as Newton's method or gradient and second-derivative formulations, yet these require an initial guess or initial state that points the algorithm in the right direction, most of the time obtained from odometers or inertial sensors. However, it is not always possible to have or to trust this information, as some scenarios are complex and sensors may fail.
To solve this problem, this research presents the use of evolutionary optimization algorithms: metaheuristics based on iterative evolution that mimic optimization processes and require no prior information to search a limited range of solutions for a fitness function. The main goal of this dissertation is to study, develop and prove the benefits of evolutionary optimization algorithms in simultaneous localization and mapping for mobile robots in six-degrees-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, proposes a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor, outdoor and underground mapping applications.
Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: Gerardo Fernández López; Secretary: María Dolores Blanco Rojas; Member: David Álvarez Sánche
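    As a reduced illustration of evolutionary scan matching, the sketch below searches for a 2-D rigid transform (x, y, theta) with a simple (mu + lambda) evolution strategy, minimizing the mean nearest-neighbour distance between two synthetic scans. The thesis works in six degrees of freedom on LiDAR data with its own fitness formulation; everything below (data, population sizes, mutation scale) is an assumption for the toy example.

```python
# Toy 2-D evolutionary scan matching: evolve a rigid transform (x, y, theta)
# that minimises the mean nearest-neighbour distance between two point sets.
# Illustrative only; not the thesis's 6-DOF algorithms or fitness function.
import numpy as np

rng = np.random.default_rng(0)

def transform(points, pose):
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([x, y])

def fitness(pose, scan, ref):
    # Mean nearest-neighbour distance of the transformed scan to the reference.
    moved = transform(scan, pose)
    d = np.linalg.norm(moved[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Synthetic data: the reference scan is the current scan moved by a known pose.
scan = rng.uniform(-5, 5, size=(200, 2))           # current observation
true_pose = np.array([0.6, -0.3, np.deg2rad(10)])  # motion we want to recover
ref = transform(scan, true_pose)                   # previous (reference) scan

# (mu + lambda) evolutionary search over candidate poses.
pop = rng.normal(0.0, 0.5, size=(30, 3))           # initial random poses
for gen in range(60):
    scores = np.array([fitness(p, scan, ref) for p in pop])
    parents = pop[np.argsort(scores)[:10]]         # keep the 10 fittest
    children = parents[rng.integers(0, 10, 60)] + rng.normal(0, 0.05, (60, 3))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p, scan, ref) for p in pop])]
print("estimated pose:", best)                     # should land near true_pose
print("true pose:     ", true_pose)
```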

    HIRO-NET: Heterogeneous Intelligent Robotic Network for Internet Sharing in Disaster Scenarios

    This article describes HIRO-NET, a Heterogeneous Intelligent Robotic Network. HIRO-NET is an emergency, infrastructure-less network that aims to address the problem of providing connectivity in the immediate aftermath of a natural disaster, where no cellular or wide area network is operational and no Internet access is available. HIRO-NET establishes a two-tier wireless mesh network in which the Lower Tier connects nearby survivors in a self-organized mesh via Bluetooth Low Energy (BLE), and the Upper Tier creates long-range VHF links between autonomous robots exploring the disaster-stricken area. HIRO-NET's main goal is to enable users in the disaster area to exchange text messages in order to share critical information and request help from first responders. The mesh network discovery problem is analyzed, and a network protocol specifically designed to facilitate the exploration process is presented. We show how HIRO-NET robots successfully discover, bridge and interconnect local mesh networks. Results show that the Lower Tier always reaches network convergence and that the Upper Tier can virtually extend HIRO-NET's functionality to the range of a small metropolitan area. In the event that an Internet connection is still available to some user, HIRO-NET is able to opportunistically share it and provide access to low data-rate services (e.g., Twitter, Gmail) to the whole network. Results suggest that a temporary emergency network covering a metropolitan area can be created in tens of minutes.
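    As a rough illustration of lower-tier discovery convergence, the toy simulation below floods membership information between nodes within radio range until every node's view stops changing. The node count, range threshold and gossip rule are made-up assumptions and do not reflect HIRO-NET's actual protocol.

```python
# Toy model of lower-tier discovery: nodes within radio range repeatedly merge
# their known membership sets until no node learns anything new (convergence).
# Schematic illustration only, not HIRO-NET's discovery protocol.
import random

random.seed(1)
N, RANGE = 12, 0.35                                    # nodes and radio range
pos = {i: (random.random(), random.random()) for i in range(N)}
neigh = {i: {j for j in range(N) if j != i and
             ((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2) ** 0.5 < RANGE}
         for i in range(N)}

known = {i: {i} for i in range(N)}                     # each node starts knowing itself
rounds = 0
while True:
    updates = {i: known[i].union(*(known[j] for j in neigh[i])) for i in range(N)}
    if updates == known:                               # no new information: converged
        break
    known, rounds = updates, rounds + 1

print(f"converged after {rounds} gossip rounds")
print("largest connected component size:", max(len(v) for v in known.values()))
```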