
    Ground robotics in tunnels: Keys and lessons learned after 10 years of research and experiments

    The work reported in this article describes the research advances and the lessons learned by the Robotics, Perception and Real-Time group over a decade of research in the field of ground robotics in confined environments. This study has primarily focused on localization, navigation, and communications in tunnel-like environments. As will be discussed, this type of environment presents several special characteristics that often make well-established techniques fail. The aim is to share, in an open way, the experience, errors, and successes of this group with the robotics community, so that those who work in such environments can avoid (some of) the errors made. At the very least, these findings can be readily taken into account when designing a solution, without needing to sift through the technical details found in the papers cited within this text.

    Comparison of Semi-autonomous Mobile Robot Control Strategies in Presence of Large Delay Fluctuation

    We propose semi-autonomous control strategies to assist in the teleoperation of mobile robots under unstable communication conditions. A short-term autonomous control system provides the assistance in the semi-autonomous control strategies when the teleoperation is compromised. The short-term autonomous control comprises lateral and longitudinal functions. The lateral control is based on an artificial potential field method in which obstacles are repulsive and the route is attractive. LiDAR-based artificial potential field methods are well studied; we present a novel artificial potential field method based on color and depth images. The benefits of a camera system compared to a LiDAR are that a camera detects color, is cheaper, and has no moving parts. Moreover, the use of active sensors is not desired in the particle accelerator environment. A set of experiments with a robot prototype was carried out to validate this system. The experiments took place in an environment that mimics the accelerator tunnel environment, with the difficulty of the teleoperation varied via obstacles. Fully manual and autonomous control were compared with the proposed semi-autonomous control strategies. The results show that the teleoperation is improved with autonomous, delay-dependent, and control-dependent assist compared to fully manual control. Based on operation time, control-dependent assist performed best, reducing the time by 12% on the tunnel section with the most obstacles. Thanks to its hardware simplicity and light computational demand, the presented system can easily be applied to common industrial robots operating, e.g., in warehouses or factories.
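    The depth-image-based lateral assist described above can be illustrated with a minimal artificial potential field sketch. The function below is a hypothetical, simplified stand-in for the paper's method, not its implementation: image columns containing near obstacles repel the heading, while the next route point attracts it. All names, gains, and the assumed field of view are illustrative.

```python
import numpy as np

def apf_steering(depth, goal_bearing, d_max=3.0, k_rep=1.0, k_att=1.0):
    """Compute a lateral steering command from a depth image via an
    artificial potential field: near obstacles repel, the route attracts.
    `depth` is an (H, W) array of metric depths; `goal_bearing` is the
    bearing (rad) of the next route point, positive to the left.
    Illustrative sketch: gains, FOV, and force shaping are assumptions."""
    h, w = depth.shape
    # Bearing of each image column, assuming a ~90 degree horizontal FOV.
    bearings = np.linspace(np.pi / 4, -np.pi / 4, w)
    # Nearest obstacle per column (ignore invalid zero depths).
    col_depth = np.where(depth > 0, depth, np.inf).min(axis=0)
    # Repulsive magnitude grows as obstacles get closer than d_max.
    close = col_depth < d_max
    rep = np.where(close, k_rep * (1.0 / col_depth - 1.0 / d_max), 0.0)
    # Each obstacle pushes the heading away from its own bearing.
    repulsive = -np.sum(rep * np.sign(bearings + 1e-9)) / w
    # The route pulls the heading toward the goal bearing.
    attractive = k_att * goal_bearing
    return attractive + repulsive  # desired steering command (rad)
```

    With no obstacles within `d_max`, the command reduces to the attractive pull toward the route; an obstacle wall on the left side of the image biases the command to the right.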

    Mapping and Semantic Perception for Service Robotics

    In order to perform a task, robots need to be able to locate themselves in the environment. If a robot does not know where it is, it cannot move, reach its goal, and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature that enables robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand the environment by incorporating semantic information. This information also improves the localization and navigation of robotic platforms. In addition, we demonstrate how a robot with limited capabilities can reliably and efficiently build the semantic maps needed to perform its everyday tasks. The mapping system presented has the following features. On the map-building side, we propose the externalization of expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore an environment and obtain a semantic map with the objects relevant to a given mission.
    Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system that is mainly focused on producing geometric maps composed simply of points or edges, but without any associated meaning or semantic content. The non-annotated map is built using only the information extracted from a monocular image sequence. The semantic or annotated part of the map (the objects) is estimated using the information in the image sequence and precomputed object models. As a second step, we improve the EKF SLAM presented previously by designing and implementing a visual SLAM system based on a distributed framework. The expensive map optimization and storage is allocated as a service in the Cloud, while a light camera-tracking client runs on a local computer onboard the robot. The robot's onboard computers are freed from most of the computation, the only extra requirement being an internet connection.
    The next step is to exploit the semantic information we are able to generate in order to improve the navigation of a robot. The contribution here focuses on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, since service robots work in spaces shared with people. The system masks the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, relocation, accuracy, and reusability of the geometric map. In addition, it estimates the full trajectory of each detected person with respect to the global map of the scene, irrespective of the location of the moving camera when the person was imaged.
    Finally, we focus our research on rescue and security applications. The deployment of a multi-robot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination, and communications among all the robots. The proposed architecture integrates, jointly with all the above-mentioned functionalities, several novel features to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
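    The person-masking step for SLAM in populated environments can be sketched as a simple filter that discards feature points falling on segmented person regions before they enter the rigid SLAM pipeline. This is an illustrative assumption of how such a mask might be applied (using a boolean segmentation mask from any off-the-shelf person detector), not the thesis implementation.

```python
import numpy as np

def filter_keypoints(keypoints, person_mask):
    """Drop feature points that fall on image regions occupied by people,
    so the rigid SLAM pipeline only tracks the static scene.
    `keypoints` is an (N, 2) array of (row, col) pixel coordinates and
    `person_mask` a boolean (H, W) array, True where a person was
    segmented. Hypothetical helper, illustrative of the masking idea."""
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    keep = ~person_mask[rows, cols]  # keep points outside person regions
    return keypoints[keep]
```

    In a full system the surviving points would feed the tracker and map, while the masked regions could instead feed a separate per-person trajectory estimator, as the thesis describes.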

    Hydrolink 2021/2. Artificial Intelligence

    Topic: Artificial Intelligence

    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, and image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to intraoperatively plan the intervention directly on the patient's bone. The glasses and all the other instruments are accurately calibrated using new techniques. Several interventions demonstrate the effectiveness of this approach.

    Formation-Based Odour Source Localisation Using Distributed Terrestrial and Marine Robotic Systems

    This thesis tackles the problem of robotic odour source localisation, that is, the use of robots to find the source of a chemical release. As the odour travels away from the source, in the form of a plume carried by the wind or current, small scale turbulence causes it to separate into intermittent patches, suppressing any gradients and making this a particularly challenging search problem. We focus on distributed strategies for odour plume tracing in the air and in the water and look primarily at 2D scenarios, although novel results are also presented for 3D tracing. The common thread to our work is the use of multiple robots in formation, each outfitted with odour and flow sensing devices. By having more than one robot, we can gather observations at different locations, thus helping overcome the difficulties posed by the patchiness of the odour concentration. The flow (wind or current) direction is used to orient the formation and move the robots up-flow, while the measured concentrations are used to centre the robots in the plume and scale the formation to trace its limits. We propose two formation keeping methods. For terrestrial and surface robots equipped with relative or absolute positioning capabilities, we employ a graph-based formation controller using the well-known principle of Laplacian feedback. For underwater vehicles lacking such capabilities, we introduce an original controller for a leader-follower triangular formation using acoustic modems with ranging capabilities. The methods we propose underwent extensive experimental evaluation in high-fidelity simulations and real-world trials. The marine formation controller was implemented in MEDUSA autonomous vehicles and found to maintain a stable formation despite the multi-second ranging period. The airborne plume tracing algorithm was tested using compact Khepera robots in a wind tunnel, yielding low distance overheads and reduced tracing error. 
A combined approach for marine plume tracing was evaluated in simulation, with promising results.
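    The graph-based formation keeping with Laplacian feedback can be sketched as follows: each robot moves to cancel the disagreement between its actual and desired displacements to its neighbours, so the team converges to the desired shape up to a common translation. The shapes, gains, and discrete-time update below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def laplacian_formation_step(positions, offsets, adjacency, dt=0.1, gain=1.0):
    """One discrete step of graph-based formation keeping via Laplacian
    feedback. `positions` and `offsets` are (N, 2) arrays (current robot
    positions and desired formation offsets); `adjacency` is a symmetric
    (N, N) 0/1 neighbour matrix. Illustrative sketch with assumed gains."""
    # Per-robot formation error: deviation from the desired offset.
    error = positions - offsets
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # Laplacian feedback drives neighbours to agree on the error,
    # which places the team in formation up to a common translation.
    velocity = -gain * laplacian @ error
    return positions + dt * velocity
```

    Iterating this update on a connected graph contracts the inter-robot errors (the step size `dt * gain` must stay below 2 divided by the largest Laplacian eigenvalue for stability), leaving only the shared translation undetermined.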
