UAV/UGV Autonomous Cooperation: UAV Assists UGV to Climb a Cliff by Attaching a Tether
This paper proposes a novel cooperative system for an Unmanned Aerial Vehicle (UAV) and an Unmanned Ground Vehicle (UGV) which utilizes the UAV not only as a flying sensor but also as a tether attachment device. The two robots are connected by a tether, allowing the UAV to anchor the tether to a structure at the top of steep terrain that is unreachable for UGVs. This enhances the poor traversability of the UGV not only by providing a wider range of scanning and mapping from the air, but also by allowing the UGV to climb steep terrain by winding in the tether. In addition, we present an autonomous framework for collaborative navigation and tether attachment in an unknown environment. The UAV employs visual-inertial navigation with 3D voxel mapping and obstacle-avoidance planning. The UGV makes use of the voxel map to generate an elevation map and executes path planning based on a traversability analysis. Furthermore, we compare the pros and cons of possible tether-anchoring methods from multiple points of view. To increase the probability of successful anchoring, we evaluated the anchoring strategy experimentally. Finally, the feasibility and capability of the proposed system were demonstrated in an autonomous field mission experiment with an obstacle and a cliff.
Comment: 7 pages, 8 figures, accepted to the 2019 International Conference on Robotics & Automation. Video: https://youtu.be/UzTT8Ckjz1
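The traversability analysis is not spelled out in the abstract; one common formulation thresholds the local slope of the UGV's elevation map. A minimal sketch of that idea (the 30° limit and grid sizes are illustrative values, not the paper's):

```python
import numpy as np

def traversability_mask(elevation, cell_size, max_slope_deg=30.0):
    """Mark cells whose local slope exceeds a threshold as non-traversable.

    `elevation` is an HxW grid of heights (metres); `cell_size` is the grid
    resolution in metres. The slope limit is an example value, not the
    paper's parameter.
    """
    # Finite-difference height gradients along rows (y) and columns (x).
    dzdy, dzdx = np.gradient(elevation, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return slope <= max_slope_deg

# A flat floor passes; a 45-degree ramp (0.1 m rise per 0.1 m cell) fails.
flat = np.zeros((5, 5))
ramp = np.fromfunction(lambda i, j: j * 0.1, (5, 5))
```

A path planner would then search only over cells where the mask is true, which matches the abstract's "path planning based on a traversability analysis" at a high level.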
Mapping and Semantic Perception for Service Robotics
In order to perform a task, robots need to be able to locate themselves in the environment. If a robot does not know where it is, it cannot move, reach its goal, and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature that enables robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand the environment by incorporating semantic information. This information also improves the localization and navigation of robotic platforms. In addition, we demonstrate how a simple robot can reliably and efficiently build the semantic maps needed to perform its quotidian tasks.

The mapping system presented has the following features. On the map-building side, we propose externalizing expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore a room and obtain a semantic map with the objects relevant to a given mission.

Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system, which has mainly been focused on producing geometric maps composed simply of points or edges, without any associated meaning or semantic content. The non-annotated map is built using only the information extracted from an image sequence. The semantic or annotated parts of the map, the objects, are estimated using the information in the image sequence and precomputed object models. As a second step, we improve on the EKF SLAM presented previously by designing and implementing a visual SLAM system based on a distributed framework. The expensive map optimization and storage is allocated as a service in the cloud, while a light camera-tracking client runs on a local computer onboard the robot. The robot's onboard computers are freed from most of the computation; the only extra requirement is an internet connection.

The next step is to exploit the semantic information we are able to generate to improve the navigation of a robot. The contribution of this thesis here is focused on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, since service robots work in spaces shared with people. The system masks the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, relocation, accuracy, and reusability of the geometric map. In addition, it estimates the full trajectory of each detected person with respect to the global map of the scene, irrespective of the location of the moving camera when the person was imaged.

Finally, we focus our research on rescue and security applications. The deployment of a multi-robot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination, and communications among all the robots. The proposed architecture integrates, jointly with all the above-mentioned functionalities, several novel research aspects to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
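The thesis abstract describes masking image regions occupied by people out of the rigid SLAM pipeline. As a minimal illustration of that filtering step (not the thesis's actual implementation), dropping tracked keypoints that fall inside person detections could look like:

```python
import numpy as np

def filter_keypoints(keypoints, person_boxes):
    """Drop keypoints that fall inside any detected person bounding box.

    `keypoints` is an Nx2 array of (x, y) pixel coordinates and
    `person_boxes` a list of (x_min, y_min, x_max, y_max) detections.
    Only the surviving, presumably static, points would be fed to the
    rigid SLAM pipeline.
    """
    keep = np.ones(len(keypoints), dtype=bool)
    for x0, y0, x1, y1 in person_boxes:
        inside = ((keypoints[:, 0] >= x0) & (keypoints[:, 0] <= x1) &
                  (keypoints[:, 1] >= y0) & (keypoints[:, 1] <= y1))
        keep &= ~inside
    return keypoints[keep]
```

In practice the masks would come from a person detector or segmenter; the bounding-box form here is a simplification.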
Design, Construction, Energy Modeling, and Navigation of a Six-Wheeled Differential Drive Robot to Deliver Medical Supplies inside Hospitals
Differential drive mobile robots have been the most ubiquitous kind of robot for the last few decades. Because each wheel of a differential drive mobile robot can be controlled independently, such robots give end-users additional flexibility in creating new applications, including personal assistance, security, warehouse and distribution, and ocean and space exploration. In a clinic or hospital, delivering medicines and patient records is a frequently needed activity that medical personnel often find repetitive and time-consuming. Our research was to design and construct a six-wheeled differential drive robot to deliver medical supplies inside hospitals, to produce an energy model for it, and to develop a navigation control method. Such a robot is expected to lessen the workload of medical staff. We therefore present the design and implementation of a six-wheeled differential drive robot with a password-protected medicine carrier, which ensures that only authorized medical personnel can receive medical supplies. The low-cost robot base and the medicine carrier were physically built. Besides the actual robot design and fabrication, a kinematic model of the robot was developed, and a navigation control algorithm to avoid obstacles was implemented in MATLAB/Simulink. The kinematic model helps the robot achieve better energy optimization. To develop the obstacle avoidance algorithm, we investigated the use of the Robot Operating System (ROS) and Simultaneous Localization and Mapping (SLAM) for the mapping and navigation of a robotic platform named TurtleBot 2. Finally, using the Webots robot simulator, the navigation of the six-wheeled mobile robot was demonstrated in a hospital-like simulation environment.
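The kinematic model itself is not given in the abstract; the standard differential-drive (unicycle) relations it likely builds on can be sketched as follows (treating the wheels on each side of the six-wheeled base as driven together is an assumption for illustration, not the paper's exact model):

```python
import math

def diff_drive_velocity(omega_l, omega_r, wheel_radius, track_width):
    """Body-frame velocities of a differential-drive base.

    omega_l / omega_r are left and right wheel angular speeds (rad/s);
    wheel_radius and track_width are in metres.
    """
    v = wheel_radius * (omega_r + omega_l) / 2.0          # forward speed (m/s)
    w = wheel_radius * (omega_r - omega_l) / track_width  # yaw rate (rad/s)
    return v, w

def integrate_pose(x, y, theta, v, w, dt):
    """Euler integration of the unicycle model over one timestep."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Equal wheel speeds give pure forward motion (zero yaw rate); opposite speeds give rotation in place. An energy model can then be layered on top by relating wheel torques and speeds to electrical power.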
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. One MAV can thereby support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios.
Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
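The local visual servoing is not detailed in the abstract; a minimal proportional image-based sketch, where the MAV generates yaw and climb rates to keep the tracked ground robot centred in its camera image, might look like this (gains and sign conventions are illustrative assumptions):

```python
def servo_command(target_px, image_size, gain=0.5):
    """Proportional image-based servo toward the image centre.

    `target_px` is the (u, v) pixel position of the tracked ground robot;
    `image_size` is the (width, height) of the image. The gain and the
    sign conventions are placeholders, not the paper's tuning.
    """
    u, v = target_px
    w, h = image_size
    # Normalised offsets in [-1, 1]; zero when the target is centred.
    err_u = (u - w / 2.0) / (w / 2.0)
    err_v = (v - h / 2.0) / (h / 2.0)
    yaw_rate = -gain * err_u   # turn toward the target horizontally
    climb_rate = gain * err_v  # adjust altitude for vertical offset
    return yaw_rate, climb_rate
```

A centred target yields zero commands, so the MAV holds its viewpoint; switching which robot is tracked is what allows one MAV to serve several ground robots in turn.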
Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures
With the rapid development of Automated Vehicles (AVs), the boundaries of their functionalities are being pushed and new challenges are being imposed. In increasingly complex and dynamic environments, it is fundamental to rely on more powerful onboard sensors and, usually, AI. However, there are limitations to this approach. As AVs are increasingly integrated into several industries, expectations regarding their cooperation abilities are growing, and vehicle-centric approaches to sensing and reasoning become hard to integrate. The proposed approach is to extend perception to the environment, i.e. outside of the vehicle, by making it smarter via the deployment of wireless sensors and actuators. This vastly improves perception capabilities in dynamic and unpredictable scenarios, often more cheaply, relying mostly on lower-cost sensors and embedded devices whose strength lies in large-scale deployment rather than centralized sensing abilities. Consequently, to support the development and deployment of such cooperative actions in a seamless way, we require co-simulation frameworks that can encompass multiple perspectives of control and communications for the AVs, the wireless sensors and actuators, and other actors in the environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies for integrating several simulation tools to construct a framework capable of supporting the development, testing, and validation of such smart, cooperative environments. This endeavor was undertaken by building upon an existing simulation framework known as AuNa. We extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than relying only on vehicle-based sensors. Moreover, we devised a cooperative perception approach within this framework, showcasing its substantial potential and effectiveness. This will enable the demonstration of multiple cooperation scenarios and also ease the deployment phase by relying on the same software architecture.
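A core step in such cooperative perception is expressing detections from infrastructure-mounted sensors in a common world frame before fusing them with vehicle-side observations. A minimal 2D sketch of that transform (frame conventions assumed for illustration, not taken from the thesis):

```python
import math

def sensor_to_world(detection_xy, sensor_pose):
    """Transform a detection from a fixed roadside sensor's frame into the
    world frame so it can be fused with vehicle observations.

    `detection_xy` is the (x, y) position of the detected object in the
    sensor frame; `sensor_pose` is the sensor's (x, y, yaw) in the world
    frame. A 2D rigid transform is a simplification of the full problem.
    """
    dx, dy = detection_xy
    sx, sy, yaw = sensor_pose
    c, s = math.cos(yaw), math.sin(yaw)
    # Rotate into the world frame, then translate by the sensor position.
    return (sx + c * dx - s * dy, sy + s * dx + c * dy)
```

In a ROS2-based setup such as the one described, this role is typically played by publishing detections with frame metadata and letting the transform infrastructure resolve them into a shared map frame.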
Active SLAM: A Review On Last Decade
This article presents a comprehensive review of the Active Simultaneous
Localization and Mapping (A-SLAM) research conducted over the past decade. It
explores the formulation, applications, and methodologies employed in A-SLAM,
particularly in trajectory generation and control-action selection, drawing on
concepts from Information Theory (IT) and the Theory of Optimal Experimental
Design (TOED). This review includes both qualitative and quantitative analyses
of various approaches, deployment scenarios, configurations, path-planning
methods, and utility functions within A-SLAM research. Furthermore, this
article introduces a novel analysis of Active Collaborative SLAM (AC-SLAM),
focusing on collaborative aspects within SLAM systems. It includes a thorough
examination of collaborative parameters and approaches, supported by both
qualitative and statistical assessments. This study also identifies limitations
in the existing literature and suggests potential avenues for future research.
This survey serves as a valuable resource for researchers seeking insights into
A-SLAM methods and techniques, offering a current overview of A-SLAM
formulation.
Comment: 34 pages, 8 figures, 6 tables
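The Information Theory utilities surveyed in such work commonly score candidate actions by the expected reduction in map uncertainty. A minimal sketch of one such utility, the Shannon entropy of an occupancy grid (illustrative, not a formulation taken from the survey):

```python
import math

def map_entropy(occupancy_probs):
    """Shannon entropy (in bits) of an occupancy grid.

    `occupancy_probs` holds per-cell occupancy probabilities. In an
    active-SLAM utility, candidate trajectories whose predicted
    observations reduce this value the most are preferred.
    """
    h = 0.0
    for p in occupancy_probs:
        if 0.0 < p < 1.0:  # fully known cells contribute zero entropy
            h -= p * math.log2(p) + (1 - p) * math.log2(1 - p)
    return h
```

An unobserved cell (p = 0.5) contributes one full bit, while cells that are confidently free or occupied contribute nothing, so exploration naturally gravitates toward unknown space.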