177 research outputs found

    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

    Full text link
    Mobile manipulation tasks are one of the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches promise to increase the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance. Comment: In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018
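    The abstract's core idea, assistance functions at different autonomy levels, can be sketched as a simple dispatcher. This is a minimal illustrative sketch, not Centauro's actual control stack; `planner.refine`, `planner.plan`, and `planner.operator_approves` are hypothetical placeholders.

        from enum import Enum

        class AutonomyLevel(Enum):
            DIRECT_TELEOPERATION = 0   # operator drives every joint directly
            ASSISTED = 1               # operator gives coarse commands, robot refines
            SUPERVISED_AUTONOMY = 2    # robot plans and executes, operator approves

        def execute_command(level, operator_input, planner):
            """Dispatch an operator command according to the active autonomy level."""
            if level is AutonomyLevel.DIRECT_TELEOPERATION:
                return operator_input                  # pass commands straight through
            if level is AutonomyLevel.ASSISTED:
                return planner.refine(operator_input)  # e.g. snap to a valid foothold
            plan = planner.plan(operator_input)        # operator only sets the goal
            return plan if planner.operator_approves(plan) else None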

    Cognitive Task Planning for Smart Industrial Robots

    Get PDF
    This research work presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a factory. The objective of Industrie 4.0 is the creation of Smart Factories: modular factories equipped with cyber-physical systems able to heavily customize products under the conditions of highly flexible mass production. Such systems should communicate and cooperate with each other and with humans in real time via the Internet of Things. They should intelligently adapt to changing surroundings and autonomously navigate inside a factory while moving obstacles that occlude free paths, even when seen for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories: they follow electrified wires embedded in the ground or lines painted on the floor. When no environment changes are expected and cycle times are critical, this kind of planning is functional. When workspaces and tasks change frequently, it is better to plan dynamically: robots should navigate autonomously without relying on modifications of their environments. Consider human behavior: humans reason about the environment and consider moving obstacles if a certain goal cannot be reached or if moving objects may significantly shorten the path to it. This problem is named Navigation Among Movable Obstacles and is mostly known in rescue robotics. This work transposes the problem to an industrial scenario and addresses its two main challenges: the high dimensionality of the state space and the treatment of uncertainty. The proposed NAMO algorithm aims to focus exploration on less explored areas. For this reason it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration (KPIECE) algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it and the effort needed to free it from obstacles. The resulting algorithm is scalable because it is independent of the size of the map and of the number, shape, and pose of obstacles. It does not impose restrictions on the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. The algorithm handles sensor feedback and corrects uncertainties.
    Robotics usually separates motion planning and manipulation problems. NAMO forces their combined processing by introducing the need to manipulate multiple, often unknown, objects while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large variety of existing objects. A Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and to disseminate the information gained during the fulfillment of tasks. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects. It is accessible from any robot connected to the Cloud. It can be considered a data store for the efficient and reliable execution of repetitive tasks, and a Web-based repository for the exchange of information between robots and for speeding up the learning phase. No other manipulation ontology exists that respects the IEEE Standard and, regardless of the standard, the proposed ontology differs from existing ones in the type of features saved and the efficient way in which they can be accessed: through a very fast Cascade Hashing algorithm. The Engine computes and stores manipulation actions when they are not present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It is characterized by a modular structure where each software block is completely reusable. Every block is based on the open-source Robot Operating System (ROS). Not all industrial robot controllers are designed to be ROS-compliant: this thesis also presents the method adopted during this research to open industrial robot controllers and create a ROS-Industrial interface for them.
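    The key algorithmic idea above, scoring each cell by combining the effort to reach it with the effort to clear it of obstacles, might look like the sketch below. This is an illustrative reading of the abstract, not the thesis code; `reach_effort`, `clear_effort`, and `grid.visit_count` are hypothetical placeholders.

        def cell_importance(cell, grid, reach_effort, clear_effort,
                            w_reach=1.0, w_clear=1.0):
            """Exploration score for one grid cell, sketching the KPIECE extension.

            reach_effort / clear_effort are caller-supplied callables, e.g. the
            path cost from the robot pose and the summed push/grasp costs of the
            obstacles occupying the cell. Less-visited, cheaper cells score higher.
            """
            effort = w_reach * reach_effort(cell) + w_clear * clear_effort(cell)
            coverage = grid.visit_count(cell)   # KPIECE-style coverage estimate
            return 1.0 / ((1.0 + coverage) * (effort + 1e-6))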

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    Get PDF
    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities with which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exert teleoperative control.
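    Feature (2), a task specified as an unordered list of goal predicates, could be represented roughly as below. This is a minimal sketch under assumptions, not the thesis' actual data model; the predicate names and labels are invented examples.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Predicate:
            name: str     # e.g. "on", "in"
            args: tuple   # grounded semantic labels, e.g. ("red_block", "tray")

        # A task is an *unordered* set of goal predicates; the planner may choose
        # any execution order that eventually satisfies all of them.
        goal = {
            Predicate("on", ("red_block", "blue_block")),
            Predicate("in", ("green_block", "bin_A")),
        }

        def task_complete(world_state, goal):
            # world_state: set of predicates currently true in the scene
            return goal.issubset(world_state)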

    Fuzzy optimisation based symbolic grounding for service robots

    Get PDF
    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Symbolic grounding is a bridge between task-level planning and actual robot sensing and actuation. Uncertainties raised by unstructured environments create a bottleneck for integrating traditional artificial intelligence with service robotics. In this research, a fuzzy optimisation based symbolic grounding approach is presented. This approach can handle uncertainties and helps service robots to determine the most comfortable base region for grasping objects in a fetch-and-carry task. Novel techniques are applied to establish the fuzzy objective function, to model fuzzy constraints, and to perform fuzzy optimisation. The approach does not have the shortcomings of previous work, and the computation time is dramatically reduced compared with other methods. The advantages of the proposed fuzzy optimisation based approach are evidenced by experiments undertaken on the Care-O-bot 3 (COB 3) and Robot Operating System (ROS) platforms.
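    One plausible shape for such a fuzzy base-placement score is sketched below: membership functions grade each candidate base position, and a min t-norm (fuzzy AND) aggregates them. The membership parameters are illustrative assumptions, not those of the thesis.

        import numpy as np

        def trapezoid(x, a, b, c, d):
            """Standard trapezoidal membership function."""
            return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                                      (d - x) / (d - c + 1e-9)), 0.0, 1.0)

        def base_comfort(candidates, target):
            """Fuzzy comfort score of candidate base positions for grasping.

            Parameters below are invented for illustration only."""
            dist = np.linalg.norm(candidates - target, axis=1)
            mu_reach = trapezoid(dist, 0.3, 0.5, 0.8, 1.0)  # arm prefers mid-range
            mu_safe  = trapezoid(dist, 0.4, 0.6, 2.0, 2.5)  # do not crowd the target
            return np.minimum(mu_reach, mu_safe)            # fuzzy AND (min t-norm)

        candidates = np.random.uniform(-1.5, 1.5, size=(500, 2))
        best = candidates[np.argmax(base_comfort(candidates, np.zeros(2)))]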

    High-level environment representations for mobile robots

    Get PDF
    In most robotic applications we are faced with the problem of building a digital representation of the environment that allows the robot to autonomously complete its tasks. This internal representation can be used by the robot to plan a motion trajectory for its mobile base and/or end-effector. For most man-made environments we do not have a digital representation, or the one we have is inaccurate. Thus, the robot must be capable of building it autonomously, by integrating incoming sensor measurements into an internal data structure. For this purpose, a common solution consists of solving the Simultaneous Localization and Mapping (SLAM) problem. The map obtained by solving a SLAM problem is called "metric" and describes the geometric structure of the environment. A metric map is typically made up of low-level primitives (like points or voxels). This means that even though it represents the shape of the objects in the robot workspace, it lacks the information of which object a surface belongs to. Having an object-level representation of the environment has the advantage of augmenting the set of possible tasks that a robot may accomplish. To this end, in this thesis we focus on two aspects. We propose a formalism to represent in a uniform manner 3D scenes consisting of different geometric primitives, including points, lines and planes. Consequently, we derive a local registration and a global optimization algorithm that can exploit this representation for robust estimation. Furthermore, we present a Semantic Mapping system capable of building an object-based map that can be used for complex task planning and execution. Our system exploits effective reconstruction and recognition techniques that require no a priori information about the environment and can be used under general conditions.
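    A uniform treatment of points, lines and planes might, for example, give each primitive an anchor point and a unit direction/normal, so one residual function serves registration for all types. This is an assumed encoding for illustration, not the thesis' actual formalism.

        import numpy as np

        def primitive_error(p, prim):
            """Residual of a 3D point w.r.t. a point, line, or plane primitive.

            Hypothetical uniform encoding: every primitive stores an anchor `o`
            and a unit vector `d` (line direction or plane normal); the type
            selects how `d` is used. A registration solver could minimise the
            squared norms of these residuals over all correspondences."""
            o, d = prim["o"], prim["d"]
            if prim["type"] == "point":
                return p - o                            # full 3D residual
            if prim["type"] == "line":
                v = p - o
                return v - np.dot(v, d) * d             # component orthogonal to line
            return np.dot(p - o, d) * d                 # plane: component along normal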

    Vegetation detection and terrain classification for autonomous navigation

    Get PDF
    This thesis introduces seven novel contributions for two perception tasks, vegetation detection and terrain classification, that are at the core of any control system for efficient autonomous navigation in outdoor environments. Regarding vegetation detection, we first describe a vegetation index-based method (1), which relies on the absorption and reflectance properties of vegetation with respect to visible and near-infrared light. Second, a 2D/3D feature fusion (2), which imitates the human visual system in vegetation interpretation, is investigated. Third, an integrated vision system (3) is proposed that combines visual perception-based and multi-spectral methods in a single device. An in-depth study of colour and texture features of vegetation has been carried out, leading to robust and fast vegetation detection through an adaptive self-learning algorithm (4). In addition, a traversability check for detected vegetation (5) is realised, relying on the compressibility of vegetation: the less resistance vegetation offers, the more traversable it is. Regarding terrain classification, we introduce a structure-based method (6) that captures the world scene by inferring its 3D structure through local point statistics computed on LiDAR data. Finally, a classification-based method (7), which combines LiDAR data and visual information to reconstruct 3D scenes, is presented. Object representations are thereby described in more detail, enabling the classification of more object types.
    Based on the success of the proposed perceptual inference methods in these environmental sensing tasks, we hope that this thesis will serve as a starting point for further development of highly reliable perceptual inference methods.
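    The index-based detector (1) exploits vegetation's strong near-infrared reflectance and visible-red absorption; the classical NDVI formulation matches this description, although the thesis may use a different index. A minimal sketch, with an illustrative threshold:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index from co-registered NIR/red images."""
            nir = nir.astype(np.float64)
            red = red.astype(np.float64)
            return (nir - red) / (nir + red + 1e-9)   # in [-1, 1]; vegetation is high

        def vegetation_mask(nir, red, threshold=0.3):
            # threshold=0.3 is an illustrative choice, not the thesis' value
            return ndvi(nir, red) > threshold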

    Mapping and Semantic Perception for Service Robotics

    Get PDF
    In order to perform a task, robots need to be able to locate themselves in the environment. If a robot does not know where it is, it is impossible for it to move, reach its goal and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature that enables robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand the environment by incorporating semantic information. This information also provides an improvement in the localization and navigation of robotic platforms. In addition, we demonstrate how a robot with limited capabilities can reliably and efficiently build the semantic maps needed to perform its everyday tasks. The mapping system built has the following features. On the map-building side, we propose the externalization of expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore an environment and obtain a semantic map with the objects relevant for a given mission.
    Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system that has mainly been focused on producing geometric maps composed simply of points or edges, without any associated meaning or semantic content. The non-annotated map is built using only the information extracted from a monocular image sequence. The semantic or annotated parts of the map, the objects, are estimated using the information in the image sequence and precomputed object models. As a second step, we improve the EKF SLAM presented previously by designing and implementing a visual SLAM system based on a distributed framework. The expensive map optimization and storage is allocated as a service in the Cloud, while a light camera-tracking client runs on a local computer on board the robot. The robot's onboard computers are freed from most of the computation, the only extra requirement being an internet connection.
    The next step is to exploit the semantic information that we are able to generate to see how to improve the navigation of a robot. The contribution of this thesis is focused on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, since service robots work in spaces shared with people. The system is able to mask the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, the relocalization, the accuracy and the reusability of the geometric map.
    In addition, it estimates the full trajectory of each detected person with respect to the global map of the scene, irrespective of the location of the moving camera when the person was imaged. Finally, we focus our research on rescue and security applications. The deployment of a multi-robot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination and communications among all the robots. The proposed architecture integrates, jointly with all the above-mentioned functionalities, several novel features to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
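    The people-masking step described above reduces, in essence, to discarding image features that fall on segmented persons before tracking. A minimal sketch, assuming a person-segmentation mask is already available from some detector:

        import numpy as np

        def filter_static_keypoints(keypoints, person_mask):
            """Drop feature points lying on detected people before SLAM tracking.

            keypoints:   (N, 2) array of (u, v) pixel coordinates inside the image
            person_mask: boolean H x W image, True where a person was segmented
            """
            u = keypoints[:, 0].round().astype(int)
            v = keypoints[:, 1].round().astype(int)
            on_person = person_mask[v, u]      # index as (row, col) = (v, u)
            return keypoints[~on_person]       # keep only static-scene features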

    A Haptic Shared-Control Architecture for Guided Multi-Target Robotic Grasping

    Get PDF
    Although robotic telemanipulation has always been a key technology for the nuclear industry, little advancement has been seen over the last decades. Despite complex remote handling requirements, simple mechanically linked master-slave manipulators still dominate the field. Nonetheless, there is a pressing need for more effective robotic solutions able to significantly speed up the decommissioning of legacy radioactive waste. This paper describes a novel haptic shared-control approach for assisting a human operator in the sorting and segregation of different objects in a cluttered and unknown environment. A three-dimensional scan of the scene is used to generate a set of potential grasp candidates on the objects at hand. These grasp candidates are then used to generate guiding haptic cues, which assist the operator in approaching and grasping the objects. The haptic feedback is designed to be smooth and continuous as the user switches from one grasp candidate to the next, or from one object to another, avoiding any discontinuity or abrupt changes. To validate our approach, we carried out two human-subject studies, enrolling 15 participants. We registered average improvements of 20.8%, 20.1%, and 32.5% in terms of completion time, linear trajectory, and perceived effectiveness, respectively, of the proposed approach over standard teleoperation.
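    Smooth switching between grasp candidates could be obtained, for instance, by blending attractive forces with distance-based softmin weights, so the guidance never jumps when the nearest candidate changes. This is a sketch of one way to realise the described property, not the paper's actual controller; the gains are illustrative.

        import numpy as np

        def guidance_force(ee_pos, candidates, k=20.0, stiffness=50.0):
            """Blend attractive forces toward all grasp candidates.

            ee_pos:     (3,) end-effector position
            candidates: (N, 3) grasp candidate positions
            As the end effector moves, the softmin weights shift continuously,
            avoiding the discontinuity of a hard nearest-candidate switch."""
            diffs = candidates - ee_pos                # (N, 3) vectors to candidates
            dists = np.linalg.norm(diffs, axis=1)
            w = np.exp(-k * (dists - dists.min()))     # softmin over distances
            w /= w.sum()
            return stiffness * (w[:, None] * diffs).sum(axis=0)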

    Planning and estimation algorithms for human-like grasping

    Get PDF
    Mención Internacional en el título de doctor (International Mention in the doctoral degree). The use of robots in human-like environments requires them to be able to sense and model unstructured scenarios. Thus, their success will depend on their versatility for interacting with the surroundings. This interaction often includes the manipulation of objects for accomplishing common daily tasks. Therefore, robots need to sense, understand, plan and perform, and this has to be a continuous loop. This thesis presents a framework that covers most of the phases encountered in a common manipulation pipeline. First, it is shown how to use the Fast Marching Square algorithm and a leader-followers strategy to control a formation of robots, simplifying a high-dimensional path-planning problem. This approach is evaluated with simulations in complex environments in which the formation control technique is applied. Results are evaluated in terms of distance to obstacles (safety) and the deformation needed. Then, a framework to perform the grasping action is presented, including the necessary techniques for environment modelling, grasp synthesis, and path planning and control. For the motion-planning part, the formation concept from the previous chapter is reused and applied to the planning and control of the movement of a complex hand-arm system. Tests using the robot Manfred show the possibilities of the framework when performing in real scenarios. Finally, under the assumption that grasping actions may not always turn out as planned, a Bayesian state-estimation process is introduced to estimate the final in-hand object pose after a grasping action, based on the measurements of proprioceptive and tactile sensors. This approach is evaluated in real experiments with the ReFlex TakkTile hand. Results show good performance in general terms, while suggesting the need for a vision system for a more precise outcome.
    Research in robotics advances with the intention of moving towards the use of robots in human environments. Today, their use is practically limited to factories, where they work in controlled environments performing repetitive tasks. However, these robots are unable to react to even the slightest changes in the environment or in the task to be performed. Over the last 15 years, the RoboticsLab research group has built a mobile manipulator called Manfred, whose objective is to perform navigation and manipulation tasks in environments designed for humans. For manipulation and grasping tasks, a robotic hand designed at Gifu University, Japan, was recently acquired. However, at the beginning of this thesis no work had been done on object manipulation or grasping, hence the clear motivation to investigate this field and extend the robot's capabilities, the aspects addressed in this thesis. The first part of the thesis shows the application of a three-dimensional robot formation control system. The system uses a leader-followers scheme and is based on the Fast Marching Square algorithm to compute the leader's trajectory. Then, as the leader traverses the path, the formation adapts to the environment to prevent the robots from colliding with obstacles. The presented deformation scheme relies on the environment information previously computed with Fast Marching Square.
    The algorithm is tested through different simulations in complex scenarios. The results are analysed mainly in terms of two characteristics: the amount of deformation required and the safety of the robots' paths. Although the results are satisfactory in both respects, more realistic simulations and, finally, an implementation on real robots remain desirable future work. The next chapter is born from the same idea, robot formation control, which is used to model the arm-hand system of the robot Manfred. As with a robot formation, the complete system includes a very large number of degrees of freedom, which makes path planning difficult. However, adapting the formation control scheme to the robotic arm-hand system allows us to reduce the complexity of path planning. As before, the system is based on Fast Marching Square. In addition, a complete pipeline has been built that models the environment, computes possible grasp poses, and plans the motions to execute the grasp. All of this has been implemented on the robot Manfred, carrying out grasping tests with real objects. The results show the potential of this control scheme, leaving room for improvement, mainly in object modelling and in the computation and selection of candidate grasps. Next, the thesis addresses closing the control loop in object grasping. Once a robotic system has executed the motions needed to obtain a stable grasp, the final pose of the object within the hand is, in most cases, different from the planned one. This is due to the accumulation of errors in the perception and environment-modelling systems and in motion planning and execution. Therefore, a Bayesian system based on a particle filter is proposed that, taking into account the position of the palm and fingers, the data from tactile sensors, and the shape of the object, estimates the pose of the object within the hand. The system starts from a known initial pose and begins to run after the first contact between the fingers and the object, so that it can detect the movements produced while applying the force needed to stabilise the grasp. The results show the validity of the method. However, it is also clear that, using only tactile and position information, some degrees of freedom cannot be determined; in the future, combining this system with a vision-based one would therefore be advisable. Finally, two annexes are included that detail the implementation of the Fast Marching algorithm and present the real robotic systems used in the different experiments of the thesis. Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. President: Carlos Balaguer Bernaldo de Quirós. Secretary: Raúl Suárez Feijoo. Examiner: Pedro U. Lima
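    The Bayesian in-hand pose estimator described above can be sketched as one particle-filter cycle whose measurement model compares predicted and observed fingertip contacts. All models and noise values here are illustrative placeholders, not the thesis' implementation.

        import numpy as np

        def particle_filter_step(particles, weights, tactile_obs, hand_pose,
                                 predict_contacts, motion_noise=0.002):
            """One predict/update/resample cycle for in-hand object pose estimation.

            particles:        (N, 6) numpy array of object pose hypotheses
            tactile_obs:      measured fingertip contact signature (numpy array)
            predict_contacts: placeholder forward model (pose, hand_pose) -> signature
            """
            # Predict: diffuse particles to model object motion during grasp closure
            particles = particles + np.random.normal(0, motion_noise, particles.shape)
            # Update: weight each hypothesis by agreement with tactile measurements
            for i, pose in enumerate(particles):
                err = np.linalg.norm(predict_contacts(pose, hand_pose) - tactile_obs)
                weights[i] *= np.exp(-0.5 * (err / 0.01) ** 2)
            weights /= weights.sum()
            # Resample to concentrate particles on likely poses
            idx = np.random.choice(len(particles), len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))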

    3D Path Planning for Autonomous Aerial Vehicles in Constrained Spaces

    Get PDF