
    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics became mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities. A key competence to control an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, adopted from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap in vehicles where size, payload and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility to satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve the platform stability and increase arm operability. The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload and computational constraints; (2) control commands to drive the vehicle using visual information (visual servoing); and (3) the integration of the visual-servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
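The abstract does not detail the estimator itself, so the following is only a rough, self-contained sketch of the kind of high-rate, low-cost fusion it alludes to: a constant-acceleration Kalman filter that combines a fast acceleration signal with slower position fixes. All rates, noise levels and measurement values are made-up placeholders, not values from the thesis.

```python
# Illustrative sketch only (not the thesis' estimator): a 1-D constant-acceleration
# Kalman filter fusing a high-rate acceleration signal with slower position fixes.
import numpy as np

dt = 0.01                                    # 100 Hz prediction rate (assumed)
F = np.array([[1, dt, 0.5 * dt**2],          # state: [position, velocity, acceleration]
              [0, 1, dt],
              [0, 0, 1]])
Q = np.diag([1e-6, 1e-4, 1e-2])              # process noise (placeholder values)
H_acc = np.array([[0.0, 0.0, 1.0]])          # fast sensor observes acceleration
H_pos = np.array([[1.0, 0.0, 0.0]])          # slower sensor observes position
R_acc, R_pos = np.array([[0.05]]), np.array([[0.02]])

x = np.zeros((3, 1))
P = np.eye(3)

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(3) - K @ H) @ P
    return x, P

for k in range(1000):
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[0.1]]), H_acc, R_acc)   # every step: acceleration
    if k % 10 == 0:                                        # every 10th step: position fix
        x, P = update(x, P, np.array([[0.0]]), H_pos, R_pos)
```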

    Human motion estimation and controller learning

    Humans are capable of complex manipulation and locomotion tasks. They are able to achieve energy-efficient gait, reject disturbances, handle changing loads, and adapt to environmental constraints. Using inspiration from the human body, robotics researchers aim to develop systems with similar capabilities. Research suggests that humans minimize a task-specific cost function when performing movements. In order to learn this cost function from demonstrations and incorporate it into a controller, it is first imperative to accurately estimate the expert motion. The captured motions can then be analyzed to extract the objective function the expert was minimizing. We propose a framework for human motion estimation from wearable sensors. Human body joints are modeled by matrix Lie groups, using the special orthogonal groups SO(2) and SO(3) for joint pose and the special Euclidean group SE(3) for base-link pose representation. To estimate the human joint pose, velocity and acceleration, we provide the equations for employing the extended Kalman filter on Lie groups, thus explicitly accounting for the non-Euclidean geometry of the state space. Incorporating interaction constraints with respect to the environment or within the participant allows us to track global body position without an absolute reference and ensure a viable pose estimate. The algorithms are extensively validated in both simulation and real-world experiments. Next, to learn the underlying expert control strategies from the demonstrations, we present a novel fast approximate multivariate Gaussian process regression. The method estimates the underlying cost function without making assumptions on its structure. The computational efficiency of the approach allows for real-time forward-horizon prediction. Using a linear model predictive control framework, we then reproduce the demonstrated movements on a robot. The learned cost function captures the variability in expert motion as well as the correlations between states, leading to a controller that both produces motions and reacts to disturbances in a human-like manner. The model predictive control formulation allows the controller to satisfy task- and joint-space constraints, avoiding obstacles and self-collisions, as well as torque constraints, ensuring operational feasibility. The approach is validated on the Franka Emika robot using real human motion exemplars.
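As a small illustration of what an extended Kalman filter "on Lie groups" involves, the sketch below propagates an SO(3) orientation state on the manifold with the exponential map, keeping the covariance in the 3-dimensional tangent space. It is not the thesis' filter; the gyro rate, reading and noise values are hypothetical, and the covariance propagation uses a first-order approximation.

```python
# Minimal sketch: on-manifold prediction step for an SO(3) orientation state.
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (so(3))."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Exponential map from so(3) to SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + hat(w)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

R = np.eye(3)                 # orientation state on SO(3)
P = 1e-4 * np.eye(3)          # covariance of the 3-dim tangent-space error
Q = 1e-6 * np.eye(3)          # gyro noise covariance (placeholder)
dt, omega = 0.005, np.array([0.0, 0.0, 0.2])   # 200 Hz gyro, hypothetical reading

# Prediction: compose the state with the measured increment on the manifold, and
# propagate the tangent-space covariance (first-order, Jacobians ~ identity for
# small increments).
R = R @ exp_so3(omega * dt)
P = P + Q * dt
```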

    Characterisation and State Estimation of Magnetic Soft Continuum Robots

    Minimally invasive surgery has become more popular as it leads to less bleeding, scarring and pain, and shorter recovery times. However, this has come with counter-intuitive devices and steep surgeon learning curves. Magnetically actuated Soft Continuum Robots (SCR) have the potential to replace these devices, providing high dexterity together with the ability to conform to complex environments and allow safe human interaction without adding cognitive burden for the clinician. Despite considerable progress in their development over the past decade, several challenges still plague SCR, hindering their full realisation. This thesis aims at improving magnetically actuated SCR by addressing some of these challenges, such as material characterisation and modelling, and sensing feedback and localisation. Material characterisation for SCR is essential for understanding their behaviour and designing effective modelling and simulation strategies. In this work, the material properties of commonly employed materials in magnetically actuated SCR, such as the elastic modulus, hyper-elastic model parameters, and magnetic moment, were determined. Additionally, the effect these parameters have on modelling and simulating these devices was investigated. Due to the nature of magnetic actuation, localisation is of utmost importance to ensure accurate control and delivery of functionality. As such, two localisation strategies for magnetically actuated SCR were developed: one capable of estimating the full 6 degree-of-freedom (DOF) pose without any prior pose information, and another capable of accurately tracking the full 6-DOF pose in real time with positional errors lower than 4 mm. These will contribute to the development of autonomous navigation and closed-loop control of magnetically actuated SCR.

    On the Enhancement of the Localization of Autonomous Mobile Platforms

    The focus of many industrial and research entities on achieving full robotic autonomy has increased in the past few years. A fundamental problem on the way to full robotic autonomy is localization, i.e., the ability of a mobile platform to determine its position and orientation in the environment. In this thesis, several problems related to the localization of autonomous platforms are addressed, namely visual odometry accuracy and robustness, uncertainty estimation in odometries, and accurate multi-sensor fusion-based localization. Besides localization, the control of mobile manipulators is also tackled. First, a generic image processing pipeline is proposed which, when integrated with a feature-based Visual Odometry (VO), can enhance robustness and accuracy and reduce the accumulation of errors (drift) in the pose estimation. Since odometries (e.g. wheel odometry, LiDAR odometry, or VO) suffer from drift errors due to integration, and since such errors need to be quantified in order to achieve accurate localization through multi-sensor fusion schemes (e.g. extended or unscented Kalman filters), a covariance estimation algorithm is then proposed, which estimates the uncertainty of odometry measurements using another sensor that does not rely on integration. Furthermore, optimization-based multi-sensor fusion techniques are known to achieve better localization results than filtering techniques, but at a higher computational cost. Consequently, an efficient and generic multi-sensor fusion scheme based on Moving Horizon Estimation (MHE) is developed. The proposed fusion scheme is capable of operating with any number of sensors and accounts for different sensor measurement rates, missing measurements, and outliers. Moreover, it is built on a multi-threading architecture in order to reduce its computational cost, making it more feasible for practical applications. Finally, the main purpose of achieving accurate localization is navigation. Hence, the last part of this thesis focuses on developing a stabilization controller for a 10-DOF mobile manipulator based on Model Predictive Control (MPC). All of the aforementioned work is validated using numerical simulations, real data from the EU Long-term Dataset, the KITTI Dataset and the TUM Dataset, and/or experimental sequences using an omni-directional mobile robot. The results show the efficacy and importance of each part of the proposed work.
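To make the moving-horizon idea concrete, here is a deliberately tiny sketch, not the thesis' MHE formulation: the last N poses of a 1-D robot are estimated by robust least squares over a sliding window, fusing odometry increments with occasional absolute fixes, with a Huber loss to downweight outliers. The horizon length, measurements and noise scale are synthetic.

```python
# Toy moving-horizon estimate over a window of N poses (synthetic data).
import numpy as np
from scipy.optimize import least_squares

N = 10                                      # horizon length (assumed)
odom = 0.10 * np.ones(N - 1)                # relative motion between consecutive poses
fixes = {0: 0.0, 5: 0.52, 9: 0.95}          # absolute measurements at some indices

def residuals(x):
    r = []
    for k in range(N - 1):                  # odometry factors
        r.append((x[k + 1] - x[k]) - odom[k])
    for k, z in fixes.items():              # absolute-position factors
        r.append(x[k] - z)
    return np.array(r)

x0 = np.cumsum(np.insert(odom, 0, 0.0))     # initialize by integrating odometry
sol = least_squares(residuals, x0, loss='huber', f_scale=0.1)   # Huber downweights outliers
print(sol.x)                                # smoothed poses over the horizon
```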

    Topics in Machining with Industrial Robot Manipulators and Optimal Motion Control

    Two main topics are considered in this thesis: machining with industrial robot manipulators, and optimal motion control of robots and vehicles. The motivation for research on the first subject is the need for flexible and accurate production processes employing industrial robots as their main component. The challenge to overcome here is to achieve high-accuracy machining solutions in spite of the strong process forces required for the task. Because of the process forces, the nonlinear dynamics of the manipulator, such as joint compliance and backlash, may significantly degrade the machining accuracy of the manufactured part. In this thesis, a macro/micro-manipulator configuration is considered for the purpose of increasing the milling accuracy. In particular, a model-based control architecture is developed for control of the macro/micro-manipulator setup. The considered approach is validated by experimental results from extensive milling experiments in aluminium and steel. Related to the problem of high-accuracy milling is the topic of robot modeling. To this purpose, two different approaches are considered: modeling of the quasi-static joint dynamics, and dynamic compliance modeling. The first problem is approached with an identification method for determining the joint stiffness and backlash. The second problem is approached using gray-box identification based on subspace-identification methods. Both identification algorithms are evaluated experimentally. Finally, online state estimation is considered as a means to determine the workspace position and orientation of the robot tool. Kalman filters and Rao-Blackwellized particle filters are employed for sensor fusion of internal robot measurements and measurements from an inertial measurement unit to estimate the desired states. The approaches considered are fully implemented and evaluated on experimental data. The second part of the thesis discusses optimal motion control applied to robot manipulators and road vehicles. A control architecture for online control of a robot manipulator in high-performance path tracking is developed, and the architecture is evaluated in extensive simulations. The main characteristic of the control strategy is that it combines coordinated feedback control along both the tangential and transversal directions of the path; this separation is achieved in the framework of natural coordinates. One motivation for research on optimal control of road vehicles in time-critical maneuvers is the desire to develop improved vehicle-safety systems. In this thesis, a method for solving optimal maneuvering problems using nonlinear optimization is discussed. More specifically, the vehicle and tire modeling and the optimization formulations required to obtain useful solutions to these problems are investigated. The considered method is evaluated on different combinations of chassis and tire models, in maneuvers under different road conditions, and for the investigation of optimal maneuvers in systems for electronic stability control. The optimization results obtained in simulation are evaluated and compared.
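As an illustration of the quasi-static joint identification mentioned above, the sketch below fits a deadzone-plus-spring model to synthetic (deflection, torque) samples to recover a joint stiffness and a backlash width. It is not the thesis' identification method; the model form, data and parameter values are assumptions made for the example.

```python
# Toy identification of joint stiffness k and backlash half-width b from synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def deadzone_spring(delta, k, b):
    """Torque of a linear spring with a backlash deadzone of half-width b."""
    return k * np.sign(delta) * np.maximum(np.abs(delta) - b, 0.0)

rng = np.random.default_rng(0)
deflection = np.linspace(-2e-3, 2e-3, 200)                      # joint deflection [rad]
torque = deadzone_spring(deflection, 5e4, 3e-4) + rng.normal(0, 0.5, 200)

(k_hat, b_hat), _ = curve_fit(deadzone_spring, deflection, torque, p0=[1e4, 1e-4])
print(f"stiffness ~ {k_hat:.0f} Nm/rad, backlash half-width ~ {b_hat:.1e} rad")
```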

    Agent and object aware tracking and mapping methods for mobile manipulators

    The age of the intelligent machine is upon us. They exist in our factories, our warehouses, our military, our hospitals, on our roads, and on the moon. Most of these things we call robots. When placed in a controlled or known environment, such as an automotive factory or a distribution warehouse, they perform their given roles with exceptional efficiency, achieving far more than is within reach of a humble human being. Despite the remarkable success of intelligent machines in such domains, they have yet to make a wholehearted deployment into our homes. The missing link between the robots we have now and the robots that are soon to come to our houses is perception. Perception as we mean it here refers to a level of understanding beyond the collection and aggregation of sensory data. Much of the available sensory information is noisy and unreliable; our homes contain many reflective surfaces, repeating textures on large flat surfaces, and many disruptive moving elements, including humans. These environments change over time, with objects frequently moving within and between rooms. This idea of change in an environment is fundamental to robotic applications, as in most cases we expect them to be effectors of such change. We can identify two particular challenges that must be solved for robots to make the jump to less structured environments: how to manage noise and disruptive elements in observational data, and how to understand the world as a set of changeable elements (objects) which move over time within a wider environment. In this thesis we look at one possible approach to solving each of these problems. For the first challenge, we use proprioception aboard a robot with an articulated arm to handle difficult and unreliable visual data caused both by the robot and by the environment. We use sensor data aboard the robot to improve the pose tracking of a visual system when the robot moves rapidly, with high jerk, or when observing a scene with little visual variation. For the second challenge, we build a model of the world on the level of rigid objects, and relocalise them both as they change location between different sequences and as they move. We use semantics, image keypoints, and 3D geometry to register and align objects between sequences, showing how their position has moved between disparate observations.

    Robust navigation for industrial service robots

    Pla de Doctorats Industrials de la Generalitat de Catalunya.
    Robust, reliable and safe navigation is one of the fundamental problems of robotics. Throughout the present thesis, we tackle the problem of navigation for robotic industrial mobile bases. We identify its components and analyze their respective challenges in order to address them. The research work presented here ultimately aims at improving the overall quality of the navigation stack of a commercially available industrial mobile base. To introduce and survey the overall problem, we first break down the navigation framework into clearly identified smaller problems. We examine the Simultaneous Localization and Mapping (SLAM) problem, recalling its mathematical grounding and exploring the state of the art. We then review the problem of planning the trajectory of a mobile base toward a desired goal in the generated environment representation. Finally, we investigate and clarify the use of the subset of Lie theory that is useful in robotics. The first problem tackled is place recognition for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometrical information between its current and past locations. Using only a 2D laser range finder, we address the problem with a technique borrowed from the field of Natural Language Processing (NLP) which has been successfully applied to image-based place recognition, namely the Bag-of-Words. We further improve the method with two proposals inspired by NLP. Firstly, the comparison of places is strengthened by considering the natural relative order of features in each individual sensor reading. Secondly, topological correspondences between places in a corpus of visited places are established in order to promote together instances that are 'close' to one another. We evaluate the proposals separately and jointly on several datasets, with and without noise, showing improved loop-closure detection for 2D laser sensors with respect to the state of the art. We then tackle the problem of motion-model calibration for odometry estimation. Given a mobile base embedding an exteroceptive sensor able to observe ego-motion, we propose a novel formulation for estimating the intrinsic parameters of an odometry motion model. Resorting to an adaptation of the pre-integration theory initially developed for inertial motion sensors, we employ iterative nonlinear on-manifold optimization to estimate the wheel radii and the wheel separation. The method is further extended to jointly estimate the intrinsic parameters of the odometry model together with the extrinsic parameters of the embedded sensor. Validated in simulation and in a real environment, the method converges toward the true parameter values and is shown to accommodate variations in the model parameters quickly when the vehicle is subject to physical changes during operation. Following the generation of a map in which the robot is localized, we address the problem of estimating trajectories for motion planning. We devise a new method for estimating a sequence of robot poses forming a smooth trajectory. Regardless of the Lie group considered, the trajectory is seen as a collection of states lying on a spline with non-vanishing n-th derivatives at each point. Formulated as a multi-objective nonlinear optimization problem, it allows for the addition of cost functions such as velocity and acceleration limits, collision avoidance and more. The proposed method is evaluated on two different motion planning tasks: planning trajectories for a mobile base evolving on the SE(2) manifold, and planning the motion of a multi-link robotic arm whose end-effector evolves on the SE(3) manifold. Each task is evaluated on scenarios of increasing complexity, showing performance comparable to or better than the state of the art while producing more consistent results.
From our study of Lie theory, we developed a new, ready-to-use programming library called `manif'. The library is open source, publicly available, and developed following good software programming practices. It is designed to be easy to integrate and manipulate, allows for flexible use, and facilitates extension beyond the already implemented Lie groups; it is also shown to be efficient in comparison with other existing solutions. Finally, we conclude the doctoral study, review the research work, outline directions for future research, and share a personal view of the experience of carrying out an industrial doctorate.
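To give a flavour of the Lie-group machinery that a library like `manif' wraps, the sketch below implements the SE(2) exponential and logarithmic maps in plain NumPy and uses them for geodesic interpolation between two poses. It deliberately does not use the manif API, and the pose values are arbitrary.

```python
# Plain-NumPy sketch of SE(2) exp/log and geodesic interpolation between poses.
import numpy as np

def exp_se2(xi):
    """Exponential map: tangent [rho_x, rho_y, theta] -> (R, t) on SE(2)."""
    rho, th = xi[:2], xi[2]
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    if abs(th) < 1e-9:
        V = np.eye(2)
    else:
        V = (1.0 / th) * np.array([[np.sin(th), -(1 - np.cos(th))],
                                   [1 - np.cos(th), np.sin(th)]])
    return R, V @ rho

def log_se2(R, t):
    """Logarithmic map: (R, t) on SE(2) -> tangent [rho_x, rho_y, theta]."""
    th = np.arctan2(R[1, 0], R[0, 0])
    if abs(th) < 1e-9:
        V_inv = np.eye(2)
    else:
        V = (1.0 / th) * np.array([[np.sin(th), -(1 - np.cos(th))],
                                   [1 - np.cos(th), np.sin(th)]])
        V_inv = np.linalg.inv(V)
    return np.append(V_inv @ t, th)

# Geodesic interpolation X_a * exp(s * log(X_a^{-1} * X_b)) for s in [0, 1].
Ra, ta = np.eye(2), np.array([0.0, 0.0])
Rb, tb = exp_se2(np.array([1.0, 0.5, np.pi / 3]))
rel = log_se2(Ra.T @ Rb, Ra.T @ (tb - ta))          # relative pose in the tangent space
for s in np.linspace(0, 1, 5):
    dR, dt = exp_se2(s * rel)
    R, t = Ra @ dR, Ra @ dt + ta
    print(t, np.arctan2(R[1, 0], R[0, 0]))          # interpolated position and heading
```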

    High-Speed Vision and Force Feedback for Motion-Controlled Industrial Manipulators

    Over the last decades, both force sensors and cameras have emerged as useful sensors for different applications in robotics. This thesis considers a number of dynamic visual tracking and control problems, as well as the integration of these techniques with contact force control. Different topics, ranging from basic theory to system implementation and applications, are treated. A new interface for external sensor control is presented, designed by making non-intrusive extensions to a standard industrial robot control system. The structure of these extensions is presented, the system properties are modeled and experimentally verified, and results from force-controlled stub grinding and deburring experiments are presented. A novel system for force-controlled drilling using a standard industrial robot is also demonstrated. The solution is based on the use of force feedback to control the contact forces and the sliding motions of the pressure foot that would otherwise occur during the drilling phase. Basic methods for feature-based tracking and servoing are presented, together with an extension for constrained motion estimation based on a dual-quaternion pose parametrization. A method for multi-camera real-time rigid-body tracking under time constraints is also presented, based on an optimal selection of the measured features. The developed tracking methods are used as the basis for two different approaches to vision/force control, which are illustrated in experiments. Intensity-based techniques for tracking and vision-based control are also developed. A dynamic visual tracking technique based directly on the image intensity measurements is presented, together with new stability-based methods suitable for dynamic tracking and feedback problems. The stability-based methods outperform the previous methods in many situations, as shown in simulations and experiments.
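As a brief illustration of the dual-quaternion pose parametrization mentioned above, the following sketch builds unit dual quaternions from a rotation and a translation, composes two poses, and recovers the resulting translation. It is illustrative only, not the estimator used in the thesis, and the example poses are arbitrary.

```python
# Dual-quaternion pose representation: a pose is (qr, qd) with qd = 0.5 * t ⊗ qr.
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, v1 = a[0], a[1:]
    w2, v2 = b[0], b[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2], w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def dq_from_rt(qr, t):
    """Build a unit dual quaternion from a rotation quaternion and a translation vector."""
    qd = 0.5 * qmul(np.concatenate(([0.0], t)), qr)
    return qr, qd

def dq_compose(a, b):
    """Compose pose a with pose b: (qr_a qr_b, qr_a qd_b + qd_a qr_b)."""
    return qmul(a[0], b[0]), qmul(a[0], b[1]) + qmul(a[1], b[0])

def dq_translation(dq):
    """Recover the translation: t = 2 * qd ⊗ conj(qr)."""
    qr, qd = dq
    conj = qr * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * qmul(qd, conj)[1:]

# Pose A: 90-degree rotation about z plus translation (1, 0, 0); pose B: pure translation (0, 1, 0).
qz90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
A = dq_from_rt(qz90, np.array([1.0, 0.0, 0.0]))
B = dq_from_rt(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
# Expect ~[0, 0, 0]: B's translation, rotated by A, cancels A's own translation.
print(dq_translation(dq_compose(A, B)))
```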

    Visual Perception System for Aerial Manipulation: Methods and Implementations

    Technology is growing fast, and autonomous systems are becoming a reality. Companies are increasingly demanding robotized solutions to improve the efficiency of their operations. This is also the case for aerial robots. Their unique capability of moving freely in space makes them suitable for many tasks that are tedious or even dangerous for human operators. Nowadays, the vast amount of available sensors and commercial drones makes them highly appealing. However, strong manual effort is still required to customize existing solutions to each particular task, due to the number of possible environments, robot designs and missions. Different vision algorithms, hardware devices and sensor setups are usually designed by researchers to tackle specific tasks. Currently, aerial manipulation is being intensively studied to extend the number of applications aerial robots can perform. These include inspection, maintenance, or even operating valves or other machines. This thesis presents an aerial manipulation system and a set of perception algorithms for the automation of aerial manipulation tasks. The complete design of the system is presented, and modular frameworks are shown to facilitate the development of this kind of operation. First, research on object analysis for manipulation and grasp planning considering different object models is presented. Depending on the object model, different state-of-the-art grasp analysis methods are reviewed, and planning algorithms for both single and dual manipulators are presented. Secondly, the development of perception algorithms for object detection and pose estimation is presented. These allow the system to identify many kinds of objects in any scene and localize them in order to perform manipulation tasks. These algorithms produce the information required by the manipulation analysis described in the previous paragraph. Thirdly, vision-based methods are presented to localize the robot in the environment while building local maps, which are beneficial for the manipulation tasks. These maps are enhanced with semantic information from the detection algorithms mentioned above. Finally, the thesis presents the development of the aerial platform hardware, which includes lightweight manipulators and a novel tool that allows the aerial robot to operate in contact with static surfaces and serves as an estimator of the robot's position. All the techniques presented in this thesis have been validated through extensive experimentation with real aerial robotic platforms.
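As a hedged illustration of marker- or keypoint-based object pose estimation, the sketch below solves a generic PnP problem with OpenCV to recover an object's pose in the camera frame. It is not necessarily the pipeline used in this thesis; the intrinsics, marker size and detected pixel coordinates are hypothetical.

```python
# Generic PnP example: recover the pose of a known square marker from pixel detections.
import numpy as np
import cv2

# 3D corners of a 10 cm square marker in the object frame (meters).
object_points = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float64)

# Corresponding pixel detections (hypothetical output of a marker/keypoint detector).
image_points = np.array([[300.0, 260.0],
                         [340.0, 262.0],
                         [338.0, 300.0],
                         [298.0, 298.0]], dtype=np.float64)

K = np.array([[600.0, 0.0, 320.0],           # hypothetical pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                            # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)                # rotation of the object in the camera frame
    print("object position in camera frame [m]:", tvec.ravel())
```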

    Robust SLAM and motion segmentation under long-term dynamic large occlusions

    Visual sensors are key to robot perception; they not only help robot localisation but also enable robots to interact with the environment. However, in new environments, robots can fail to distinguish the static and dynamic components in the visual input. Consequently, robots are unable to track objects or localise themselves. Methods often require precise robot proprioception to compensate for camera movement and separate the static background from the visual input. However, robot proprioception, such as IMU (inertial measurement unit) readings or wheel odometry, usually suffers from drift accumulation. The state-of-the-art methods demonstrate promising performance but either (1) require semantic segmentation, which is inaccessible in unknown environments, or (2) treat dynamic components as outliers, which is infeasible when dynamic objects occupy a large proportion of the visual input. This research work systematically unifies the camera and multi-object tracking problems in indoor environments by proposing a multi-motion tracking system, and enables robots to differentiate the static and dynamic components in the visual input with an understanding of their own movements and actions. Detailed evaluation in both simulation environments and on robotic platforms suggests that the proposed method outperforms state-of-the-art dynamic SLAM methods when the majority of the camera view is occluded by multiple unmodeled objects over a long period of time.
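The following sketch illustrates one simple way to separate static and dynamic components once the camera ego-motion is known from proprioception: predict where static points should appear after the known camera motion and flag points whose observed motion deviates beyond a threshold. It is only a toy example under strong assumptions (known 3D points, pure camera translation), not the method proposed in the thesis.

```python
# Toy ego-motion-compensated check: points whose observed image motion disagrees with
# the motion predicted from known camera movement are flagged as dynamic.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])              # hypothetical pinhole intrinsics

def project(P):
    p = K @ P
    return p[:2] / p[2]

# Scene points in the first camera frame; the last one will move on its own.
points = np.array([[0.2, 0.1, 2.0],
                   [-0.3, 0.2, 3.0],
                   [0.0, -0.1, 2.5],
                   [0.4, 0.0, 2.2]]).T
camera_t = np.array([[0.05], [0.0], [0.0]])  # known camera translation between frames

predicted = np.array([project(p) for p in (points - camera_t).T])  # where static points should land
observed = predicted.copy()
observed[3] += np.array([12.0, -4.0])        # the last point moved independently (dynamic)

residual = np.linalg.norm(observed - predicted, axis=1)
dynamic = residual > 3.0                     # pixel threshold (assumed)
print(dynamic)                               # -> [False False False  True]
```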