
    Aerial Robotics for Inspection and Maintenance

    Aerial robots with perception, navigation, and manipulation capabilities are extending the range of applications of drones, allowing the integration of different sensor devices and robotic manipulators to perform inspection and maintenance operations on infrastructures such as power lines, bridges, viaducts, or walls, typically involving physical interaction in flight. New research and technological challenges arise from applications demanding the benefits of aerial robots, particularly in outdoor environments. This book collects eleven papers from different research groups from Spain, Croatia, Italy, Japan, the USA, the Netherlands, and Denmark, focused on the design, development, and experimental validation of methods and technologies for inspection and maintenance using aerial robots.

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities. A key competence to control an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap in vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundancy in degrees of freedom, they offer the possibility to fulfill not only mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them depending on their impact on the overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve the platform stability, or increase arm operability. The main contributions of this research work are threefold: (1) a localization technique to allow autonomous navigation, specifically designed for aerial platforms with size, load and computational burden restrictions; (2) control commands to drive the vehicle using visual information (visual servo); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
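
    The hierarchical, redundancy-exploiting control described above is commonly realized as a task-priority scheme in which lower-priority tasks act only in the null space of higher-priority ones. The sketch below illustrates that general idea with NumPy; the two-task setup, the Jacobians, and the damping value are illustrative assumptions, not the thesis implementation.

import numpy as np

def damped_pinv(J, lam=1e-2):
    # Damped least-squares pseudoinverse; stays well behaved near singular configurations.
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + (lam ** 2) * np.eye(JJt.shape[0]))

def task_priority_velocity(J1, dx1, J2, dx2):
    # Two-level task-priority resolution: the secondary task only acts in the
    # null space of the primary task, so it cannot disturb it.
    J1_pinv = damped_pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1                    # null-space projector of task 1
    dq = J1_pinv @ dx1 + damped_pinv(J2 @ N1) @ (dx2 - J2 @ J1_pinv @ dx1)
    return dq

# Illustrative example: a 9-DoF aerial manipulator (6 body DoF + 3 arm joints).
# Primary task: end-effector velocity from the visual servo; secondary task: arm posture.
rng = np.random.default_rng(0)
J1 = rng.standard_normal((3, 9))                               # hypothetical end-effector Jacobian
J2 = np.hstack([np.zeros((3, 6)), np.eye(3)])                  # hypothetical arm-posture Jacobian
dx1 = np.array([0.10, 0.00, -0.05])                            # desired end-effector velocity
dx2 = np.array([0.00, 0.20, 0.00])                             # desired arm-centering velocity
print(task_priority_velocity(J1, dx1, J2, dx2))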

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications in spatial ecology, pest detection, reefs, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire and flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    Multirotor UAS Sense and Avoid with Sensor Fusion

    In this thesis, the key concepts of independent autonomous Unmanned Aircraft Systems (UAS) are explored, including obstacle detection, dynamic obstacle state estimation, and avoidance strategy. This area is explored in pursuit of determining the viability of UAS Sense and Avoid (SAA) in static and dynamic operational environments. This exploration is driven by dynamic simulation and post-processing of real-world data. A sensor suite comprised of a 3D Light Detection and Ranging (LIDAR) sensor, visual camera, and 9 Degree of Freedom (DOF) Inertial Measurement Unit (IMU) was found to be beneficial to autonomous UAS SAA in urban environments. Promising results are based on the broadening of available information about a dynamic or fixed obstacle via pixel-level LIDAR point cloud fusion and the combination of inertial measurements and LIDAR point clouds for localization purposes. However, there is still a significant amount of development required to optimize a data fusion method and SAA guidance method.
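
    Pixel-level LIDAR-camera fusion of the kind described above typically projects each 3D LIDAR return into the camera image via the extrinsic and intrinsic calibration, then attaches the underlying pixel data to the point. The sketch below shows that projection step with NumPy under assumed, illustrative calibration matrices and random data; it is not the thesis's specific pipeline.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image):
    # Project LIDAR points (N,3) into the image and return (pixel, color, depth) tuples.
    # T_cam_lidar: 4x4 LIDAR-to-camera transform; K: 3x3 camera intrinsics.
    N = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])          # homogeneous LIDAR points
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                  # points in camera frame
    in_front = pts_cam[:, 2] > 0.1                              # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uvw = (K @ pts_cam.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)                 # pixel coordinates
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return [(tuple(uv[i]), image[uv[i, 1], uv[i, 0]], pts_cam[i, 2]) for i in np.where(valid)[0]]

# Illustrative use with made-up calibration and data.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])     # assumed intrinsics
T = np.eye(4)                                                   # assumed extrinsics
cloud = np.random.default_rng(1).uniform([-2, -2, 1], [2, 2, 10], size=(100, 3))
img = np.zeros((480, 640, 3), dtype=np.uint8)
print(len(project_lidar_to_image(cloud, T, K, img)), "points fused with pixel data")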

    Vision Based Collaborative Localization and Path Planning for Micro Aerial Vehicles

    Autonomous micro aerial vehicles (MAVs) have gained immense popularity in both the commercial and research worlds over the last few years. Due to their small size and agility, MAVs are considered to have great potential for civil and industrial tasks such as photography, search and rescue, exploration, inspection and surveillance. Autonomy on MAVs usually involves solving the major problems of localization and path planning. While GPS is a popular choice for localization for many MAV platforms today, it suffers from issues such as inaccurate estimation around large structures and complete unavailability in remote areas or indoor scenarios. Among the alternative sensing mechanisms, cameras are an attractive choice for an onboard sensor due to the richness of the information captured, along with their small size and low cost. Another consideration for micro aerial vehicles is that these small platforms cannot fly for long periods of time or carry heavy payloads, limitations that can be addressed by allocating a group, or swarm, of MAVs to perform a task rather than just one. Collaboration between multiple vehicles allows for better accuracy of estimation, task distribution and mission efficiency. Combining these rationales, this dissertation presents collaborative vision-based localization and path planning frameworks. Although these were created as two separate steps, the ideal application would contain both of them as a loosely coupled localization and planning algorithm. A forward-facing monocular camera onboard each MAV is considered as the sole sensor for computing pose estimates. With this minimal setup, this dissertation first investigates methods to perform feature-based localization, with the possibility of fusing two types of localization data: one that is computed onboard each MAV, and the other that comes from relative measurements between the vehicles. Feature-based methods were preferred over direct methods for vision because of the relative ease with which tangible data packets can be transferred between vehicles, and because feature data allows for minimal data transfer compared to large images. Inspired by techniques from multiple view geometry and structure from motion, this localization algorithm presents a decentralized full 6-degree-of-freedom pose estimation method complete with a consistent fusion methodology to obtain robust estimates only at discrete instants, thus not requiring constant communication between vehicles. This method was validated on image data obtained from high-fidelity simulations as well as real-life MAV tests. These vision-based collaborative constraints were also applied to the problem of path planning with a focus on performing uncertainty-aware planning, where the algorithm is responsible for generating not only a valid, collision-free path, but also making sure that this path allows for successful localization throughout. As joint multi-robot planning can be a computationally intractable problem, planning was divided into two steps from a vision-aware perspective. As the first step for improving localization performance is having access to a better map of features, a next-best-multi-view algorithm was developed which can compute the best viewpoints for multiple vehicles to improve an existing sparse reconstruction. This algorithm uses a cost function built on vision-based heuristics that determines the quality of the expected images from any set of viewpoints; the cost is minimized through an efficient evolutionary strategy known as Covariance Matrix Adaptation (CMA-ES), which can handle very high dimensional sample spaces. In the second step, a sampling-based planner called Vision-Aware RRT* (VA-RRT*) was developed which includes similar vision heuristics in an information-gain-based framework in order to drive individual vehicles towards areas that benefit feature tracking and thus localization. Both steps of the planning framework were tested and validated using results from simulation.
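
    Next-best-view selection of this kind can be posed as black-box minimization of a viewpoint cost over the stacked poses of all vehicles, which is exactly the setting CMA-ES handles well. The sketch below uses the external `cma` Python package with a stand-in cost function; the cost terms and the two-vehicle, position-only encoding are illustrative assumptions, not the dissertation's actual heuristics.

import numpy as np
import cma  # pip install cma

# Hypothetical map of sparse 3D feature points the viewpoints should cover.
rng = np.random.default_rng(2)
features = rng.uniform(-5, 5, size=(200, 3))

def viewpoint_cost(x):
    # Stand-in cost for a set of viewpoints: x stacks the (x, y, z) positions of two vehicles.
    # Rewards covering many features while penalizing the vehicles clustering together.
    views = x.reshape(2, 3)
    d = np.linalg.norm(features[None, :, :] - views[:, None, :], axis=2)   # (2, 200) distances
    coverage = np.sum(np.min(d, axis=0) < 4.0)                             # features seen by either view
    separation = np.linalg.norm(views[0] - views[1])
    return -coverage - 0.5 * min(separation, 3.0)                          # minimize negative utility

x0 = np.zeros(6)                                   # initial guess: both vehicles at the origin
es = cma.CMAEvolutionStrategy(x0, 2.0, {'maxiter': 50, 'verbose': -9})
es.optimize(viewpoint_cost)
print("best viewpoints:\n", es.result.xbest.reshape(2, 3))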

    Development of a Software to Optimize and Plan the Acquisitions from UAV and a First Application in a Post-Seismic Environment

    An Unmanned Aerial Vehicle (UAV) is an aircraft without a human pilot on board. UAVs allow close-range photogrammetric acquisitions that are potentially useful for building large-scale cartography and for acquiring building geometry. This is particularly useful in emergency situations where major accessibility problems limit the possibility of using conventional surveys. Presently, however, flights of this class of UAV are planned based only on the pilot's experience, and they often acquire three or more times the number of images needed. This is clearly a time-consuming and autonomy-reducing procedure, which is certainly detrimental when extensive surveys are needed. For this reason, new software to plan the UAV's survey will be illustrated.
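
    A minimal version of the acquisition-planning calculation behind such software estimates the ground footprint of one image from the camera geometry and flying height, then spaces exposures by the desired forward and side overlap. The sketch below is a generic photogrammetric planning calculation with illustrative camera parameters; it is not the specific software developed in the paper.

import math

def plan_acquisition(area_w, area_l, height, focal_mm, sensor_w_mm, sensor_h_mm,
                     img_w_px, forward_overlap=0.8, side_overlap=0.6):
    # Estimate image footprint, ground sample distance, and image count for a block survey.
    # Units: area and height in meters, camera parameters in millimeters and pixels.
    foot_w = sensor_w_mm / focal_mm * height          # across-track footprint (m)
    foot_h = sensor_h_mm / focal_mm * height          # along-track footprint (m)
    gsd = foot_w / img_w_px                           # ground sample distance (m/px)
    base = foot_h * (1.0 - forward_overlap)           # distance between exposures along a strip
    spacing = foot_w * (1.0 - side_overlap)           # distance between adjacent strips
    n_strips = math.ceil(area_w / spacing) + 1
    imgs_per_strip = math.ceil(area_l / base) + 1
    return {"gsd_m": gsd, "strips": n_strips, "images": n_strips * imgs_per_strip}

# Illustrative example: 300 m x 200 m block at 60 m flying height,
# with an assumed 24 mm lens, 13.2 x 8.8 mm sensor, and 5472 px image width.
print(plan_acquisition(300, 200, 60, 24, 13.2, 8.8, 5472))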

    Integrated Sensor Orientation on Micro Aerial Vehicles

    Mapping with Micro Aerial Vehicles (MAVs, whose weight does not exceed 5 kg) is gaining importance in applications such as corridor mapping, road and pipeline inspections, or mapping of large areas with homogeneous surface structure, e.g. forest or agricultural fields. When cm-level accuracy is required, the classical approach of sensor orientation does not deliver satisfactory results unless a large number of ground control points (GCPs) is regularly distributed over the mapped area. This may not be a feasible method, either due to the associated costs or due to terrain inaccessibility. This thesis addresses such issues by presenting the development of MAV platforms with navigation and imaging sensors that are able to perform integrated sensor orientation (ISO). This method combines image measurements with GNSS or GNSS/IMU (Global Navigation Satellite System/Inertial Measurement Unit) observations. This innovative approach allows mapping with cm-level accuracy without the support of GCPs, even in geometrically challenging scenarios such as corridors. The presented solution also helps in situations where automatic image observations cannot be generated, e.g. over water, sand, or other surfaces with low variations of texture. The application of ISO to MAV photogrammetry is a novel solution, and its implementation brings new engineering and research challenges due to the limited payload capacity and the quality of the sensors employed on board. These challenges are addressed using traditional as well as novel methods of treating observations within the developed processing software. The capability of the constructed MAV platforms and processing tools is tested in real mapping scenarios. It is empirically confirmed that accurate aerial control combined with state-of-the-art calibration and processing can deliver cm-level ground accuracy, even in the most demanding projects. This thesis also presents an innovative way of mission planning in challenging environments. Indeed, a thorough pre-flight analysis is important not only for obtaining satisfactory mapping quality, but also because photogrammetric missions must be carried out in compliance with state regulations.
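
    Conceptually, integrated sensor orientation augments the photogrammetric least-squares adjustment with GNSS (and IMU-derived) observations of the camera exterior orientation, so image measurements and aerial control are adjusted together. The toy sketch below shows that idea for a single camera position using SciPy; the intrinsics, observations, and weights are invented for illustration and the rotation is held fixed, so it is a conceptual sketch rather than the thesis's processing software.

import numpy as np
from scipy.optimize import least_squares

# Toy integrated sensor orientation: refine one camera position so that it both
# reprojects known 3D points onto their observed pixels and stays close to the
# GNSS-observed camera position. Attitude is kept fixed for brevity.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])            # assumed intrinsics
R = np.eye(3)                                                          # assumed attitude (from IMU)
pts3d = np.array([[0.0, 0, 10], [2, 1, 12], [-1, 2, 9], [1, -2, 11]])  # known object points
obs_px = np.array([[318.0, 242], [448, 308], [230, 415], [390, 95]])   # image observations (invented)
gnss_pos = np.array([0.05, -0.03, 0.02])                               # GNSS camera position (m)
sigma_px, sigma_gnss = 1.0, 0.03                                       # assumed observation accuracies

def residuals(cam_pos):
    pc = R @ (pts3d - cam_pos).T                   # object points in the camera frame
    proj = K @ pc
    uv = proj[:2] / proj[2]                        # projected pixel coordinates (2, N)
    r_img = ((uv.T - obs_px) / sigma_px).ravel()   # weighted reprojection residuals
    r_gnss = (cam_pos - gnss_pos) / sigma_gnss     # weighted aerial-control residuals
    return np.concatenate([r_img, r_gnss])

sol = least_squares(residuals, x0=np.zeros(3))
print("estimated camera position:", sol.x)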

    Design, Development and Implementation of Intelligent Algorithms to Increase Autonomy of Quadrotor Unmanned Missions

    This thesis presents the development and implementation of intelligent algorithms to increase the autonomy of unmanned missions for quadrotor-type UAVs. A six-degree-of-freedom dynamic model of a quadrotor is developed in Matlab/Simulink in order to support the design of control algorithms prior to real-time implementation. A dynamic-inversion-based control architecture is developed to minimize nonlinearities and improve robustness when the system is driven outside the bounds of the nominal design. The design and the implementation of the control laws are described. An immunity-based architecture is introduced for monitoring quadrotor health, and its capabilities for detecting abnormal conditions are successfully demonstrated through flight testing. A vision-based navigation scheme is developed to enhance quadrotor autonomy in GPS-denied environments. An optical flow sensor and a laser range finder are used within an Extended Kalman Filter for position estimation, and its estimation performance is analyzed by comparing against measurements from a GPS module. Flight testing results are presented in which the performance is analyzed, showing a substantial increase in controllability and tracking when the developed algorithms are used in dynamically changing environments. Healthy flights, flights with failures, flights with GPS-denied navigation, and post-failure recovery are presented.
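
    The GPS-denied navigation scheme described above can be illustrated with a small constant-velocity Kalman filter in which an optical-flow sensor supplies body-frame velocity (scaled by the laser-measured height) and the laser range finder supplies altitude. The sketch below is a simplified linear filter with assumed noise values, not the thesis's full Extended Kalman Filter.

import numpy as np

dt = 0.02                      # assumed 50 Hz update rate
# State: [x, y, z, vx, vy, vz]; constant-velocity model.
F = np.eye(6)
F[0:3, 3:6] = dt * np.eye(3)
Q = 1e-3 * np.eye(6)           # assumed process noise

# Measurement: optical-flow horizontal velocity (vx, vy) and laser altitude (z).
H = np.zeros((3, 6))
H[0, 3] = H[1, 4] = 1.0        # flow-derived velocities
H[2, 2] = 1.0                  # laser range as altitude (assuming level flight over flat ground)
R = np.diag([0.05, 0.05, 0.02])

x = np.zeros(6)
P = np.eye(6)

def kf_step(x, P, flow_rad_s, laser_alt):
    # One predict/update cycle. Flow (rad/s) is converted to m/s using the measured height.
    x = F @ x
    P = F @ P @ F.T + Q
    v_xy = np.array(flow_rad_s) * laser_alt          # scale optical flow by height above ground
    z = np.array([v_xy[0], v_xy[1], laser_alt])
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, flow_rad_s=(0.01, -0.02), laser_alt=1.5)
print("state after one update:", np.round(x, 3))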

    Rotorcraft Blade Pitch Control Through Torque Modulation

    Micro air vehicle (MAV) technology has broken with simple mimicry of manned aircraft in order to fulfill emerging roles which demand low-cost reliability in the hands of novice users, safe operation in confined spaces, contact and manipulation of the environment, or merging vertical flight and forward flight capabilities. These specialized needs have motivated a surge of new specialized aircraft, but the majority of these design variations remain constrained by the same fundamental technologies underpinning their thrust and control. This dissertation solves the problem of simultaneously governing MAV thrust, roll, and pitch using only a single rotor and single motor. Such an actuator enables new cheap, robust, and lightweight aircraft by eliminating the need for the complex ancillary controls of a conventional helicopter swashplate or the distributed propeller array of a quadrotor. An analytic model explains how cyclic blade pitch variations in a special passively articulated rotor may be obtained by modulating the main drive motor torque in phase with the rotor rotation. Experiments with rotors from 10 cm to 100 cm in diameter confirm the predicted blade lag, pitch, and flap motions. We show that the operating principle scales similarly to traditional helicopter rotor technologies, but is subject to additional new dynamics and technology considerations. Using this new rotor, experimental aircraft from 29 g to 870 g demonstrate conventional flight capabilities without requiring more than two motors for actuation. In addition, we emulate the unusual capabilities of a fully actuated MAV over six degrees of freedom using only the thrust vectoring qualities of two teetering rotors. Such independent control over forces and moments has previously been obtained by holonomic or omnidirectional multirotors with at least six motors, but we now demonstrate similar abilities using only two. Expressive control from a single actuator enables new categories of MAV, illustrated by experiments with a single-actuator aircraft with spatial control and a vertical takeoff and landing airplane whose flight authority is derived entirely from two rotors.
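
    The torque-modulation principle above amounts to commanding the drive motor with a mean torque for thrust plus a once-per-revolution sinusoidal component whose amplitude and phase set the cyclic blade pitch, and hence the roll and pitch moments. The sketch below generates such a command from a desired moment; the gain mapping moment to modulation amplitude is a placeholder assumption, not the dissertation's identified model.

import numpy as np

def motor_torque_command(rotor_azimuth, tau_mean, m_roll, m_pitch, k_mod=0.2):
    # Drive-motor torque with a once-per-revolution modulation.
    # rotor_azimuth: current blade azimuth (rad); tau_mean: collective torque for thrust;
    # m_roll, m_pitch: desired body moments; k_mod: placeholder moment-to-torque gain.
    amplitude = k_mod * np.hypot(m_roll, m_pitch)    # modulation depth from desired moment magnitude
    phase = np.arctan2(m_roll, m_pitch)              # phase selects the direction of the moment
    return tau_mean + amplitude * np.sin(rotor_azimuth + phase)

# Illustrative: torque command sampled over one rotor revolution for a pure pitch moment.
azimuths = np.linspace(0, 2 * np.pi, 8, endpoint=False)
print(np.round([motor_torque_command(a, tau_mean=0.1, m_roll=0.0, m_pitch=0.02) for a in azimuths], 4))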