
    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
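
    The data-driven estimator described above can be pictured as a two-branch network that encodes a camera frame and a window of IMU samples separately, then fuses the two encodings to regress the absolute pose. Below is a minimal PyTorch sketch of that idea; the layer sizes, the IMU window length, and the class name VIOPoseNet are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal two-branch visual-inertial pose regressor in PyTorch.
# Layer sizes, the IMU window length, and the class name are illustrative
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

class VIOPoseNet(nn.Module):
    def __init__(self, imu_window=10):
        super().__init__()
        # Image branch: small CNN encoder for a grayscale frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Inertial branch: MLP over a flattened window of 6-axis IMU samples.
        self.imu_mlp = nn.Sequential(
            nn.Linear(6 * imu_window, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Fusion head regresses translation (3) plus an orientation quaternion (4).
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, 7),
        )

    def forward(self, image, imu_seq):
        # image: (B, 1, H, W); imu_seq: (B, imu_window, 6)
        f_img = self.cnn(image)
        f_imu = self.imu_mlp(imu_seq.flatten(start_dim=1))
        out = self.head(torch.cat([f_img, f_imu], dim=1))
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # unit quaternion

# Example forward pass with dummy data.
net = VIOPoseNet()
t, q = net(torch.randn(2, 1, 192, 256), torch.randn(2, 10, 6))
print(t.shape, q.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```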

    Fault estimation and accommodation for virtual sensor bias fault in image-based visual servoing using particle filter

    This study develops a fault estimation and accommodation scheme for image-based visual servoing (IBVS) systems to eliminate the effects of faults arising from the image feature extraction task, referred to as bias virtual sensor faults. First, the bias virtual sensor fault in visual servoing is defined. Then, fault diagnosis (FD), which includes fault detection, isolation, and estimation, is designed by means of a particle filter (PF). Finally, a fault accommodation law is developed, based on the information obtained from the fault estimation, to compensate for the effects of the fault on the system. The proposed fault estimation and accommodation scheme is verified through simulation and experimental studies, and the results show that the system can estimate and eliminate the unknown fault effects effectively.
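
    As a rough illustration of the estimation step, the sketch below runs a bootstrap particle filter on a scalar toy problem where a constant bias suddenly corrupts a measured image feature; accommodation would then subtract the estimated bias from the measurement before it enters the IBVS control law. The random-walk bias model, the noise levels, and the function name particle_filter_bias are assumptions made for illustration, not the scheme from the paper.

```python
# Bootstrap particle filter estimating an additive bias on a measured image
# feature. The scalar random-walk model and noise levels are illustrative
# assumptions; the paper's scheme works on the full IBVS feature vector.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_bias(measurements, true_feature, n_particles=500,
                         walk_std=0.01, meas_std=0.5):
    """Estimate a slowly varying bias b_k in z_k = s_k + b_k + noise."""
    particles = rng.normal(0.0, 1.0, n_particles)      # initial bias guesses
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z, s in zip(measurements, true_feature):
        # Propagate: bias modeled as a random walk.
        particles += rng.normal(0.0, walk_std, n_particles)
        # Weight particles by the measurement likelihood.
        residual = z - (s + particles)
        weights *= np.exp(-0.5 * (residual / meas_std) ** 2)
        weights += 1e-300
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling when the effective sample size drops.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Example: a 0.8-pixel bias appears halfway through the sequence.
truth = np.zeros(200)
bias = np.where(np.arange(200) < 100, 0.0, 0.8)
z = truth + bias + rng.normal(0.0, 0.5, 200)
print(particle_filter_bias(z, truth)[-5:])  # estimates should approach 0.8
```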

    Visual Servoing NMPC Applied to UAVs for Photovoltaic Array Inspection

    The photovoltaic (PV) industry is seeing a significant shift toward large-scale solar plants, where traditional inspection methods have proven to be time-consuming and costly. Currently, the predominant approach to PV inspection using unmanned aerial vehicles (UAVs) is based on photogrammetry. However, photogrammetry presents limitations, such as a large amount of useless data collected during flights, potential issues with image resolution, and difficulties in the detection process during high-altitude flights. In this work, we develop a visual servoing control system for a UAV with dynamic compensation using nonlinear model predictive control (NMPC), capable of accurately tracking the middle of the underlying PV array at different frontal velocities and under height constraints, ensuring the acquisition of detailed images during low-altitude flights. The visual servoing controller is based on feature extraction from RGB-D images and a Kalman filter that estimates the edges of the PV arrays. Furthermore, this work demonstrates the proposal in both simulated and real-world environments using a commercial aerial vehicle (DJI Matrice 100). Our approach is available to the scientific community at https://github.com/EPVelasco/VisualServoing_NMPC. Comment: This paper is under review at the journal "IEEE Robotics and Automation Letters".
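
    The edge-estimation step can be illustrated with a small linear Kalman filter that smooths the lateral offset of the detected PV-array centerline before it is handed to the visual-servoing NMPC. The constant-velocity state, the noise values, and the helper names make_kf and kf_step below are illustrative assumptions rather than the filter used in the paper.

```python
# Constant-velocity Kalman filter smoothing the lateral offset of the detected
# PV-array centerline. State, noise values, and helper names are assumptions.
import numpy as np

def make_kf(dt, q=1.0, r=4.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity model
    H = np.array([[1.0, 0.0]])                    # only the offset is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])           # process noise
    R = np.array([[r]])                           # measurement noise (px^2)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centerline offset z (pixels).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

F, H, Q, R = make_kf(dt=0.05)
x, P = np.zeros(2), np.eye(2) * 10.0
for z in [3.1, 2.7, 2.9, 2.4, 2.6]:               # noisy offsets from the edge detector
    x, P = kf_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # smoothed offset and its rate, fed to the visual-servoing NMPC
```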

    Brain over Brawn -- Using a Stereo Camera to Detect, Track and Intercept a Faster UAV by Reconstructing Its Trajectory

    The work presented in this paper demonstrates our approach to intercepting a faster intruder UAV, inspired by Challenge 1 of MBZIRC 2020. By leveraging knowledge of the shape of the intruder's trajectory, we are able to calculate the interception point. Target tracking is based on image processing by a YOLOv3 Tiny convolutional neural network, combined with depth calculation using a gimbal-mounted ZED Mini stereo camera. We use RGB and depth data from the ZED Mini to extract the 3D position of the target, for which we devise a histogram-of-depth based processing step to reduce noise. The obtained 3D measurements of the target's position are used to calculate the position, orientation, and size of a figure-eight shaped trajectory, which we approximate using a lemniscate of Bernoulli. Once the approximation is deemed sufficiently precise, as measured by the Hausdorff distance between the measurements and the approximation, an interception point is calculated to position the intercepting UAV directly on the path of the target. The proposed method, which has been significantly improved based on the experience gathered during the MBZIRC competition, has been validated in simulation and through field experiments. The results confirm that we have developed an efficient visual perception module that extracts information about the motion of the target UAV as a basis for interception. The system is able to track and intercept a target that is 30% faster than the interceptor in the majority of simulation experiments. Tests in an unstructured environment yielded 9 successful runs out of 12. Comment: To be published in Field Robotics. UAV-Eagle dataset available at: https://github.com/larics/UAV-Eagl
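
    The trajectory-approximation check can be sketched as follows: sample a lemniscate of Bernoulli with a given center, orientation, and size, then accept the fit once the symmetric Hausdorff distance to the position measurements falls below a threshold. The planar parametrization, the threshold, and the function names below are illustrative assumptions; the estimation of the lemniscate parameters themselves is not shown.

```python
# Score a lemniscate-of-Bernoulli approximation of the target trajectory with
# the symmetric Hausdorff distance. Planar model and threshold are assumptions.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def lemniscate(center, yaw, a, n=200):
    """Planar lemniscate of Bernoulli, r^2 = a^2 * cos(2*theta), posed in the plane."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = a * np.cos(t) / (1.0 + np.sin(t) ** 2)
    y = a * np.sin(t) * np.cos(t) / (1.0 + np.sin(t) ** 2)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (R @ np.vstack([x, y])).T + np.asarray(center)

def fit_is_good(measurements, approximation, threshold=0.5):
    """Symmetric Hausdorff distance between measured points and the model."""
    d = max(directed_hausdorff(measurements, approximation)[0],
            directed_hausdorff(approximation, measurements)[0])
    return d < threshold, d

model = lemniscate(center=(0.0, 0.0), yaw=0.3, a=10.0)
noisy = model[::5] + np.random.default_rng(1).normal(0.0, 0.1, (40, 2))
print(fit_is_good(noisy, model))   # once True, compute the interception point
```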

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a comprehensive review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase in computational capabilities, and the evolution of computer vision techniques, has enabled important advances in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs enable cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of applications for UAVs beyond classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications are being developed to enhance autonomous UAV positioning or to automatically avoid aerial collisions, among others. The presented survey therefore focuses on artificial perception applications that represent important advances of recent years in the expert systems field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, addressing fundamental technical limitations such as visual odometry, obstacle detection, mapping, and localization, and they are analyzed based on their capabilities and potential utility. Moreover, the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Vision-based Autonomous Tracking of a Non-cooperative Mobile Robot by a Low-cost Quadrotor Vehicle

    The goal of this thesis is the detection and tracking of a ground vehicle, in particular a car-like robot, by a quadrotor. The first challenge to address in any pursuit or tracking scenario is the detection and unique identification of the target. From this first challenge comes the need to precisely localize the target in a coordinate system common to the tracking and tracked vehicles. In most real-life scenarios, the tracked vehicle does not directly communicate information such as its position to the tracking one, which gives rise to a non-cooperative tracking problem. The autonomous tracking aspect of the mission requires robust pose estimation for both the aerial and ground vehicles throughout the mission. The primary functions needed to achieve autonomous behavior are control and navigation. With the quadrotor as the principal agent, this thesis explains in detail the derivation and analysis of the equations of motion that govern its natural behavior, along with the control methods that achieve the desired performance. The analysis of these equations reveals a naturally unstable system subject to nonlinearities. Therefore, we explore three control methods capable of guaranteeing stability while mitigating nonlinearities. The first two methods operate in the linear region: the intuitive Proportional-Integral-Derivative (PID) controller and an optimal controller, the Linear Quadratic Regulator (LQR). The final method is a nonlinear controller designed using sliding mode control theory. In addition to the in-depth analysis, we discuss the strengths and limitations of each control method. To achieve the tracking mission, we address the detection and localization problems using visual servoing and frame-transform techniques, respectively. The pose estimation challenge for the aerial robot is addressed using Kalman filtering methods, which are also explored in depth, and the same estimation method is used for the ground vehicle's real-time pose estimation and tracking. Analysis results are illustrated using MATLAB, and a simulation and a real implementation using the Robot Operating System support the obtained results.
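
    As a small illustration of the optimal-control branch of the thesis, the sketch below computes an LQR state-feedback gain for one translational axis of the quadrotor, linearized about hover as a double integrator. The model, the weighting matrices, and the use of SciPy's continuous-time Riccati solver are illustrative assumptions, not the design derived in the thesis.

```python
# LQR gain for one translational axis of a quadrotor linearized about hover
# as a double integrator. Model and weights are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are

# x = [position, velocity], u = commanded acceleration along the axis.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])      # penalize position error more than velocity
R = np.array([[0.1]])         # control effort weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P     # optimal state feedback, u = -K x
print("LQR gain:", K)

# Closed-loop check: eigenvalues of (A - B K) should have negative real parts.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```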

    MULTI-RATE VISUAL FEEDBACK ROBOT CONTROL

    This thesis deals with two characteristic problems in visual feedback robot control: 1) sensor latency; and 2) providing suitable trajectories for the robot and for the measurement in the image. All of the approaches presented in this work are analyzed and implemented on a 6-DOF industrial robot manipulator and/or a wheeled robot. Focusing on the sensor latency problem, this thesis proposes the use of dual-rate high-order holds within the control loop of robots. The main contributions in this regard are:
    - Dual-rate high-order holds based on primitive functions for robot control (Chapter 3): analysis of the system performance with and without this multi-rate technique from non-conventional control. In addition, as a consequence of using dual-rate holds, multi-rate controllers, in particular dual-rate PIDs, are obtained and validated.
    - Asynchronous dual-rate high-order holds based on primitive functions with time-delay compensation (Chapter 3): generalization of asynchronous dual-rate high-order holds by incorporating a component that compensates for the time delay of the input signal, thereby improving the inter-sampling estimates computed by the hold. The properties of these holds are analyzed and compared with the estimates obtained by the equivalent dual-rate holds without compensation, and they are implemented and validated within the control loop of a 6-DOF industrial robot manipulator.
    - Multi-rate nonlinear high-order holds (Chapter 4): generalization of the concept of dual-rate high-order holds to nonlinear estimation models, which include information about the plant to be controlled and the controller(s) and sensor(s) used, obtained with machine learning techniques. To obtain such a nonlinear hold, a methodology independent of the particular learning technique is described, validated here using artificial neural networks. Finally, the properties of these new holds are analyzed and compared with their equivalents based on primitive functions, and they are implemented and validated within the control loops of an industrial robot manipulator and a wheeled robot.
    With respect to the problem of providing suitable trajectories for the robot and for the measurement in the image, this thesis presents the novel reference features filtering control strategy and its generalization from a multi-rate point of view. The main contributions in this regard are:
    - Reference features filtering control strategy (Chapter 5): a new control strategy proposed to significantly enlarge the set of reachable tasks in robot visual feedback control. The main idea is to use the optimal trajectories produced by a nonlinear EKF predictor-smoother (ERTS), based on the Rauch-Tung-Striebel (RTS) algorithm, as new feature references for an underlying visual feedback controller. Both the algorithm and its implementation and validation on an industrial robot manipulator are described.
    - Dual-rate reference features filtering control strategy (Chapter 5): a generalization of the reference features filtering approach from a multi-rate point of view, with a dual Kalman-smoother step based on the ratio between the sensor and controller frequencies, which reduces the computational cost of the former algorithm and also addresses the sensor latency problem. The implementation algorithms, as well as their analysis, are described.

    Solanes Galbis, J. E. (2015). Multi-rate visual feedback robot control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57951
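
    The reference features filtering idea rests on a forward Kalman filter followed by a Rauch-Tung-Striebel (RTS) backward pass, whose smoothed feature trajectory becomes the new reference for the underlying visual controller. The sketch below shows that building block on a scalar constant-velocity model; the thesis itself uses a nonlinear EKF-based version (ERTS), so the model and noise values here are illustrative assumptions.

```python
# Forward Kalman filter plus Rauch-Tung-Striebel (RTS) backward smoothing on a
# scalar constant-velocity model. The thesis uses a nonlinear EKF-based
# version (ERTS); this linear scalar model is an illustrative assumption.
import numpy as np

def kalman_rts(zs, dt=0.04, q=1.0, r=4.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    n = len(zs)
    xs, Ps, xps, Pps = [], [], [], []
    x, P = np.zeros(2), np.eye(2) * 100.0
    # Forward Kalman filter.
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                  # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.array([z]) - H @ xp)            # update
        P = (np.eye(2) - K @ H) @ Pp
        xps.append(xp); Pps.append(Pp); xs.append(x); Ps.append(P)
    # Backward RTS smoothing pass.
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs[-1], Ps[-1]
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s[k] = xs[k] + C @ (xs_s[k + 1] - xps[k + 1])
        Ps_s[k] = Ps[k] + C @ (Ps_s[k + 1] - Pps[k + 1]) @ C.T
    return np.array(xs_s)

smoothed = kalman_rts([10.2, 9.6, 9.1, 8.3, 7.9, 7.2, 6.8])
print(smoothed[:, 0])   # smoothed feature positions used as new references
```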

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant development in visual servoing, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection from the 3D scene to the 2D image that occurs in the camera creates one source of uncertainty in the system; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of a monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm can not only improve the system response speed and tracking performance of IBVS but also ensure the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator: a properly tuned PD controller ensures fast tracking performance, and SMC handles the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, a semi-off-line trajectory planning scheme, is developed to perform IBVS tasks with a 6-DOF robotic manipulator. In this method, the camera's velocity screw is parametrized using time-based profiles, and the parameters of the velocity profile are determined such that the profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation caused by the camera's FOV. All of the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
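
    For context, the baseline that these methods build on is classical IBVS for point features: stack the interaction (image Jacobian) matrices of the features and command the camera velocity screw as v = -lambda * pinv(L) * e. The sketch below implements only that textbook step; the gain, depths, and feature values are illustrative assumptions, and the depth entries are precisely where the uncertainties discussed above enter, which is what motivates the adaptive, switching, and sliding-mode extensions.

```python
# Classical IBVS step for point features: build the interaction matrix for
# normalized image points and compute v = -lambda * pinv(L) @ error.
# Gains, depths, and feature values are illustrative assumptions; none of the
# thesis's adaptive/switch/SMC extensions are shown.
import numpy as np

def interaction_matrix(points, depths):
    """points: (N, 2) normalized image coordinates; depths: (N,) estimates of Z."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current, desired, depths, gain=0.5):
    error = (current - desired).reshape(-1)
    L = interaction_matrix(current, depths)
    return -gain * np.linalg.pinv(L) @ error   # camera velocity screw (v, w)

current = np.array([[0.10, 0.05], [-0.12, 0.07], [0.08, -0.11], [-0.09, -0.06]])
desired = np.array([[0.05, 0.05], [-0.05, 0.05], [0.05, -0.05], [-0.05, -0.05]])
depths = np.full(4, 1.2)            # rough depth estimates (a key uncertainty)
print(ibvs_velocity(current, desired, depths))
```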