
    Accurate Landing of Unmanned Aerial Vehicles Using Ground Pattern Recognition

    Full text link
    Over the last few years, several researchers have been developing protocols and applications to autonomously land unmanned aerial vehicles (UAVs). However, most of the proposed protocols rely on expensive equipment or do not satisfy the high-precision needs of some UAV applications, such as package retrieval and delivery or the compact landing of UAV swarms. Therefore, in this work, a solution for high-precision landing based on the use of ArUco markers is presented. In the proposed solution, a UAV equipped with a low-cost camera is able to detect ArUco markers sized 56×56 cm from an altitude of up to 30 m. Once the marker is detected, the UAV changes its flight behavior in order to land on the exact position where the marker is located. The proposal was evaluated and validated using both the ArduSim simulation platform and real UAV flights. The results show an average offset of only 11 cm from the target position, which vastly improves the landing accuracy compared to traditional GPS-based landing, which typically deviates from the intended target by 1 to 3 m.
    This work was funded by the Ministerio de Ciencia, Innovación y Universidades, Programa Estatal de Investigación, Desarrollo e Innovación Orientada a los Retos de la Sociedad, Proyectos I+D+I 2018, Spain, under Grant RTI2018-096384-B-I00.
    Wubben, J.; Fabra Collado, F. J.; Tavares De Araujo Cesariny Calafate, C. M.; Krzeszowski, T.; Márquez Barja, J. M.; Cano, J.; Manzoni, P. (2019). Accurate Landing of Unmanned Aerial Vehicles Using Ground Pattern Recognition. Electronics, 8(12), 1-16. https://doi.org/10.3390/electronics8121532
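The core geometry behind this kind of marker-based landing is simple: once the marker is found in a downward-facing image, its pixel displacement from the image center maps to a horizontal offset on the ground via the pinhole camera model. The sketch below illustrates that mapping only; all numbers and names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pixel_to_ground_offset(marker_center_px, principal_point_px,
                           focal_length_px, altitude_m):
    """Approximate horizontal offset (m) between a UAV and a ground
    marker, from the marker's position in a downward-facing image.

    Pinhole model with the camera assumed level: a pixel displacement d
    maps to a ground displacement d * altitude / focal_length.
    """
    d = np.asarray(marker_center_px, float) - np.asarray(principal_point_px, float)
    return d * altitude_m / focal_length_px

# Marker seen 100 px right of and 50 px below the image center, from
# 30 m up, with a 1000 px focal length (hypothetical camera values).
offset = pixel_to_ground_offset((740, 410), (640, 360), 1000.0, 30.0)
```

In practice the marker detection itself would come from a fiducial library such as ArUco in OpenCV, and the offset would feed the flight controller's position setpoints.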

    An Adaptive Multi-Level Quantization-Based Reinforcement Learning Model for Enhancing UAV Landing on Moving Targets

    Get PDF
    The autonomous landing of an unmanned aerial vehicle (UAV) on a moving platform is an essential functionality in various UAV-based applications. It can be added to a teleoperated UAV system or be part of an autonomous UAV control system. Various robust and predictive control systems based on traditional control theory are used for operating a UAV. Recently, some attempts were made to land a UAV on a moving target using reinforcement learning (RL). Vision is typically used for sensing and detecting the moving target. Mainly, the related works have deployed a deep neural network (DNN) for RL, which takes the image as input and provides the optimal navigation action as output. However, the delay introduced by the multi-layer topology of the deep neural network affects the real-time performance of such control. This paper proposes an adaptive multi-level quantization-based reinforcement learning (AMLQ) model. The AMLQ model quantizes the continuous actions and states so that simple Q-learning can be applied directly, resolving the delay issue. This solution makes the training faster and enables simple knowledge representation without needing a DNN. For evaluation, the AMLQ model was compared with state-of-the-art approaches and achieved a root mean square error (RMSE) of 8.7052, compared with an RMSE of 10.0592 for the proportional-integral-derivative (PID) controller.
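The idea of quantizing continuous states and actions so that a plain tabular Q-learning update can replace a DNN can be sketched in a few lines. The bin counts, learning rate, and reward here are illustrative assumptions, not the AMLQ paper's actual parameters.

```python
import numpy as np

def quantize(value, low, high, n_levels):
    """Map a continuous value in [low, high] to one of n_levels bins."""
    idx = int((value - low) / (high - low) * n_levels)
    return min(max(idx, 0), n_levels - 1)

# Tabular Q-learning over quantized states/actions (hypothetical sizes).
N_STATES, N_ACTIONS = 16, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.9

def q_update(s, a, reward, s_next):
    """One Bellman backup: Q(s,a) += alpha*(r + gamma*max Q(s',.) - Q(s,a))."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

s = quantize(-0.3, -1.0, 1.0, N_STATES)   # e.g. a lateral error of -0.3 m
q_update(s, a=2, reward=1.0, s_next=quantize(0.0, -1.0, 1.0, N_STATES))
```

Because the table lookup and update are O(1), this avoids the per-step inference latency of a multi-layer network, which is the delay issue the abstract refers to.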

    Precision Landing of a Quadrotor UAV on a Moving Target Using Low-Cost Sensors

    Get PDF
    With the use of unmanned aerial vehicles (UAVs) becoming more widespread, a need for precise autonomous landings has arisen. In the maritime setting, precise autonomous landings will help to provide a safe way to recover UAVs deployed from a ship. On land, numerous applications have been proposed for UAV and unmanned ground vehicle (UGV) teams where autonomous docking is required so that the UGVs can either recover or service a UAV in the field. Current state-of-the-art approaches to solving the problem rely on expensive inertial measurement sensors and RTK or differential GPS systems. However, such a solution is not practical for many UAV systems. A framework to perform precision landings on a moving target using low-cost sensors is proposed in this thesis. Vision from a downward-facing camera is used to track a target on the landing platform and generate high-quality relative pose estimates. The landing procedure consists of three stages. First, a rendezvous stage commands the quadrotor on a path to intercept the target. A target acquisition stage then ensures that the quadrotor is tracking the landing target. Finally, visual measurements of the relative pose to the landing target are used in the target tracking stage, where control and estimation are performed in a body-planar frame, without the use of GPS or magnetometer measurements. A comprehensive overview of the control and estimation required to realize the three-stage landing approach is presented. Critical parts of the landing framework were implemented on an AscTec Pelican testbed. The AprilTag visual fiducial system is chosen for use as the landing target. Implementation details to improve the AprilTag detection pipeline are presented. Simulated and experimental results validate key portions of the landing framework. The novel relative estimation scheme is evaluated in an indoor positioning system. Tracking and landing on a moving target is demonstrated in an indoor environment.
Outdoor tests also validate the target tracking performance in the presence of wind.
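The target tracking stage described above needs an estimator that turns noisy visual pose measurements into smooth position and velocity estimates of the moving platform. A generic constant-velocity Kalman filter for one axis is sketched below; the thesis' body-planar estimator is more elaborate, and all matrices and noise values here are illustrative assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter for one axis of the relative target
# position (a generic sketch, not the thesis' actual estimator).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [pos, vel]
H = np.array([[1.0, 0.0]])               # vision measures position only
Qn = 1e-3 * np.eye(2)                    # process noise covariance
Rn = np.array([[1e-2]])                  # measurement noise covariance

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial state covariance

def step(x, P, z):
    """One predict/update cycle with a position measurement z."""
    x = F @ x                            # predict state forward
    P = F @ P @ F.T + Qn                 # predict covariance forward
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + Rn                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y                        # corrected state
    P = (np.eye(2) - K @ H) @ P          # corrected covariance
    return x, P

for z in [0.0, 0.1, 0.2, 0.3]:           # target drifting at ~1 m/s
    x, P = step(x, P, np.array([[z]]))
```

After a few measurements the filter infers a positive velocity even though only positions are observed, which is what lets the quadrotor lead a moving target rather than chase it.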


    RF-based automated UAV orientation and landing system

    Get PDF
    The number of Unmanned Aerial Vehicle (UAV) applications is growing tremendously. The most critical applications are operations in use cases such as natural disasters and rescue activities. Many of these operations are performed in water scenarios, so a standalone niche covering autonomous UAV operation is becoming increasingly important. One of the crucial parts of such operations is a technology capable of landing an autonomous UAV on a moving platform on top of a water surface. This would not be possible without precise UAV positioning. However, conventional strategies that rely on satellite positioning may not always be reliable, due to accuracy errors caused by surrounding environmental conditions, high interference, or other factors that could lead to the loss of the UAV. Therefore, the development of an independent precise landing technology is essential. The main objective of this thesis is to develop a precise landing framework by applying indoor positioning techniques based on RF anchors to autonomous outdoor UAV operations, for cases when a lower accuracy error than that provided by the Global Navigation Satellite System (GNSS) is required. In order to analyze the landing technology, a simulation tool was developed. The developed positioning strategy is based on modifications of the Gauss-Newton method, which takes as input parameters the number of anchors, the spacing between them, and the initial UAV position, and uses the Friis transmission formula to calculate the distance between the anchors and the UAV. As output, a calculated position of the UAV with an accuracy in the range of tens of centimeters is obtained. The simulation campaign shows how the number of anchors and their spacing affect positioning accuracy, and identifies the Gauss-Newton method parameter value that maximizes system performance.
The results prove that this approach can be applied in a real-life scenario, given both the high accuracy achieved and the close-to-perfect estimated landing trajectory. Keywords: UAV, Positioning, Automatic Landing, Simulation
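The Gauss-Newton positioning step described above can be sketched as a least-squares fit of the UAV position to anchor-to-UAV range measurements. The anchor layout, the true position, and the noise-free ranges below are illustrative assumptions; in the thesis the ranges would come from RSSI via the Friis transmission formula.

```python
import numpy as np

def gauss_newton_position(anchors, distances, x0, iters=20):
    """Estimate a 3-D position from anchor coordinates and measured
    anchor-to-UAV distances using Gauss-Newton iterations on the
    range residuals."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        diff = x - anchors                     # (n, 3) offsets to anchors
        ranges = np.linalg.norm(diff, axis=1)  # predicted distances
        r = ranges - distances                 # range residuals
        J = diff / ranges[:, None]             # Jacobian d(range)/d(x)
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - dx                             # Gauss-Newton step
    return x

# Four hypothetical anchors (one raised to break planar symmetry).
anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 5]], float)
true_pos = np.array([3.0, 4.0, 2.0])
d = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
est = gauss_newton_position(anchors, d, x0=[5, 5, 1])
```

With noise-free ranges and a reasonable initial guess the iteration recovers the true position; with noisy RSSI-derived ranges the same machinery yields the tens-of-centimeters accuracy the abstract reports.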

    Accurate navigation applied to landing maneuvers on mobile platforms for unmanned aerial vehicles

    Get PDF
    Drones are quickly developing worldwide, and in Europe in particular. They represent the future of a high percentage of operations that are currently carried out by manned aviation or satellites. Compared to fixed-wing UAVs, rotary-wing UAVs have the advantages of hovering, agile maneuvering, and vertical take-off and landing capabilities, so they are currently the most widely used aerial robotic platforms. In operations from ships and boats, the final approach and the landing maneuver are the phases of the operation that involve the highest risk and that require the highest level of precision in position and velocity estimation, along with a high level of robustness. In the framework of the EC-SAFEMOBIL and REAL projects, this thesis is devoted to the development of a guidance and navigation system that allows a rotary-wing UAV (RUAV) to complete an autonomous mission from take-off to landing. More specifically, this thesis focuses on the development of new strategies and algorithms that provide sufficiently accurate motion estimation during autonomous landing on mobile platforms without using the GNSS constellations. On the one hand, for the phases of flight where a centimeter-level accuracy solution is not required, a new navigation approach is proposed that extends current estimation techniques by using the EGNOS integrity information in the sensor fusion filter.
This approach improves the accuracy of the estimation solution and the safety of the overall system, and also helps the remote pilot maintain a more complete awareness of the operation status while flying the UAV. On the other hand, for those flight phases where accuracy is a critical safety factor, this thesis presents a precise navigation system that allows rotary-wing UAVs to approach and land safely on moving platforms, without using GNSS at any stage of the landing maneuver, with centimeter-level accuracy and a high level of robustness. This system implements a novel concept in which the relative position and velocity between the aerial vehicle and the landing platform are calculated either from a radio-beacon system installed on both the UAV and the landing platform, or from the angles of a cable that physically connects the two. The use of a cable also brings several extra benefits, such as increasing the precision of the UAV altitude control. It also makes it easier to center the UAV right on top of the expected landing position and increases the stability of the UAV just after it contacts the landing platform. The proposed guidance and navigation systems have been implemented on an unmanned rotorcraft, and a large number of tests have been carried out under different conditions to measure the accuracy and robustness of the proposed solution. Results showed that the developed system allows landing with centimeter accuracy using only local sensors, and that the UAV is able to follow a mobile landing platform along multiple trajectories at different velocities.
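The cable-based relative positioning mentioned above reduces to simple spherical geometry: with a taut cable of known length and its two inclination angles measured at the platform attachment, the UAV's position relative to the platform follows by trigonometry. The function and angle convention below are a hypothetical sketch, not the thesis' actual formulation.

```python
import math

def cable_relative_position(length_m, pitch_rad, roll_rad):
    """Hypothetical sketch: UAV position relative to the platform from
    the taut-cable length and two inclination angles (measured from the
    vertical at the platform attachment point)."""
    x = length_m * math.sin(pitch_rad)                       # along-track
    y = length_m * math.cos(pitch_rad) * math.sin(roll_rad)  # cross-track
    z = length_m * math.cos(pitch_rad) * math.cos(roll_rad)  # height
    return x, y, z

# A 10 m cable tilted 5 deg forward and 3 deg sideways (illustrative).
x, y, z = cable_relative_position(10.0, math.radians(5), math.radians(3))
```

Because the cable length directly constrains the vertical distance, this also explains the abstract's remark that the cable improves the precision of altitude control near touchdown.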

    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    Get PDF
    In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in the accuracy of the pose estimation of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
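Comparisons like the one above are typically scored with a root-mean-square position error between the estimated and ground-truth trajectories. A minimal version of that metric is sketched below; the toy trajectories are illustrative, not EuRoC data.

```python
import numpy as np

def trajectory_rmse(est, gt):
    """Root-mean-square position error between an estimated trajectory
    and an aligned ground-truth trajectory with matching timestamps."""
    err = np.linalg.norm(np.asarray(est, float) - np.asarray(gt, float), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt  = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]          # ground-truth positions (m)
est = [[0, 0.1, 0], [1, -0.1, 0], [2, 0.1, 0]]   # estimates, each 0.1 m off
# every per-sample error is 0.1 m, so the RMSE is 0.1
```

A "25% improvement over the baseline" then simply means this RMSE dropped by a quarter on the evaluated sequences.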

    Detection and estimation of moving obstacles for a UAV

    Get PDF
    In recent years, research interest in Unmanned Aerial Vehicles (UAVs) has grown rapidly because of their potential use in a wide range of applications. In this paper, we propose vision-based detection and position/velocity estimation of a moving obstacle for a UAV. Knowledge of a moving obstacle's state, i.e., its position and velocity, is essential to achieve better performance in an intelligent UAV system, especially in autonomous navigation and landing tasks. The novelties are: (1) the design and implementation of a localization method using a sensor fusion methodology that fuses Inertial Measurement Unit (IMU) signals and Pozyx signals; (2) the development of a method for detecting and estimating moving obstacles based on an on-board vision system. Experimental results validate the effectiveness of the proposed approach. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
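Fusing drift-prone IMU dead-reckoning with absolute but noisier UWB (Pozyx-style) fixes is often introduced via a complementary filter: trust the smooth inertial estimate at short time scales and the drift-free radio fix at long ones. The 1-D setup and blending weight below are illustrative assumptions, not the paper's actual fusion method.

```python
def complementary_fuse(imu_pos, uwb_pos, alpha=0.98):
    """Toy complementary filter: blend the IMU dead-reckoned position
    with the absolute UWB fix. Larger alpha trusts the IMU more between
    radio updates; the UWB term slowly pulls out accumulated drift."""
    return alpha * imu_pos + (1.0 - alpha) * uwb_pos

# IMU integration has drifted to 10.30 m while the UWB anchor network
# reports 10.00 m; the fused estimate leans toward the IMU but is
# nudged back toward the absolute fix.
fused = complementary_fuse(10.30, 10.00)
```

Applied every update, the (1 - alpha) correction term bounds the position drift that pure IMU integration would otherwise accumulate.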