Towards autonomous landing on a moving vessel through fiducial markers
This paper proposes an autonomous landing method for unmanned aerial vehicles (UAVs), aimed at situations in which the landing pad is the deck of a ship. Fiducial markers are used to obtain the six-degree-of-freedom (6-DOF) relative pose of the UAV with respect to the landing pad. To compensate for interruptions of the video stream, an extended Kalman filter (EKF) estimates the ship's current position with reference to its last known one, using only odometry and inertial data. Owing to the difficulty of testing the proposed algorithm in the real world, synthetic simulations have been performed on a robotic test-bed comprising the AR Drone 2.0 and the Husky A200. The results show that the EKF provides information accurate enough to direct the UAV into the proximity of the other vehicle so that the marker becomes visible again. Because only inertial measurements are used in the data fusion process, this solution can also be adopted in indoor navigation scenarios where a global positioning system is not available.
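The paper does not spell out its EKF formulation, but the core idea of dead-reckoning the vessel's pose from the last marker fix can be illustrated with a minimal constant-velocity Kalman predictor. State layout, noise values, and the simple linear model below are assumptions for illustration only.

```python
import numpy as np

# Minimal constant-velocity Kalman sketch of the idea above: propagate the
# vessel's last known position while the fiducial marker is out of view,
# and correct it whenever the marker is re-acquired. All parameters are
# illustrative assumptions, not values from the paper.

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # marker fix gives position only
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.05 * np.eye(2)                        # measurement noise (assumed)

def predict(x, P):
    """Time update: dead-reckon the state forward one step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Measurement update from a marker position fix z."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Vessel moving at ~1 m/s along x; marker lost after the first fix.
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4)
x, P = update(*predict(x, P), np.array([0.1, 0.0]))   # marker visible
for _ in range(10):                                   # marker occluded
    x, P = predict(x, P)                              # dead-reckon only
print(round(float(x[0]), 2))   # predicted x after ~1 s without the marker
```

Note how the covariance P grows during the occluded stretch: the filter's confidence degrades until the marker is re-observed, which is exactly why the estimate is only used to steer the UAV back to where the marker becomes visible again.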
Vision-Based Autonomous Landing of a Quadrotor on the Perturbed Deck of an Unmanned Surface Vehicle
Autonomous landing on the deck of an unmanned surface vehicle (USV) remains a major challenge for unmanned aerial vehicles (UAVs). In this paper, a fiducial marker is located on the platform to facilitate the task, since its six-degree-of-freedom relative pose can be retrieved easily. To compensate for interruptions in the marker's observations, an extended Kalman filter (EKF) estimates the current position of the USV with reference to its last known position. Validation experiments have been performed in a simulated environment under various marine conditions. The results confirm that the EKF provides estimates accurate enough to direct the UAV into the proximity of the autonomous vessel so that the marker becomes visible again. Because only odometry and inertial measurements are used for the estimation, this method is applicable even under adverse weather conditions and in the absence of a global positioning system.
Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning
This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, also commonly known as a drone) into a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists of exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as a landing platform to alleviate the need for a UAV to be airborne for long periods of time, whilst the latter can increase the overall environmental awareness thanks to its ability to cover large portions of the surrounding environment with one or more onboard cameras. There are numerous potential applications for such a system, including search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few.
The theory developed is of a general nature. The landing manoeuvre is accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as a landing platform. The raison d'etre of the thesis is to propose a new solution for autonomous landing that relies solely on onboard sensors, with minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. When the tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution that leveraged the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks. The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control of the Atari video-game suite represented a fascinating yet challenging new way to see and address the landing problem. Therefore, novel architectures were designed to approximate the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches have been implemented on a simulated test-bed based on the Gazebo simulator and a model of the Parrot AR-Drone. The DRL-based solution was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these innovative methods are feasible and practicable, not only in outdoor marine scenarios but also in indoor ones.
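The thesis approximates the action-value function with deep networks over grey-scale images; the underlying Q-learning update it relies on can be shown on a toy problem. The sketch below uses a tabular agent on a hypothetical 1-D altitude grid with two high-level actions, which is a deliberate simplification: the grid, actions, rewards, and hyperparameters are all assumptions for illustration, not the thesis's setup.

```python
import random

# Toy tabular Q-learning illustrating the action-value update behind the
# DRL approach described above. The agent learns the high-level action
# "descend" vs "hover" on a 1-D altitude grid (0 = landed).

ALTITUDES = list(range(6))          # 0 = landed, 5 = start altitude
ACTIONS = ["descend", "hover"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in ALTITUDES for a in ACTIONS}

def step(s, a):
    """Toy environment: descending lowers altitude; landing is rewarded."""
    s2 = max(s - 1, 0) if a == "descend" else s
    r = 10.0 if s2 == 0 and s != 0 else -1.0   # time penalty until landed
    return s2, r, s2 == 0

random.seed(0)
for _ in range(200):                            # training episodes
    s, done = 5, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # TD update
        s = s2

policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in ALTITUDES}
print(policy[5])  # → descend
```

In the thesis this same temporal-difference target trains a convolutional network instead of a table, so the "state" is the raw image rather than a discrete altitude; the update rule is otherwise the same idea.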
Cooperative heterogeneous robots for an autonomous insect trap monitoring system in a precision agriculture scenario
The recent advances in precision agriculture are due to the emergence of modern robotic systems. For instance, unmanned aerial systems (UASs) open new possibilities for solving existing problems in this area in many different respects, owing to these platforms' ability to perform activities at varying levels of complexity. This research therefore presents a multiple-cooperative-robot solution in which UAS and unmanned ground vehicle (UGV) systems jointly inspect olive grove insect traps. This work evaluated UAS and UGV vision-based navigation based on a yellow fly trap fixed in the trees, which provides visual position data through the You Only Look Once (YOLO) algorithms. The experimental setup evaluated the fuzzy control algorithm applied to the UAS to make it reach the trap efficiently. Experimental tests were conducted in a realistic simulation environment using the Robot Operating System (ROS) and CoppeliaSim platforms to verify the methodology's performance, and all tests considered specific real-world environmental conditions. A search-and-landing algorithm based on augmented reality tag (AR-Tag) visual processing was evaluated to allow the UAS to return to and land on the UGV base. The outcomes obtained in this work demonstrate the robustness and feasibility of the multiple-cooperative-robot architecture for UGVs and UASs applied to the olive inspection scenario. The authors would like to thank the Foundation for Science and Technology (FCT, Portugal) for financial support through national funds FCT/MCTES (PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020) and SusTEC (LA/P/0007/2021). In addition, the authors would like to thank the following Brazilian agencies: CEFET-RJ, CAPES, CNPq, and FAPERJ.
In addition, the authors also want to thank the Research Centre in Digitalization and Intelligent Robotics (CeDRI), Instituto Politécnico de Bragança (IPB) - Campus de Santa Apolónia, Portugal, Laboratório Associado para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Portugal, INESC Technology and Science - Porto, Portugal, and Universidade de Trás-os-Montes e Alto Douro - Vila Real, Portugal. This work was carried out under the project "OleaChain: Competências para a sustentabilidade e inovação da cadeia de valor do olival tradicional no Norte Interior de Portugal" (NORTE-06-3559-FSE-000188), an operation used to hire highly qualified human resources, funded by NORTE 2020 through the European Social Fund (ESF).
Vision-based Marker-less Landing of a UAS on a Moving Ground Vehicle
In recent years the use of unmanned air systems (UAS) has seen extreme growth. These small, often inexpensive platforms have been used to aid in tasks such as search and rescue, medical deliveries, disaster relief, and more. In many use cases, UAS work alongside unmanned ground vehicles (UGVs) to complete autonomous tasks. For end-to-end autonomous cooperation, the UAS needs to be able to autonomously take off from and land on the UGV. Current autonomous landing solutions often rely on fiducial markers to aid in localizing the UGV relative to the UAS, an external ground computer to aid in computation, or gimbaled cameras on board the UAS. This thesis seeks to demonstrate a vision-based autonomous landing system that does not rely on fiducial markers, completes all computations on board the UAS, and uses a fixed, non-gimbaled camera. Algorithms are tailored towards low size, weight, and power constraints, as all compute and sensing components weigh less than 100 grams. The foundation of this thesis extends current efforts by localizing the UGV relative to the UAS using neural-network object detection and the camera's intrinsic properties instead of commonplace fiducial markers. An object detection neural network is used to detect the UGV within an image captured by the camera on board the UAS. A localization algorithm then uses the UGV's pixel position within the image to estimate the UGV's position relative to the UAS. This estimated position is passed to a command generator that sends setpoints to the onboard PX4 flight control unit (FCU). This autonomous landing system was developed and validated within a high-fidelity simulation environment before conducting outdoor experiments.
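The localization step described above, turning a detected pixel position plus camera intrinsics into a relative position, can be sketched with the pinhole model. The thesis's exact algorithm is not reproduced here; the sketch below assumes a downward-facing camera over flat ground at a known altitude, and the intrinsics and altitude are made-up example values.

```python
# Hedged sketch: back-project a detected pixel to a relative ground
# position, assuming a nadir-pointing pinhole camera over flat ground.
# Function name and all numeric values are illustrative assumptions.

def pixel_to_ground_offset(u, v, fx, fy, cx, cy, altitude_m):
    """Return the (x, y) offset of the target from the camera, in metres.

    (u, v): pixel centre of the detected UGV bounding box.
    (fx, fy): focal lengths in pixels; (cx, cy): principal point.
    altitude_m: camera height above the (assumed flat) ground plane.
    """
    # Similar triangles in the pinhole model: X / Z = (u - cx) / fx,
    # with depth Z equal to the altitude for a downward-facing camera.
    x = (u - cx) * altitude_m / fx
    y = (v - cy) * altitude_m / fy
    return x, y

# Example: 640x480 image, target detected 100 px right of centre at 5 m.
x, y = pixel_to_ground_offset(420, 240, fx=500.0, fy=500.0,
                              cx=320.0, cy=240.0, altitude_m=5.0)
print(x, y)  # → 1.0 0.0
```

A real pipeline would additionally rotate this camera-frame offset by the UAS's attitude before generating setpoints, since the camera is fixed rather than gimbaled.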
System-Level Analysis of Autonomous UAV Landing Sensitivities in GPS-Denied Environments
This paper presents an analysis of the navigation accuracy of a fixed-wing Unmanned Aerial Vehicle (UAV) landing on an aircraft carrier. The UAV is equipped with typical sensors used in landing scenarios. Data from the Office of Naval Research is used to accurately capture the behavior of the aircraft carrier. Through simulation, the position and orientation of both the UAV and the carrier are estimated. The quality of the UAV's sensors is varied to determine the sensitivity of these estimates to sensor accuracy. The system's sensitivity to GPS signals and to visual markers on the carrier is also analyzed. These results allow designers to choose the most economical sensors for landing systems that still provide a safe and accurate landing.
Reliable Navigation for SUAS in Complex Indoor Environments
Indoor environments pose a particular challenge for Unmanned Aerial Vehicles (UAVs). Effective navigation through these GPS-denied environments requires alternative localization systems, as well as methods of sensing and avoiding obstacles while remaining on task. Additionally, the relatively small clearances and human presence characteristic of indoor spaces necessitate a higher level of precision and adaptability than is common in traditional UAV flight planning and execution. This research blends the optimization of individual technologies, such as state estimation and environmental sensing, with system integration and high-level operational planning.
The combination of AprilTag visual markers, multi-camera Visual Odometry, and IMU data can be used to create a robust state estimator that describes the position, velocity, and rotation of a multicopter within an indoor environment. However, these data sources have unique, nonlinear characteristics that must be understood to plan effectively for their use in an automated environment. The research described herein begins by analyzing the unique characteristics of these data streams in order to create a highly accurate, fault-tolerant state estimator.
Upon this foundation, the system built, tested, and described herein uses visual markers as navigation anchors and visual odometry for motion estimation and control, and then uses depth sensors to maintain an up-to-date map of the UAV's immediate surroundings. It develops and continually refines navigable routes through a novel combination of pre-defined and sensory environmental data. Emphasis is placed on the real-world development and testing of the system, through discussion of computational resource management and risk reduction.
Contributions to the use of markers for Autonomous Navigation and Augmented Reality
Square planar markers are widely used tools for localization and tracking due to their low cost and high performance. Many applications in robotics, unmanned vehicles, and augmented reality employ these markers for camera pose estimation with high accuracy. Nevertheless, marker-based systems are affected by several factors that limit their performance. First, the marker detection process is a time-consuming task, and the cost grows as the image size increases. As a consequence, current high-resolution cameras have weakened the processing efficiency of traditional marker systems. Second, marker detection is affected by the presence of noise, blurring, and occlusion. Camera movement produces image blur, even for small movements. Furthermore, the marker may be partially or completely occluded in the image, so that it is no longer detected. This thesis addresses the above limitations, proposing novel methodologies and strategies for successful marker detection that improve both the efficiency and robustness of these systems. First, a novel multi-scale approach has been developed to speed up the marker detection process. The method takes advantage of the different resolutions at which the image is represented to predict at runtime the optimal scale for detection and identification, and follows a corner-upsampling strategy necessary for accurate pose estimation. Second, we introduce a new marker design, the Fractal Marker, which, using a novel keypoint-based method, achieves detection even under severe occlusion while allowing detection over a wider range of distances than traditional markers.
Finally, we propose a new marker detection strategy based on Discriminative Correlation Filters (DCF), in which the marker and its corners, represented in the frequency domain, yield more robust and faster detections than state-of-the-art methods, even under extreme blur conditions.
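The multi-scale idea, predicting at runtime the cheapest image scale at which the marker is still detectable, can be sketched with simple pinhole geometry. The thesis's actual prediction model is not reproduced here; the minimum detectable size, pyramid factor, and all numeric values below are assumptions for illustration.

```python
# Sketch of runtime scale selection for marker detection: predict the
# marker's apparent pixel size, then choose the coarsest pyramid level
# at which it remains detectable, so detection runs on the smallest
# (cheapest) image possible. Thresholds are illustrative assumptions.

def apparent_size_px(marker_side_m, distance_m, focal_px):
    """Projected side length of a square marker under the pinhole model."""
    return focal_px * marker_side_m / distance_m

def optimal_pyramid_level(apparent_px, min_detectable_px=20.0,
                          scale_factor=2.0, max_levels=4):
    """Coarsest pyramid level where the marker stays above the threshold."""
    level = 0
    while (level + 1 < max_levels and
           apparent_px / scale_factor ** (level + 1) >= min_detectable_px):
        level += 1
    return level

# Example: a 20 cm marker seen from 1 m with an 800 px focal length
# projects to 160 px, so detection can run three pyramid levels down.
size = apparent_size_px(marker_side_m=0.2, distance_m=1.0, focal_px=800.0)
print(size, optimal_pyramid_level(size))  # → 160.0 3
```

Detection at the coarse level would then be followed by the corner-upsampling step the abstract mentions, refining the four corners at full resolution so the pose estimate does not inherit the coarse level's quantization error.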