
    Dynamic Landing of an Autonomous Quadrotor on a Moving Platform in Turbulent Wind Conditions

    Full text link
    Autonomous landing on a moving platform presents unique challenges for multirotor vehicles, including the need to accurately localize the platform, plan trajectories quickly, and control the vehicle precisely and robustly. Previous works have studied this problem, but most lack explicit consideration of wind disturbance, which typically forces slow descents onto the platform. This work presents a fully autonomous vision-based system that addresses these limitations by tightly coupling localization, planning, and control, thereby enabling fast and accurate landing on a moving platform. The platform's position, orientation, and velocity are estimated by an extended Kalman filter using simulated GPS measurements when the quadrotor-platform distance is large, and by a visual fiducial system when the platform is nearby. The landing trajectory is computed online using receding horizon control and is followed by a boundary layer sliding controller that provides tracking performance guarantees in the presence of unknown, but bounded, disturbances. To improve performance, the controller accounts for the characteristics of the turbulent conditions. The landing trajectory is fast, direct, and does not require hovering over the platform, as is typical of most state-of-the-art approaches. Simulations and hardware experiments are presented to validate the robustness of the approach. Comment: 7 pages, 8 figures, ICRA 2020 accepted paper
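    The boundary-layer sliding controller mentioned in the abstract can be illustrated on a 1-D double integrator. This is a minimal sketch under that simplifying assumption, not the authors' implementation; the gains `lam`, `eta`, `phi`, and the disturbance bound `d_max` are illustrative values.

```python
import numpy as np

def sat(x):
    """Saturation: linear inside the boundary layer, clipped outside."""
    return np.clip(x, -1.0, 1.0)

def boundary_layer_smc(z, z_dot, z_ref, z_dot_ref, z_ddot_ref,
                       lam=2.0, eta=1.5, d_max=1.0, phi=0.1):
    """Boundary-layer sliding controller for a 1-D double integrator
    z_ddot = u + d with an unknown but bounded disturbance |d| <= d_max.
    The sliding surface s = e_dot + lam*e collapses the tracking error;
    the switching gain (d_max + eta) dominates the disturbance, and the
    boundary layer of width phi trades chattering for a bounded error."""
    e = z - z_ref
    e_dot = z_dot - z_dot_ref
    s = e_dot + lam * e
    return z_ddot_ref - lam * e_dot - (d_max + eta) * sat(s / phi)
```

    Outside the boundary layer, s*s_dot <= -eta*|s| holds for any admissible disturbance, which is the source of the tracking guarantee the abstract refers to.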

    Cooperation of unmanned systems for agricultural applications: A theoretical framework

    Get PDF
    Agriculture 4.0 comprises a set of technologies that combines sensors, information systems, enhanced machinery, and informed management with the objective of optimising production by accounting for variabilities and uncertainties within agricultural systems. Autonomous ground and aerial vehicles can lead to favourable improvements in management by performing in-field tasks in a time-effective way. In particular, greater benefits can be achieved by allowing cooperation and collaborative action among unmanned vehicles, both aerial and ground, to perform in-field operations precisely and efficiently. This work performs the preliminary and crucial step of analysing and understanding the technical and methodological challenges posed by the main problems involved. An overview of the agricultural scenarios that can benefit from using collaborative machines and the corresponding cooperative schemes typically adopted in this framework is presented. A collection of kinematic and dynamic models for different categories of autonomous aerial and ground vehicles is provided, which represents a key step in understanding the vehicles' behaviour when full autonomy is desired. Finally, a collection of the state-of-the-art technologies for the autonomous guidance of drones is provided, summarising their characteristics and highlighting their advantages and shortcomings with a specific focus on the Agriculture 4.0 framework. A companion paper reports the application of some of these techniques in a complete case study in sloped vineyards, applying the proposed multi-phase collaborative scheme introduced here.
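    As an example of the kinematic models such a survey collects, the unicycle model commonly used for agricultural ground robots fits in a few lines. This sketch and its parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def unicycle_step(state, v, omega, dt):
    """One Euler step of the unicycle kinematics often used for ground
    robots: state = (x, y, theta), inputs are forward speed v [m/s]
    and turn rate omega [rad/s]."""
    x, y, theta = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    # wrap heading to (-pi, pi]
    return np.array([x, y, (theta + np.pi) % (2 * np.pi) - np.pi])

# Example: drive a slow arc for 10 s
state = np.zeros(3)
for _ in range(1000):
    state = unicycle_step(state, v=1.0, omega=0.2, dt=0.01)
```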

    Methodological foundations of using UAVs to monitor weed infestation in crops

    Get PDF
    Purpose. To develop methodological approaches to using quadcopters and free software for assessing weed infestation of crops. Methods. Imagery was acquired with DJI Phantom Vision 2+ and LadyBug copters. The LadyBug shot in the visible and near-infrared ranges with a 12-megapixel S100 NDVI UAV-Kit camera from altitudes of 20 m, 40 m and 60 m; the DJI Phantom Vision 2+ shot in the visible range with a 14-megapixel GoPro camera from altitudes of 10 m, 15 m, 30 m and 60 m. Images were interpreted by supervised classification in QGIS and TNTmips. Weeds were counted on 1 m² control plots by the weight method, taking their species composition into account. Results. The best weed recognition during image interpretation was obtained with supervised maximum-likelihood classification of images taken from altitudes of up to 40 m. To improve weed recognition and separate weeds from cultivated plants in the imagery, object-oriented analysis is advisable. At the sunflower budding stage, about 30% of weeds were hidden from remote observation by the sunflower canopy, which led to automatic underestimation of weed counts. Conclusions. UAV imagery in the visible range of electromagnetic waves can be used successfully to assess crop weed infestation, provided images are taken from low altitudes (up to 40 m) and interpreted by supervised classification. For weed recognition, near-infrared images offer no advantage over visible-range images. Additional ground-based weed surveys are needed to estimate the share of weeds hidden from remote observation.
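    The supervised maximum-likelihood classification used here (as offered by tools such as QGIS and TNTmips) amounts to a per-pixel Gaussian classifier. The sketch below is a minimal illustration of that rule, not the internals of either tool; the function names and training-data layout are assumptions.

```python
import numpy as np

def fit_class_stats(training):
    """Per-class Gaussian statistics from labelled training pixels.
    training: dict mapping class name -> (N, bands) array."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in training.items()}

def ml_classify(pixels, stats):
    """Assign each pixel (rows of an (N, bands) array) to the class with
    the largest Gaussian log-likelihood (the maximum-likelihood rule)."""
    classes, scores = list(stats), []
    for c in classes:
        mu, cov = stats[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # -0.5 * (log|cov| + squared Mahalanobis distance), constants dropped
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.array(classes)[np.argmax(scores, axis=0)]
```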

    Circular formation control of fixed-wing UAVs with constant speeds

    Full text link
    In this paper we propose an algorithm for stabilizing circular formations of fixed-wing UAVs flying at constant speeds. The algorithm is based on tracking circles with different radii in order to control the inter-vehicle phases with respect to a target circumference. We prove that the desired equilibrium is exponentially stable, and, thanks to the guidance vector field that steers the vehicles, the algorithm extends to other closed trajectories. One of the main advantages of this approach is that the algorithm guarantees the confinement of the team to a specific area, even when communication or sensing among vehicles is lost. We show the effectiveness of the algorithm with an actual formation flight of three aircraft. The algorithm is ready for general use in the open-source Paparazzi autopilot. Comment: 6 pages, submitted to IROS 201
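    A common formulation of a guidance vector field for a circular path is sketched below. It is an illustration of the general technique, not the Paparazzi implementation; the gain `k` and the normalization are assumptions.

```python
import numpy as np

def gvf_circle(p, r, center=np.zeros(2), k=1e-3):
    """Guidance vector field for a circle of radius r around `center`.
    e = ||p - c||^2 - r^2 is the implicit path error; the field blends
    the tangent of the level set with a correction toward e = 0."""
    q = p - center
    e = q @ q - r**2            # signed error w.r.t. the circle
    grad = 2.0 * q              # gradient of the level function
    tangent = np.array([-grad[1], grad[0]])  # gradient rotated 90 degrees
    v = tangent - k * e * grad
    return v / np.linalg.norm(v)  # desired unit course direction
```

    Per the abstract, inter-vehicle phases can then be controlled by temporarily assigning each aircraft a slightly different radius r, which changes its angular progression along the target circumference at constant airspeed.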

    Automated crop plant counting from very high-resolution aerial imagery

    Get PDF
    Knowing before harvest how many plants have emerged and how they are growing is key to optimising labour and the efficient use of resources. Unmanned aerial vehicles (UAVs) are a useful tool for fast and cost-efficient data acquisition. However, the imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is presented. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning with convolutional neural networks to identify and count plants. The integrated methods were used to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area of up to 172 m². Moreover, when the spatial resolution decreases by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170 000 plants in an area of 3.5 ha was computed with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that machine vision/learning algorithms make it possible to translate image data into practical information for non-experts.
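    The machine-vision stage (Excess Green Index plus Otsu thresholding) can be sketched compactly. This covers only the segmentation step, not the paper's CNN transfer learning; the function `count_plants` and the connected-components proxy for the plant count are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def count_plants(rgb):
    """Segment vegetation with the Excess Green Index and Otsu's
    threshold, then count connected components as a rough plant count.
    rgb: float array (H, W, 3) scaled to [0, 1]."""
    s = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / s for i in range(3))  # chromatic coordinates
    exg = 2 * g - r - b                            # Excess Green Index
    mask = exg > threshold_otsu(exg)               # vegetation mask
    return label(mask).max()                       # number of components
```

    Counting components over-counts touching plants and under-counts occluded ones, which is why the paper adds a learned classifier on top of this index-based segmentation.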