57 research outputs found

    Towards an autonomous vision-based unmanned aerial system against wildlife poachers

    Poaching is an illegal activity that remains out of control in many countries. According to the 2014 report of the United Nations and Interpol, the illegal trade in global wildlife and natural resources amounts to nearly $213 billion every year and is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, traditional methods of fighting poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new sensor and algorithm technologies, as well as aerial platforms, is crucial to counter the sharp increase in poaching activity over the last few years. Our work focuses on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. Peer Reviewed
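
    As a rough illustration of the vehicle-following idea (not the authors' implementation), the sketch below maps a detected bounding box to velocity commands for a quadrotor: pixel offsets from the image centre drive yaw and climb, while the apparent target size regulates the forward speed. The detector output format, the gains, and the command interface are assumptions made for illustration only.

        # Hypothetical sketch of vision-based target following for a quadrotor.
        # Detector output, gains and the velocity interface are illustrative
        # assumptions, not the system described in the abstract.

        def follow_target(bbox, image_w, image_h, k_yaw=0.002, k_z=0.002,
                          k_fwd=0.5, desired_area_ratio=0.05):
            """Map a target bounding box (x, y, w, h) to body-frame velocity commands."""
            x, y, w, h = bbox
            cx, cy = x + w / 2.0, y + h / 2.0

            # Pixel offsets from the image centre drive yaw and vertical velocity.
            err_x = cx - image_w / 2.0
            err_y = cy - image_h / 2.0

            # The apparent size of the target regulates forward speed (keeps distance).
            area_ratio = (w * h) / float(image_w * image_h)
            err_size = desired_area_ratio - area_ratio

            yaw_rate = -k_yaw * err_x          # turn toward the target
            climb_rate = -k_z * err_y          # keep the target vertically centred
            forward_speed = k_fwd * err_size   # approach or back off

            return forward_speed, climb_rate, yaw_rate

        # Example: a 60x40 px detection in a 640x480 frame, offset to the right.
        print(follow_target((400, 220, 60, 40), 640, 480))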

    An Omnidirectional Aerial Platform for Multi-Robot Manipulation

    The objectives of this work were the modeling, control, and prototyping of a new fully-actuated aerial platform. Multirotor aerial platforms are commonly under-actuated vehicles, since the total propeller thrust cannot be directed in an arbitrary direction without inducing a rotation of the vehicle body. The most common fully-actuated aerial platforms have tilted or tilting rotors, which amplify the aerodynamic perturbations between the propellers, reducing efficiency and the available thrust. To overcome this limitation, a novel platform, the ODQuad (OmniDirectional Quadrotor), has been proposed; it is composed of three main parts (the platform, mobile, and rotor frames) linked by two rotational joints, namely the roll and pitch joints. The ODQuad is able to orient the total thrust by moving only the propeller frame through the roll and pitch joints. Kinematic and dynamic models of the proposed multirotor have been derived using the Euler-Lagrange approach, and a model-based controller has been designed. The latter is based on two control loops: an outer loop for vehicle position control and an inner one for vehicle orientation and roll-pitch joint control. The effectiveness of the controller has been tested by means of numerical simulations in the MATLAB SimMechanics environment; in particular, tests in free motion and in object transportation tasks have been carried out. In the transportation task simulation, a momentum-based observer is used to estimate the wrenches exchanged between the vehicle and the transported object. The ODQuad concept has also been tested in cooperative manipulation tasks. To this aim, a simulation model was considered in which multiple ODQuads manipulate a bulky object whose unknown inertial parameters are identified in the first phase of the simulation. To reduce the mechanical stresses due to the manipulation and to enhance robustness to environmental interactions, two admittance filters have been implemented: an external filter acting on the object motion and an internal filter local to each multirotor. Finally, the prototyping process has been illustrated step by step. In particular, three CAD models have been designed. The ODQuad.01 has been used in the simulations and in a preliminary static analysis that investigated the torque values for a rough sizing of the roll-pitch joint actuators. Since the ODQuad.01 did not take component specifications and the related manufacturing techniques into account, a subsequent model, the ODQuad.02, has been designed. The ODQuad.02 design can be built with aluminum or carbon-fiber profiles and 3D-printed parts, but each component must be custom manufactured. Finally, to shorten the prototype development time, the ODQuad.03 has been created, which integrates some components of the off-the-shelf Holybro X500 quadrotor into a novel custom-built mechanical frame.
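
    The momentum-based observer mentioned for the transportation task can be illustrated, for the translational dynamics only, by the hedged sketch below: the observer integrates the commanded thrust and gravity, compares the result with the measured momentum, and attributes the residual to an external force. Mass, gain, and signal names are illustrative assumptions, not the thesis formulation.

        import numpy as np

        # Illustrative momentum-based external force observer (translational part only).
        # Gains and signals are assumptions, not the exact formulation of the work above.

        class MomentumObserver:
            def __init__(self, mass, gain, dt):
                self.m = mass
                self.K = gain                  # observer gain (1/s)
                self.dt = dt
                self.f_hat = np.zeros(3)       # estimated external force
                self.integral = np.zeros(3)    # integral of (thrust + gravity + f_hat)

            def update(self, velocity, thrust_world):
                """velocity: measured linear velocity; thrust_world: commanded thrust in world frame."""
                g = np.array([0.0, 0.0, -9.81])
                p = self.m * velocity                      # measured linear momentum
                self.integral += (thrust_world + self.m * g + self.f_hat) * self.dt
                self.f_hat = self.K * (p - self.integral)  # residual attributed to external force
                return self.f_hat

        # Example update with placeholder measurements.
        obs = MomentumObserver(mass=1.5, gain=5.0, dt=0.01)
        print(obs.update(np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 15.0])))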

    Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs

    The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing a GPS signal. The central idea is to automate the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine learning-based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy.
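
    As a hedged sketch of the classical-vision step in the final approach, the snippet below recovers the camera pose relative to a bar-shaped visual cue of known size from its detected image corners using a standard PnP solver. The bar dimensions, pixel coordinates, and camera intrinsics are placeholder values, not the system's actual cue geometry or calibration.

        import numpy as np
        import cv2

        # Hypothetical sketch: recover the relative pose of a bar-shaped marker of
        # known size from its detected image corners. All numbers are placeholders.

        # 3D corners of a 2.0 m x 0.1 m bar in its own frame (metres).
        object_pts = np.array([[-1.0, -0.05, 0.0],
                               [ 1.0, -0.05, 0.0],
                               [ 1.0,  0.05, 0.0],
                               [-1.0,  0.05, 0.0]], dtype=np.float64)

        # Corresponding detected pixel corners (placeholder detections).
        image_pts = np.array([[210.0, 330.0],
                              [430.0, 325.0],
                              [432.0, 345.0],
                              [212.0, 350.0]], dtype=np.float64)

        # Pinhole intrinsics (placeholder calibration); lens distortion neglected.
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        if ok:
            # tvec is the marker position in the camera frame; its norm is the range
            # a landing controller would regulate during the final approach.
            print("relative position [m]:", tvec.ravel(), "range:", np.linalg.norm(tvec))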

    Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System

    Autonomously operating UAVs demand fast localization for navigation, for actively exploring unknown areas, and for creating maps. For pose estimation, many UAV systems use a combination of GPS receivers and inertial measurement units (IMUs). However, GPS signal coverage may drop occasionally, especially in the close vicinity of objects, and precise IMUs are too heavy to be carried by lightweight UAVs. This, together with the high cost of high-quality IMUs, motivates the use of inexpensive vision-based sensors for localization using visual odometry or visual SLAM (simultaneous localization and mapping) techniques. The first contribution of this thesis is a more general approach to bundle adjustment with an extended version of the projective coplanarity equation, which enables the use of omnidirectional multi-camera systems that may consist of fisheye cameras capable of capturing a large field of view in one shot. We use ray directions as observations instead of image points, so our approach does not rely on a specific projection model, assuming only a central projection. In addition, our approach allows the integration and estimation of points at infinity, which classical bundle adjustments are not capable of. We show that the integration of far or infinitely far points stabilizes the estimation of the rotation angles of the camera poses. In the second contribution, we employ this approach to bundle adjustment in a highly integrated system for incremental pose estimation and mapping on lightweight UAVs. Based on the image sequences of a multi-camera system, our system uses tracked feature points to incrementally build a sparse map and incrementally refines this map using the iSAM2 algorithm. Our system can optionally integrate GPS information at the level of carrier-phase observations, even in underconstrained situations, e.g. if only two satellites are visible, for georeferenced pose estimation. This way, we are able to use all available information in underconstrained GPS situations to keep the mapped 3D model accurate and georeferenced. In the third contribution, we present an approach for re-using existing methods for dense stereo matching with fisheye cameras, which has the advantage that highly optimized existing methods can be applied as a black box without modification, even with cameras that have a field of view of more than 180 degrees. We provide a detailed accuracy analysis of the obtained dense stereo results, which shows the growing uncertainty of observed image points of fisheye cameras due to increasing blur towards the image border. The core of the contribution is a rigorous variance component estimation, which allows the variance of the observed disparities at an image point to be estimated as a function of that point's distance from the principal point. We show that this improved stochastic model provides a more realistic prediction of the uncertainty of the triangulated 3D points.
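
    A minimal sketch of the ray-direction idea behind the extended bundle adjustment: the residual compares the observed ray direction in the camera frame with the direction predicted from the camera pose and the 3D point, so no specific (fisheye) projection model is required beyond a central projection. Function names and the example data are illustrative, not the thesis implementation.

        import numpy as np

        # Illustrative residual for a ray-direction based bundle adjustment.
        # Observations are unit ray directions in the camera frame.

        def ray_residual(point_w, cam_R, cam_t, observed_ray):
            """Misfit between the observed ray and the predicted direction.

            point_w      3D point in world coordinates
            cam_R, cam_t world-to-camera rotation and translation
            observed_ray unit direction of the observed ray in the camera frame
            """
            p_cam = cam_R @ point_w + cam_t          # point expressed in the camera frame
            predicted = p_cam / np.linalg.norm(p_cam)
            # Cross product of the two unit directions; its magnitude is sin(angle).
            return np.cross(predicted, observed_ray)

        # Example: camera at the origin looking along +z, point slightly off-axis.
        R, t = np.eye(3), np.zeros(3)
        X = np.array([0.1, 0.0, 10.0])
        ray = np.array([0.0, 0.0, 1.0])
        print(ray_residual(X, R, t, ray))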

    Radicalization of Airspace Security: Prospects and Botheration of Drone Defense System Technology

    The development of a comprehensive and decisive drone defense integrated control system that can provide maximum security is crucial for maintaining territorial integrity and for accelerating smart aerial mobility to sustain the emerging drone transportation system (DTS) for priority-based logistics and mobile communication. This study explores recent developments in the design of robust drone defense system (DDS) controls that can observe and respond not only to drone attacks inside and outside a facility, but also monitor, at a glance, equipment data such as CCTV security control on the ground and security sensors in the facility. It also considers DDS strategies, schemas, and innovative security setups in different regions. Finally, open research issues in DDS design are discussed, and useful recommendations are provided. Effective means of drone source authentication, delivery package verification, operator authorization, and dynamic scenario-specific engagement are solicited for a comprehensive DDS design for maximum security.

    RF-based automated UAV orientation and landing system

    The number of Unmanned Aerial Vehicle (UAV) applications is growing tremendously. The most critical applications are operations in use cases such as natural disasters and rescue activities, many of which take place in water scenarios. A standalone niche covering autonomous UAV operation is thus becoming increasingly important. One of the crucial parts of such operations is a technology capable of landing an autonomous UAV on a moving platform on the water surface, which is not possible without precise UAV positioning. However, conventional strategies that rely on satellite positioning are not always reliable, due to accuracy errors caused by surrounding environmental conditions, strong interference, or other factors that could lead to the loss of the UAV. The development of an independent precise landing technology is therefore essential. The main objective of this thesis is to develop a precise landing framework by applying indoor positioning techniques based on RF anchors to autonomous outdoor UAV operations, for cases in which a lower accuracy error than that provided by the Global Navigation Satellite System (GNSS) is required. In order to analyze the landing technology, a simulation tool was developed. The developed positioning strategy is based on a modified Gauss-Newton method, which takes as input parameters the number of anchors, the spacing between them, and the initial UAV position, and uses the Friis transmission formula to calculate the distances between the anchors and the UAV. Its output is a calculated UAV position with an accuracy in the range of tens of centimeters. The simulation campaign shows how the number of anchors and their spacing affect positioning accuracy, and identifies the Gauss-Newton parameter value that maximizes system performance. The results show that this approach can be applied in a real-life scenario, given both the high accuracy achieved and the near-perfect estimated landing trajectory. Keywords: UAV, Positioning, Automatic Landing, Simulation
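
    A hedged sketch of the two ingredients described above: inverting the Friis transmission formula to obtain range estimates from received power, and a Gauss-Newton iteration that refines the UAV position from the ranges to several anchors. The anchor layout, powers, gains, and frequency are placeholder assumptions, not the thesis parameters.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def friis_distance(p_rx_dbm, p_tx_dbm, g_tx_db, g_rx_db, freq_hz):
            """Invert the free-space Friis transmission equation to estimate range."""
            lam = C / freq_hz
            path_loss_db = p_tx_dbm + g_tx_db + g_rx_db - p_rx_dbm
            return lam / (4.0 * np.pi) * 10.0 ** (path_loss_db / 20.0)

        def gauss_newton_position(anchors, ranges, x0, iters=10):
            """Refine a 3D position so that its distances to the anchors match the ranges."""
            x = np.array(x0, dtype=float)
            for _ in range(iters):
                diff = x - anchors                    # (N, 3) vectors to the anchors
                dist = np.linalg.norm(diff, axis=1)   # predicted ranges
                r = dist - ranges                     # residuals
                J = diff / dist[:, None]              # Jacobian of ||x - a_i|| w.r.t. x
                x -= np.linalg.lstsq(J, r, rcond=None)[0]
            return x

        # Placeholder example: four anchors and a noise-free range measurement set.
        anchors = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0],
                            [0.0, 5.0, 0.0], [5.0, 5.0, 1.0]])
        true_pos = np.array([2.0, 3.0, 4.0])
        ranges = np.linalg.norm(anchors - true_pos, axis=1)
        print(gauss_newton_position(anchors, ranges, x0=[1.0, 1.0, 1.0]))
        print(friis_distance(p_rx_dbm=-60.0, p_tx_dbm=20.0, g_tx_db=2.0, g_rx_db=2.0, freq_hz=2.4e9))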

    Autonomous environmental protection drone

    During the summer, forest fires are the main cause of deforestation and of the damage to homes and property in communities around the world. The use of Unmanned Aerial Vehicle (UAV, also known as drone) applications has increased in recent years, making them an excellent solution for difficult tasks such as wildlife conservation and forest fire prevention. A forest fire detection system can be an answer to these tasks. Using a visual camera and a Convolutional Neural Network (CNN) for image processing on a UAV can result in an efficient fire detection system. However, a fully autonomous system, able to observe and detect fires in a given geographical area 24 hours a day without human intervention, requires a platform and automatic recharging procedures. This dissertation combines technologies such as CNNs, Real Time Kinematics (RTK), and Wireless Power Transfer (WPT) with an on-board computer and software, resulting in a fully automated system that makes forest surveillance more efficient and, in doing so, reallocates human resources to other locations where they are most needed.
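
    As a hedged illustration of the image-processing component, the sketch below defines a small convolutional network that classifies camera frames as fire or no fire. The architecture, input resolution, and framework choice (PyTorch) are placeholder assumptions, not the network used in the dissertation.

        import torch
        import torch.nn as nn

        # Illustrative fire / no-fire frame classifier; layer sizes and the
        # 128x128 input resolution are placeholder choices.

        class FireCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 16 * 16, 64), nn.ReLU(),
                    nn.Linear(64, 2),   # logits: [no_fire, fire]
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        # Example: classify a single 128x128 RGB frame (random data here).
        model = FireCNN()
        frame = torch.rand(1, 3, 128, 128)
        print(torch.softmax(model(frame), dim=1))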