
    A Comprehensive Review of AI-enabled Unmanned Aerial Vehicle: Trends, Vision, and Challenges

    In recent years, the combination of artificial intelligence (AI) and unmanned aerial vehicles (UAVs) has brought about advancements in various areas. This comprehensive analysis explores the changing landscape of AI-powered UAVs and environmentally friendly computing in their applications. It covers emerging trends, futuristic visions, and the inherent challenges that come with this relationship. The study examines how AI enables navigation, object detection and tracking, wildlife monitoring, precision agriculture, rescue operations, surveillance, and communication among UAVs using environmentally conscious computing techniques. By delving into the interaction between AI and UAVs, this analysis highlights the potential of these technologies to revolutionise industries such as agriculture, surveillance, and disaster management. While envisioning possibilities, it also examines ethical considerations, safety concerns, regulatory frameworks still to be established, and the responsible deployment of AI-enhanced UAV systems. By consolidating insights from research endeavours in this field, this review provides an understanding of the evolving landscape of AI-powered UAVs while setting the stage for further exploration in this transformative domain.

    LiDAR based multi-sensor fusion for localization, mapping, and tracking

    The development of fully autonomous driving vehicles has become a key focus for both industry and academia over the past decade, fostering significant progress in situational awareness abilities and sensor technology. Among various types of sensors, the LiDAR sensor has emerged as a pivotal component in many perception systems due to its long-range detection capabilities, precise 3D range information, and reliable performance in diverse environments. With advancements in LiDAR technology, more reliable and cost-effective sensors have shown great potential for improving situational awareness abilities in widely used consumer products. By leveraging these novel LiDAR sensors, researchers now have a diverse set of powerful tools to effectively tackle the persistent challenges in localization, mapping, and tracking within existing perception systems. This thesis explores LiDAR-based sensor fusion algorithms to address perception challenges in autonomous systems, with a primary focus on dense mapping and global localization using diverse LiDAR sensors. The research involves the integration of novel LiDARs, IMU, and camera sensors to create a comprehensive dataset essential for developing advanced sensor fusion and general-purpose localization and mapping algorithms.
Innovative methodologies for global localization across varied environments are introduced. These methodologies include a robust multi-modal LiDAR-inertial odometry and a dense mapping framework, which enhance mapping precision and situational awareness. The study also integrates solid-state LiDARs with camera-based deep-learning techniques for object tracking, refining mapping accuracy in dynamic environments. These advancements significantly enhance the reliability and efficiency of autonomous systems in real-world scenarios. The thesis commences with an introduction to innovative sensors and a data collection platform. It proceeds by presenting an open-source dataset designed for the evaluation of advanced SLAM algorithms, utilizing a unique ground-truth generation method. Subsequently, the study tackles two localization challenges, in forest and urban environments. Furthermore, it highlights the MM-LOAM dense mapping framework. Additionally, the research explores object-tracking tasks, employing both camera and LiDAR technologies for human and micro-UAV tracking.
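A basic building block of the LiDAR odometry and mapping pipelines summarized above is transforming each scan from the sensor frame into a common map frame using the estimated pose. The sketch below illustrates that single step with NumPy; the function name and the example pose are illustrative, not taken from the thesis.

```python
import numpy as np

def transform_points(points, R, t):
    """Transform an (N, 3) LiDAR point cloud from the sensor frame into
    the map frame, given rotation R (3x3) and translation t (3,)."""
    return points @ R.T + t

# Illustrative pose: a 90-degree yaw rotation plus a 1 m shift along x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

scan = np.array([[1.0, 0.0, 0.0]])    # one point 1 m ahead of the sensor
world = transform_points(scan, R, t)  # point expressed in the map frame
```

Accumulating many scans transformed this way is what produces the dense map; the hard part the thesis addresses is estimating R and t reliably from LiDAR, IMU, and camera data.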

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detections must run in real time to allow vehicles to actuate and avoid collision. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve obstacle detection and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep-learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared with a state-of-the-art object detector, Faster R-CNN, DeepAnomaly detects humans better and at longer ranges (45-90 m) in an agricultural use case, with a smaller memory footprint and 7.3-times faster processing.
Low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto and as GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents several scientific contributions and state-of-the-art results in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
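The occupancy-grid fusion the abstract describes rests on a standard idea: each sensor's inverse sensor model assigns a per-cell occupancy probability, and cells are updated in log-odds so that independent sensor observations reduce to addition. A minimal sketch of that fusion step, with made-up detection probabilities (not TractorEYE's actual models):

```python
import numpy as np

def logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse(grid_logodds, p_occ):
    """Fold one inverse-sensor-model observation (per-cell occupancy
    probabilities) into the grid; independent updates simply add."""
    return grid_logodds + logodds(p_occ)

def to_prob(l):
    """Convert log-odds back to probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

grid = np.zeros(3)  # three cells, prior p = 0.5 (log-odds 0)
grid = fuse(grid, np.array([0.9, 0.5, 0.2]))  # e.g. a camera detection
grid = fuse(grid, np.array([0.8, 0.5, 0.2]))  # e.g. a lidar detection
probs = to_prob(grid)
# cell 0: both sensors agree on occupied -> high probability
# cell 1: no evidence either way       -> stays at 0.5
# cell 2: both sensors agree on free   -> low probability
```

The appeal of this formulation is that adding a new detector (thermal, radar) requires only another call to `fuse`, which matches the multi-sensor design the abstract describes.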

    Feature Papers of Drones - Volume I

    [EN] The present book is divided into two volumes (Volume I: articles 1–23, and Volume II: articles 24–54), which compile the articles and communications submitted to the Topical Collection "Feature Papers of Drones" during the years 2020 to 2022, describing novel or cutting-edge designs, developments, and/or applications of unmanned vehicles (drones). Articles 1–8 are devoted to developments in drone design, where new concepts and modeling strategies as well as effective designs that improve drone stability and autonomy are introduced. Articles 9–16 focus on the communication aspects of drones, as effective strategies for smooth deployment and efficient functioning are required; therefore, several developments that aim to optimize performance and security are presented. In this regard, one of the most directly related topics is drone swarms, not only in terms of communication but also human-swarm interaction and their applications for science missions, surveillance, and disaster rescue operations. Concluding Volume I on drone improvements, articles 17–23 discuss the advancements associated with autonomous navigation, obstacle avoidance, and enhanced flight planning.

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef monitoring; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    UAV vision system: Application in electric line following and 3D reconstruction of associated terrain

    Abstract. In this work, a set of vision techniques applied to UAV (Unmanned Aerial Vehicle) images is presented. The techniques are used to detect electrical lines and towers, which are used for vision-based navigation and for 3D reconstruction of the associated terrain. The developed work is intended as a preliminary stage for autonomous electrical-infrastructure inspection. This work is divided into four stages: power-line detection, transmission-tower detection, UAV navigation, and 3D reconstruction of the associated terrain. In the first stage, a study of algorithms for line detection was performed. Subsequently, an algorithm for line detection called CBS (Circle Based Search) was developed, which gives good results on azimuthal images. This method offers a shorter response time than the Hough transform and the LSD (Line Segment Detector) algorithm, and a response similar to EDLines, one of the fastest and most reliable algorithms for line detection. Given that most work related to line detection focuses on straight lines, an algorithm for catenary detection based on a segment-concatenation process was developed. This algorithm was validated using real power-line inspection images containing catenaries. Additionally, a tower detection method based on a feature descriptor, capable of detecting towers in times close to 100 ms, was developed. Navigation over power lines using UAVs requires extensive testing because of the risk of failures and accidents. For this reason, a virtual environment for real-time simulation of UAV visual navigation was developed using ROS (Robot Operating System), which is open source. An onboard visual navigation system for UAVs was also developed; it allows the UAV to navigate by following a power line in real-world scenarios using the developed techniques. In the last part, a 3D tower reconstruction using images obtained with UAVs is presented.
    Keywords: line detection, inspection, navigation, tower detection, onboard vision system, UAV.