39 research outputs found

    Thermal infrared video stabilization for aerial monitoring of active wildfires

    Get PDF
    Measuring wildland fire behavior is essential for fire science and fire management. Aerial thermal infrared (TIR) imaging provides outstanding opportunities to acquire such information remotely. Variables such as fire rate of spread (ROS), fire radiative power (FRP), and fireline intensity may be measured explicitly in both time and space, providing the necessary data to study the response of fire behavior to weather, vegetation, topography, and firefighting efforts. However, raw TIR imagery acquired by unmanned aerial vehicles (UAVs) requires stabilization and georeferencing before any other processing can be performed. Aerial video usually suffers from instabilities produced by sensor movement, a problem that is especially acute near an active wildfire due to fire-generated turbulence. Furthermore, the nature of fire TIR video presents specific challenges that hinder robust interframe registration. Therefore, this article presents a software-based video stabilization algorithm designed specifically for TIR imagery of forest fires. After a comparative analysis of existing image registration algorithms, the KAZE feature-matching method was selected and accompanied by pre- and postprocessing modules, including foreground histogram equalization and a multireference framework designed to increase the algorithm's robustness in the presence of missing or faulty frames. The performance of the proposed algorithm was validated on a total of nine video sequences acquired during field fire experiments. The proposed algorithm yielded a registration accuracy 10 to 1,000 times higher than the other tested methods, returned 10 times more meaningful feature matches, and proved robust in the presence of faulty video frames. The ability to automatically cancel camera movement for every frame in a video sequence solves a key limitation in data processing pipelines and opens the door to a number of systematic fire behavior analyses. Moreover, a completely automated process supports the development of decision support tools that can operate in real time during an emergency.
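    As a rough illustration of the registration step described in this abstract, the sketch below pairs histogram equalization with KAZE feature matching in OpenCV. The ratio-test threshold, the RANSAC-fitted affine motion model, and the fallback behaviour are illustrative assumptions, not the authors' exact pipeline (which uses foreground equalization and a multireference framework).

```python
# Minimal sketch of KAZE-based interframe registration for TIR video,
# assuming OpenCV; parameter values are placeholders, not the paper's settings.
import cv2
import numpy as np

def register_frame(ref_gray, cur_gray):
    """Estimate the transform aligning cur_gray onto ref_gray, or None."""
    # Equalization boosts contrast in low-texture TIR frames
    ref_eq = cv2.equalizeHist(ref_gray)
    cur_eq = cv2.equalizeHist(cur_gray)

    kaze = cv2.KAZE_create()
    kp1, des1 = kaze.detectAndCompute(ref_eq, None)
    kp2, des2 = kaze.detectAndCompute(cur_eq, None)
    if des1 is None or des2 is None:
        return None  # faulty frame; a multireference scheme would retry

    # Ratio-test matching (KAZE descriptors are float -> L2 norm)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des2, des1, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return None

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects matches that landed on moving flame pixels
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M  # apply with cv2.warpAffine(cur_gray, M, ref_gray.shape[::-1])
```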

    Augmented Perception for Agricultural Robots Navigation

    Full text link
    Producing food in a sustainable way is becoming very challenging today due to the lack of skilled labor, the unaffordable cost of labor when it is available, and the limited returns for growers as a result of the low produce prices demanded by big supermarket chains, in contrast to the ever-increasing costs of inputs such as fuel, chemicals, seeds, or water. Robotics emerges as a technological advance that can counterbalance some of these challenges, mainly in industrialized countries. However, the deployment of autonomous machines in open environments exposed to uncertainty and harsh ambient conditions poses an important challenge to reliability and safety. Consequently, a deep parametrization of the working environment in real time is necessary to achieve autonomous navigation. This article proposes a navigation strategy for guiding a robot along vineyard rows for field monitoring. Given that global positioning cannot be guaranteed permanently in any vineyard, the strategy is based on local perception and results from fusing three complementary technologies: 3D vision, lidar, and ultrasonics. Several perception-based navigation algorithms were developed between 2015 and 2019. After their comparison in real environments and conditions, results showed that the augmented perception derived from combining these three technologies provides a consistent basis for outlining the intelligent behavior of agricultural robots operating within orchards. This work was supported by the European Union Research and Innovation Programs under Grant N. 737669 and Grant N. 610953. Rovira Más, F.; Sáiz Rubio, V.; Cuenca-Cuenca, A. (2021). Augmented Perception for Agricultural Robots Navigation. IEEE Sensors Journal, 21(10), 11712-11727. https://doi.org/10.1109/JSEN.2020.3016081
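    As a minimal sketch of how redundant range-based estimates might be fused for row following, the snippet below combines lateral-offset readings from the three sensing modalities by inverse-variance weighting and feeds the result to a proportional steering law. The variances, gains, and function names are hypothetical, not taken from the paper.

```python
# Sketch: fuse lateral-offset estimates from stereo vision, lidar, and
# ultrasonics for vineyard-row following; all numeric values are assumptions.
import numpy as np

def fuse_offsets(measurements):
    """Inverse-variance fusion of lateral offsets (m).

    measurements: list of (offset, variance) pairs; an unavailable sensor
    is simply omitted, so navigation degrades gracefully.
    """
    weights = np.array([1.0 / var for _, var in measurements])
    offsets = np.array([off for off, _ in measurements])
    return float(np.sum(weights * offsets) / np.sum(weights))

def steering_command(lateral_offset, heading_error, k_d=1.2, k_psi=0.8):
    """Proportional steering law driving the robot back to the row centerline."""
    return -k_d * lateral_offset - k_psi * heading_error

# e.g. stereo: 0.12 m off-center (var 0.02), lidar: 0.10 (var 0.01),
# ultrasonics: 0.18 (var 0.05)
fused = fuse_offsets([(0.12, 0.02), (0.10, 0.01), (0.18, 0.05)])
cmd = steering_command(fused, heading_error=0.05)
```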

    Robust airborne 3D visual simultaneous localisation and mapping

    Get PDF
    The aim of this thesis is to present robust solutions to technical problems of airborne three-dimensional (3D) Visual Simultaneous Localisation And Mapping (VSLAM). These solutions are developed based on a stereovision system available onboard Unmanned Aerial Vehicles (UAVs). The proposed airborne VSLAM enables unmanned aerial vehicles to construct a reliable map of an unknown environment and localise themselves within this map without any user intervention. Current research challenges related to airborne VSLAM include visual processing through invariant feature detectors/descriptors, efficient mapping of large environments, and cooperative navigation and mapping of complex environments. Most of these challenges require scalable representations, robust data association algorithms, consistent estimation techniques, and fusion of different sensor modalities. To deal with these challenges, seven chapters are presented in this thesis as follows. Chapter 1 introduces UAVs, definitions, current challenges, and different applications. Chapter 2 presents the main sensors used by UAVs during navigation. Chapter 3 addresses UAV localisation, an important task for autonomous navigation; robust and optimal approaches for data fusion are proposed, together with a performance analysis. UAV map building is presented in Chapter 4, which is divided into three parts. In the first part, a new alternative imaging technique is proposed to extract and match a suitable number of invariant features. The second part presents an image mosaicing algorithm followed by a super-resolution approach. In the third part, we propose a new feature detector and descriptor that is fast and robust and detects a suitable number of features for solving the VSLAM problem. A complete airborne VSLAM solution based on a stereovision system is presented in Chapter 5, along with robust data association filters and their consistency and observability analysis. The proposed algorithm is validated with loop-closing detection and map management using experimental data. The airborne VSLAM is then extended to the multiple-UAV case in Chapter 6, which presents two architectures of cooperation: a centralised one and a decentralised one. The former provides optimal precision in terms of UAV positions and the constructed map, while the latter is more suitable for real-time and embedded-system applications. Finally, conclusions and future work are presented in Chapter 7.
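    The data-fusion and estimation machinery referred to above is typically built around an extended Kalman filter. The following generic predict/update cycle is a sketch under stated assumptions: the motion model f, measurement model h, their Jacobians, and the noise matrices are left abstract and are not the thesis's specific formulation.

```python
# Generic EKF predict/update step of the kind used in VSLAM data fusion.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f (Jacobian F)."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z, model h (Jacobian H)."""
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```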

    Advanced LIDAR-based techniques for autonomous navigation of spaceborne and airborne platforms

    Get PDF
    The main goal of this PhD thesis is the development and performance assessment of innovative techniques for the autonomous navigation of aerospace platforms that exploit data acquired by electro-optical sensors. Specifically, attention is focused on active LIDAR systems, since they globally provide a higher degree of autonomy than passive sensors. Two areas of research are addressed, namely the autonomous relative navigation of multi-satellite systems and the autonomous navigation of Unmanned Aerial Vehicles. The overall aim is to provide solutions that improve estimation accuracy, computational load, and overall robustness and reliability with respect to the techniques available in the literature. In the space field, missions like on-orbit servicing and active debris removal require a chaser satellite to perform autonomous orbital maneuvers in close proximity to an uncooperative space target. In this context, a complete pose determination architecture is proposed which relies exclusively on three-dimensional measurements (point clouds) provided by a LIDAR system and on knowledge of the target geometry. Customized solutions are envisaged at each step of the pose determination process (acquisition, tracking, refinement) to ensure an adequate accuracy level while limiting the computational load with respect to other approaches available in the literature. Specific strategies are also foreseen to ensure process robustness by autonomously detecting algorithm failures. Performance analysis is carried out by means of a simulation environment conceived to realistically reproduce LIDAR operation, target geometry, and multi-satellite relative dynamics in close proximity. An innovative method to design trajectories for target monitoring is also presented; these trajectories are reliable for on-orbit servicing and active debris removal applications since they satisfy both safety and observation requirements. On the other hand, the problem of localization and mapping for Unmanned Aerial Vehicles is also tackled, since it is of utmost importance to provide autonomous safe navigation capabilities in mission scenarios that foresee flights in complex environments, such as GPS-denied or otherwise challenging ones. Specifically, original solutions are proposed for the localization and mapping steps based on the integration of LIDAR and inertial data. Here too, particular attention is devoted to computational load and robustness issues. Algorithm performance is evaluated through off-line simulations carried out on experimental data gathered with a purposely conceived setup in an indoor test scenario.
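    The pose tracking and refinement steps above operate on LIDAR point clouds. A standard building block for such alignment is the least-squares rigid-transform (Kabsch/SVD) step sketched below, which an ICP-style tracker would iterate with re-estimated nearest-neighbour correspondences; treating correspondences as known here is a simplifying assumption, not the thesis's full method.

```python
# One least-squares rigid-alignment step between corresponding point sets.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src + t ~= dst (both 3xN arrays)."""
    c_src = src.mean(axis=1, keepdims=True)
    c_dst = dst.mean(axis=1, keepdims=True)
    H = (src - c_src) @ (dst - c_dst).T    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```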

    Unmanned aerial vehicles (UAVs) for multi-temporal crop surface modelling. A new method for plant height and biomass estimation based on RGB-imaging

    Get PDF
    Data collection with unmanned aerial vehicles (UAVs) fills a gap on the observational scale in remote sensing by delivering the high spatial and temporal resolution data required in crop growth monitoring. The latter is part of precision agriculture, which facilitates detection and quantification of within-field variability to support agricultural management decisions such as effective fertilizer application. Biophysical parameters such as plant height and biomass are monitored to describe crop growth and serve as indicators for the final crop yield. Multi-temporal crop surface models (CSMs) provide spatial information on plant height and plant growth. This study aims to examine whether (1) UAV-based CSMs are suitable for plant height modelling, (2) the derived plant height can be used for biomass estimation, and (3) the combination of plant height and vegetation indices adds value for biomass estimation. To achieve these objectives, UAV flight campaigns were carried out with a red-green-blue (RGB) camera over controlled field experiments on three study sites, two for summer barley in Western Germany and one for rice in Northeast China. High-resolution, multi-temporal CSMs were derived from the images using computer vision software following the structure from motion (SfM) approach. The results show that plant height and plant growth can be accurately modelled with UAV-based CSMs from RGB imaging. To maximise the CSMs’ quality, accurate flight planning and well-considered data collection are necessary. Furthermore, biomass is successfully estimated from the derived plant height, with the restriction that the results are based on a single-year dataset and thus require further validation. Nevertheless, plant height yields robust estimates in comparison with various vegetation indices. For biomass estimation in early growth stages, additional potential is found in exploiting visible-band vegetation indices from UAV-based RGB imaging; however, those results are limited by the use of uncalibrated images. Combining visible-band vegetation indices and plant height does not significantly improve the performance of the biomass models. This study demonstrates that UAV-based RGB imaging delivers valuable data for productive crop monitoring. The demonstrated results for plant height and biomass estimation open new possibilities in precision agriculture by capturing in-field variability.
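    A minimal sketch of the CSM idea: per-pixel plant height is the difference between the crop surface model and a bare-ground terrain model, and plot biomass is then estimated from mean height with a fitted regression. The arrays and regression coefficients below are placeholders, not values from the study.

```python
# Plant height from CSM differencing, plus a toy biomass regression.
import numpy as np

def plant_height(csm, dtm):
    """Per-pixel plant height (m) from a crop surface model and ground model."""
    h = csm - dtm
    return np.clip(h, 0.0, None)   # negative differences are modelling noise

def biomass_estimate(mean_height, a=2.5, b=-0.1):
    """Simple linear biomass model (kg/m^2); a and b would be fitted per crop."""
    return a * mean_height + b

csm = np.array([[1.32, 1.28], [1.40, 1.35]])   # canopy surface elevations (m)
dtm = np.array([[0.50, 0.49], [0.52, 0.51]])   # bare-ground elevations (m)
h = plant_height(csm, dtm)
print(biomass_estimate(h.mean()))
```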

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    Get PDF
    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity for ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications, including spatial ecology, pest detection, reef monitoring, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire prevention, flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    Audio-Based Drone Detection and Identification Using Deep Learning Techniques with Dataset Enhancement through Generative Adversarial Networks

    Get PDF
    Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security, and other fields. Alongside their useful applications, an alarming concern regarding physical infrastructure security, safety, and privacy has arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone’s acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique, the Generative Adversarial Network. Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network, and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
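    As a toy sketch of the classification idea, the snippet below defines a small convolutional network over mel-spectrogram patches. The architecture and input shapes are illustrative assumptions, not the networks evaluated in the paper; GAN-generated spectrograms would simply be added to the training set alongside recorded ones.

```python
# Tiny CNN classifying audio clips as drone / no-drone from mel spectrograms.
import torch
import torch.nn as nn

class DroneAudioCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, mels, frames)
        return self.head(self.features(x))

model = DroneAudioCNN()
logits = model(torch.randn(4, 1, 64, 128))     # 4 clips, 64 mel bands
```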

    Aerial Drone-based System for Wildfire Monitoring and Suppression

    Full text link
    Wildfire, also known as forest fire or bushfire, is an uncontrolled fire crossing an area of combustible vegetation and has become an inherent natural feature of the landscape in many regions of the world. From local to global scales, wildfire has caused substantial social, economic, and environmental consequences. Given the hazardous nature of wildfire, developing automated and safe means to monitor and fight it is of special interest. Unmanned aerial vehicles (UAVs), equipped with appropriate sensors and fire retardants, can remotely monitor and fight fires across the affected area, helping fire brigades mitigate the influence of wildfires. This thesis is dedicated to utilizing UAVs to provide automated surveillance, tracking, and fire suppression services during an active wildfire event. Considering the requirement of collecting the latest information on a region prone to wildfires, we present a strategy to deploy the estimated minimum number of UAVs over a target space with nonuniform importance, such that they persistently monitor the target space, providing complete area coverage while keeping a desired frequency of visits to areas of interest within a predefined time period. Considering the existence of occlusions on partial segments of the sensed wildfire boundary, we process both contour and flame surface features of wildfires with a proposed numerical algorithm to quickly estimate the occluded wildfire boundary. To provide real-time situational awareness of the propagating wildfire boundary, depending on whether prior knowledge of the whole wildfire boundary is available, we use the principle of vector fields to design a model-based guidance law and a model-free guidance law. The former is derived from a radial-basis-function approximation of the wildfire boundary, while the latter is based on the distance between the UAV and the sensed wildfire boundary. Both vector-field guidance laws can drive the UAV to converge to and patrol along the dynamic wildfire boundary. To effectively mitigate the impacts of wildfires, we analyze the advancement-based activeness of the wildfire boundary with a signal-prominence-based algorithm and design a preferential firefighting strategy to guide the UAV to suppress fires along the highly active segments of the wildfire boundary.
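    A minimal sketch of a model-free vector-field guidance law of the kind described: the commanded velocity blends a component that converges on the sensed fire boundary with a tangential component that patrols along it. The standoff distance, gains, and function name are hypothetical, not the thesis's derivation.

```python
# Model-free vector-field guidance toward and along a sensed boundary.
import numpy as np

def guidance_velocity(p_uav, p_boundary, d_standoff=30.0, k_conv=0.5, v_tan=10.0):
    """2D velocity command from UAV position and nearest boundary point."""
    r = p_uav - p_boundary
    d = np.linalg.norm(r)
    n_hat = r / d                            # outward normal (away from fire)
    t_hat = np.array([-n_hat[1], n_hat[0]])  # tangent: counter-clockwise patrol
    # Converge to the standoff circle, then circulate along the boundary
    return -k_conv * (d - d_standoff) * n_hat + v_tan * t_hat

v = guidance_velocity(np.array([120.0, 80.0]), np.array([100.0, 60.0]))
```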

    Combining omnidirectional vision with polarization vision for robot navigation

    Get PDF
    Polarization is the phenomenon that describes how the orientations of light-wave oscillations are restricted in direction. Polarized light has multiple uses in the animal kingdom, ranging from foraging, defense, and communication to orientation and navigation. Chapter 1 briefly covers some important aspects of polarization and explains our research problem. We aim to use a polarimetric-catadioptric sensor, since many applications in computer vision and robotics can benefit from such a combination, especially robot orientation (attitude estimation) and navigation applications. Chapter 2 mainly covers the state of the art of vision-based attitude estimation. As unpolarized sunlight enters the Earth's atmosphere, it is Rayleigh-scattered by air and becomes partially linearly polarized. This skylight polarization provides a significant clue to understanding the environment: its state conveys the information needed to obtain the sun orientation, and robot navigation, sensor planning, and many other applications may benefit from this navigation cue. Chapter 3 covers the state of the art in capturing skylight polarization patterns using omnidirectional sensors (e.g., fisheye and catadioptric sensors). It also explains the characteristics of skylight polarization and gives a new theoretical derivation of the skylight angle-of-polarization pattern. Our aim is to obtain an omnidirectional 360° view combined with polarization characteristics. Hence, this work is based on catadioptric sensors, which are composed of reflective surfaces and lenses. Usually the reflective surface is metallic, and hence the polarization state of the incident skylight, which is mostly partially linearly polarized, becomes elliptically polarized after reflection. Given the measured reflected polarization state, we want to recover the incident polarization state. Chapter 4 proposes a method to measure the light polarization parameters using a catadioptric sensor; the possibility of measuring the incident Stokes vector is proved given three of the four reflected Stokes components. Once the incident polarization patterns are available, the solar zenith and azimuth angles can be directly estimated from them. Chapter 5 discusses polarization-based robot orientation and navigation and proposes new algorithms to estimate these solar angles; to the best of our knowledge, this work is the first to estimate the sun zenith angle from the incident polarization patterns. We also propose to estimate a vehicle's orientation from these polarization patterns. Finally, the work is concluded and possible future research directions are discussed in Chapter 6. More examples of skylight polarization patterns, their calibration, and the proposed applications are given in Appendix B. Our work may pave the way from the conventional polarization vision world to the omnidirectional one. It enables bio-inspired robot orientation and navigation applications, as well as possible outdoor localization based on skylight polarization patterns, where the solar angles at a given date and time of day can be used to infer the vehicle's current geographical location.
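    The quantities at the heart of this work admit short worked formulas: the degree and angle of linear polarization follow from the Stokes parameters, and the single-scattering Rayleigh model gives the skylight degree of polarization as a function of the scattering angle from the sun. The sketch below computes both; the numeric values are illustrative only.

```python
# Degree/angle of linear polarization from Stokes parameters, plus the
# single-scattering Rayleigh degree-of-polarization model.
import numpy as np

def dolp_aop(I, Q, U):
    """Degree and angle of linear polarization from Stokes (I, Q, U)."""
    dolp = np.sqrt(Q**2 + U**2) / I
    aop = 0.5 * np.arctan2(U, Q)       # radians, in (-pi/2, pi/2]
    return dolp, aop

def rayleigh_dop(gamma, dop_max=1.0):
    """Rayleigh degree of polarization at scattering angle gamma (rad)."""
    s2 = np.sin(gamma)**2
    return dop_max * s2 / (2.0 - s2)   # = (1 - cos^2 g) / (1 + cos^2 g)

print(dolp_aop(1.0, 0.3, 0.2))
print(rayleigh_dop(np.pi / 2))         # maximal 90 degrees from the sun
```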

    Mechatronic Systems

    Get PDF
    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and costs of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices, such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics with applications in various fields, like robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of Chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.