71 research outputs found

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures

    Get PDF

    Deep learning based style transfer for low altitude aerial imagery

    Get PDF
    Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture, and search and rescue. Although UAV datasets exist, the number of open, high-quality UAV datasets is limited. In this work, we aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment that can produce UAV-typical data: the RGB image from the UAV's camera, and the altitude, roll, pitch, and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different, although related, problem. This approach has been widely researched in related fields, and results demonstrate it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good-quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we can extend existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we achieved a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
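The abstract above describes a serializable, parametric scene specification consumed by the simulator. A minimal sketch of what such an input might look like follows; all field names and values here are assumptions for illustration, not the framework's actual schema.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical serializable scene spec: an environment preset plus a list of
# objects with position, orientation, color and scale, as the abstract describes.
@dataclass
class SceneObject:
    model: str           # object type to place in the environment
    position: tuple      # (x, y, z) in metres
    orientation: tuple   # (roll, pitch, yaw) in degrees
    color: str = "default"
    scale: float = 1.0

@dataclass
class SceneSpec:
    environment: str                       # environment preset name
    objects: list = field(default_factory=list)

spec = SceneSpec(
    environment="urban_preset_01",
    objects=[SceneObject("car", (12.0, 3.5, 0.0), (0.0, 0.0, 90.0), "red")],
)

# Serialize to JSON so the simulator could consume it.
serialized = json.dumps(asdict(spec), indent=2)
print(serialized)
```

The simulator would then render the scene and emit the UAV-typical outputs (RGB frame plus altitude, roll, pitch, and yaw) for each specification.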

    A Review on Deep Learning in UAV Remote Sensing

    Full text link
    Deep Neural Networks (DNNs) learn representations from data with impressive capability and have brought important breakthroughs in processing images, time series, natural language, audio, video, and more. In the remote sensing field, surveys and literature reviews specifically covering DNN applications have been conducted in an attempt to summarize the amount of information produced in its subfields. Recently, Unmanned Aerial Vehicle (UAV)-based applications have dominated aerial sensing research. However, a literature review that combines both the "deep learning" and "UAV remote sensing" themes has not yet been conducted. The motivation for our work was to present a comprehensive review of the fundamentals of Deep Learning (DL) applied to UAV-based imagery. We focus mainly on describing the classification and regression techniques used in recent applications with UAV-acquired data. For that, a total of 232 papers published in international scientific journal databases were examined. We gathered the published material and evaluated its characteristics regarding the application, sensor, and technique used. We discuss how DL presents promising results and has the potential for processing tasks associated with UAV-based image data. Lastly, we project future perspectives, commenting on prominent DL paths to be explored in the UAV remote sensing field. Our review provides a friendly approach to introduce, discuss, and summarize the state of the art in UAV-based image applications with DNN algorithms in diverse subfields of remote sensing, grouped into environmental, urban, and agricultural contexts. Comment: 38 pages, 10 figures

    AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network

    Get PDF
    This work is licensed under a Creative Commons Attribution 4.0 International License. Significant resources have been spent collecting and storing large and heterogeneous radar datasets during expensive Arctic and Antarctic fieldwork. The vast majority of the available data is unlabeled, and the labeling process is both time-consuming and expensive. One possible alternative to the labeling process is the use of synthetically generated data produced with artificial intelligence. Instead of labeling real images, we can generate synthetic data based on arbitrary labels. In this way, training data can be quickly augmented with additional images. In this research, we evaluated the performance of synthetically generated radar images based on modified cycle-consistent adversarial networks. We conducted several experiments to test the quality of the generated radar imagery. We also tested the quality of a state-of-the-art contour detection algorithm on synthetic data and on different combinations of real and synthetic data. Our experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and for training deep neural networks. However, the synthetic images generated by GANs cannot be used alone for training a neural network (training on synthetic data and testing on real data), as they cannot simulate all of the radar characteristics, such as noise or Doppler effects. To the best of our knowledge, this is the first work to create radar sounder imagery based on a generative adversarial network.
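The cycle-consistent adversarial networks mentioned above enforce that translating an image to the other domain and back reproduces the input. The toy sketch below illustrates only that cycle-consistency constraint with placeholder linear maps standing in for the two generator networks; it is not the paper's model.

```python
import numpy as np

# Toy cycle-consistency check: mapping a "label" image to the radar domain (G)
# and back (F) should reproduce the input. G and F are placeholder linear maps,
# not trained generators.
rng = np.random.default_rng(0)

def G(x):
    """Label -> radar domain (placeholder generator)."""
    return 0.5 * x + 0.1

def F(y):
    """Radar -> label domain (placeholder, approximate inverse of G)."""
    return (y - 0.1) / 0.5

x = rng.random((8, 8))                 # toy "label" image
cycle = F(G(x))                        # x -> radar -> label
cycle_loss = np.abs(x - cycle).mean()  # L1 cycle-consistency loss
print(f"cycle-consistency L1 loss: {cycle_loss:.2e}")
```

In a real CycleGAN-style setup this L1 term is minimized jointly with the adversarial losses of both discriminators, so that generated radar images stay faithful to their source labels.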

    Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey

    Full text link
    The ongoing amalgamation of UAV and ML techniques is creating significant synergy and empowering UAVs with unprecedented intelligence and autonomy. This survey aims to provide a timely and comprehensive overview of ML techniques used in UAV operations and communications and to identify potential growth areas and research gaps. We emphasise the four key components of UAV operations and communications to which ML can significantly contribute, namely, perception and feature extraction, feature interpretation and regeneration, trajectory and mission planning, and aerodynamic control and operation. We classify the latest popular ML tools based on their applications to the four components and conduct gap analyses. This survey also takes a step forward by pointing out significant challenges in the upcoming realm of ML-aided automated UAV operations and communications. It is revealed that different ML techniques dominate the applications to the four key modules of UAV operations and communications. While there is an increasing trend of cross-module designs, little effort has been devoted to an end-to-end ML framework, from perception and feature extraction to aerodynamic control and operation. It is also unveiled that the reliability and trust of ML in UAV operations and applications require significant attention before full automation of UAVs and potential cooperation between UAVs and humans come to fruition. Comment: 36 pages, 304 references, 19 figures

    A Novel GAN-Based Anomaly Detection and Localization Method for Aerial Video Surveillance at Low Altitude

    Get PDF
    The last two decades have seen incessant growth in the use of Unmanned Aerial Vehicles (UAVs) equipped with HD cameras for developing aerial vision-based systems to support civilian and military tasks, including land monitoring, change detection, and object classification. To perform most of these tasks, artificial intelligence algorithms usually need to know, a priori, what to look for, identify, or recognize. In practice, in most operational scenarios, such as war zones or post-disaster situations, areas and objects of interest cannot be determined a priori, since their shape and visual features may have been altered by events or even intentionally disguised (e.g., improvised explosive devices (IEDs)). For these reasons, in recent years, more and more research groups have been investigating the design of original anomaly detection methods, which, in short, are focused on detecting samples that differ from the others in terms of visual appearance and occurrence with respect to a given environment. In this paper, we present a novel two-branch Generative Adversarial Network (GAN)-based method for low-altitude RGB aerial video surveillance to detect and localize anomalies. We have chosen to focus on low-altitude sequences because we are interested in complex operational scenarios where even a small object or device can represent a reason for danger or attention. The proposed model was tested on the UAV Mosaicking and Change Detection (UMCD) dataset, a one-of-a-kind collection of challenging videos whose sequences were acquired between 6 and 15 m above sea level over three types of ground (i.e., urban, dirt, and countryside). Results demonstrated the effectiveness of the model in terms of Area Under the Receiver Operating Characteristic curve (AUROC) and Structural Similarity Index (SSIM), achieving averages of 97.2% and 95.7%, respectively, thus suggesting that the system can be deployed in real-world applications.
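GAN-based anomaly detectors of the kind described above commonly score a frame by how poorly a generator trained only on normal scenes reconstructs it, with SSIM as the similarity measure quoted in the results. The sketch below illustrates that scoring idea with a simplified global (single-window) SSIM and synthetic stand-ins for the reconstructions; it is not the paper's two-branch architecture.

```python
import numpy as np

def ssim_global(a, b, L=1.0):
    """Simplified single-window SSIM (the standard formulation averages
    over local windows); L is the dynamic range of the pixel values."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)
    )

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
# A generator trained on normal scenes reconstructs normal content faithfully
# but fails on anomalous content; both reconstructions are simulated here.
good_recon = frame + 0.01 * rng.standard_normal((64, 64))
bad_recon = rng.random((64, 64))

score_normal = 1.0 - ssim_global(frame, good_recon)   # low anomaly score
score_anomaly = 1.0 - ssim_global(frame, bad_recon)   # high anomaly score
print(score_normal, score_anomaly)
```

Thresholding such a per-frame (or per-patch, for localization) score is what yields the AUROC figures reported in evaluations like the one above.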

    Flood Detection Using Multi-Modal and Multi-Temporal Images: A Comparative Study

    Get PDF
    Natural disasters such as flooding can severely affect human life and property. To provide rescue through an emergency response team, we need an accurate flooding assessment of the affected area after the event. Traditionally, obtaining an accurate estimate of a flooded area requires substantial human resources. In this paper, we compared several traditional machine-learning approaches for flood detection, including the multi-layer perceptron (MLP), support vector machine (SVM), and deep convolutional neural network (DCNN), with recent domain adaptation-based approaches, based on a multi-modal and multi-temporal image dataset. Specifically, we used SPOT-5 and RADAR images from the flood event that occurred in November 2000 in Gloucester, UK. Experimental results show that the domain adaptation-based approach, semi-supervised domain adaptation (SSDA) with 20 labeled data samples, achieved slightly better values for the area under the precision-recall (PR) curve (AUC), at 0.9173, and F1 score, at 0.8846, than the traditional machine-learning approaches. Moreover, SSDA required much less ground-truth labeling labor and is therefore recommended in practice.
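The comparison above is reported in terms of the F1 score and the area under the precision-recall curve. As a reminder of what those numbers summarize, here is a minimal, self-contained computation of precision, recall, and F1 for a binary flood/no-flood labeling; the labels and predictions are illustrative, not the paper's data.

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = flooded pixel)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # illustrative ground truth
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]   # illustrative classifier output
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(p, r, f1)  # -> 0.75 0.75 0.75
```

The PR-AUC reported in the paper is obtained by sweeping the classifier's decision threshold, computing a (precision, recall) pair at each setting, and integrating the resulting curve.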

    Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions

    Get PDF
    Remotely sensed data can provide the basis for timely and efficient building damage maps, which are of fundamental importance in supporting response activities following disaster events. However, the generation of these maps in operational frameworks is still mainly based on the manual extraction of relevant information. For the identification of visible structural damage caused by earthquakes and explosions, several recent works have shown that Convolutional Neural Networks (CNN) outperform traditional methods. However, the limited availability of public image datasets depicting structural disaster damage, and the wide variety of sensors and spatial resolutions used for these acquisitions (from space, aerial and UAV platforms), have left it unclear how well these networks can serve First Responder needs and emergency mapping service requirements. In this paper, an advanced CNN for visible structural damage detection is tested to shed some light on what deep learning networks can currently deliver, and its adoption in realistic operational conditions after earthquakes and explosions is critically discussed. The heterogeneous and large datasets collected by the authors, covering different locations, spatial resolutions and platforms, were used to assess network performance in terms of transfer learning, with specific regard to the geographical transferability of the trained network to imagery acquired in different locations. The computational time needed to deliver these maps is also assessed. Results show that the quality metrics are influenced by the composition of the training samples used in the network. To promote their wider use, three pre-trained networks, optimized for satellite, airborne and UAV image spatial resolutions and viewing angles, are made freely available to the scientific community.

    Application of Artificial Intelligence Approaches in the Flood Management Process for Assessing Blockage at Cross-Drainage Hydraulic Structures

    Get PDF
    Floods are the most recurrent, widespread and damaging natural disasters, and are expected to become even more devastating because of global warming. Blockage of cross-drainage hydraulic structures (e.g., culverts, bridges) by flood-borne debris is an influential factor which usually results in reduced hydraulic capacity, diverted flows, damaged structures and downstream scouring. Australia is among the countries adversely impacted by blockage issues (e.g., the 1998 floods in Wollongong, the 2007 floods in Newcastle). In this context, Wollongong City Council (WCC), under the Australian Rainfall and Runoff (ARR), investigated the impact of blockage on floods and proposed guidelines to consider blockage in the design process for the first time. However, the existing WCC guidelines are based on various assumptions (i.e., visual inspections as representative of hydraulic behaviour, post-flood blockage as representative of peak floods, blockage remaining constant during the whole flooding event) that are not supported by scientific research and have been criticised by hydraulic design engineers. This suggests the need to perform detailed investigations of blockage from both visual and hydraulic perspectives, in order to develop quantifiable relationships and incorporate blockage into the design guidelines of hydraulic structures. However, because of the complex nature of blockage as a process and the lack of blockage-related data from actual floods, conventional numerical modelling-based approaches have not achieved much success. The research in this thesis applies artificial intelligence (AI) approaches to assess blockage at cross-drainage hydraulic structures, motivated by the recent success achieved by AI in addressing complex real-world problems (e.g., scour depth estimation and flood inundation monitoring). The research has been carried out in three phases: (a) literature review, (b) hydraulic blockage assessment, and (c) visual blockage assessment.
The first phase investigates the use of computer vision in the flood management domain and provides context for blockage. The second phase investigates hydraulic blockage using lab-scale experiments and the implementation of multiple machine learning approaches on datasets collected from the lab experiments (i.e., the Hydraulics-Lab Dataset (HD) and the Visual Hydraulics-Lab Dataset (VHD)). The artificial neural network (ANN) and end-to-end deep learning approaches were the top performers among those implemented, demonstrating the potential of learning-based approaches in addressing blockage issues. The third phase assesses visual blockage at culverts using deep learning classification, detection and segmentation approaches for two types of visual assessment (i.e., blockage status classification and percentage visual blockage estimation). Firstly, a range of existing convolutional neural network (CNN) image classification models are implemented and compared using visual datasets (i.e., the Images of Culvert Openings and Blockage (ICOB), VHD, and Synthetic Images of Culverts (SIC)), with the aim of automating the manual visual blockage classification of culverts. The Neural Architecture Search Network (NASNet) model achieved the best classification results among those implemented. Furthermore, the study identified background noise and simplified labelling criteria as two factors contributing to the degraded performance of existing CNN models for blockage classification. To address the background clutter issue, a detection-classification pipeline is proposed, which achieved improved visual blockage classification performance. The proposed pipeline has been deployed using edge computing hardware for blockage monitoring of actual culverts. The role of synthetic data (i.e., SIC) in the performance of culvert opening detection is also investigated.
Secondly, an automated segmentation-classification deep learning pipeline is proposed to estimate the percentage of visual blockage at circular culverts, to better prioritise culvert maintenance. The AI solutions proposed in this thesis are integrated into a blockage assessment framework, designed to be deployed through edge computing to monitor, record and assess blockage at cross-drainage hydraulic structures.
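Once a segmentation model has labelled culvert-opening and debris pixels, the percentage visual blockage described above reduces to a pixel-count ratio. The sketch below illustrates that final step on synthetic masks standing in for real segmentation output; the geometry and mask names are assumptions for illustration only.

```python
import numpy as np

# Synthetic stand-ins for segmentation output: a circular culvert opening and
# a debris region covering its lower part.
h, w = 100, 100
yy, xx = np.mgrid[:h, :w]
opening = (yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2   # culvert-opening mask
debris = opening & (yy > 70)                            # debris mask (subset of opening)

# Percentage visual blockage = blocked opening pixels / total opening pixels.
blockage_pct = 100.0 * debris.sum() / opening.sum()
print(f"visual blockage: {blockage_pct:.1f}%")
```

A maintenance scheduler could then rank culverts by this percentage, which is the prioritisation use case the abstract describes.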