
    Super-Resolution for Overhead Imagery Using DenseNets and Adversarial Learning

    Recent advances in generative adversarial learning enable new modalities of image super-resolution by learning low-to-high-resolution mappings. In this paper we present our work using Generative Adversarial Networks (GANs) with applications to overhead and satellite imagery. We have experimented with several state-of-the-art architectures. We propose a GAN-based architecture using densely connected convolutional neural networks (DenseNets) that can super-resolve overhead imagery by a factor of up to 8x. We have also investigated the resolution limits of these networks. We report results on several publicly available datasets, including SpaceNet data and the IARPA Multi-View Stereo Challenge, and compare performance with other state-of-the-art architectures. Comment: 9 pages, 9 figures, WACV 2018 submission.
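
    The abstract does not include an implementation, so the sketch below is only a rough illustration of the kind of generator it describes: densely connected convolutional blocks followed by three pixel-shuffle stages to reach the 8x factor. Layer sizes, block counts, and names are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a DenseNet-style super-resolution generator (not the authors' code).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Densely connected conv block: each layer sees all previous feature maps."""
    def __init__(self, channels, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual connection

class Generator8x(nn.Module):
    """Three pixel-shuffle stages give the 8x upscaling factor mentioned in the abstract."""
    def __init__(self, in_ch=3, feat=64, blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.body = nn.Sequential(*[DenseBlock(feat) for _ in range(blocks)])
        self.up = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(feat, feat * 4, 3, padding=1), nn.PixelShuffle(2), nn.ReLU())
            for _ in range(3)  # 2^3 = 8x
        ])
        self.tail = nn.Conv2d(feat, in_ch, 3, padding=1)

    def forward(self, lr):
        x = self.body(self.head(lr))
        return self.tail(self.up(x))

# Example: a 64x64 low-resolution patch becomes 512x512.
sr = Generator8x()(torch.randn(1, 3, 64, 64))
print(sr.shape)  # torch.Size([1, 3, 512, 512])
```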

    Change Detection of Marine Environments Using Machine Learning

    NPS NRP Technical Report: Change Detection of Marine Environments Using Machine Learning. HQMC Intelligence Department (I). This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp. Chief of Naval Operations (CNO). Approved for public release. Distribution is unlimited.

    Weakly-Supervised Semantic Segmentation of Ships Using Thermal Imagery

    The United States coastline spans 95,471 miles, a distance that cannot be effectively patrolled or secured by manual human effort alone. Unmanned Aerial Vehicles (UAVs) equipped with infrared cameras and deep learning-based algorithms represent a more efficient alternative for identifying and segmenting objects of interest, namely ships. However, standard approaches to training these algorithms require large-scale datasets of densely labeled infrared maritime images. Such datasets are not publicly available, and manually annotating every pixel in a large-scale dataset would carry an extreme labor cost. In this work we demonstrate that, in the context of segmenting ships in infrared imagery, weakly supervising an algorithm with sparsely labeled data can drastically reduce data labeling costs with minimal impact on system performance. We apply weakly-supervised learning to an unlabeled dataset of 7,055 infrared images sourced from the Naval Air Warfare Center Aircraft Division (NAWCAD). We find that by sparsely labeling only 32 points per image, weakly-supervised segmentation models can still effectively detect and segment ships, achieving a Jaccard score of up to 0.756.
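
    As an illustration of the sparse-point supervision described above (roughly 32 labeled pixels per image), the sketch below masks the cross-entropy loss to the annotated points only and reports the Jaccard (IoU) score. The tensor shapes, ignore-index convention, and random stand-in data are assumptions, not the NAWCAD pipeline.

```python
# Illustrative sketch of point-supervised training (assumed shapes; not the authors' pipeline).
import torch
import torch.nn.functional as F

IGNORE = 255  # marks pixels without a sparse point label

def point_supervised_loss(logits, point_labels):
    """logits: (B, 2, H, W); point_labels: (B, H, W) filled with IGNORE
    everywhere except the ~32 annotated pixels per image."""
    return F.cross_entropy(logits, point_labels, ignore_index=IGNORE)

def jaccard_score(pred, target, cls=1):
    """Intersection-over-union for the ship class, evaluated on dense ground truth."""
    pred, target = (pred == cls), (target == cls)
    inter = (pred & target).sum().float()
    union = (pred | target).sum().float()
    return (inter / union.clamp(min=1)).item()

# Toy usage with random tensors standing in for an infrared batch.
logits = torch.randn(2, 2, 128, 128)
labels = torch.full((2, 128, 128), IGNORE)
labels[:, ::32, ::32] = torch.randint(0, 2, (2, 4, 4))  # sparse point labels
loss = point_supervised_loss(logits, labels)
iou = jaccard_score(logits.argmax(1), torch.randint(0, 2, (2, 128, 128)))
```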

    Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review

    Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides a more precise application of agricultural activities, the detection of disease using an Unmanned Aerial System (UAS) is a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose. The UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important characteristics of data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases. These models help in image segmentation and feature extraction to interpret results. Researchers also use the values of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Crop Water Stress Index (CWSI), acquired from different multispectral and hyperspectral sensors as inputs to statistical models to deliver results. There are still various shortcomings in the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, image background noise, etc. The future of crop health monitoring using UAVs should include a gimbal carrying multiple sensors, large datasets for training and validation, the development of site-specific irradiance systems, and so on. This review briefly highlights the advantages of automatic detection of plant diseases for growers.
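
    For reference, the Normalized Difference Vegetation Index mentioned above is computed per pixel from the near-infrared and red bands as NDVI = (NIR - Red) / (NIR + Red). The short sketch below applies that standard formula; which array holds which band depends on the sensor, so the example inputs are assumptions.

```python
# Standard per-pixel NDVI formula; the band ordering depends on the multispectral
# sensor, so the toy arrays here are assumptions for illustration only.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)  # guard against divide-by-zero

# Toy example: healthy vegetation reflects strongly in NIR, so NDVI approaches 1.
nir_band = np.array([[0.6, 0.5], [0.4, 0.1]])
red_band = np.array([[0.1, 0.1], [0.2, 0.1]])
print(ndvi(nir_band, red_band))
```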

    Review on Active and Passive Remote Sensing Techniques for Road Extraction

    Digital maps of road networks are a vital part of digital cities and intelligent transportation. In this paper, we provide a comprehensive review of road extraction based on various remote sensing data sources, including high-resolution images, hyperspectral images, synthetic aperture radar images, and light detection and ranging. This review is divided into three parts. Part 1 provides an overview of the existing data acquisition techniques for road extraction, including data acquisition methods, typical sensors, application status, and prospects. Part 2 outlines the main road extraction methods based on the four data sources. In this section, road extraction methods based on different data sources are described and analysed in detail. Part 3 presents the combined application of multisource data for road extraction. Evidently, different data acquisition techniques have unique advantages, and the combination of multiple sources can improve the accuracy of road extraction. The main aim of this review is to provide a comprehensive reference for research on existing road extraction technologies. Peer reviewed.

    Land cover classification from remote sensing images based on multi-scale fully convolutional network

    Although the Convolutional Neural Network (CNN) has shown great potential for land cover classification, the frequently used single-scale convolution kernel limits the scope of information extraction. Therefore, in this paper we propose a Multi-Scale Fully Convolutional Network (MSFCN) with a multi-scale convolutional kernel as well as a Channel Attention Block (CAB) and a Global Pooling Module (GPM) to exploit discriminative representations from two-dimensional (2D) satellite images. Meanwhile, to explore the ability of the proposed MSFCN on spatio-temporal images, we extend the MSFCN to three dimensions using a three-dimensional (3D) CNN, capable of harnessing each land cover category's time-series interactions from the reshaped spatio-temporal remote sensing images. To verify the effectiveness of the proposed MSFCN, we conduct experiments on two spatial datasets and two spatio-temporal datasets. The proposed MSFCN achieves an mIoU of 60.366% on the WHDLD dataset and 75.127% on the GID dataset, while the figures for the two spatio-temporal datasets are 87.753% and 77.156%. Extensive comparative experiments and ablation studies demonstrate the effectiveness of the proposed MSFCN. Code will be available at https://github.com/lironui/MSFCN.
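
    The MSFCN details live in the linked repository; as a minimal sketch of the multi-scale idea only (kernel sizes and channel counts are assumptions, not the published model), the block below runs parallel 3x3, 5x5, and 7x7 convolutions and fuses them with a 1x1 convolution, one common way to widen the receptive field beyond a single-scale kernel.

```python
# Minimal multi-scale convolution block (assumed kernel sizes; see the MSFCN repo for the real model).
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        )
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)  # 1x1 conv merges the three scales

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.fuse(feats)

x = torch.randn(1, 4, 64, 64)           # e.g. a 4-band satellite patch
print(MultiScaleBlock(4, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```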

    SOARNET, Deep Learning Thermal Detection For Free Flight

    Thermals are regions of rising hot air formed over ground that has been warmed by the sun. Thermals are commonly used by birds and glider pilots to extend flight duration, increase cross-country distance, and conserve energy. This kind of powerless flight using natural sources of lift is called soaring. Once a thermal is encountered, the pilot flies in circles to stay within it, gaining altitude before flying off to the next thermal and towards the destination. A single thermal can net a pilot thousands of feet of elevation gain; however, estimating thermal locations is not an easy task. Pilots look for different indicators: color variation on the ground, because the amount of heat absorbed varies with the ground's color and composition; birds circling in an area while gaining lift; and certain types of cloud formations (cumulus clouds). These methods are not always reliable, so pilots also study the weather, estimating solar heating of the ground from cloud cover and time of year along with the lapse rate and dew point of the troposphere. In this paper, we present a machine learning-based solution for assisting in forecasting thermals. We created a custom dataset using flight data recorded and uploaded to public databases by soaring pilots. We determine where and when each pilot encountered thermals and pull the weather and satellite images corresponding to the location and time of the flight. Using this dataset, we train an algorithm to predict the location of thermals, taking as input the current weather conditions and terrain information obtained from Google Earth Engine, with the encountered thermal regions as truth labels. The model converges well on the training and validation sets, achieving an F1 score of around 0.98. These results indicate success in creating a custom dataset and a capable neural network, though our custom dataset still needs to be bolstered.
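
    The abstract does not say exactly how thermal encounters are extracted from the flight logs; one simple heuristic, sketched below purely as an assumption and not as the paper's method, is to flag track segments where the smoothed climb rate stays above a threshold.

```python
# Heuristic sketch for flagging thermal encounters in a glider GPS track.
# The climb-rate threshold and smoothing window are assumptions, not the paper's method.
import numpy as np

def thermal_segments(altitude_m, timestamps_s, min_climb_mps=0.5, window=30):
    """Return a boolean mask marking samples inside a sustained climb."""
    climb = np.gradient(altitude_m, timestamps_s)        # vertical speed in m/s
    kernel = np.ones(window) / window
    smoothed = np.convolve(climb, kernel, mode="same")   # smooth out turbulence spikes
    return smoothed > min_climb_mps

# Toy track: level flight, then a 2 m/s climb, then level flight again.
t = np.arange(0, 300, 1.0)
alt = np.concatenate([np.full(100, 1000.0),
                      1000 + 2.0 * np.arange(100),
                      np.full(100, 1200.0)])
mask = thermal_segments(alt, t)
print(mask.sum(), "samples flagged as thermal")
```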

    Drone imagery and deep learning for mapping the density of wild Pacific oysters to manage their expansion into protected areas

    The recent expansion of wild Pacific oysters has already had negative repercussions at sites in Europe and has raised further concerns over their potential harmful impact on the balance of biomes within protected areas. Monitoring their colonisation, especially at early stages, has become an urgent ecological issue. Current efforts to monitor wild Pacific oysters rely on “walk-over” surveys that are highly laborious and often limited to areas of easy access. Remotely Piloted Aircraft Systems (RPAS), commonly known as drones, can provide an effective tool for surveying complex terrains and detecting Pacific oysters. This study provides a novel workflow for the automated detection, counting, and mapping of individual Pacific oysters, estimating their density per square meter by applying Convolutional Neural Networks (CNNs) to drone imagery. Drone photos were collected at low tide and at altitudes of approximately 10 m across a variety of rocky shore and mudflat scenarios. Using object detection, we compared how different CNN architectures, including YOLOv5s, YOLOv5m, TPH-YOLOv5 and FR-CNN, performed in detecting Pacific oysters over the surveyed areas. We report a model precision of 88%, with a difference in performance of 1% across the two sites. The workflow presented in this work uses grid maps to visualize the density of Pacific oysters per square meter for ecological management, and builds time series from them to identify trends.
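
    As a hedged illustration of the grid-map step only (the georeferencing, cell size, and function names are assumptions, not the authors' workflow), the sketch below bins detected oyster centroids into 1 m x 1 m cells to obtain a density per square metre.

```python
# Sketch: turn oyster detections into a density grid (cell size assumed to be 1 m x 1 m).
import numpy as np

def density_grid(centroids_m, extent_m, cell_m=1.0):
    """centroids_m: (N, 2) detection centres in metres within the surveyed extent;
    extent_m: (width, height) of the orthomosaic in metres."""
    nx = int(np.ceil(extent_m[0] / cell_m))
    ny = int(np.ceil(extent_m[1] / cell_m))
    grid, _, _ = np.histogram2d(
        centroids_m[:, 0], centroids_m[:, 1],
        bins=(nx, ny), range=[[0, extent_m[0]], [0, extent_m[1]]],
    )
    return grid / (cell_m ** 2)  # oysters per square metre in each cell

# Toy usage: 500 random detections over a 20 m x 20 m survey area.
counts = density_grid(np.random.rand(500, 2) * 20.0, extent_m=(20.0, 20.0))
print(counts.shape, counts.max())
```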

    Deep Learning for Building Footprint Generation from Optical Imagery

    Deep learning-based methods have shown promising results for the task of building footprint generation, but they have two inherent limitations. First, the extracted buildings show blurred boundaries and blob-like shapes. Second, massive pixel-level annotations are required for network training. This dissertation has developed a series of methods to address the problems mentioned above. Furthermore, the developed methods are translated into practical applications.