6 research outputs found

    Land use classification using deep multitask networks


    End-to-end predictive models for remote sensing applications


    FuseNet: End-to-end multispectral VHR image fusion and classification

    Classification of very high resolution (VHR) satellite images faces two major challenges: 1) inherently low intra-class and high inter-class spectral similarities and 2) mismatched resolutions of the available bands. Conventional methods address these challenges with separate image-fusion and spatial feature-extraction steps. These steps, however, are not jointly optimized for the classification task at hand. We propose a single-stage framework that embeds these processing stages in a multiresolution convolutional network. The network, called FuseNet, matches the resolutions of the panchromatic and multispectral bands in a VHR image using convolutional layers with corresponding downsampling and upsampling operations. We compared FuseNet against separate processing steps for image fusion, such as pansharpening and resampling through interpolation. We also analyzed the sensitivity of FuseNet's classification performance to a selected number of its hyperparameters. Results show that FuseNet surpasses conventional methods.
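    The resolution-matching step the abstract describes can be illustrated with a minimal numpy sketch. This is not the FuseNet implementation — it replaces the learned convolutional upsampling with nearest-neighbour upsampling, and the band shapes (one pan band at full resolution, four multispectral bands at 1/4 resolution) are illustrative assumptions:

```python
import numpy as np

def upsample_nn(x, factor):
    """Nearest-neighbour upsampling of a (bands, H, W) array."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse(pan, ms, factor=4):
    """Bring the multispectral bands to panchromatic resolution and stack
    them with the pan band -- a fixed stand-in for the resolution matching
    that FuseNet learns with convolutional up/downsampling layers."""
    ms_up = upsample_nn(ms, factor)              # (bands, H, W)
    return np.concatenate([pan[None], ms_up], axis=0)

pan = np.random.rand(64, 64)       # 1 pan band at full resolution (assumed)
ms  = np.random.rand(4, 16, 16)    # 4 MS bands at 1/4 resolution (assumed)
fused = fuse(pan, ms)
print(fused.shape)                 # (5, 64, 64)
```

    In the actual network the upsampling weights are trained jointly with the classifier, which is the point of the single-stage design.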

    Predicting wildfire burns from big geodata using deep learning

    Wildfire continues to be a major environmental problem worldwide. To help land and fire management agencies manage and mitigate wildfire-related risks, we need tools for mapping those risks. Big geodata—in the form of remotely sensed images, ground-based sensor observations, and topographical datasets—can help us characterize the dynamics of wildfire-related events. In this study, we design a deep fully convolutional network, called AllConvNet, to produce daily maps of the probability of a wildfire burn over the next 7 days. We applied it to burns in Victoria, Australia for the period 2006–2017. Fifteen factors, extracted from six different datasets and yielding 29 quantitative features, were selected as input to the network. We compared it with three baseline methods: SegNet, a multilayer perceptron, and logistic regression. AllConvNet outperforms the three baselines in four of the six quantitative metrics considered. AllConvNet and SegNet provide smoother and more regularized predicted maps, with SegNet showing greater sensitivity in discriminating less wildfire-prone locations. The statistical importance of the input features was measured for all the networks and compared against the logistic regression coefficients. Total precipitation, lightning flash density, and land surface temperature are consistently highly weighted by all models, while terrain aspect components, wind direction components, certain land cover classes (such as crop field and woodland), and distance from power lines rank on the lower end. We conclude that wildfire burn prediction methods based on deep learning present quantitative and qualitative gains.
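    The logistic-regression baseline mentioned above can be sketched per pixel: stack the 29 quantitative features, take a weighted sum, and squash it through a sigmoid to get a burn-probability map. The grid size, coefficients, and bias below are hypothetical placeholders, not fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, F = 32, 32, 29                 # grid size (assumed) and the 29 features
features = rng.random((F, H, W))     # stand-in for the stacked geodata layers
weights  = rng.normal(size=F)        # hypothetical fitted coefficients
bias     = -1.0                      # hypothetical intercept

# Per-pixel logistic regression: contract the feature axis against the
# coefficient vector, then apply the sigmoid to get probabilities.
logits = np.tensordot(weights, features, axes=1) + bias   # shape (H, W)
prob_map = 1.0 / (1.0 + np.exp(-logits))                  # values in (0, 1)
```

    The convolutional models (AllConvNet, SegNet) replace the per-pixel weighted sum with spatial filters, which is why their maps come out smoother and more regularized.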

    Towards automated delineation of smallholder farm fields from VHR images using convolutional networks

    Automated delineation of smallholder farm fields is difficult because of their small size, irregular shapes, and the use of mixed-cropping systems. Edges between smallholder plots are often indistinct in satellite imagery, and contours have to be identified by considering the transitions in the complex textural patterns of the fields. We introduce a strategy to delineate field boundaries using a fully convolutional network in combination with a globalization and grouping algorithm that produces a hierarchical segmentation of the fields. We carry out an experimental analysis in a study area in Kofa, Nigeria, using a WorldView-3 image, comparing several state-of-the-art contour detection algorithms. The proposed strategy outperforms state-of-the-art computer vision methods and shows promising results, automatically delineating field boundaries with an accuracy close to human-level photo-interpretation.
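    The key property of a hierarchical segmentation is that thresholding a contour-strength map at increasing levels yields nested regions: fields only merge, never split, as the threshold rises. The sketch below illustrates that nesting with a random stand-in for the network's contour output; it is an illustration of the hierarchy property, not the paper's globalization and grouping algorithm:

```python
import numpy as np

def hierarchical_masks(boundary_strength, levels):
    """Threshold a contour-strength map at increasing levels. Pixels below
    the threshold count as field interior; raising the threshold merges
    regions, giving a nested (hierarchical) set of segmentation masks."""
    return [boundary_strength < t for t in levels]

rng = np.random.default_rng(1)
strength = rng.random((64, 64))   # stand-in for an FCN contour-strength map
masks = hierarchical_masks(strength, levels=[0.3, 0.5, 0.7])

# Nesting property: every interior pixel at a low threshold is still
# interior at every higher threshold.
assert all(np.all(~a | b) for a, b in zip(masks, masks[1:]))
```

    A practitioner would pick one level of the hierarchy (or let a grouping criterion pick it) to obtain the final field boundaries.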