
    Wide and Deep Neural Networks in Remote Sensing: A Review

    Wide and deep neural networks in multispectral and hyperspectral image classification are discussed. Wide versus deep networks have long been a topic of intense interest. Deep networks have a large number of layers in the depth (horizontal) direction, while wide networks grow in the vertical direction. Wide and deep networks, then, are networks that grow in both the vertical and horizontal directions. In this report, several approaches to achieving such networks are described. We first review a methodology called Parallel, Self-Organizing, Hierarchical Neural Networks (PSHNNs), whose stages grow in the vertical direction; each stage can itself be a deep network, and, in turn, each layer of a deep network can be a PSHNN. The second methodology makes each layer of a deep network wide, as has been discussed especially for deep residual networks. The third methodology is wide and deep residual neural networks, which grow in both the horizontal and vertical directions and incorporate residual learning principles to improve learning. The fourth methodology places wide and deep networks in parallel: the wide and deep networks form two parallel branches, with the wide network specializing in memorization and the deep network specializing in generalization. In leading up to these methods, we also review several types of PSHNNs as well as deep neural networks, including convolutional neural networks, autoencoders, and residual learning. Partly because current multispectral and hyperspectral image sets are of moderate size, the design and implementation of wide and deep neural networks hold the potential to yield highly effective solutions. These conclusions are expected to hold in other areas with similar data structures as well.
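The fourth methodology above, a wide branch and a deep branch run in parallel, can be sketched as a minimal forward pass. This is an illustrative assumption, not the paper's exact formulation: the dimensions, weight shapes, and the simple summation of the two branches' logits are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def wide_deep_forward(x, wide_w, deep_ws):
    """Forward pass of a parallel wide-and-deep model (illustrative only).

    Wide branch: a single linear map on the raw features (memorization).
    Deep branch: a stack of ReLU layers (generalization).
    The two branch outputs are summed into joint logits.
    """
    wide_out = x @ wide_w                      # (n, n_classes)
    h = x
    for w in deep_ws[:-1]:
        h = relu(h @ w)                        # hidden ReLU layers
    deep_out = h @ deep_ws[-1]                 # (n, n_classes)
    return wide_out + deep_out

# Toy dimensions standing in for a small multispectral pixel classifier.
n_samples, n_features, n_hidden, n_classes = 8, 6, 16, 4
x = rng.normal(size=(n_samples, n_features))
wide_w = rng.normal(size=(n_features, n_classes))
deep_ws = [rng.normal(size=(n_features, n_hidden)),
           rng.normal(size=(n_hidden, n_hidden)),
           rng.normal(size=(n_hidden, n_classes))]

logits = wide_deep_forward(x, wide_w, deep_ws)
print(logits.shape)  # (8, 4)
```

In practice the two branches would be trained jointly so that gradients flow into both; the sketch shows only the structural idea of combining a memorizing linear path with a generalizing deep path.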

    Change detection of land use and land cover in an urban region with SPOT-5 images and partial Lanczos extreme learning machine

    Satellite remote sensing technology, and the science of evaluating land use and land cover (LULC) in an urban region, makes use of a wide range of images and algorithms. Improved land-management capacity depends critically on real-time or near-real-time monitoring of land-use/land-cover change (LUCC), so that solutions to a host of urban/rural interface development issues can be managed promptly. Yet previous LULC processing methods are often time-consuming, laborious, and tedious, leaving the outputs unavailable within the required time window. This paper presents a new image classification approach based on a novel neural computing technique, applied to identify LULC patterns in a fast-growing urban region with the aid of 2.5-meter-resolution SPOT-5 image products. The classifier is built on the partial Lanczos extreme learning machine (PL-ELM), a novel machine learning algorithm with fast learning speed and outstanding generalization performance. Since different LULC classes may share similar spectral characteristics, texture features and vegetation indexes were extracted and included in the classification process to enhance discernibility. A validation procedure based on ground-truth data, together with comparisons against several classic classifiers, demonstrates the credibility of the proposed PL-ELM classification approach in terms of both classification accuracy and processing speed. A case study of the Dalian Development Area (DDA), using SPOT-5 satellite images collected in 2003 and 2007 together with PL-ELM, fully supports the monitoring needs and aids rapid change detection with respect to both urban expansion and coastal land reclamation.
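The extreme learning machine idea underlying PL-ELM, random fixed hidden weights with only the output weights solved in closed form, can be sketched as below. This is a plain least-squares ELM, not the partial-Lanczos variant the paper proposes, and all names, dimensions, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, Y, n_hidden):
    """Train a basic extreme learning machine (not the PL-ELM variant).

    Hidden-layer weights are drawn at random and never updated; only the
    output weights are fitted, via an ordinary least-squares solve. This
    is what gives ELMs their fast training speed.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # random hidden features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem standing in for LULC pixel classification.
X = rng.normal(size=(100, 5))
Y = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]  # one-hot labels
W, b, beta = elm_train(X, Y, n_hidden=40)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
print(f"training accuracy: {acc:.2f}")
```

The partial Lanczos step in PL-ELM replaces the plain least-squares solve with a numerically better-conditioned iterative solver; the random-hidden-layer structure sketched here is the part the two methods share.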