
    Effective and efficient midlevel visual elements-oriented land-use classification using VHR remote sensing images

    Land-use classification using remote sensing images covers a wide range of applications. With more detailed spatial and textural information provided in very high resolution (VHR) remote sensing images, a greater range of objects and spatial patterns can be observed than ever before. This offers a new opportunity for advancing the performance of land-use classification. In this paper, we first introduce an effective midlevel visual elements-oriented land-use classification method based on “partlets,” which are a library of pretrained part detectors used for midlevel visual element discovery. By exploiting midlevel visual elements rather than low-level image features, the partlets-based method represents images by computing their responses to a large number of part detectors. As the number of part detectors grows, the main obstacle to the broader application of this method is its computational cost. To address this problem, we next propose a novel framework to train coarse-to-fine shared intermediate representations, termed “sparselets,” from a large number of pretrained part detectors. This is achieved by building a single-hidden-layer autoencoder and a single-hidden-layer neural network with an L0-norm sparsity constraint, respectively. Comprehensive evaluations on a publicly available 21-class VHR land-use data set and comparisons with state-of-the-art approaches demonstrate the effectiveness and superiority of the proposed methods.
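The partlets representation and the sparselets compression can be sketched numerically. The toy example below uses hypothetical sizes and random data, with a truncated SVD standing in for the paper's autoencoder / L0-constrained network; it shows how an image descriptor is built from detector responses and how a small shared basis reduces the per-patch cost:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): F-dim patch features,
# D pretrained part detectors, K shared "sparselet" basis vectors.
F, D, K = 64, 200, 16

detectors = rng.standard_normal((D, F))   # rows: pretrained part detectors
patches = rng.standard_normal((50, F))    # rows: features of image patches

# Partlets-style descriptor: score every patch against every detector,
# then max-pool the responses over patches, one value per detector.
responses = patches @ detectors.T                # (50, D)
image_descriptor = responses.max(axis=0)         # (D,)

# Sparselets idea (sketched with a truncated SVD): factor the detector
# bank into K shared basis vectors plus per-detector coefficients, so
# detector responses are recovered from only K basis responses per patch.
U, s, Vt = np.linalg.svd(detectors, full_matrices=False)
basis = Vt[:K]                                   # (K, F) shared representation
coeffs = U[:, :K] * s[:K]                        # (D, K) activation weights

basis_responses = patches @ basis.T              # (50, K) cheap shared pass
approx_responses = basis_responses @ coeffs.T    # (50, D) reconstructed
```

The cost per patch drops from D dot products to K, at the price of an approximation error controlled by K.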

    Enhancing spatial resolution of remotely sensed data for mapping freshwater environments

    Freshwater environments are important for ecosystem services and biodiversity. These environments are subject to many natural and anthropogenic changes, which influence their quality; therefore, regular monitoring is required for their effective management. High biotic heterogeneity, elongated land/water interaction zones, and logistic difficulties with access make field-based monitoring on a large scale expensive, inconsistent, and often impractical. Remote sensing (RS) is an established mapping tool that overcomes these barriers. However, complex and heterogeneous vegetation and spectral variability due to water make freshwater environments challenging to map using remote sensing technology. Satellite images available for New Zealand were reviewed in terms of cost and spectral and spatial resolution. Particularly promising image data sets for freshwater mapping include QuickBird (QB) and SPOT-5. However, for mapping freshwater environments a combination of images is required to obtain high spatial, spectral, radiometric, and temporal resolution. Data fusion (DF) is a framework of data processing tools and algorithms that combines images to improve spectral and spatial qualities. A range of DF techniques were reviewed and tested for performance using panchromatic and multispectral QB images of a semi-aquatic environment on the southern shores of Lake Taupo, New Zealand. To discuss the mechanics of different DF techniques, a classification consisting of three groups was used: (i) spatially-centric, (ii) spectrally-centric, and (iii) hybrid. Subtract resolution merge (SRM) is a hybrid technique, and this research demonstrated that for a semi-aquatic QB image it outperformed Brovey transformation (BT), principal component substitution (PCS), local mean and variance matching (LMVM), and optimised high pass filter addition (OHPFA).
However, some limitations were identified with SRM, including the requirement for predetermined band weights and the over-representation of spatial edges in the NIR bands due to their high spectral variance. This research developed three modifications to the SRM technique that addressed these limitations. These were tested on QuickBird, SPOT-5, and Vexcel aerial digital images, as well as a scanned colour aerial photograph. A visual qualitative assessment and a range of spectral and spatial quantitative metrics were used to evaluate these modifications, including spectral correlation and root mean squared error (RMSE), Sobel filter based spatial edge RMSE, and unsupervised classification. The first modification addressed the issue of predetermined spectral weights and explored two alternative regression methods, Least Absolute Deviation (LAD) and Ordinary Least Squares (OLS), to derive image-specific band weights for use in SRM. Both methods were found equally effective; however, OLS was preferred as it was more efficient than LAD in processing band weights. The second modification used a pixel block averaging function on high-resolution panchromatic images to derive spatial edges for data fusion. This eliminated the need for spectral band weights, minimised spectral infidelity, and enabled the fusion of multi-platform data. The third modification addressed the issue of over-represented spatial edges by introducing a sophisticated contrast and luminance index to develop a new normalising function. This improved the spatial representation of the NIR band, which is particularly important for mapping vegetation. A combination of the second and third modifications of SRM was effective in simultaneously minimising the overall spectral infidelity and undesired spatial errors for the NIR band of the fused image.
This new method has been labelled Contrast and Luminance Normalised (CLN) data fusion, and has been demonstrated to make a significant contribution to fusing multi-platform, multi-sensor, multi-resolution, and multi-temporal data. This contributes to improvements in the classification and monitoring of freshwater environments using remote sensing.
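The pixel-block-averaging idea behind the second SRM modification can be illustrated with a toy example. The sketch below uses hypothetical data, simple replication in place of a real upsampling method, and applies no band weights or contrast/luminance normalisation; it derives spatial edges from a panchromatic band and injects them into an upsampled multispectral band:

```python
import numpy as np

def block_average(pan, factor):
    """Low-pass a panchromatic band by block averaging.

    Averages each factor x factor block and repeats the means back to the
    original grid; subtracting the result from `pan` isolates the
    high-frequency spatial edges used in the fusion.
    """
    h, w = pan.shape
    assert h % factor == 0 and w % factor == 0
    blocks = pan.reshape(h // factor, factor, w // factor, factor)
    means = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(means, factor, axis=0), factor, axis=1)

# Toy 8x8 panchromatic tile and a 4x coarser multispectral band,
# upsampled to the pan grid by replication (all values hypothetical).
rng = np.random.default_rng(1)
pan = rng.uniform(0, 255, size=(8, 8))
ms_coarse = rng.uniform(0, 255, size=(2, 2))
ms_up = np.repeat(np.repeat(ms_coarse, 4, axis=0), 4, axis=1)

# SRM-style fusion: add the pan's spatial detail to the upsampled band.
edges = pan - block_average(pan, 4)
fused = ms_up + edges
```

Because the edge term averages to zero within each block, block-averaging the fused band recovers the original multispectral values, which is the spectral fidelity the modification aims to preserve.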

    Mapping and classification of ecologically sensitive marine habitats using unmanned aerial vehicle (UAV) imagery and object-based image analysis (OBIA)

    Nowadays, emerging technologies, such as long-range transmitters, increasingly miniaturized components for positioning, and enhanced imaging sensors, have led to an upsurge in the availability of new ecological applications for remote sensing based on unmanned aerial vehicles (UAVs), sometimes referred to as “drones”. In fact, structure-from-motion (SfM) photogrammetry coupled with imagery acquired by UAVs offers a rapid and inexpensive tool to produce high-resolution orthomosaics, giving ecologists a new way to carry out responsive, timely, and cost-effective monitoring of ecological processes. Here, we adopted a lightweight quadcopter as an aerial survey tool and an object-based image analysis (OBIA) workflow to demonstrate the strength of such methods in producing very high spatial resolution maps of sensitive marine habitats. Three different coastal environments were mapped using the autonomous flight capability of a lightweight UAV equipped with a fully stabilized consumer-grade RGB digital camera. In particular, we investigated a Posidonia oceanica seagrass meadow, a rocky coast with nurseries for juvenile fish, and two sandy areas showing biogenic reefs of Sabellaria alveolata. We adopted, for the first time, UAV-based raster thematic maps of these key coastal habitats, produced after OBIA classification, as a new method for fine-scale, low-cost, and time-saving characterization of sensitive marine environments, which may lead to more effective and efficient monitoring and management of natural resources.
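A minimal OBIA-style sketch, with hypothetical data and a precomputed segmentation standing in for a real segmentation algorithm, illustrates the object-based workflow: compute per-object features and classify objects rather than pixels:

```python
import numpy as np

# Toy orthomosaic (4x4 RGB) and a precomputed segmentation into objects;
# in a real OBIA workflow the segments would come from a segmentation
# algorithm such as SLIC (all values here are hypothetical).
image = np.zeros((4, 4, 3))
image[:, :2] = [0.1, 0.3, 0.6]     # left half: water-like pixels
image[:, 2:] = [0.2, 0.6, 0.2]     # right half: seagrass-like pixels
segments = np.zeros((4, 4), dtype=int)
segments[:, 2:] = 1                # two objects: 0 (left), 1 (right)

# Per-object mean spectral features: OBIA classifies objects, not pixels.
features = np.array([image[segments == s].mean(axis=0)
                     for s in np.unique(segments)])

# Nearest-prototype classification against hand-set class signatures
# (hypothetical; a trained classifier would normally be used instead).
prototypes = {"water": np.array([0.1, 0.3, 0.6]),
              "seagrass": np.array([0.2, 0.6, 0.2])}
names = list(prototypes)
dists = np.array([[np.linalg.norm(f - prototypes[n]) for n in names]
                  for f in features])
labels = [names[i] for i in dists.argmin(axis=1)]
```

Each segment then carries a single class label, which is rasterised back onto the grid to produce the thematic map.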

    A high-resolution index for vegetation extraction in IKONOS images

    ISBN: 978-0-8194-8341-6. In monitoring vegetation change and urban planning, the measurement and mapping of green vegetation over the Earth play an important role. The normalized difference vegetation index (NDVI) is the most popular approach to generating vegetation maps from remote sensing imagery. Unfortunately, the NDVI generates low-resolution vegetation maps. High-resolution imagery, such as IKONOS imagery, can be used to overcome this weakness, leading to better classification accuracy. Hence, it is important to derive a vegetation index that exploits the high-resolution data. Various researchers have proposed methods based on high-resolution vegetation indices. These methods use image fusion to generate high-resolution vegetation maps. IKONOS produces high-resolution panchromatic (Pan) images and low-resolution multispectral (MS) images. Generally, for image fusion, the conventional bicubic interpolation scheme is used to resize the low-resolution images. This scheme fails around edges and consequently produces blurred edges and annoying artefacts in interpolated images. This study presents a new index that provides high-resolution vegetation maps for IKONOS imagery. This vegetation index (HRNDVI: High Resolution NDVI) is based on a newly derived formula that includes the high-resolution information. We use an artefact-free image interpolation method to upsample the MS images so that they have the same size as the Pan images. The HRNDVI is then computed using the resampled MS and Pan images. The proposed vegetation index takes advantage of the high spatial resolution information of the Pan images to generate artefact-free vegetation maps. Visual analysis demonstrates that this index is promising and performs well in vegetation extraction and visualisation.
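The standard NDVI on which the proposed index builds is (NIR − Red) / (NIR + Red). A minimal sketch with hypothetical reflectance values follows; naive replication stands in for the paper's artefact-free interpolation, and the exact HRNDVI formula is not given in the abstract, so only the standard index is computed:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0

# Toy 2x2 multispectral bands (hypothetical reflectance values).
red = np.array([[0.10, 0.30], [0.05, 0.25]])
nir = np.array([[0.50, 0.30], [0.45, 0.25]])

low_res = ndvi(nir, red)          # standard NDVI at MS resolution

# To mimic the high-resolution workflow, the MS bands are first upsampled
# to the Pan grid (here 4x, by replication) before computing the index.
nir_up = np.repeat(np.repeat(nir, 4, axis=0), 4, axis=1)
red_up = np.repeat(np.repeat(red, 4, axis=0), 4, axis=1)
high_res = ndvi(nir_up, red_up)   # NDVI on the upsampled grid
```

Vegetated pixels (high NIR, low Red) approach +1, while equal NIR and Red reflectance gives 0.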

    Spatial and Topological Analysis of Urban Land Cover Structure in New Orleans Using Multispectral Aerial Image and Lidar Data

    Urban land use and land cover (LULC) mapping has been one of the major applications of remote sensing of the urban environment. Land cover refers to the biophysical materials at the surface of the earth (e.g., grass, trees, soils, concrete, water), while land use indicates the socio-economic function of the land (e.g., residential, industrial, or commercial use). This study addresses the technical issue of how to computationally infer urban land use types from the urban land cover structures in remote sensing data. In this research, a multispectral aerial image and high-resolution LiDAR topographic data have been integrated to investigate urban land cover and land use in New Orleans, Louisiana. First, the LiDAR data are used to solve the problems associated with solar shadows of trees and buildings, building lean, and occlusions in the multispectral aerial image. A two-stage rule-based classification approach has been developed, and the urban land cover of New Orleans has been classified into six categories: water, grass, trees, impervious ground, elevated bridges, and buildings, with an overall classification accuracy of 94.2%, significantly higher than that of the traditional per-pixel classification method. The buildings are further classified by height into low-rise, multi-story, mid-rise, high-rise, and skyscraper categories. Second, the land cover composition and structure of New Orleans have been quantitatively analyzed for the first time in terms of urban planning districts, and information and knowledge about the characteristics of urban land cover components and structure for different types of land use functions have been discovered. Third, a graph-theoretic data model, known as the relational attribute neighborhood graph (RANG), is adopted to comprehensively represent the geometrical and thematic attributes, compositional and structural properties, and spatial/topological relations between urban land cover patches (objects).
    Based on an evaluation of the importance of 26 spatial, thematic, and topological variables in RANG, the random forest classification method is utilized to computationally infer and classify urban land use in New Orleans into seven types at the urban block level: single-family residential, two-family residential, multi-family residential, commercial, CBD, institutional, and parks and open space, with an overall accuracy of 91.7%.
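A random forest over block-level variables, as used for the land use inference, can be sketched with scikit-learn. The example below uses synthetic features and labels; the paper's actual 26 variables and training data are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical block-level feature table: each row is an urban block
# described by spatial/thematic/topological variables (the paper uses 26;
# five synthetic ones here) and labeled with a land-use type.
n_blocks, n_features = 120, 5
X = rng.standard_normal((n_blocks, n_features))
# Make labels learnable: class depends on the signs of two features.
y = np.where(X[:, 0] > 0,
             np.where(X[:, 1] > 0, "residential", "commercial"),
             "parks")

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Feature importances indicate which variables drive the inference,
# mirroring the paper's evaluation of its candidate variables.
importances = clf.feature_importances_
accuracy = clf.score(X, y)
```

In practice the variable-importance ranking guides which of the candidate variables are retained before the final block-level classification.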

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.