
    Toward utilizing multitemporal multispectral airborne laser scanning, Sentinel-2, and mobile laser scanning in map updating

    The rapid development of remote sensing technologies provides interesting possibilities for the further development of nationwide mapping procedures that are currently based mainly on passive aerial images. In particular, we assume that there is a large undiscovered potential in multitemporal airborne laser scanning (ALS) for topographic mapping. In this study, automated change detection from multitemporal multispectral ALS data was tested for the first time. The results showed that direct comparisons between height and intensity data from different dates reveal even small changes related to the development of a suburban area. A major challenge in future work is to link the changes with objects that are interesting in map production. In order to effectively utilize multisource remotely sensed data in mapping in the future, we also investigated the potential of satellite images and ground-based data to complement multispectral ALS. A method for continuous change monitoring from a time series of Sentinel-2 satellite images was developed and tested. Finally, a high-density point cloud was acquired with terrestrial mobile laser scanning and automatically classified into four classes. The results were compared with the ALS data, and the possible roles of the different data sources in a future map updating process were discussed.
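The multitemporal change detection described above rests on directly differencing co-registered rasters from different dates. A minimal sketch of that idea, assuming rasterised height (DSM) grids; the arrays, threshold, and interpretations are illustrative assumptions, not the authors' actual processing chain:

```python
import numpy as np

def detect_height_changes(dsm_t1, dsm_t2, threshold=1.0):
    """Return a change mask: +1 where the surface rose, -1 where it fell,
    0 where the height difference stays within the threshold (metres)."""
    diff = dsm_t2 - dsm_t1
    change = np.zeros_like(diff, dtype=np.int8)
    change[diff > threshold] = 1    # e.g. a new building or grown vegetation
    change[diff < -threshold] = -1  # e.g. a demolished building or felled trees
    return change

# Toy 3x3 grids (metres): one cell gains 5 m, one loses 4 m, the rest are stable.
t1 = np.array([[0.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 2.0]])
t2 = np.array([[0.0, 5.0, 0.0], [0.0, 4.1, 0.0], [0.0, 0.0, -2.0]])
mask = detect_height_changes(t1, t2)
```

In practice the same differencing can be applied channel-wise to the multispectral intensity rasters, with a threshold tuned per channel.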

    Object-based Interpretation Methods for Mapping Built-up Areas

    Constituent publications: Publication 1: Leena Matikainen, Juha Hyyppä, and Marcus E. Engdahl. 2006. Mapping built-up areas from multitemporal interferometric SAR images - A segment-based approach. Photogrammetric Engineering and Remote Sensing, volume 72, number 6, pages 701-714. Publication 2: Leena Matikainen, Juha Hyyppä, and Hannu Hyyppä. 2003. Automatic detection of buildings from laser scanner data for map updating. In: Hans-Gerd Maas, George Vosselman, and Andre Streilein (editors). Proceedings of the ISPRS Working Group III/3 Workshop on 3-D Reconstruction from Airborne Laserscanner and InSAR Data. Dresden, Germany. 8-10 October 2003. International Society for Photogrammetry and Remote Sensing. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, volume 34, part 3/W13, pages 218-224. ISSN 1682-1750. Publication 3: Leena Matikainen, Juha Hyyppä, and Harri Kaartinen. 2009. Comparison between first pulse and last pulse laser scanner data in the automatic detection of buildings. Photogrammetric Engineering and Remote Sensing, volume 75, number 2, pages 133-146. Publication 4: Leena Matikainen. 2006. Improving automation in rule-based interpretation of remotely sensed data by using classification trees. The Photogrammetric Journal of Finland, volume 20, number 1, pages 5-20. Publication 5: Leena Matikainen, Juha Hyyppä, Eero Ahokas, Lauri Markelin, and Harri Kaartinen. 2010. Automatic detection of buildings and changes in buildings for updating of maps. Remote Sensing, volume 2, number 5, pages 1217-1248. Publication 6: Leena Matikainen and Kirsi Karila. 2011. Segment-based land cover mapping of a suburban area - Comparison of high-resolution remotely sensed datasets using classification trees and test field points. Remote Sensing, volume 3, number 8, pages 1777-1804. There is a growing demand for high-quality spatial data and for efficient methods of updating spatial databases. 
In the present study, automated object-based interpretation methods were developed and tested for coarse land use mapping, detailed land cover and building mapping, and change detection of buildings. Various modern remotely sensed datasets were used in the study. An automatic classification tree method was applied to building detection and land cover classification to automate the development of classification rules. A combination of a permanent land cover classification test field and the classification tree method was suggested and tested to allow rapid analysis and comparison of new datasets. The classification and change detection results were compared with up-to-date map data or reference points to evaluate their quality. The combined use of airborne laser scanner data and digital aerial imagery gave promising results for topographic mapping. In automated building detection using laser scanner and aerial image data, 96% of all buildings larger than 60 m² were correctly detected. This accuracy level (96%) is compatible with operational quality requirements. In automated change detection, about 80% of all reference buildings were correctly classified. The overall accuracy of a land cover classification into buildings, trees, vegetated ground and non-vegetated ground using laser scanner and aerial image data was 97% compared with reference points. When aerial image data alone were used, the accuracy was 74%. A comparison between first pulse and last pulse laser scanner data in building detection was also carried out. The comparison showed that the use of last pulse data instead of first pulse data can improve the building detection results. The results yielded by automated interpretation methods could be helpful in the manual updating process of a topographic database. The results could also be used as the basis for further automated processing steps to delineate and reconstruct objects. 
The synthetic aperture radar (SAR) and optical satellite image data used in the study have their main potential in land cover monitoring applications. The coarse land use classification of a multitemporal interferometric SAR dataset into built-up areas, forests and open areas led to an overall accuracy of 97% when compared with reference points. This dataset also appeared to be promising for classifying built-up areas into subclasses according to building density. Important topics for further research include more advanced interpretation methods, new and multitemporal datasets, optimal combinations of the datasets, and wider sets of objects and classes. From the practical point of view, work is needed in fitting automated interpretation methods into operational mapping processes and in further testing of the methods.
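The classification tree method used for building detection learns threshold rules such as "high above ground and low vegetation index implies building" directly from training data. A minimal sketch of that idea with scikit-learn; the features, value ranges, and labels below are made-up toy data, not the study's training set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200

# Toy training data: buildings are tall and non-vegetated, the rest are
# low and vegetated. Features: height above ground (m), vegetation index.
height = np.concatenate([rng.uniform(3, 15, n), rng.uniform(0, 1, n)])
veg_index = np.concatenate([rng.uniform(-0.2, 0.2, n), rng.uniform(0.3, 0.8, n)])
X = np.column_stack([height, veg_index])
y = np.array([1] * n + [0] * n)  # 1 = building, 0 = other

# The tree derives the split thresholds automatically from the samples.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = tree.predict([[10.0, 0.0],   # tall, non-vegetated segment
                     [0.3, 0.6]])   # low, vegetated segment
```

The appeal noted in the abstract is exactly this automation: the rules need not be hand-tuned, so a new dataset can be analysed by simply retraining on the permanent test field.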

    The effect of seasonal variation on automated land cover mapping from multispectral airborne laser scanning data

    Multispectral airborne laser scanning (MS-ALS) sensors are a new promising source of data for automated mapping methods. Finding an optimal time for data acquisition is important in all mapping applications based on remotely sensed datasets. In this study, three MS-ALS datasets acquired at different times of the growing season were compared for automated land cover mapping and road detection in a suburban area. In addition, changes in the intensity were studied. An object-based random forest classification was carried out using reference points. The overall accuracy of the land cover classification was 93.9% (May dataset), 96.4% (June) and 95.9% (August). The use of the May dataset acquired under leafless conditions resulted in more complete roads than the other datasets acquired when trees were in leaf. It was concluded that all datasets used in the study are applicable for suburban land cover mapping, although small differences in accuracy between land cover classes exist.
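The object-based random forest workflow mentioned above can be sketched in two steps: aggregate per-pixel features into per-segment features, then train a random forest on labelled reference segments. Everything below (segment layout, channel values, labelling rule) is toy data under assumed conventions, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def segment_features(pixels, segment_ids):
    """Mean feature vector per segment (rows: segments, columns: features)."""
    ids = np.unique(segment_ids)
    return ids, np.array([pixels[segment_ids == s].mean(axis=0) for s in ids])

# Toy image: 100 pixels with 3 intensity channels, grouped into 10 segments
# of 10 pixels each; the first five segments are bright in channel 0.
pixels = rng.uniform(0, 1, (100, 3))
pixels[:50, 0] += 1.0
seg_ids = np.repeat(np.arange(10), 10)
ids, feats = segment_features(pixels, seg_ids)

# Toy reference labels per segment, then an object-based random forest.
labels = (feats[:, 0] > 1.0).astype(int)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)
pred = rf.predict(feats)
```

Classifying segments rather than pixels is what makes the result "object-based": each homogeneous region receives a single label, which suppresses pixel-level noise.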

    Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms; per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems to assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods show significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. 
The primary difference though is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and urban land cover, and in terms of its range of urban applications. We demonstrate the overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.
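The NDVI mentioned above as the classic per-pixel environmental index is a simple band ratio, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch with toy reflectance values (the arrays are illustrative, not real imagery):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, computed per pixel.
    eps guards against division by zero over dark pixels."""
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance bands: top-left pixel is vegetated (high NIR, low Red),
# the others are bare or built surfaces with NIR roughly equal to Red.
nir = np.array([[0.6, 0.1], [0.5, 0.05]])
red = np.array([[0.1, 0.1], [0.1, 0.05]])
v = ndvi(nir, red)
# Vegetated pixels give values near +1; bare or built surfaces near 0.
```

This is the "per-pixel" character the chapter describes: each output value depends only on the bands of that single pixel, with no neighbourhood information.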

    Object-Based Supervised Machine Learning Regional-Scale Land-Cover Classification Using High Resolution Remotely Sensed Data

    High spatial resolution (HR) (1m – 5m) remotely sensed data in conjunction with supervised machine learning classification are commonly used to construct land-cover classifications. Despite the increasing availability of HR data, most studies investigating HR remotely sensed data and associated classification methods employ relatively small study areas. This work therefore drew on a 2,609 km², regional-scale study in northeastern West Virginia, USA, to investigate a number of core aspects of HR land-cover supervised classification using machine learning. Issues explored include training sample selection, cross-validation parameter tuning, the choice of machine learning algorithm, training sample set size, and feature selection. A geographic object-based image analysis (GEOBIA) approach was used. The data comprised National Agricultural Imagery Program (NAIP) orthoimagery and LIDAR-derived rasters. Stratified-statistical-based training sampling methods were found to generate higher classification accuracies than deliberative-based sampling. Subset-based sampling, in which training data are collected from a small geographic subset area within the study site, did not notably decrease the classification accuracy. For the five machine learning algorithms investigated, support vector machines (SVM), random forests (RF), k-nearest neighbors (k-NN), single-layer perceptron neural networks (NEU), and learning vector quantization (LVQ), increasing the size of the training set typically improved the overall accuracy of the classification. However, RF was consistently more accurate than the other four machine learning algorithms, even when trained from a relatively small training sample set. Recursive feature elimination (RFE), which can be used to reduce the dimensionality of a training set, was found to increase the overall accuracy of both SVM and NEU classification, however the improvement in overall accuracy diminished as sample size increased. 
RFE resulted in only a small improvement in the overall accuracy of RF classification, indicating that RF is generally insensitive to the Hughes Phenomenon. Nevertheless, as feature selection is an optional step in the classification process, and can be discarded if it has a negative effect on classification accuracy, it should be investigated as part of best practice for supervised machine learning land-cover classification using remotely sensed data.
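Recursive feature elimination, as used above, repeatedly fits the classifier and drops the weakest-ranked feature until the requested number remains. A small sketch with scikit-learn and a linear SVM on synthetic data, where only the first two of six features carry signal (the data and dimensions are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(2)

# 200 samples, 6 features; the class depends on features 0 and 1 only,
# so RFE should eliminate the four pure-noise features.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

svm = SVC(kernel="linear")  # linear kernel exposes coef_ for feature ranking
rfe = RFE(svm, n_features_to_select=2).fit(X, y)
kept = np.flatnonzero(rfe.support_)  # indices of the retained features
```

With tree ensembles such as RF the same procedure works using feature importances instead of SVM coefficients, which is why the abstract can compare RFE across classifier families.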

    VPRS-based regional decision fusion of CNN and MRF classifications for very fine resolution remotely sensed images

    Recent advances in computer vision and pattern recognition have demonstrated the superiority of deep neural networks using spatial feature representation, such as convolutional neural networks (CNN), for image classification. However, any classifier, regardless of its model structure (deep or shallow), involves prediction uncertainty when classifying spatially and spectrally complicated very fine spatial resolution (VFSR) imagery. We propose here to characterise the uncertainty distribution of CNN classification and integrate it into a regional decision fusion to increase classification accuracy. Specifically, a variable precision rough set (VPRS) model is proposed to quantify the uncertainty within CNN classifications of VFSR imagery, and partition this uncertainty into positive regions (correct classifications) and non-positive regions (uncertain or incorrect classifications). Those “more correct” areas were trusted by the CNN, whereas the uncertain areas were rectified by a Multi-Layer Perceptron (MLP)-based Markov random field (MLP-MRF) classifier to provide crisp and accurate boundary delineation. The proposed MRF-CNN fusion decision strategy exploited the complementary characteristics of the two classifiers based on VPRS uncertainty description and classification integration. The effectiveness of the MRF-CNN method was tested in both urban and rural areas of southern England as well as Semantic Labelling datasets. The MRF-CNN consistently outperformed the benchmark MLP, SVM, MLP-MRF and CNN and the baseline methods. This research provides a regional decision fusion framework within which to gain the advantages of model-based CNN, while overcoming the problem of losing effective resolution and uncertain prediction at object boundaries, which is especially pertinent for complex VFSR image classification.
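A much-simplified sketch of the decision-fusion idea above, not the VPRS formulation itself: keep the primary classifier's label wherever its confidence is high, and fall back to a secondary classifier elsewhere. The threshold, probabilities, and labels are toy assumptions:

```python
import numpy as np

def fuse(primary_probs, secondary_labels, tau=0.8):
    """Trust the primary classifier where its max class probability is
    at least tau; elsewhere take the secondary classifier's label."""
    primary_labels = primary_probs.argmax(axis=-1)
    confident = primary_probs.max(axis=-1) >= tau
    return np.where(confident, primary_labels, secondary_labels)

# Two pixels, two classes: the first prediction is confident, the second
# is ambiguous and therefore rectified by the secondary classifier.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45]])
secondary = np.array([0, 1])
fused = fuse(probs, secondary)
```

The paper's VPRS model replaces this single global threshold with a rough-set partition into positive and non-positive regions, but the fallback structure is analogous.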

    The role of earth observation in an integrated deprived area mapping “system” for low-to-middle income countries

    Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term of urban slums. Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11—Sustainable Cities and Communities). First, the data available for cities worldwide is patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed, and, individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived area maps. In this review, we first assess the strengths and limitations of existing deprived area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.

    Object-based image analysis for forest-type mapping in New Hampshire

    The use of satellite imagery to classify New England forests is inherently complicated due to high species diversity and complex spatial distributions across a landscape. The use of high spatial resolution imagery to classify forests has become more commonplace as new satellite technologies become available. Pixel-based methods of classification have been traditionally used to identify forest cover types. However, object-based image analysis (OBIA) has been shown to provide more accurate results. This study explored the ability of OBIA to classify forest stands in New Hampshire using two methods: by identifying stands within an IKONOS satellite image, and by identifying individual trees and building them into forest stands. Forest stands were classified in the IKONOS image using OBIA. However, the spatial resolution was not high enough to distinguish individual tree crowns and therefore, individual trees could not be accurately identified to create forest stands. In addition, the accuracy of labeling forest stands using the OBIA approach was low. In the future, these results could be improved by using a modified classification approach and an appropriate sampling scheme more reflective of object-based analysis.

    Impact of land use change on urban surface temperature and urban green space planning; case study of the island of Bali, Indonesia

    Land use and surface temperature were monitored from 1995 to 2013 to examine green space development in Bali using Landsat and ASTER imagery. Urban areas were formed by conversion of vegetation and paddy fields. Heat islands with surface temperatures of over 29 °C were found and were influenced by urban area types. High-priority, low-priority, and non-priority zones for green space were derived by a weighted overlay of LST, NDVI, and urban area types.
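The weighted overlay described above combines normalised input layers with weights and thresholds the result into priority zones. A minimal sketch; the weights, break values, and input values are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def weighted_overlay(lst, ndvi, urban, w=(0.4, 0.4, 0.2)):
    """Combine normalised layers into green-space priority zones.
    Higher surface temperature, lower NDVI, and denser urban type all
    raise the need for green space."""
    score = w[0] * lst + w[1] * (1 - ndvi) + w[2] * urban
    # 0 = not a priority, 1 = low priority, 2 = high priority
    return np.digitize(score, bins=[0.4, 0.7])

# Three toy pixels: hot/bare/dense-urban, moderate, cool/vegetated/rural.
lst = np.array([0.9, 0.5, 0.1])    # normalised land surface temperature
ndvi = np.array([0.1, 0.5, 0.9])   # normalised vegetation index
urban = np.array([1.0, 0.5, 0.0])  # normalised urban-area-type weight
z = weighted_overlay(lst, ndvi, urban)
```

In a GIS workflow the same computation is applied raster-wide, with the weights typically chosen by expert judgement or pairwise comparison.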

    Deep learning for land cover and land use classification

    Recent advances in sensor technologies have witnessed a vast amount of very fine spatial resolution (VFSR) remotely sensed imagery being collected on a daily basis. These VFSR images present fine spatial details that are spectrally and spatially complicated, thus posing huge challenges in automatic land cover (LC) and land use (LU) classification. Deep learning has reignited the pursuit of artificial intelligence towards a general-purpose machine able to perform any human-related task in an automated fashion. This is largely driven by the wave of excitement in deep machine learning to model the high-level abstractions through hierarchical feature representations without human-designed features or rules, which demonstrates great potential in identifying and characterising LC and LU patterns from VFSR imagery. In this thesis, a set of novel deep learning methods are developed for LC and LU image classification based on deep convolutional neural networks (CNN) as an example. Several difficulties, however, are encountered when trying to apply the standard pixel-wise CNN for LC and LU classification using VFSR images, including geometric distortions, boundary uncertainties and huge computational redundancy. These technical challenges for LC classification were solved either using rule-based decision fusion or through uncertainty modelling using rough set theory. For land use, an object-based CNN method was proposed, in which each segmented object (a group of homogeneous pixels) was sampled and predicted by CNN with both within-object and between-object information. LU was, thus, classified with high accuracy and efficiency. Both LC and LU formulate a hierarchical ontology at the same geographical space, and such representations are modelled by their joint distribution, in which LC and LU are classified simultaneously through iteration. 
These developed deep learning techniques achieved by far the highest classification accuracy for both LC and LU, up to around 90% accuracy, about 5% higher than the existing deep learning methods and 10% greater than traditional pixel-based and object-based approaches. This research made a significant contribution to LC and LU classification through deep-learning-based innovations and has great potential utility in a wide range of geospatial applications.
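The object-based step described above ultimately assigns one label to each segmented object. A simplified sketch of that aggregation, assuming per-pixel class predictions are already available (the prediction and segment arrays are toy data, and the majority vote here stands in for the thesis's richer within-object and between-object sampling):

```python
import numpy as np

def object_labels(pixel_pred, segment_ids):
    """Collapse per-pixel class predictions to one label per segment
    by majority vote; returns {segment_id: class_label}."""
    out = {}
    for s in np.unique(segment_ids):
        vals, counts = np.unique(pixel_pred[segment_ids == s],
                                 return_counts=True)
        out[int(s)] = int(vals[counts.argmax()])
    return out

# Six pixels in two segments: segment 0 votes for class 0, segment 1 for class 2.
pred = np.array([0, 0, 1, 2, 2, 2])
segs = np.array([0, 0, 0, 1, 1, 1])
labels = object_labels(pred, segs)
```

Aggregating to objects is what yields the crisp boundaries and efficiency gains the abstract claims over pixel-wise CNN output.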