6 research outputs found

    Comparing Three Spaceborne Optical Sensors via Fine Scale Pixel-based Urban Land Cover Classification Products

    Get PDF
    Access to higher-resolution Earth observation satellites suggests improved potential for fine-scale image classification. In this comparative study, imagery from three optical satellites (WorldView-2, Pléiades and RapidEye) was used to extract primary land cover classes with a pixel-based classification approach in a suburban area. Following a systematic working procedure, manual segmentation and vegetation indices were applied to generate smaller subsets, which in turn were used to develop sets of ISODATA unsupervised classification maps. With the focus on the land cover classification differences detected between the sensors at spectral level, the accuracies were validated and their relevance for fine-scale classification of the built-up environment was examined. If an overview of an urban area is required, RapidEye provides an above-average result (κ = 0.69) with the built-up class sufficiently extracted. In comparison, the higher-resolution sensors WorldView-2 and Pléiades delivered finer-scale accuracy at pixel and parcel level, with high correlation and accuracy levels (κ = 0.65-0.71) achieved from these two independent classifications.
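    The agreement figures reported above are kappa (κ) statistics. As a minimal illustration of how such a figure is derived, Cohen's kappa can be computed from a class confusion matrix as sketched below; the matrix values and class names are hypothetical, not data from the study:

        import numpy as np

        def cohens_kappa(confusion: np.ndarray) -> float:
            """Cohen's kappa from a square confusion matrix
            (rows: reference labels, columns: predicted labels)."""
            total = confusion.sum()
            p_o = np.trace(confusion) / total  # observed agreement
            # chance agreement from the row/column marginals
            p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
            return (p_o - p_e) / (1.0 - p_e)

        # Hypothetical 3-class example (e.g. built-up, vegetation, bare soil)
        cm = np.array([[50, 5, 3],
                       [4, 60, 6],
                       [2, 7, 40]])
        print(f"kappa = {cohens_kappa(cm):.2f}")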

    Operational Performance of an Automatic Preliminary Spectral Rule-Based Decision-Tree Classifier of Spaceborne Very High Resolution Optical Images

    No full text
    In the last 20 years, the number of spaceborne very high resolution (VHR) optical imaging sensors and the use of satellite VHR optical images have continued to increase in both quantity and quality of data. This has driven the need for automating quantitative analysis of spaceborne VHR optical imagery. Unfortunately, existing remote sensing image understanding systems (RS-IUSs) score poorly in operational contexts. In recent years, to overcome operational drawbacks of existing RS-IUSs, an original two-stage stratified hierarchical RS-IUS architecture has been proposed by Shackelford and Davis. More recently, an operational automatic pixel-based near-real-time four-band IKONOS-like spectral rule-based decision-tree classifier (ISRC) has been downscaled from an original seven-band Landsat-like SRC (LSRC). The following is true for ISRC: 1) it is suitable for mapping spaceborne VHR optical imagery radiometrically calibrated into top-of-atmosphere or surface reflectance values, and 2) it is eligible for use as the pixel-based preliminary classification first stage of a Shackelford and Davis two-stage stratified hierarchical RS-IUS architecture. Given the ISRC "full degree" of automation, which cannot be surpassed, and the ISRC computation time, which is near real time, this paper provides a quantitative assessment of ISRC accuracy and robustness to changes in the input data set, consisting of 14 multisource spaceborne images of agricultural landscapes selected across the European Union. The collected experimental results show that, first, in a dichotomous vegetation/nonvegetation classification of four synthesized VHR images at regional scale, ISRC, in comparison with LSRC, provides a vegetation detection accuracy ranging from 76% to 97%, rising to about 99% if pixels featuring a low leaf area index are not considered in the comparison. Second, in the generation of a binary vegetation mask from ten panchromatic-sharpened QuickBird-2 and IKONOS-2 images, the operational performance of ISRC is superior to that of an ordinary normalized difference vegetation index (NDVI) thresholding technique. Finally, the second-stage automatic stratified texture-based separation of low-texture annual cropland or herbaceous rangeland (land cover class AC/HR) from high-texture forest or woodland (land cover class F/W) is performed in the discrete, finite, and symbolic ISRC map domain in place of the ordinary continuously varying, subsymbolic, and multichannel texture feature domain. To conclude, this paper demonstrates that the automatic ISRC is eligible for use in operational VHR satellite-based measurement systems such as those envisaged under the ongoing Global Earth Observation System of Systems (GEOSS) and Global Monitoring for Environment and Security (GMES) international programs.
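    The ordinary NDVI-thresholding baseline that the paper benchmarks ISRC against can be sketched as follows. This is a minimal illustration, assuming red and near-infrared bands already calibrated into reflectance values in [0, 1]; the 0.3 threshold is illustrative (per-scene threshold tuning is precisely the operational drawback a rule-based classifier like ISRC is meant to avoid):

        import numpy as np

        def ndvi_vegetation_mask(red: np.ndarray, nir: np.ndarray,
                                 threshold: float = 0.3) -> np.ndarray:
            """Binary vegetation mask via NDVI thresholding."""
            # NDVI = (NIR - red) / (NIR + red); clip denominator to avoid
            # division by zero over dark pixels
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
            return ndvi > threshold

        # Hypothetical 2x2 reflectance patch
        red = np.array([[0.05, 0.30], [0.08, 0.25]])
        nir = np.array([[0.45, 0.35], [0.50, 0.28]])
        print(ndvi_vegetation_mask(red, nir))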

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    Get PDF
    By Wikipedia's definition, "big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization". Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) "big data" into timely, comprehensive and operational EO value-adding products and services, subject to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is a synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is a synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, ⊃ ESA EO Level 2 product, ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is systematic generation at the ground segment of the ESA EO Level 2 product. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial; (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
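    The two-subsystem closed-loop architecture described above can be summarized as a minimal skeleton. This is only a sketch of the described data flow, not the thesis's implementation; all class, attribute and method names here are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class Level2Product:
            """ESA EO Level 2 product as defined above: a surface-reflectance
            (SURF) image stacked with its scene classification map (SCM)
            and quality layers (e.g. cloud, cloud-shadow)."""
            surf_image: object
            scene_classification_map: object
            quality_layers: dict

        class EOIUSubsystem:
            """Primary hybrid feedback subsystem: transforms a single-date
            MS image into a Level 2 product automatically, in linear time,
            with no human-machine interaction."""
            def process(self, ms_image) -> Level2Product:
                raise NotImplementedError  # deductive + inductive inference stages

        class EOSQSubsystem:
            """Secondary subsystem: GUI-driven semantic querying (SCBIR)
            over the Level 2 products produced by the primary subsystem."""
            def query(self, products, semantic_query: str):
                raise NotImplementedError

        # Closed loop: query-stage feedback returns to the EO-IU stage, so the
        # value-added of the output products increases monotonically per iteration.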