8 research outputs found

    Object-Based Greenhouse Classification from GeoEye-1 and WorldView-2 Stereo Imagery

    Get PDF
    Remote sensing technologies have been commonly used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) were used to carry out land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories, such as basic spectral information, elevation data (normalized digital surface model; nDSM), band indexes and ratios, texture and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of the nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
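    The classification setup described above can be illustrated with a minimal, hypothetical sketch: per-object spectral means plus a mean nDSM value are fed to a support vector machine, one of the two classifiers compared in the abstract. All feature values, class labels and parameters below are synthetic placeholders, not data or settings from the paper.

```python
# Minimal sketch of object-based classification with spectral and height
# features, assuming per-object features have already been extracted from
# the pan-sharpened orthoimage and the stereo-derived nDSM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic object-level features: four band means (e.g. GE1) plus mean nDSM.
n_objects = 500
X = rng.normal(size=(n_objects, 5))
# Toy labels: "greenhouse" objects tend to have a higher nDSM (last column).
y = (X[:, 4] + 0.3 * rng.normal(size=n_objects) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# SVM classifier, one of the two classifiers compared in the paper.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print("Overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```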

    A PERCEPTRON-BASED FEATURE SELECTION APPROACH FOR DECISION TREE CLASSIFICATION

    Get PDF
    The use of OBIA for high spatial resolution image classification can be divided into two main steps: the first is segmentation and the second is the labeling of the objects according to a particular set of features and a classifier. Decision trees are often used to represent human knowledge in the latter. The issue lies in how to select a smaller number of features from a feature space with spatial, spectral and textural variables to describe the classes of interest, which raises the question of choosing the best or most convenient feature selection (FS) method. In this work, an approach for FS within a decision tree was introduced using a single perceptron and the Backpropagation algorithm. Three alternatives were compared: single, double and multiple inputs, using a sequential backward search (SBS). Test regions were used to evaluate the efficiency of the proposed methods. Results showed that it is possible to use a single perceptron in each node, with an overall accuracy (OA) between 77.6% and 77.9%. Only SBS reached an OA larger than 88%. Thus, the quality of the proposed solution depends on the number of input features.
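    A rough sketch of the sequential backward search (SBS) variant mentioned above is given below, using scikit-learn's Perceptron in place of the paper's single perceptron trained with Backpropagation. The scoring scheme, stopping criterion and synthetic data are assumptions for illustration only.

```python
# Hedged sketch of sequential backward search (SBS) driven by a single
# perceptron, approximating the node-level feature selection described in
# the abstract.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

def sbs_perceptron(X, y, min_features=3):
    """Drop one feature at a time, removing the feature whose removal
    hurts the perceptron's cross-validated accuracy the least."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > min_features:
        scores = []
        for f in remaining:
            subset = [i for i in remaining if i != f]
            acc = cross_val_score(Perceptron(max_iter=1000),
                                  X[:, subset], y, cv=3).mean()
            scores.append((acc, f))
        _, worst_feature = max(scores)   # best accuracy without this feature
        remaining.remove(worst_feature)
    return remaining

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))           # spectral/spatial/textural features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # toy binary node decision
print("Selected features:", sbs_perceptron(X, y))
```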

    Using vector agents to implement an unsupervised image classification algorithm

    Get PDF
    Unsupervised image classification methods conventionally use the spatial information of pixels to reduce the effect of speckle noise in the classified map. To extract this spatial information, they employ a predefined geometry, i.e., a fixed-size window or segmentation map. However, this coding of geometry lacks the complexity needed to accurately reflect the spatial connectivity within objects in a scene. Additionally, there is no unique mathematical formula to determine the shape and scale applied to the geometry; these parameters are usually estimated by expert users. In this paper, a novel geometry-led approach using Vector Agents (VAs) is proposed to address the above drawbacks of unsupervised classification algorithms. Our proposed method has two primary steps: (1) creating reliable training samples and (2) constructing the VA model. In the first step, the method uses the statistical information of an image classified by k-means to select a set of reliable training samples. Then, in the second step, the VAs are trained and constructed to classify the image. The model is tested for classification on three high spatial resolution images. The results show the enhanced capability of the VA model to reduce noise in images that have complex features, e.g., streets and buildings.
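    Step (1) of the method, selecting reliable training samples from k-means statistics, might look roughly like the following sketch; the choice of keeping the pixels closest to each cluster centre and the 10% cut-off are assumptions, not details taken from the paper.

```python
# Minimal sketch of reliable-training-sample selection: cluster the pixels
# with k-means and keep only those nearest their cluster centre.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
pixels = rng.normal(size=(10_000, 4))       # flattened multispectral image

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
dist = np.linalg.norm(pixels - km.cluster_centers_[km.labels_], axis=1)

reliable = np.zeros(len(pixels), dtype=bool)
for c in range(k):
    in_c = km.labels_ == c
    # Keep the 10% of pixels nearest the centre of each cluster (assumed cut-off).
    cutoff = np.percentile(dist[in_c], 10)
    reliable |= in_c & (dist <= cutoff)

print("Reliable training samples:", reliable.sum(), "of", len(pixels))
```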

    Object oriented image analysis based on multi-agent recognition system

    No full text
    In this paper, the capabilities of multi-agent systems are used to solve object recognition difficulties in complex urban areas based on the characteristics of WorldView-2 satellite imagery and a digital surface model (DSM). The proposed methodology has three main steps: pre-processing of the dataset, object-based image analysis and multi-agent object recognition. Classified regions obtained from object-based image analysis are used as input to the proposed multi-agent system in order to modify and improve the results. In the first operational level of the proposed multi-agent system, various kinds of object recognition agents modify the initial classified regions based on their spectral, textural and 3D structural knowledge. Then, in the second operational level, 2D structural knowledge and contextual relations are used by the agents for reasoning and modification. The capabilities of the proposed object recognition methodology are evaluated on WorldView-2 imagery over Rio de Janeiro (Brazil) collected in January 2010. According to the results of the object-based image analysis process, contextual relations and structural descriptors have high potential to resolve general difficulties of object recognition. Using the knowledge-based reasoning and cooperative capabilities of the agents in the proposed multi-agent system, most of the remaining difficulties are reduced and the accuracy of the object-based image analysis results is improved by about three percent.
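    The first operational level, in which class-specific agents revise initially classified regions using spectral and 3D structural knowledge, could be sketched as below. The classes, rules and thresholds are invented for illustration and do not reproduce the paper's knowledge base.

```python
# Illustrative sketch of class-specific agents relabelling regions that
# violate simple (invented) spectral and height rules.
from dataclasses import dataclass

@dataclass
class Region:
    label: str
    mean_ndvi: float   # spectral knowledge
    mean_ndsm: float   # 3D structural knowledge (metres above ground)

class BuildingAgent:
    """Checks regions initially labelled 'building'."""
    def inspect(self, r):
        if r.label == "building":
            if r.mean_ndvi > 0.5:          # too green to be a roof
                r.label = "vegetation"
            elif r.mean_ndsm < 2.0:        # no height above ground
                r.label = "road"
        return r

class VegetationAgent:
    """Splits 'vegetation' into trees and grass by height."""
    def inspect(self, r):
        if r.label == "vegetation":
            r.label = "tree" if r.mean_ndsm >= 2.0 else "grass"
        return r

regions = [Region("building", 0.10, 0.4),
           Region("building", 0.65, 6.0),
           Region("vegetation", 0.70, 5.2)]
for agent in (BuildingAgent(), VegetationAgent()):
    regions = [agent.inspect(r) for r in regions]
print([r.label for r in regions])   # ['road', 'tree', 'tree']
```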

    Geographic Vector Agents from Pixels to Intelligent Processing Units

    Get PDF
    Spatial modelling methods usually utilise pixels and image objects as the fundamental processing unit to address real-world objects (geo-objects) in image space. To do this, both pixel-based and object-based approaches typically employ a linear two-stage workflow of segmentation and classification. Pixel-based methods often segment a classified image to address geo-objects in image space. In contrast, object-based approaches classify a segmented image to determine geo-objects. These methods lack the ability to simultaneously integrate the geometry and theme of geo-objects in image space. This thesis explores Vector Agents (VA) as an automated and intelligent processing unit to directly address real-world objects in image space. A VA is an object that can represent (non)dynamic and (ir)regular vector boundaries (Moore, 2011; Hammam et al., 2007). This aim is achieved by modelling the geometry, state, and temporal changes of geo-objects in geographic space. To reach this aim, we first defined and formulated the main components of the VA, including geometry, state and neighbourhood, and their respective rules in accordance with the properties of raster datasets (e.g. satellite images) as a representation of a geographical space (the Earth). The geometry of the VA was formulated according to a directional planar graph that includes a set of spatial reasoning relationships and geometric operators, in order to implement a set of dynamic geometric behaviours, such as growing, joining or splitting. Transition rules were defined by using a classifier (e.g. Support Vector Machines (SVMs)), a set of image analysis operators (e.g. edge detection, median filter), and the characteristics of the objects in the real world. VAs used the transition rules to find and update their states in image space. The proximity between VAs was explicitly formulated according to the minimum distance between VAs in image space. These components were then used to model the main elements of our software agents (e.g. geo-objects), namely sensors, effectors, states, rules and strategies. These elements allow a VA to perceive its environment, change its geometry and interact with other VAs so that its geometry evolves consistently with its thematic meaning. They also enable VAs to adjust their thematic meaning based on changes in their own attributes and those of their neighbours. We then tested this concept by using the VA to extract geo-objects from different types of raster datasets (e.g. multispectral and hyperspectral images). The results of the VA model confirmed that: (a) the VA is flexible enough to integrate the thematic and geometric components of geo-objects in order to extract them directly from image space, and (b) the VA has sufficient capability to be applied in different areas of image analysis. We discuss the limitations of this work and present possible solutions in the last chapter.
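    The dynamic geometric behaviour of a VA, such as growing, can be illustrated with a much-simplified sketch in which the agent's geometry is a set of raster cells rather than the thesis's planar-graph boundary; the flood-fill growth rule below is an assumption used only to convey the idea.

```python
# Much-simplified sketch of a vector-agent-style growing behaviour: an agent
# starts from a seed cell and absorbs 4-connected neighbours whose classified
# label matches its state.
import numpy as np

def grow_agent(class_map, seed):
    state = class_map[seed]
    geometry = {seed}
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < class_map.shape[0] and 0 <= nc < class_map.shape[1]
                    and (nr, nc) not in geometry
                    and class_map[nr, nc] == state):
                geometry.add((nr, nc))
                frontier.append((nr, nc))
    return state, geometry

class_map = np.array([[1, 1, 2],
                      [1, 2, 2],
                      [3, 2, 2]])
state, geom = grow_agent(class_map, (0, 0))
print("state:", state, "cells:", sorted(geom))
```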

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book

    Get PDF