34 research outputs found

    DynamicEarthNet: Daily Multi-Spectral Satellite Dataset for Semantic Change Segmentation

    Earth observation is a fundamental tool for monitoring the evolution of land use in specific areas of interest. Observing and precisely defining change, in this context, requires both time-series data and pixel-wise segmentations. To that end, we propose the DynamicEarthNet dataset, which consists of daily, multi-spectral satellite observations of 75 selected areas of interest distributed over the globe, with imagery from Planet Labs. These observations are paired with pixel-wise monthly semantic segmentation labels of 7 land use and land cover (LULC) classes. DynamicEarthNet is the first dataset that provides this unique combination of daily measurements and high-quality labels. In our experiments, we compare several established baselines that either utilize the daily observations as additional training data (semi-supervised learning) or multiple observations at once (spatio-temporal learning) as a point of reference for future research. Finally, we propose a new evaluation metric, SCS, that addresses the specific challenges associated with time-series semantic change segmentation. The data is available at: https://mediatum.ub.tum.de/1650201
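
    The pairing of daily imagery with monthly semantic labels means change can be read directly off consecutive label maps. As a minimal illustration of semantic change between two monthly label maps (not the paper's SCS metric, whose definition is not given in the abstract; the class IDs and toy maps below are hypothetical):

```python
# Minimal sketch: derive a per-pixel semantic change mask from two
# consecutive monthly LULC label maps. The class IDs and toy maps are
# hypothetical; this is NOT the SCS metric proposed in the paper.

def change_mask(labels_prev, labels_curr):
    """Return a binary mask marking pixels whose LULC class changed."""
    assert len(labels_prev) == len(labels_curr)
    return [
        [int(a != b) for a, b in zip(row_p, row_c)]
        for row_p, row_c in zip(labels_prev, labels_curr)
    ]

# Toy 3x3 label maps over 7 LULC classes (0..6).
month_1 = [[0, 0, 2],
           [0, 1, 2],
           [3, 3, 3]]
month_2 = [[0, 0, 2],
           [4, 4, 2],
           [3, 3, 3]]

mask = change_mask(month_1, month_2)
changed = sum(sum(row) for row in mask)  # 2 pixels changed class
```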

    A Self-Supervised Decision Fusion Framework for Building Detection

    In this study, a new building detection framework for monocular satellite images, called self-supervised decision fusion (SSDF), is proposed. The model is based on the idea of self-supervision, which aims to generate training data automatically from each individual test image, without human interaction. This approach allows us to use the advantages of supervised classifiers in a fully automated framework. We combine our previous supervised and unsupervised building detection frameworks into a self-supervised learning architecture. Hence, we borrow the major strength of the unsupervised approach to obtain one of the most important clues: the relation between a building and its cast shadow. This information is then used to satisfy the requirement of training sample selection. Finally, an ensemble learning algorithm, called fuzzy stacked generalization (FSG), fuses a set of supervised classifiers trained on the automatically generated dataset with various shape, color, and texture features. We assessed the building detection performance of the proposed approach over 19 test sites and compared our results with state-of-the-art algorithms. Our experiments show that the supervised building detection method requires more than 30% of the ground truth (GT) training data to reach the performance of the proposed SSDF method. Furthermore, the SSDF method increases the F-score by 2 percentage points (p.p.) on average compared to the performance of the unsupervised method.
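
    The core idea, generating training samples from the test image itself via the building/shadow cue, can be sketched as follows. The shadow-adjacency rule, distance thresholds, features, and nearest-centroid classifier below are invented simplifications for illustration, not the SSDF pipeline itself:

```python
# Sketch of self-supervised sample selection: segments adjacent to a
# detected shadow are taken as positive building samples, segments far
# from any shadow as negatives, and a simple classifier is trained on
# them. All thresholds, features, and the nearest-centroid rule are
# hypothetical simplifications of the SSDF pipeline.

def select_training_samples(segments, near=1.0, far=10.0):
    """Auto-label segments using the building/shadow spatial cue."""
    positives, negatives = [], []
    for seg in segments:
        if seg["dist_to_shadow"] <= near:
            positives.append(seg["features"])
        elif seg["dist_to_shadow"] >= far:
            negatives.append(seg["features"])
    return positives, negatives

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features, pos_centroid, neg_centroid):
    """Nearest-centroid decision: 1 = building, 0 = background."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return int(dist(features, pos_centroid) < dist(features, neg_centroid))

segments = [
    {"features": [0.9, 0.8], "dist_to_shadow": 0.5},   # near a shadow
    {"features": [0.8, 0.9], "dist_to_shadow": 0.8},   # near a shadow
    {"features": [0.1, 0.2], "dist_to_shadow": 15.0},  # far from shadows
    {"features": [0.2, 0.1], "dist_to_shadow": 12.0},  # far from shadows
]
pos, neg = select_training_samples(segments)
label = classify([0.85, 0.75], centroid(pos), centroid(neg))  # -> 1
```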

    Automated Detection of Arbitrarily Shaped Buildings in Complex Environments From Monocular VHR Optical Satellite Imagery

    This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images. First, we investigate the shadow evidence to focus on building regions. To do that, we propose a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows. Once all landscapes are collected, a pruning process is developed to eliminate the landscapes that may occur due to non-building objects. The final building regions are detected by the GrabCut partitioning approach. In this paper, the input requirements of GrabCut partitioning are automatically extracted from the previously determined shadow and landscape regions, so that the approach achieves fully automated building detection. Extensive experiments performed on 20 test sites selected from a set of QuickBird and GeoEye-1 VHR images showed that the proposed approach accurately detects buildings with arbitrary shapes and sizes in complex environments. The tests also revealed that even under challenging environmental and illumination conditions, reasonable building detection performance could be achieved by the proposed approach.

    Building Detection With Decision Fusion

    A novel decision fusion approach to the building detection problem in VHR optical satellite images is proposed. The method combines the detection results of multiple classifiers under a hierarchical architecture, called Fuzzy Stacked Generalization (FSG). After an initial segmentation and pre-processing step, a large variety of color, texture, and shape features are extracted from each segment. Then, the segments, represented in different feature spaces, are classified by different base-layer classifiers of the FSG architecture. The class membership values of the segments, which represent the decisions of the different base-layer classifiers in a decision space, are aggregated to form a fusion space, which is then fed to a meta-layer classifier of the FSG to label the vectors in the fusion space. The paper presents the performance results of the proposed decision fusion model in comparison with state-of-the-art machine learning algorithms. The results show that fusing the decisions of multiple classifiers improves performance when they are ensembled under the suggested hierarchical learning architecture.
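
    The base-layer/meta-layer mechanics described above can be sketched as follows; the toy linear classifiers, two-class setup, and threshold rule are illustrative stand-ins for the FSG architecture, not its actual components:

```python
# Sketch of decision fusion in the FSG style: each base-layer
# classifier emits class membership values for a segment; these are
# concatenated into a fusion-space vector, which a meta-layer
# classifier then labels. The base classifiers and the meta rule here
# are hypothetical toys.

def base_classifier(weights):
    """Build a classifier mapping a feature vector to normalized
    two-class membership values [non-building, building]."""
    def predict(features):
        score = sum(w * f for w, f in zip(weights, features))
        score = min(max(score, 0.0), 1.0)  # clamp to a valid membership
        return [1.0 - score, score]
    return predict

# Two base-layer classifiers operating on different feature spaces.
color_clf = base_classifier([0.7, 0.3])
texture_clf = base_classifier([0.2, 0.8])

def fusion_vector(color_feats, texture_feats):
    """Concatenate base-layer memberships into the fusion space."""
    return color_clf(color_feats) + texture_clf(texture_feats)

def meta_classifier(fused):
    """Linear meta-layer rule on the fusion-space vector: sum the
    'building' memberships contributed by each base classifier."""
    building_support = fused[1] + fused[3]
    return int(building_support > 1.0)

fused = fusion_vector([0.9, 0.8], [0.7, 0.9])
decision = meta_classifier(fused)  # -> 1: segment labeled as building
```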

    Segmentation Fusion for Building Detection Using Domain-Specific Information

    Segment-based classification is one of the popular approaches for object detection, where the performance of the classification task is sensitive to the accuracy of the initial segmentation output. The majority of object detection systems directly use one of the generic segmentation algorithms, such as mean shift or k-means. However, depending on the problem domain, the properties of the regions suitable for classification, such as size, color, texture, and shape, may vary. Moreover, fine-tuning the segmentation parameters for one set of regions may not provide a globally acceptable solution in the remote sensing domain, since the characteristic properties of a class can change across regions due to cultural and environmental factors. In this study, we propose a domain-specific segmentation method for building detection, which integrates information related to the building detection problem into the detection system during the segmentation step. Buildings in a remotely sensed image are distinguished from the highly cluttered background mostly by their rectangular shapes, roofing material, and associated shadows. The proposed method fuses the information extracted from a set of unsupervised segmentation outputs together with this a priori information about the building object, called domain-specific information (DSI), during the segmentation process. Finally, the segmentation output is provided to a two-layer decision fusion algorithm for building detection. The advantage of domain-specific segmentation over the state-of-the-art methods is observed both quantitatively, by measuring the segmentation and detection performances, and qualitatively, by visual inspection.
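
    One common way to fuse several unsupervised segmentation outputs, shown here purely as an illustrative stand-in for the paper's DSI-guided fusion, is region intersection: two pixels stay in the same fused region only if every input segmentation places them together. The toy label maps below are hypothetical, and the building-specific cues (shape, roofing material, shadows) are omitted:

```python
# Sketch of segmentation fusion by region intersection: each pixel's
# fused region is determined by the tuple of labels it receives from
# all input segmentations. Toy label maps; the paper's method
# additionally injects domain-specific building information (DSI),
# which is omitted here.

def fuse_segmentations(seg_maps):
    """Map each tuple of per-segmentation labels to a fused region id."""
    height, width = len(seg_maps[0]), len(seg_maps[0][0])
    region_ids, fused = {}, []
    for y in range(height):
        row = []
        for x in range(width):
            key = tuple(seg[y][x] for seg in seg_maps)
            if key not in region_ids:
                region_ids[key] = len(region_ids)
            row.append(region_ids[key])
        fused.append(row)
    return fused

# Two 2x3 segmentations that disagree on the middle column.
seg_a = [[0, 0, 1],
         [0, 0, 1]]
seg_b = [[0, 1, 1],
         [0, 1, 1]]
fused = fuse_segmentations([seg_a, seg_b])
# Three fused regions arise from the label tuples (0,0), (0,1), (1,1).
```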

    AUTOMATIC BUILDING DETECTION WITH FEATURE SPACE FUSION USING ENSEMBLE LEARNING

    This paper proposes a novel approach to the building detection problem in satellite images. The proposed method employs a two-layer hierarchical classification mechanism for ensemble learning. After an initial segmentation, each segment is classified by N different classifiers using different features at the first layer. The class membership values of the segments, which are obtained from the different base-layer classifiers, are ensembled to form a new fusion space, which forms a linearly separable simplex. Then, this simplex is partitioned by a linear classifier at the meta layer. The paper presents the performance results of the proposed model and comparisons with state-of-the-art classifiers.

    Segmentation and decision fusion for building detection

    Segment-based classification is one of the popular approaches for object detection, where the performance of the classification task is sensitive to the accuracy of the initial segmentation output. Most such studies use generic segmentation methods and assume that the segmentation output is compatible with the subsequent classification method. However, depending on the problem domain, the properties of the regions suitable for classification, such as size and shape, may vary. In this study, we propose a domain-specific segmentation method for building detection. The contribution of the domain-specific segmentation is analyzed empirically. For this purpose, the decision fusion method is first employed for building detection on the outputs of state-of-the-art segmentation methods, then on the output of the domain-specific segmentation method, and the classification performances for each method are compared. The advantage of domain-specific segmentation is observed quantitatively, and satisfactory results are obtained.