
    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, QuickBird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. This limitation may lead to confusion between materials such as different roofs, pavements, and roads, and may therefore result in misinterpretation and misuse of classification products. Employing hyperspectral data is another solution, but its low spatial resolution (compared to multispectral data) restricts its use for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a sound way of combining multisource data following consensus theory. The classification is not affected by the limitations of dimensionality, and the computational complexity depends primarily on the dimensionality reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or the full band set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport facilities, forest, roads, and railroads.
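
    As a hedged illustration of the three INFOFUSE stages named above (fission, clustering, aggregation), the following Python sketch fuses pre-aligned rasters; the cluster count, classifier choice, and all function names are illustrative assumptions, not the published implementation.

        # Minimal INFOFUSE-style sketch: per-source clustering onto a finite
        # domain, then supervised aggregation. An assumption-laden sketch,
        # not the authors' code.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neural_network import MLPClassifier

        def infofuse_sketch(sources, train_mask, train_labels, n_words=32):
            """sources: list of (H, W, C) arrays (e.g. SAR, MSI, DSM)."""
            h, w = sources[0].shape[:2]
            coded = []
            for src in sources:
                # Fission: per-pixel features; clustering: map each source
                # onto a finite alphabet of n_words cluster indices.
                feats = src.reshape(-1, src.shape[2])
                words = KMeans(n_clusters=n_words, n_init=10).fit_predict(feats)
                coded.append(np.eye(n_words)[words])      # one-hot "words"
            coded = np.hstack(coded)
            # Aggregation: a small neural network fuses the coded sources.
            clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
            clf.fit(coded[train_mask.ravel()], train_labels)
            return clf.predict(coded).reshape(h, w)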

    Compressive Sensing for PAN-Sharpening

    Based on the compressive sensing framework and sparse reconstruction techniques, a new pan-sharpening method named Sparse Fusion of Images (SparseFI, pronounced "sparsify") was proposed in [1]. In this paper, the SparseFI algorithm is validated using UltraCam and WorldView-2 data. Visual and statistical analyses show the superior performance of SparseFI compared to existing conventional pan-sharpening methods: in general it is richer in spatial information and introduces less spectral distortion. Moreover, popular quality assessment metrics are employed to explore the dependency on regularization parameters and to evaluate the efficiency of various sparse reconstruction toolboxes.
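
    The core SparseFI step can be sketched as follows; the dictionary construction, patch handling, and solver choice here are illustrative assumptions, not the validated implementation.

        # SparseFI-style patch reconstruction: code a low-resolution
        # multispectral patch sparsely in a dictionary of degraded PAN
        # patches, then reuse the sparse code with the high-resolution
        # PAN dictionary. A sketch only.
        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def sparsefi_patch(ms_patch_lr, D_lr, D_hr, n_nonzero=8):
            """ms_patch_lr: flattened LR MS patch; D_lr/D_hr: column-aligned
            dictionaries of degraded and full-resolution PAN patches."""
            # Sparse coding: ms_patch_lr ~ D_lr @ alpha with few nonzeros.
            alpha = orthogonal_mp(D_lr, ms_patch_lr, n_nonzero_coefs=n_nonzero)
            # Detail injection: same coefficients, high-resolution dictionary.
            return D_hr @ alpha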

    Alphabet-based Multisensory Data Fusion and Classification using Factor Graphs

    The method of multisensory data integration is a crucial step of any data fusion scheme. Different physical types of sensors (optical, thermal, acoustic, or radar) with different resolutions, and different types of GIS digital data (elevation, vector maps), require a proper method for data integration. Incommensurability of the data may preclude the use of conventional statistical methods for their fusion and processing. A correct and established way of integrating multisensory data is required to deal with such incommensurable data, as employing an inappropriate methodology may lead to errors in the fusion process. Several strategies have been developed for proper multisensory data fusion (Bayesian inference, linear and log-linear opinion pools, neural networks, and fuzzy logic approaches). The employment of these approaches is motivated by weighted consensus theory, which leads to fusion processes that are performed correctly across a variety of data properties.
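
    For concreteness, the two classical consensus rules mentioned above can be written in a few lines; the weights and probabilities below are illustrative assumptions.

        # Weighted consensus over per-sensor class posteriors: a linear
        # opinion pool (weighted average) and a log-linear pool (weighted
        # geometric mean, renormalized).
        import numpy as np

        def linear_pool(probs, weights):
            """probs: (n_sensors, n_classes) posteriors; weights sum to 1."""
            return np.average(probs, axis=0, weights=weights)

        def log_linear_pool(probs, weights, eps=1e-12):
            logp = np.log(probs + eps) * np.asarray(weights)[:, None]
            fused = np.exp(logp.sum(axis=0))
            return fused / fused.sum()                    # renormalize

        # Example: an optical sensor favors class 0, a radar sensor class 1.
        p = np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1]])
        print(linear_pool(p, [0.5, 0.5]))                 # [0.5 0.4 0.1]
        print(log_linear_pool(p, [0.5, 0.5]))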

    Haze compensation and atmospheric correction for Sentinel-2 data

    Sentinel-2 data offer the opportunity to analyze land cover with high spatial accuracy over a wide swath. Nevertheless, the high data volume requires a per-granule analysis. This may lead to border effects (differences in radiance/reflectance values) between neighboring granules during atmospheric correction. If there is a high variation of the aerosol optical thickness (AOT) across the granules, especially in the case of haze, the atmospherically corrected mosaicked products often show granule border effects. To overcome this artifact, dehazing is performed prior to the atmospheric correction. The dehazing compensates only for the haze thickness, leaving the AOT fraction for further estimation and compensation in the atmospheric correction chain. This approach results in a smoother AOT map estimate and a corresponding bottom-of-atmosphere (BOA) reflectance with no border artifacts. A digital elevation model (DEM) is employed, allowing better labeling of haze and higher dehazing accuracy. The DEM analysis rejects high-elevation areas where bright surfaces might erroneously be classified as haze, thus reducing the probability of misclassification. An example of a numerical evaluation of the atmospheric correction products (AOT and BOA reflectance) is given. It demonstrates a smooth transition between the granules in the AOT map, leading to a proper estimate of the BOA reflectance data. The dehazing and atmospheric correction are implemented in DLR's ATCOR software.
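
    A minimal sketch of the dehazing idea, assuming an additive haze model and simple dark-object statistics; the window size, elevation threshold, and smoothing below are assumptions, not the ATCOR implementation.

        # Estimate a haze thickness map (HTM) from local dark objects,
        # reject high-elevation areas using the DEM, and subtract only the
        # haze part, leaving the AOT fraction for the correction chain.
        import numpy as np
        from scipy.ndimage import minimum_filter, gaussian_filter

        def dehaze_sketch(band, dem, win=25, max_elev=1500.0, smooth=10.0):
            """band: (H, W) TOA data; dem: (H, W) elevation in meters."""
            htm = minimum_filter(band, size=win).astype(float)
            # Above max_elev, bright pixels are likely snow or rock, not haze.
            htm[dem > max_elev] = htm.min()
            # Keep only low-frequency haze; high frequencies are scene content.
            htm = gaussian_filter(htm, sigma=smooth)
            return band - (htm - htm.min())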

    Atmospheric Correction in Sentinel-2 Simplified Level 2 Product Prototype Processor: Technical Aspects of Design and Implementation

    This paper presents the scientific and technical aspects of the Level 2A (atmospheric/topographic correction) processing for the Sentinel-2 Simplified Level 2 Product Prototype Processor (S2SL2PPP). Design aspects are partly fixed by ESA as the main customer. Together with the alternative atmospheric correction system MACCS, the developed chain, based on ATCOR, is used for the estimation of the following products: atmosphere type, bottom-of-atmosphere reflectance (including cirrus detection and correction), aerosol optical thickness, and water vapor. Being a mono-temporal correction chain, ATCOR requires a selection of spectral bands for the estimation of the aerosol type, the aerosol optical thickness (based on the dense dark vegetation method), and the water vapor (based on the atmospherically pre-corrected differential absorption method), as well as an estimation of the best parameter set for these methods. The parameter set was determined by a sensitivity analysis on simulated top- and bottom-of-atmosphere radiance/reflectance data derived from radiative transfer simulations. The aerosol type is estimated by comparing the path radiance ratios with the ground truth path radiance ratios of the standard atmospheres, namely rural, urban, maritime, and desert. The aerosol optical thickness and water vapor maps are initially estimated on the 20 m pixel size data; the maps are then interpolated to a pixel size of 10 m, and the 10 m reflectance data are estimated. The cirrus cloud map is created by thresholding the 1.38 µm cirrus band into thin, medium, and thick cirrus clouds. Cirrus compensation is performed by correlating the cirrus band reflectance with the reflective region bands and subtracting the cirrus contribution per band. Validation of the chain is performed given top-of-atmosphere data (as input) and bottom-of-atmosphere products (the reference). The estimated reflectance is assessed against ground truth reflectance, the aerosol optical thickness is validated against AERONET measurements, and the cirrus correction is validated using a pair of Landsat-8 scenes acquired over the same area with a small time difference: one scene is contaminated by cirrus cloud that has to be restored, while the other is cirrus free and used as reference. A comparison of the estimated products is also performed with an alternative atmospheric correction chain, FLAASH. The software is developed using the Interactive Data Language (IDL) and Python.
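
    The per-band cirrus compensation described above can be sketched as a simple regression against the 1.38 µm band; the threshold and regression form are assumptions, not the S2SL2PPP code.

        # Fit the slope of each reflective band against the cirrus band over
        # cirrus-flagged pixels, then subtract the fitted cirrus contribution.
        import numpy as np

        def remove_cirrus(band, cirrus, cirrus_thresh=0.01):
            """band, cirrus: (H, W) TOA reflectance; returns corrected band."""
            mask = cirrus > cirrus_thresh        # cirrus-contaminated pixels
            if mask.sum() < 100:
                return band                      # too little cirrus to fit
            gamma = np.polyfit(cirrus[mask], band[mask], 1)[0]
            return np.clip(band - gamma * cirrus, 0.0, None)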

    Discrete Graphical Models for Alphabet-Based Multisensory Data Fusion and Classification

    The method of multisensory data integration is a crucial step of any data fusion scheme. Different physical types of sensors (optical, thermal, acoustic, radar, etc.), different resolutions, and different types of GIS digital data (elevation, vector maps, etc.) require a proper method for data integration. Incommensurability of the data may preclude the use of conventional statistical methods for their fusion and processing. A correct and established way of integrating multisensory data is required to deal with such incommensurable data, while employing an inappropriate methodology may lead to errors in the fusion. Several methods have been developed to perform proper multisensory data fusion (weighted Bayesian approaches, linear and log-linear opinion pools, neural networks, fuzzy logic approaches, etc.). The employment of these approaches is motivated by weighted consensus theory, which allows the fusion of incommensurable data to be performed in a correct way. In this paper, we propose performing data fusion on a finite predefined domain, an alphabet. Feature extraction (data fission) is performed separately on the different data sources. The extracted features are processed so as to be represented on the predefined domain (alphabet). A factor graph (a discrete graphical model) is employed as an alternative method for data and feature aggregation. The nature of factor graphs applied to data coded on a finite domain allows us to improve the accuracy of real data fusion and classification for high resolution multispectral WorldView-2 imagery, TerraSAR-X SpotLight data, and an elevation model.
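
    A minimal discrete sketch of alphabet-based aggregation: each source emits a symbol from a finite alphabet (e.g. a cluster index), per-source factors are learned as co-occurrence tables, and classification multiplies the factors, the simplest form of factor-graph inference. The richer graph structure of the paper is not reproduced here; all names and shapes are assumptions.

        import numpy as np

        def learn_factors(symbols, labels, n_sym, n_cls, alpha=1.0):
            """symbols: (n_sources, N) ints; labels: (N,) ints.
            Returns factors of shape (n_sources, n_sym, n_cls)."""
            factors = np.full((symbols.shape[0], n_sym, n_cls), alpha)
            for s in range(symbols.shape[0]):
                np.add.at(factors[s], (symbols[s], labels), 1.0)
            # Normalize over symbols: phi_s(symbol, class) ~ P(symbol | class).
            return factors / factors.sum(axis=1, keepdims=True)

        def fuse(symbols, factors, prior):
            """Product of per-source factors and the class prior, per sample."""
            log_post = np.log(prior)[None, :]
            for s in range(symbols.shape[0]):
                log_post = log_post + np.log(factors[s][symbols[s]])
            return log_post.argmax(axis=1)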

    Information extraction using optical and radar remote sensing data fusion

    Information extraction from multi-sensor remote sensing imagery is an important and challenging task for many applications such as urban area mapping and change detection. For optical and radar data fusion in particular, a special (orthogonal) acquisition geometry is of great importance in order to minimize displacements caused by inaccuracies of the Digital Elevation Model (DEM) used for data ortho-rectification and by the presence of unknown 3D structures in a scene. The final spatial alignment of the data is performed manually using ground control points (GCPs) or by a recently proposed automatic co-registration method based on a mutual information measure. These preprocessing steps are crucial for the success of the subsequent data fusion. To combine features originating from different sources, which are quite often non-commensurable, we propose an information fusion framework called INFOFUSE, consisting of three main processing steps: feature fission (feature extraction for a complete description of a scene), unsupervised clustering (complexity reduction and feature conversion to a common domain), and supervised classification realized by Bayesian, neural, or graphical networks. Finally, a general data processing chain for multi-sensor data fusion is presented. Examples of buildings in an urban area are presented for very high resolution spaceborne optical WorldView-2 and radar TerraSAR-X imagery over Munich, Germany, in different acquisition geometries, including the orthogonal one. Additionally, a theoretical analysis of radar signatures of buildings in urban areas and their impact on joint classification or data fusion is discussed.
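
    The mutual information criterion used for automatic co-registration can be sketched as below, evaluated over integer pixel shifts only; the cited method is more elaborate, and the bin count and search radius are assumptions.

        # Joint-histogram mutual information between two image arrays, plus
        # an exhaustive search for the integer shift that maximizes it.
        import numpy as np

        def mutual_information(a, b, bins=64):
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def best_shift(opt, sar, radius=5):
            h, w = opt.shape
            r = radius
            ref = opt[r:h - r, r:w - r]
            best, best_mi = (0, 0), -np.inf
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    mi = mutual_information(ref, sar[r + dy:h - r + dy,
                                                     r + dx:w - r + dx])
                    if mi > best_mi:
                        best, best_mi = (dy, dx), mi
            return best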

    Multi-sensor data fusion for urban area classification

    Many sensors for information acquisition are now widely employed in remote sensing, and they reveal different properties of the observed objects. Unfortunately, each imaging sensor has its own limits on scene recognition in terms of thematic, temporal, and other aspects of interpretation. Integration (fusion) of different data types is expected to increase the quality of scene interpretation and decision making. Recently, the integration of synthetic aperture radar (SAR), optical, topographic, or geographic information system data has been widely performed for many tasks such as automatic classification, mapping, or interpretation. In this paper we present an approach for very high resolution multi-sensor data fusion to solve several tasks such as automatic urban area classification and change detection. Datasets of different nature are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), dimensionality reduction, and supervised classification. Fusion of WorldView-2 optical data and laser Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A numerical evaluation of the method against other established methods illustrates its advantage in the accuracy of classifying structures into low-, medium-, and high-rise buildings together with other common urban classes.
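
    To illustrate the role of the DSM in separating building height classes, here is a hedged sketch that derives a normalized DSM and thresholds it; the ground-filter size and the 10 m / 25 m height breaks are illustrative assumptions.

        # Approximate the terrain with a large grey-scale opening, take the
        # difference as object height (a normalized DSM), and bin it into
        # low-, medium-, and high-rise classes within building footprints.
        import numpy as np
        from scipy.ndimage import grey_opening

        def height_classes(dsm, building_mask, ground_win=101, bins=(10.0, 25.0)):
            """dsm: (H, W) in meters; building_mask: boolean footprints."""
            ndsm = dsm - grey_opening(dsm, size=ground_win)
            classes = np.digitize(ndsm, bins)     # 0: low, 1: mid, 2: high
            return np.where(building_mask, classes + 1, 0)   # 0 = non-building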