
    The third generation of the pan-Canadian wetland map at 10 m resolution using multisource Earth observation data on a cloud computing platform

    Development of the Canadian Wetland Inventory Map (CWIM) has thus far proceeded over two generations, reporting the extent and location of bog, fen, swamp, marsh, and water wetlands across the country with increasing accuracy. Each generation of this inventory has improved on the previous results by including additional reference wetland data and by processing at the scale of ecozones, which represent ecologically distinct regions of Canada. The first and second generations attained relatively high accuracies, averaging close to 86%, though some overestimated wetland extents, particularly for the swamp class. The current research represents a third refinement of the inventory map. It was designed to improve the overall accuracy (OA) and reduce wetland overestimation by modifying the test and training data and by integrating additional environmental and remote sensing datasets, including countrywide coverage of L-band ALOS PALSAR-2, SRTM and Arctic digital elevation models, nighttime light, temperature, and precipitation data. Using a random forest classification within Google Earth Engine, the average OA obtained for the CWIM3 is 90.53%, an improvement of 4.77% over previous results. All ecozones experienced an OA increase of 2% or greater, and individual ecozone OA results range between 94% at the highest and 84% at the lowest. Visual inspection of the classification products demonstrates a reduction of wetland area overestimation compared to previous inventory generations. In this study, several classification scenarios were defined to assess the effect of preprocessing and the benefits of incorporating multisource data for large-scale wetland mapping. In addition, the development of a confidence map helps visualize where the current results are most and least reliable, given the amount of wetland test and training data and the extent of recent landscape disturbance (e.g., fire). The resulting OAs and wetland areal extents reveal the importance of multisource data and of adequate test and training data for wetland classification at a countrywide scale.
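The per-pixel confidence idea can be sketched with a random forest's vote distribution, where confidence is taken as the fraction of trees agreeing with the majority class. This is a minimal stand-alone illustration in plain Python; the function name and the reduction of confidence to vote agreement are assumptions for illustration (the CWIM3 confidence map also reflects reference-data density and recent disturbance), not the study's exact formulation.

```python
from collections import Counter

def rf_confidence(tree_votes):
    """Return (majority class, confidence) for one pixel.

    Confidence here is simply the fraction of trees that voted for the
    winning class -- an illustrative proxy, not the CWIM3 definition.
    """
    counts = Counter(tree_votes)
    cls, n = counts.most_common(1)[0]
    return cls, n / len(tree_votes)

# Example: 10 hypothetical trees voting on one pixel's wetland class
votes = ["bog", "fen", "bog", "bog", "marsh", "bog", "bog", "fen", "bog", "bog"]
print(rf_confidence(votes))  # ('bog', 0.7)
```

Mapping this fraction over every pixel yields a confidence layer that can be inspected alongside the class map, which is the spirit of the confidence product described above.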

    A regional land use survey based on remote sensing and other data: A report on a LANDSAT and computer mapping project, volume 2

    The author has identified the following significant results. The project mapped land use/cover classifications from LANDSAT computer-compatible tape data and combined those results with other multisource data via computer mapping/compositing techniques to analyze various land use planning and natural resource management problems. Data were analyzed on 1:24,000 scale maps at 1.1 acre resolution. LANDSAT analysis software and linkages with other computer mapping software were developed. Significant results were also achieved in training, communication, and identification of the needs for developing the LANDSAT/computer mapping technologies into operational tools for use by decision makers.

    Classifying multisensor remote sensing data: Concepts, Algorithms and Applications

    Today, a large proportion of the Earth’s land surface has been affected by human-induced land cover change. Detailed knowledge of land cover is fundamental for many decision support and monitoring systems. Earth observation (EO) systems have the potential to provide frequent information on land cover, and many land cover classifications are therefore performed on remotely sensed EO data. In this context, it has been shown that the performance of remote sensing applications is further improved by multisensor data sets, such as combinations of synthetic aperture radar (SAR) and multispectral imagery. The two systems operate in different wavelength domains and therefore provide different yet complementary information on land cover. Considering the improved revisit times and spatial resolutions of recent and upcoming systems such as TerraSAR-X (11 days; up to 1 m), Radarsat-2 (24 days; up to 3 m), or the RapidEye constellation (up to 1 day; 5 m), multisensor approaches become even more promising. However, such data sets with high spatial and temporal resolution can become very large and complex. Commonly used statistical pattern recognition methods are usually not appropriate for the classification of multisensor data sets; hence, one of the greatest challenges in remote sensing is the development of adequate concepts for classifying multisensor imagery. The presented study aims at an adequate classification of multisensor data sets, including SAR data and multispectral images. Different conventional classifiers and more recent developments are used, such as support vector machines (SVM) and random forests (RF), which are well known in the fields of machine learning and pattern recognition. Furthermore, the impact of image segmentation on classification accuracy is investigated and the value of a multilevel concept is discussed.
To increase the performance of the algorithms in terms of classification accuracy, the SVM concept is modified and combined with RF for optimized decision making. The results clearly demonstrate that the use of multisensor imagery is worthwhile: irrespective of the classification method used, classification accuracies increase when SAR and multispectral imagery are combined. Nevertheless, SVM and RF are better suited to classifying multisensor data sets and significantly outperform conventional classifiers in terms of accuracy. The multisensor, multilevel classification strategy introduced last, which is based on the sequential use of SVM and RF, outperforms all other approaches. The proposed concept achieves an accuracy of 84.9%, significantly higher than all single-source results and better than those achieved on any other combination of data. Both aspects, i.e., the fusion of SAR and multispectral data as well as the integration of multiple segmentation scales, improve the results. In contrast to the high accuracy of the proposed concept, pixel-based classification on single-source data sets achieves a maximum accuracy of 65% (SAR) and 69.8% (multispectral), respectively. The findings and the good performance of the presented strategy are underlined by the successful application of the approach to data sets from a second year. Based on the results of this work, it can be concluded that the suggested strategy is particularly interesting with regard to recent and future satellite missions.
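The general idea of using two classifiers sequentially can be sketched as a confidence-gated handoff: accept the first classifier's label when it is confident, otherwise defer to the second. This toy sketch in plain Python uses hypothetical stand-in classifiers and an assumed threshold; it illustrates the sequential-decision pattern only, not the thesis's actual SVM/RF combination.

```python
def sequential_classify(pixel, clf_a, clf_b, threshold=0.8):
    """Sequential use of two classifiers: keep clf_a's label when its
    confidence clears the threshold, otherwise fall back to clf_b.
    Threshold and classifier interface are illustrative assumptions."""
    label, conf = clf_a(pixel)
    if conf >= threshold:
        return label
    return clf_b(pixel)[0]

# Toy stand-ins (NOT real SVM/RF models) operating on (sar, optical) features
svm_like = lambda p: ("water", 0.9) if p[0] < 0.2 else ("urban", 0.5)
rf_like = lambda p: ("forest", 0.7) if p[1] > 0.6 else ("urban", 0.6)

print(sequential_classify((0.1, 0.3), svm_like, rf_like))  # water
print(sequential_classify((0.5, 0.8), svm_like, rf_like))  # forest
```

The design choice behind such a cascade is that the second classifier only sees the ambiguous cases, so each model can specialize on the samples it handles best.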

    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, where an IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for representing and combining evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
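The combination of multiple bodies of evidence mentioned above is in the Dempster–Shafer tradition, which the paper's interval-valued scheme generalizes. As a related illustration (not the paper's own IV-probability rule), here is classical Dempster combination of two mass functions in plain Python, with hypothetical masses from an optical and a SAR source:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. Products of masses on intersecting focal
    elements are accumulated; mass lost to empty intersections (conflict)
    is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical evidence over the frame {water, forest}
W, F = frozenset({"water"}), frozenset({"forest"})
WF = W | F
m_optical = {W: 0.6, F: 0.1, WF: 0.3}
m_sar = {W: 0.5, F: 0.2, WF: 0.3}
fused = dempster_combine(m_optical, m_sar)  # belief in W is reinforced
```

Here both sources lean toward "water", so the fused mass on the water singleton exceeds either source's alone, which is the evidence-pooling behavior the abstract describes.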