
    Using control data to determine the reliability of volunteered geographic information about land cover

    There is much interest in using volunteered geographic information (VGI) in formal scientific analyses. This analysis uses VGI describing land cover that was captured using a web-based interface linked to Google Earth. A number of control points, for which the land cover had been determined by experts, allowed measures of the reliability of each volunteer in relation to each land cover class to be calculated. Geographically weighted kernels were used to estimate surfaces of volunteered land cover information accuracy and then to develop spatially distributed correspondences between the volunteer land cover class and land cover from three contemporary global datasets (GLC-2000, GlobCover and MODIS v.5). Specifically, a geographically weighted approach calculated local confusion matrices (correspondences) at each location in a central African study area and generated spatial distributions of user's, producer's, portmanteau, and partial portmanteau accuracies. These were used to evaluate the global datasets and to infer which of them was ‘best’ at describing Tree cover at each location in the study area. The resulting maps show where specific global datasets are recommended for analyses requiring Tree cover information. The methods presented in this research suggest that some of the concerns about the quality of VGI can be addressed through careful data collection, the use of control points to evaluate volunteer performance, and spatially explicit analyses. A research agenda for the use and analysis of VGI about land cover is outlined.
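
    To illustrate the geographically weighted idea described in the abstract, the sketch below (Python with NumPy; not the paper's code, with illustrative names and an assumed Gaussian kernel and bandwidth) weights volunteer/control label pairs by their distance to a query location and accumulates the weights into a local confusion matrix.

    import numpy as np

    def local_confusion_matrix(xy, volunteer, control, at_xy, bandwidth, n_classes):
        # xy: (n, 2) point coordinates; volunteer/control: integer class labels 0..n_classes-1.
        # Weight each control point by a Gaussian kernel of its distance to at_xy,
        # then accumulate the weights into a reference-by-volunteer confusion matrix.
        d = np.linalg.norm(xy - np.asarray(at_xy), axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)
        cm = np.zeros((n_classes, n_classes))
        for wi, v, c in zip(w, volunteer, control):
            cm[c, v] += wi  # rows: expert/control class, columns: volunteer class
        return cm

    Evaluating such a matrix on a grid of locations and taking its diagonal proportions would give the kind of spatially distributed accuracy surfaces the abstract describes.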

    User’s and producer’s accuracies for the five main land cover types and for different subsets of the data, including subsets defined by confidence and expertise.

    1 = Tree cover; 2 = Shrub cover; 3 = Herbaceous vegetation/Grassland; 4 = Cultivated and managed; 5 = Mosaic of cultivated and managed/natural vegetation.
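
    For reference, user's and producer's accuracies are the column-wise and row-wise diagonal proportions of a confusion matrix. The sketch below (illustrative only; the counts are random placeholders, not data from the paper) computes them for the five classes listed above from a matrix whose rows are the expert/control class and whose columns are the volunteer class.

    import numpy as np

    classes = ["Tree cover", "Shrub cover", "Herbaceous vegetation/Grassland",
               "Cultivated and managed", "Mosaic of cultivated and managed/natural vegetation"]

    cm = np.random.default_rng(0).integers(1, 50, size=(5, 5))  # hypothetical counts

    users = np.diag(cm) / cm.sum(axis=0)       # user's accuracy: correct / all assigned to the class
    producers = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy: correct / all reference in the class

    for name, u, p in zip(classes, users, producers):
        print(f"{name}: user's = {u:.2f}, producer's = {p:.2f}")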

    Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts

    There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly seen as one potentially powerful way of increasing the supply of in-situ data, but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover type, while experts were better than non-experts at identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
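
    A minimal sketch of the kind of expert versus non-expert comparison the abstract describes (not the paper's analysis; the data layout and the use of a chi-square test are assumptions): tally correct and incorrect answers per group against the control labels and test whether the proportions of correct answers differ.

    import numpy as np
    from scipy.stats import chi2_contingency

    def compare_groups(answers, control, is_expert):
        # answers, control: arrays of class labels; is_expert: boolean flag per answer.
        correct = np.asarray(answers) == np.asarray(control)
        is_expert = np.asarray(is_expert, dtype=bool)
        table = np.array([
            [np.sum(correct & is_expert),  np.sum(~correct & is_expert)],    # experts
            [np.sum(correct & ~is_expert), np.sum(~correct & ~is_expert)],   # non-experts
        ])
        chi2, p, _, _ = chi2_contingency(table)
        acc_expert = table[0, 0] / table[0].sum()
        acc_non_expert = table[1, 0] / table[1].sum()
        return acc_expert, acc_non_expert, p

    The same tally could be repeated per land cover class, or per stated confidence level, to produce the kind of breakdowns the abstract refers to.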