Regression analysis for predicting the degree of human impact by expert and non-expert groups, when the regression is split into two simultaneous models.
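As an illustration of what splitting the regression into two simultaneous models can look like in practice, the sketch below fits the line Y_i = a + bX_i + ε_i separately for the expert and the non-expert group. The data arrays, the is_expert flag, and the use of plain NumPy least squares are assumptions for the example, not the paper's actual code.

```python
# Illustrative sketch (not the paper's code): fitting the human-impact
# regression separately for the expert and non-expert groups, one way to
# realise "splitting the regression into two simultaneous models".
import numpy as np

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

# Hypothetical data: control human impact (x), participant scores (y),
# and a flag marking expert contributors.
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 0.1, 0.5, 0.9])
y = np.array([0.1, 0.25, 0.35, 0.55, 0.75, 0.95, 0.2, 0.45, 0.85])
is_expert = np.array([True, True, True, True, False, False, False, False, True])

for label, mask in [("expert", is_expert), ("non-expert", ~is_expert)]:
    a, b = fit_line(x[mask], y[mask])
    print(f"{label}: intercept={a:.3f}, slope={b:.3f}")
```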
User’s and producer’s accuracies for the five main land cover types and for different subsets of the data including confidence and expertise.
1 = Tree cover; 2 = Shrub cover; 3 = Herbaceous vegetation/Grassland; 4 = Cultivated and managed; 5 = Mosaic of cultivated and managed/natural vegetation.
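For context, user's and producer's accuracies are conventionally read off a confusion matrix: user's accuracy divides a diagonal entry by its row total (how reliable a given crowd label is), while producer's accuracy divides it by the column total (how often a reference class is correctly labelled). The sketch below applies these standard formulas to an invented 5x5 matrix using the class codes above; the counts are hypothetical.

```python
# Minimal sketch of user's and producer's accuracies from a confusion matrix
# (rows = crowd labels, columns = control/reference labels). The counts are
# made up; only the formulas mirror the standard definitions.
import numpy as np

classes = ["Tree cover", "Shrub cover", "Herbaceous/Grassland",
           "Cultivated and managed", "Mosaic cultivated/natural"]

# Hypothetical confusion matrix: cm[i, j] = points labelled class i by the
# crowd whose control (reference) class is j.
cm = np.array([
    [50,  3,  2,  1,  4],
    [ 5, 30,  6,  2,  3],
    [ 4,  7, 40,  5,  6],
    [ 1,  2,  4, 55,  8],
    [ 3,  4,  5,  9, 35],
])

users_acc = np.diag(cm) / cm.sum(axis=1)      # correct / all points given that label
producers_acc = np.diag(cm) / cm.sum(axis=0)  # correct / all points truly in that class

for name, ua, pa in zip(classes, users_acc, producers_acc):
    print(f"{name}: user's={ua:.2f}, producer's={pa:.2f}")
```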
Median response time of the volunteers.
The response time is in seconds, measured from the start of the competition until the end at just over 50 days.
Extending the regression to include an indicator of expertise, where b_E is the regression coefficient for this indicator and b_X is the regression coefficient for participant human impact scores.
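A minimal sketch of the extended model Y_i = a + b_X X_i + b_E E_i + ε_i, where E_i is a 0/1 expertise indicator, is given below. The data and variable names are illustrative assumptions; only the dummy-variable structure follows the caption.

```python
# Hedged sketch of the extended model Y_i = a + b_X*X_i + b_E*E_i + e_i,
# with E_i a 0/1 expertise indicator. Data are invented for illustration;
# the paper's exact specification may differ.
import numpy as np

# Hypothetical inputs: participant human impact scores (x), control values (y),
# and an expertise indicator (1 = expert, 0 = non-expert).
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.4, 0.6, 0.8])
e = np.array([1,   1,   1,   1,   0,   0,   0,   0,   1], dtype=float)
y = np.array([0.15, 0.35, 0.5, 0.65, 0.8, 0.25, 0.45, 0.55, 0.75])

# Design matrix with an intercept, the score term, and the expertise dummy.
X = np.column_stack([np.ones_like(x), x, e])
(a, b_x, b_e), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"a={a:.3f}, b_X={b_x:.3f}, b_E={b_e:.3f}")
```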
Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts
There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly seen as one potentially powerful way of increasing the supply of in-situ data, but there are a number of concerns over the subsequent use of such data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover, while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
A confusion matrix for the comparison of controls with responses from the crowd.
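The sketch below illustrates one way such a confusion matrix can be tallied from paired control and crowd labels, with rows as crowd responses and columns as controls; the label arrays and the use of class codes 1-5 are invented for the example.

```python
# Sketch only: tallying a confusion matrix from paired control and crowd
# labels. Class codes 1-5 follow the land cover legend above; the label
# arrays are invented for illustration.
import numpy as np

n_classes = 5
control = np.array([1, 1, 2, 3, 3, 4, 4, 5, 5, 2])  # reference labels
crowd   = np.array([1, 2, 2, 3, 4, 4, 4, 5, 3, 2])  # volunteer labels

cm = np.zeros((n_classes, n_classes), dtype=int)
for ref, resp in zip(control, crowd):
    cm[resp - 1, ref - 1] += 1  # rows = crowd response, columns = control

overall_agreement = np.trace(cm) / cm.sum()
print(cm)
print(f"overall agreement: {overall_agreement:.2f}")
```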
The relationship between the volunteer responses and the controls for human impact by land cover type.
The lines show the coefficient slopes when each control land cover class is evaluated in turn. Note that the data points have had a small random noise component added to allow their density to be visualised.
Consistency of response in choosing the land cover type.
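One plausible way to quantify consistency of response at a validation point is the share of volunteer answers that match the modal land cover choice there, as sketched below; the grouping of responses by location and the example values are assumptions, not necessarily the paper's definition.

```python
# A possible consistency score per location: the fraction of volunteer
# responses that agree with the most frequent (modal) land cover choice.
# The location keys and response lists are hypothetical.
from collections import Counter

responses_by_location = {
    "loc_001": [1, 1, 1, 2, 1],     # mostly Tree cover
    "loc_002": [3, 4, 3, 3, 4, 4],
    "loc_003": [5, 5, 5, 5],
}

for loc, responses in responses_by_location.items():
    label, count = Counter(responses).most_common(1)[0]
    consistency = count / len(responses)
    print(f"{loc}: modal class={label}, consistency={consistency:.2f}")
```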
Regression analysis for the model Y_i = a + bX_i + ε_i, where X_i is response time and Y_i is human impact.