The FamilySearch Indexing project has enabled the manual indexing of millions of records by hundreds of thousands of volunteers, making it one of the largest crowdsourcing initiatives in the world. Currently, to assure indexing quality, each image (e.g., a census page) is indexed by two independent indexers, and any discrepancies are resolved by an arbitrator. An alternative, as-yet-untested peer-review process would use only one indexer, one reviewer of that indexer's work, and optionally an arbitrator who examines differences. This method would likely improve efficiency, but its effect on quality is unknown. In this paper we analyze historical data from the existing A-B-Arbitrate process and describe an experiment that is underway to compare it with the proposed peer-review process. The historical data analysis shows that agreement between independent indexers increases with their prior indexing experience; that agreement is higher for English-language records than for non-English records; and that agreement varies considerably by field type (e.g., surname, county, gender). Implications of these findings are discussed.