Appendix 8 Catfish-user-results
Online Appendix 8. Catfish user scores
Appendix 18 Ver2 final-results-summary
Online Appendix 18. Final results. Details for Figure 3
Data from: Crowds replicate performance of scientific experts scoring phylogenetic matrices of phenotypes
Scientists building the Tree of Life face an overwhelming challenge to categorize phenotypes (e.g., anatomy, physiology) from millions of living and fossil species. This biodiversity challenge far outstrips the capacities of trained scientific experts. Here we explore whether crowdsourcing can be used to collect matrix data on a large scale with the participation of non-expert students, or "citizen scientists." Crowdsourcing, or data collection by non-experts, frequently via the internet, has enabled scientists to tackle some large-scale data collection challenges too massive for individuals or scientific teams alone. The quality of work by non-expert crowds is, however, often questioned, and little data has been collected on how such crowds perform on complex tasks such as phylogenetic character coding. We studied a crowd of over 600 non-experts and found that they could use images to identify anatomical similarity (hypotheses of homology) with an average accuracy of 82% compared to scores provided by experts in the field. This performance pattern held across the Tree of Life, from protists to vertebrates. We introduce a procedure that predicts the difficulty of each character and that can be used to assign harder characters to experts and easier characters to a non-expert crowd for scoring. We test this procedure in a controlled experiment comparing crowd scores to those of experts and show that crowds can produce matrices with over 90% of cells scored correctly while reducing the number of cells to be scored by experts by 50%. Preparation time for a crowdsourcing experiment, including image collection and processing, is significant and does not currently save scientific experts time overall. However, if innovations in automation or robotics can reduce such effort, then large-scale implementation of our method could greatly increase the collective scientific knowledge of species phenotypes for phylogenetic tree building.
For the field of crowdsourcing, we provide a rare study with ground truth, an experimental control that many studies lack, and we contribute new methods for coordinating the work of experts and non-experts. We show that there are important instances in which crowd consensus is not a good proxy for correctness.
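The triage procedure the abstract describes (predict each character's difficulty, send easy characters to the crowd and hard ones to experts) can be illustrated with a minimal sketch. Everything here is hypothetical: the `triage` function, the difficulty scores, and the 0.5 threshold are invented for illustration and are not the authors' actual model, which predicts difficulty from the data itself.

```python
# Hypothetical sketch of difficulty-based character triage.
# Difficulty values and the threshold are made up for illustration.

def triage(characters, difficulty, threshold=0.5):
    """Split characters into a crowd-scored set and an expert-scored set.

    characters: list of character identifiers
    difficulty: dict mapping each character to a predicted difficulty in [0, 1]
    threshold: characters at or below this difficulty go to the crowd
    """
    crowd = [c for c in characters if difficulty[c] <= threshold]
    expert = [c for c in characters if difficulty[c] > threshold]
    return crowd, expert

# Toy example: six characters with invented difficulty predictions.
chars = ["c1", "c2", "c3", "c4", "c5", "c6"]
diff = {"c1": 0.1, "c2": 0.9, "c3": 0.3, "c4": 0.7, "c5": 0.2, "c6": 0.4}
crowd, expert = triage(chars, diff)
print(crowd)   # characters routed to the non-expert crowd
print(expert)  # characters reserved for experts
```

Under this kind of split, the fraction of characters routed to the crowd is exactly the reduction in expert workload that the abstract reports (about 50% in the authors' controlled experiment).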
Appendix 19 Data and R Scripts
Online Appendix 19. R scripts for analysis as a zipped folder
Response to Comment on "The Placental Mammal Ancestor and the Post-K-Pg Radiation of Placentals"
Tree-building with diverse data maximizes explanatory power. Application of molecular clock models to ancient speciation events risks a bias against detection of fast radiations subsequent to the Cretaceous-Paleogene (K-Pg) event. Contrary to Springer et al., post-K-Pg placental diversification does not require "virus-like" substitution rates. Even constraining clade ages to their model, the explosive model best explains placental evolution.
Authors and affiliations: O'Leary, Maureen A. (Stony Brook University, United States); Bloch, Jonathan I. (University of Florida, United States); Flynn, John J. (American Museum of Natural History, United States); Gaudin, Timothy J. (University of Tennessee, United States); Giallombardo, Andres (American Museum of Natural History, United States); Giannini, Norberto Pedro (American Museum of Natural History, United States; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Goldberg, Suzann L. (American Museum of Natural History, United States); Kraatz, Brian P. (American Museum of Natural History, United States); Luo, Zhe-Xi (University of Chicago, United States); Meng, Jin (American Museum of Natural History, United States); Ni, Xijun (American Museum of Natural History, United States); Novacek, Michael J. (American Museum of Natural History, United States); Perini, Fernando A. (Universidade Federal de Minas Gerais, Brazil); Randall, Zachary (University of Florida, United States); Rougier, Guillermo Walter (University of Louisville, United States); Sargis, Eric J. (Yale University, United States); Silcox, Mary T. (University of Toronto, Canada); Simmons, Nancy B. (American Museum of Natural History, United States); Spaulding, Michelle (Carnegie Museum of Natural History, United States); Velazco, Paúl M. (American Museum of Natural History, United States); Weksler, Marcelo (Universidade Federal do Rio de Janeiro, Brazil); Wible, John R. (American Museum of Natural History, United States); Cirranello, Andrea L. (American Museum of Natural History, United States).
The placental mammal ancestor and the post-K-Pg radiation of placentals
To discover interordinal relationships of living and fossil placental mammals and the time of origin of placentals relative to the Cretaceous-Paleogene (K-Pg) boundary, we scored 4541 phenomic characters de novo for 86 fossil and living species. Combining these data with molecular sequences, we obtained a phylogenetic tree that, when calibrated with fossils, shows that crown clade Placentalia and placental orders originated after the K-Pg boundary. Many nodes discovered using molecular data are upheld, but phenomic signals overturn molecular signals to show Sundatheria (Dermoptera + Scandentia) as the sister taxon of Primates, a close link between Proboscidea (elephants) and Sirenia (sea cows), and the monophyly of echolocating Chiroptera (bats). Our tree suggests that Placentalia first split into Xenarthra and Epitheria; extinct New World species are the oldest members of Afrotheria.
Authors and affiliations: O'Leary, Maureen A. (Stony Brook University; American Museum of Natural History, United States); Bloch, Jonathan I. (Florida Museum of Natural History, University of Florida, United States); Flynn, John J. (American Museum of Natural History, United States); Gaudin, Timothy J. (University of Tennessee, United States); Giallombardo, Andres (American Museum of Natural History, United States); Giannini, Norberto Pedro (American Museum of Natural History, United States); Goldberg, Suzann L. (American Museum of Natural History, United States); Kraatz, Brian P. (American Museum of Natural History; Western University of Health Sciences, Department of Anatomy, United States); Luo, Zhe-Xi (Carnegie Museum of Natural History, United States); Meng, Jin (American Museum of Natural History, United States); Ni, Xijun (American Museum of Natural History, United States); Novacek, Michael J. (American Museum of Natural History, United States); Perini, Fernando A. (American Museum of Natural History, United States); Randall, Zachary S. (Florida Museum of Natural History, University of Florida, United States); Rougier, Guillermo W. (University of Louisville, United States); Sargis, Eric J. (Yale University, United States); Silcox, Mary T. (University of Toronto, Canada); Simmons, Nancy B. (American Museum of Natural History, United States); Spaulding, Michelle (American Museum of Natural History; Carnegie Museum of Natural History, United States); Velazco, Paúl M. (American Museum of Natural History, United States); Weksler, Marcelo (American Museum of Natural History, United States); Wible, John R. (Carnegie Museum of Natural History, United States); Cirranello, Andrea L. (Stony Brook University; American Museum of Natural History, United States).
Appendix 10 Diatoms-user-results
Online Appendix 10. Diatom user scores
Appendix 12 Lilies-user-results
Online Appendix 12. Lilies user scores