
    Arp2/3 complex interactions and actin network turnover in lamellipodia

    Cell migration is initiated by lamellipodia, membrane-enclosed sheets of cytoplasm containing densely packed actin filament networks. Although the molecular details of network turnover remain obscure, recent work points towards key roles in filament nucleation for Arp2/3 complex and its activator WAVE complex. Here, we combine fluorescence recovery after photobleaching (FRAP) of different lamellipodial components with a new method of data analysis to shed light on the dynamics of actin assembly and disassembly. We show that Arp2/3 complex is incorporated into the network exclusively at the lamellipodium tip, like actin, at sites coincident with WAVE complex accumulation. Capping protein showed turnover similar to that of actin and Arp2/3 complex and was likewise confined to the tip. In contrast, cortactin (another prominent Arp2/3 complex regulator) and ADF/cofilin (previously implicated in driving both filament nucleation and disassembly) were rapidly exchanged throughout the lamellipodium. These results suggest that Arp2/3- and WAVE complex-driven actin filament nucleation at the lamellipodium tip is uncoupled from the activities of both cortactin and cofilin. Network turnover is additionally regulated by the spatially segregated activities of capping protein at the tip and of cofilin throughout the mesh.
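
    To make the methodology concrete, the sketch below fits a conventional single-exponential recovery model to a FRAP curve, the standard first-pass analysis for estimating turnover half-times. This is a generic illustration, not the new analysis method the abstract refers to; the data points and the names frap_recovery, mobile_fraction, and k are hypothetical.

```python
# Generic single-exponential FRAP recovery fit: a common way to estimate
# turnover half-times from FRAP curves. NOT the paper's new analysis
# method; data and names below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, mobile_fraction, k):
    """Normalized fluorescence recovery: I(t) = F_mobile * (1 - exp(-k*t))."""
    return mobile_fraction * (1.0 - np.exp(-k * t))

# Made-up time points (s) and normalized post-bleach intensities.
t = np.array([0, 2, 4, 6, 8, 10, 15, 20, 30, 45, 60], dtype=float)
intensity = np.array([0.00, 0.28, 0.46, 0.58, 0.67, 0.73,
                      0.82, 0.86, 0.89, 0.90, 0.91])

(mobile_fraction, k), _ = curve_fit(frap_recovery, t, intensity, p0=[0.9, 0.1])
half_time = np.log(2) / k  # time to half-maximal recovery
print(f"mobile fraction ~ {mobile_fraction:.2f}, t1/2 ~ {half_time:.1f} s")
```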

    Data from: Crowds replicate performance of scientific experts scoring phylogenetic matrices of phenotypes

    No full text
    Scientists building the Tree of Life face an overwhelming challenge: categorizing phenotypes (e.g., anatomy, physiology) from millions of living and fossil species. This biodiversity challenge far outstrips the capacities of trained scientific experts. Here we explore whether crowdsourcing can be used to collect matrix data on a large scale with the participation of non-expert students, or “citizen scientists.” Crowdsourcing, or data collection by non-experts, frequently via the internet, has enabled scientists to tackle some large-scale data collection challenges too massive for individuals or scientific teams alone. The quality of work by non-expert crowds is, however, often questioned, and little data has been collected on how such crowds perform on complex tasks such as phylogenetic character coding. We studied a crowd of over 600 non-experts and found that they could use images to identify anatomical similarity (hypotheses of homology) with an average accuracy of 82% relative to scores provided by experts in the field. This performance pattern held across the Tree of Life, from protists to vertebrates. We introduce a procedure that predicts the difficulty of each character and that can be used to assign harder characters to experts and easier characters to a non-expert crowd for scoring. We test this procedure in a controlled experiment comparing crowd scores to those of experts and show that crowds can produce matrices with over 90% of cells scored correctly while reducing the number of cells to be scored by experts by 50%. Preparation time for a crowdsourcing experiment, including image collection and processing, is significant and does not currently save scientific experts time overall. However, if innovations in automation or robotics can reduce this effort, then large-scale implementation of our method could greatly increase collective scientific knowledge of species phenotypes for phylogenetic tree building. For the field of crowdsourcing, we provide a rare study with ground truth, an experimental control that many studies lack, and contribute new methods for coordinating the work of experts and non-experts. We show that there are important instances in which crowd consensus is not a good proxy for correctness.
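
    The triage procedure described above lends itself to a short sketch: score each matrix cell by crowd majority vote, use vote agreement as a stand-in for predicted character difficulty, and route low-agreement cells to experts. The paper's actual difficulty predictor is not reproduced here, and, as the abstract notes, consensus is not always a good proxy for correctness; the threshold and names below are illustrative assumptions.

```python
# Illustrative expert/crowd triage: majority-vote consensus per matrix cell,
# with low vote agreement used as a proxy for difficulty. Hypothetical
# names and threshold; not the paper's actual difficulty predictor.
from collections import Counter

def consensus(votes):
    """Majority-vote score for one cell and the fraction of votes agreeing."""
    top_score, n_top = Counter(votes).most_common(1)[0]
    return top_score, n_top / len(votes)

def triage(cells, agreement_threshold=0.8):
    """Split cells into crowd-scored (high agreement) and an expert queue."""
    crowd_scored, expert_queue = {}, []
    for cell_id, votes in cells.items():
        score, agreement = consensus(votes)
        if agreement >= agreement_threshold:
            crowd_scored[cell_id] = score
        else:
            expert_queue.append(cell_id)  # low agreement -> likely hard
    return crowd_scored, expert_queue

# Toy example: two cells with strong consensus, one contested cell.
cells = {
    "char1_taxonA": [0, 0, 0, 0, 1],
    "char1_taxonB": [1, 1, 1, 1, 1],
    "char2_taxonA": [0, 1, 0, 1, 1],
}
crowd_scored, expert_queue = triage(cells)
print(crowd_scored)  # {'char1_taxonA': 0, 'char1_taxonB': 1}
print(expert_queue)  # ['char2_taxonA']
```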

    Appendix 4 Anemones-user-results

    No full text
    Online Appendix 4. Sea anemone user scores. For each crowd member, we report the number of scores they provided and the number that were correct. The “Estimate” column gives the estimated probability that the crowd member voted correctly, and the “ci.lower” column gives the lower bound of the 95% confidence interval on this probability. These scores cover all characters (evaluation and test).
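
    The appendix does not state which interval was used for “ci.lower”; one standard choice for a 95% lower confidence bound on a binomial proportion (here, per-user accuracy) is the exact Clopper-Pearson one-sided bound, sketched below under that assumption. The example user counts are hypothetical.

```python
# Exact (Clopper-Pearson) one-sided 95% lower confidence bound on a
# binomial proportion, as one plausible way to reproduce a per-user
# "ci.lower" value. The appendix's actual method is an assumption here.
from scipy.stats import beta

def lower_bound_95(n_correct, n_scored):
    """One-sided 95% lower confidence bound for a binomial proportion."""
    if n_correct == 0:
        return 0.0
    return beta.ppf(0.05, n_correct, n_scored - n_correct + 1)

estimate = 41 / 50                 # point estimate of accuracy
ci_lower = lower_bound_95(41, 50)  # hypothetical user: 41 correct of 50
print(f"Estimate = {estimate:.2f}, ci.lower = {ci_lower:.2f}")
```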