41 research outputs found

    Geology and Geochemistry of Noachian Bedrock and Alteration Events, Meridiani Planum, Mars: MER Opportunity Observations

    We have used Mars Exploration Rover Opportunity data to investigate the origin and alteration of lithic types along the western rim of Noachian-aged Endeavour crater on Meridiani Planum. Two geologic units are identified along the rim: the Shoemaker and Matijevic formations. The Shoemaker formation consists of two types of polymict impact breccia: clast-rich with coarser clasts in the upper units, and clast-poor with smaller clasts in the lower units. Comparisons with terrestrial craters show that the lower units represent more distal ejecta from at least two earlier impacts, and the upper units are proximal ejecta from Endeavour crater. Both are mixtures of target rocks of basaltic composition with subtle compositional variations caused by differences in post-impact alteration. The Matijevic formation and lower Shoemaker units represent pre-Endeavour geology, which we equate with the regional Noachian subdued cratered unit. An alteration style unique to these rocks is the formation of smectite and Si- and Al-rich vein-like structures crosscutting outcrops. Post-Endeavour alteration is dominated by sulfate formation. Rim-crossing fracture zones include regions of alteration that produced Mg-sulfates as a dominant phase, plausibly closely associated in time with the Endeavour impact. Calcium-sulfate vein formation occurred over an extended time, including before the Endeavour impact and after the Endeavour rim had been substantially degraded, likely after deposition of the Burns formation that surrounds and embays the rim. Differences in Mg, Ca, and Cl concentrations on rock surfaces and interiors indicate mobilization of salts by transient water that has occurred recently and may be ongoing.

    Data from: Crowds replicate performance of scientific experts scoring phylogenetic matrices of phenotypes

    Scientists building the Tree of Life face an overwhelming challenge to categorize phenotypes (e.g., anatomy, physiology) from millions of living and fossil species. This biodiversity challenge far outstrips the capacities of trained scientific experts. Here we explore whether crowdsourcing can be used to collect matrix data on a large scale with the participation of non-expert students, or “citizen scientists.” Crowdsourcing, or data collection by non-experts, frequently via the internet, has enabled scientists to tackle some large-scale data collection challenges too massive for individuals or scientific teams alone. The quality of work by non-expert crowds is, however, often questioned, and little data has been collected on how such crowds perform on complex tasks such as phylogenetic character coding. We studied a crowd of over 600 non-experts and found that they could use images to identify anatomical similarity (hypotheses of homology) with an average accuracy of 82% compared to scores provided by experts in the field. This performance pattern held across the Tree of Life, from protists to vertebrates. We introduce a procedure that predicts the difficulty of each character and that can be used to assign harder characters to experts and easier characters to a non-expert crowd for scoring. We test this procedure in a controlled experiment comparing crowd scores to those of experts and show that crowds can produce matrices with over 90% of cells scored correctly while reducing the number of cells to be scored by experts by 50%. Preparation time, including image collection and processing, for a crowdsourcing experiment is significant and does not currently save time of scientific experts overall. However, if innovations in automation or robotics can reduce such effort, then large-scale implementation of our method could greatly increase the collective scientific knowledge of species phenotypes for phylogenetic tree building.
For the field of crowdsourcing, we provide a rare study with ground truth, or an experimental control that many studies lack, and contribute new methods on how to coordinate the work of experts and non-experts. We show that there are important instances in which crowd consensus is not a good proxy for correctness.
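The expert/crowd routing idea described above can be illustrated with a minimal sketch. Note that this is an assumption-laden toy, not the paper's actual difficulty predictor: it uses crowd agreement as a stand-in for predicted difficulty (the names `route_characters` and `agreement_threshold` are hypothetical), and, as the abstract cautions, consensus is not always a good proxy for correctness.

```python
from collections import Counter

def route_characters(crowd_scores, agreement_threshold=0.8):
    """Route each phylogenetic character to the crowd or an expert.

    crowd_scores: dict mapping character id -> list of crowd votes.
    Characters with high vote agreement (an illustrative proxy for
    "easy") are accepted at consensus; ambiguous ones go to an expert.
    """
    routing = {}
    for char_id, votes in crowd_scores.items():
        top_state, top_n = Counter(votes).most_common(1)[0]
        agreement = top_n / len(votes)
        if agreement >= agreement_threshold:
            routing[char_id] = ("crowd", top_state)   # accept consensus score
        else:
            routing[char_id] = ("expert", None)       # defer to an expert
    return routing
```

For example, a character where 9 of 10 voters agree would be scored by consensus, while an evenly split vote would be flagged for expert review.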

    Appendix 4 Anemones-user-results

    Online Appendix 4. Sea anemone user scores. For each crowd member, we report the number of scores they provided and the number that were correct. The “Estimate” column is the probability that this crowd member voted correctly, and the “ci.lower” column gives the 95% lower confidence bound on this probability. These scores are for all characters (evaluation and test).
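A quantity like the appendix's "ci.lower" column can be computed from a member's correct/total counts with a binomial lower confidence bound. The appendix does not state which interval method was used; the sketch below assumes a one-sided 95% Wilson score lower bound, a common choice for proportions from small samples.

```python
import math

def wilson_lower_bound(correct, total, z=1.6449):
    """One-sided 95% Wilson score lower bound on a binomial proportion.

    correct: number of correct votes; total: number of votes cast.
    z = 1.6449 is the one-sided 95% normal quantile (an assumed choice;
    the appendix does not specify its interval method).
    """
    if total == 0:
        return 0.0
    p_hat = correct / total
    denom = 1 + z * z / total
    center = p_hat + z * z / (2 * total)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / total + z * z / (4 * total * total))
    return (center - margin) / denom
```

A crowd member with 82 correct out of 100 votes (a point estimate of 0.82) gets a lower bound of roughly 0.75, reflecting the remaining sampling uncertainty.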