Memory Errors Reveal a Bias to Spontaneously Generalize to Categories
Much evidence suggests that, from a young age, humans are able to generalize information learned about a subset of a category to the category itself. Here, we propose that, beyond simply being able to perform such generalizations, people are biased to generalize to categories, such that they routinely make spontaneous, implicit category generalizations from information that licenses such generalizations. To demonstrate the existence of this bias, we asked participants to perform a task in which category generalizations would distract from the main goal of the task, leading to a characteristic pattern of errors. Specifically, participants were asked to memorize two types of novel facts: quantified facts about sets of kind members (e.g., facts about all or many stups) and generic facts about entire kinds (e.g., facts about zorbs as a kind). Moreover, half of the facts concerned properties that are typically generalizable to an animal kind (e.g., eating fruits and vegetables), and half concerned properties that are typically more idiosyncratic (e.g., getting mud in their hair). We predicted that, because of the hypothesized bias, participants would spontaneously generalize the quantified facts to the corresponding kinds, and would do so more frequently for the facts about generalizable (rather than idiosyncratic) properties. In turn, these generalizations would lead to a higher rate of quantified-to-generic memory errors for the generalizable properties. The results of four experiments (N = 449) supported this prediction. Moreover, the same generalizable-versus-idiosyncratic difference in memory errors occurred even under cognitive load, which suggests that the hypothesized bias operates unnoticed in the background, requiring few cognitive resources. In sum, this evidence suggests the presence of a powerful bias to draw generalizations about kinds.
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/112247/1/cogs12189.pd
Do Lions Have Manes? For Children, Generics Are About Kinds Rather Than Quantities
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/90604/1/j.1467-8624.2011.01708.x.pd
The pipeline project: Pre-publication independent replications of a single laboratory's research pipeline
This crowdsourced project introduces a collaborative approach to improving the reproducibility of scientific research, in which findings are replicated in qualified independent laboratories before (rather than after) they are published. Our goal is to establish a non-adversarial replication process with highly informative final results. To illustrate the Pre-Publication Independent Replication (PPIR) approach, 25 research groups conducted replications of all ten moral judgment effects which the last author and his collaborators had "in the pipeline" as of August 2014. Six findings replicated according to all replication criteria, one finding replicated but with a significantly smaller effect size than the original, one finding replicated consistently in the original culture but not outside of it, and two findings failed to find support. In total, 40% of the original findings failed at least one major replication criterion. Potential ways to implement and incentivize pre-publication independent replication on a large scale are discussed.
Data from a pre-publication independent replication initiative examining ten moral judgement effects
We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.
Crowdsourcing hypothesis tests: Making transparent how design choices shape research results
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
The inherence bias in preschoolers' explanations for achievement differences: replication and extension
Two studies examined how preschoolers (N = 610; French) explain differences in achievement. Replicating and extending previous research, the results revealed that children invoke more inherent factors (e.g., intelligence) than extrinsic factors (e.g., access to educational resources) when explaining why some children do better in school than others. This inherence bias in explanation can contribute to inequalities in education (e.g., the early-emerging disparities based on social class) by portraying them as fair and legitimate even when they are not.