    The other-race effect in perception and recognition: Insights from the complete composite task

    People are more accurate at recognizing faces of their own race than faces from other races, a phenomenon known as the other-race effect. Other-race effects have also been reported in some perceptual tasks. Across 3 experiments, White and Chinese participants completed recognition tests as well as the complete paradigm of the composite task, which measures participants’ ability to selectively attend to the target region of a face while ignoring the task-irrelevant region. Each task was completed with both own- and other-race faces. At the group level, participants showed significant own-race effects in recognition, but not in the composite task. At the individual-differences level, the results provided no support for the hypothesis that a deficit in holistic processing for other-race faces drives the other-race effect in recognition. We therefore conclude that the other-race effect in recognition is not driven by the processes that underpin the composite effect.

    Investigating variation in replicability

    Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect – imagined contact reducing prejudice – showed weak support for replicability, and two effects – flag priming influencing conservatism and currency priming influencing system justification – did not replicate. We examined whether conditions such as lab versus online administration or US versus international samples predicted effect magnitudes; by and large, they did not. The results of this small sample of effects suggest that replicability depends more on the effect itself than on the sample and setting used to investigate it.

    Many Labs 2: Investigating Variation in Replicability Across Samples and Settings

    We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
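    As a rough illustration of the heterogeneity measures named in this abstract, the following minimal Python sketch (not taken from the paper; the per-site effect sizes and sampling variances below are invented) shows how Cochran's Q and a DerSimonian-Laird tau estimate are computed from per-site standardized effects:

        import numpy as np
        from scipy import stats

        def heterogeneity(effects, variances):
            """Cochran's Q and DerSimonian-Laird tau for per-site effect sizes."""
            effects = np.asarray(effects, dtype=float)
            variances = np.asarray(variances, dtype=float)
            w = 1.0 / variances                   # inverse-variance weights
            mu = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
            q = np.sum(w * (effects - mu) ** 2)   # Cochran's Q
            df = len(effects) - 1
            p = stats.chi2.sf(q, df)              # p-value of the heterogeneity test
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - df) / c)         # DerSimonian-Laird tau^2
            return mu, q, p, np.sqrt(tau2)

        # Toy data: five sites with similar effects, so tau comes out near zero.
        mu, q, p, tau = heterogeneity([0.12, 0.18, 0.15, 0.10, 0.20],
                                      [0.010, 0.012, 0.009, 0.011, 0.010])
        print(f"pooled d = {mu:.2f}, Q = {q:.2f} (p = {p:.2f}), tau = {tau:.2f}")

    In these terms, a tau near .10 (slight heterogeneity) or above .20 (moderate heterogeneity), as reported above, reflects how much the true effect varies across sites beyond what sampling error alone would produce.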

    Cross-Cultural Consistency and Relativity in the Enjoyment of Thinking Versus Doing

    Which is more enjoyable: trying to think enjoyable thoughts or doing everyday solitary activities? Wilson et al. (2014) found that American participants much preferred solitary everyday activities, such as reading or watching TV, to thinking for pleasure. To see whether this preference generalized outside of the United States, we replicated the study with 2,557 participants from 12 sites in 11 countries. The results were consistent in every country: Participants randomly assigned to do something reported significantly greater enjoyment than did participants randomly assigned to think for pleasure. Although we found systematic differences by country in how much participants enjoyed thinking for pleasure, we used a series of nested structural equation models to show that these differences were fully accounted for by country-level variation in 5 individual differences, 4 of which were positively correlated with thinking for pleasure (need for cognition, openness to experience, meditation experience, and initial positive affect) and 1 of which was negatively correlated (reported phone usage).

    Investigating variation in replicability: A many labs replication project

    Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporary effects across 36 independent samples totaling 6,344 participants. In the aggregate, 10 effects replicated consistently. One effect – imagined contact reducing prejudice – showed weak support for replicability, and two effects – flag priming influencing conservatism and currency priming influencing system justification – did not replicate. We examined whether conditions such as lab versus online administration or US versus international samples predicted effect magnitudes; by and large, they did not. The results of this small sample of effects suggest that replicability depends more on the effect itself than on the sample and setting used to investigate it.

    Data from investigating variation in replicability: A "Many Labs" replication project

    This dataset is from the Many Labs Replication Project, in which 13 effects were replicated across 36 samples and over 6,000 participants. Data from the replications are included, along with demographic variables about the participants and contextual information about the environment in which each replication was conducted. Data were collected in-lab and online through a standardized procedure administered via an online link. The dataset is stored on the Open Science Framework website. These data could be used to further investigate the results of the included 13 effects or to study replication and generalizability more broadly. Keywords: replication, generalizability, context
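    As a hedged sketch of how these data might be explored (the file and column names here are hypothetical placeholders, not the actual layout of the files hosted on the Open Science Framework), one could, for example, compare an effect's scores across in-lab and online administration:

        import pandas as pd

        # Hypothetical file/column names -- substitute the real Many Labs layout.
        df = pd.read_csv("many_labs_data.csv")

        # Summarize one effect's scores by collection setting (in-lab vs. online).
        summary = df.groupby("setting")["effect_score"].agg(["mean", "std", "count"])
        print(summary)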

    Theory building through replication: Response to commentaries on the "Many Labs" replication project

    Responds to the comments made by Monin and Oppenheimer (see record 2014-37961-001), Ferguson et al. (see record 2014-38072-001), Crisp et al. (see record 2014-38072-002), and Schwarz & Strack (see record 2014-38072-003) on the current authors’ original article (see record 2014-20922-002). The authors thank the commentators for their productive discussion of the Many Labs project and entirely agree with the main theme across the commentaries: direct replication does not guarantee that the same effect was tested. As Nosek and Lakens (2014, p. 137) note, “direct replication is the attempt to duplicate the conditions and procedure that existing theory and evidence anticipate as necessary for obtaining the effect.” Attempting to do so does not guarantee success, but it does provide substantial opportunity for theoretical development building on empirical evidence.