
    Is Anchoring on Estimates of Severity an Adaptive Heuristic?

    Decisions about whether or not to prepare for weather threats involve uncertainty. Uncertainty in decision making often involves the potential for either a false positive error (preparing for a storm that never arrives) or a false negative error (not preparing for a real storm). Error Management Theory (EMT; Haselton & Buss, 2000) posits that, depending on the uncertain context, people select a decision-making strategy that favors one error over the other. In the weather domain, research has shown that people prefer a false positive, i.e., an overestimation (Joslyn et al., 2011). In particular, this overestimation appears when people receive severe information prior to making a judgment. Thus, the present study tested whether the severity of the information influenced people to adopt a bias toward false positive errors. In two studies, participants made judgments about Friday’s weather after viewing nine different sequences of two forecasts (sunny, cloudy, or stormy) from early in the week (Study 1) or after viewing weather forecasts from Monday and Wednesday (Study 2). In both studies, participants tended to base their judgments on the second forecast. The interpretation of this pattern, however, differs between the two studies based on anchor type. In Study 1, bias toward the second forecast was the best available, least biased decision-making strategy. In Study 2, however, bias toward the second forecast was irrational because Wednesday’s weather is not informative about Friday’s weather. Thus, Study 2 demonstrated an anchoring-like bias.
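    The error-asymmetry logic behind EMT can be illustrated numerically: when a miss costs more than a false alarm, minimizing expected cost shifts the decision threshold toward over-preparing. A minimal sketch with hypothetical costs and storm probabilities (all numbers are illustrative, not from the study):

```python
# Hypothetical costs: a false alarm (preparing needlessly) is cheap,
# a miss (being caught unprepared by a real storm) is expensive.
COST_FALSE_ALARM = 1.0
COST_MISS = 10.0

def expected_cost(prepare: bool, p_storm: float) -> float:
    """Expected cost of a decision given the probability of a storm."""
    if prepare:
        return (1 - p_storm) * COST_FALSE_ALARM  # pay only if no storm comes
    return p_storm * COST_MISS                   # pay only if the storm hits

def should_prepare(p_storm: float) -> bool:
    """Prepare whenever preparing has the lower expected cost."""
    return expected_cost(True, p_storm) < expected_cost(False, p_storm)

# With a 10:1 cost ratio, the break-even probability is
# COST_FALSE_ALARM / (COST_FALSE_ALARM + COST_MISS) = 1/11 ≈ 0.09,
# so preparing is rational even at fairly low storm odds.
for p in (0.05, 0.10, 0.50):
    print(p, should_prepare(p))
```

    Under these assumed costs, a bias toward the false positive (preparing) is the cost-minimizing strategy for any storm probability above roughly 9%, which is the adaptive-heuristic reading EMT gives to such biases.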

    Many Labs 2: Investigating Variation in Replicability Across Samples and Settings

    We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite the direction of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online.
Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.

    Culture, COVID-19, and collectivism: A paradox of American exceptionalism?

    Do geographic differences in collectivism relate to COVID-19 case and death rates? And if so, would they also replicate across states within arguably the most individualistic country in the world, the United States? Further still, what role might the U.S.'s history of ethnic strife and race-based health disparities play in either reinforcing or undermining state-level relations between collectivism and COVID-19 rates? To answer these questions, we examined archival data from 98 countries (Study 1) and the 48 contiguous United States (Study 2) on country/state-level collectivism, COVID-19 case/death rates, relevant covariates (per-capita GDP, population density, spatial dependence), and, in the U.S., percent of non-Whites. In Study 1, country-level collectivism negatively related to both cases (r = -0.28) and deaths (r = -0.40) in simple regressions; however, after controlling for covariates, the former became non-significant (r_p = -0.07), but the latter remained significant (r_p = -0.20). In Study 2, state-level collectivism positively related to both cases (r = 0.56) and deaths (r = 0.41) in simple regressions, and these relationships persisted after controlling for all covariates except race, where a state's non-White population dominated all other predictors of COVID-19 cases (r_p = 0.35) and deaths (r_p = 0.31). We discuss the strong link between race and collectivism in U.S. culture, and its implications for understanding COVID-19 responses.
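    The r_p values above are partial correlations: the association between two variables after the influence of a covariate is removed. For a single covariate there is a closed form in terms of the pairwise Pearson correlations, sketched here on hypothetical data (the study controlled for several covariates at once, which generalizes this idea):

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(va * vb)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical example: x and y correlate strongly in the raw data,
# but part of that association is carried by their shared covariate z.
r_raw = pearson([1, 2, 3, 4], [4, 3, 2, 1])
r_p = partial_corr([1, 2, 3, 4], [4, 3, 2, 1], [1, 1, 2, 2])
```

    This is why the sign pattern in the abstract matters: a simple r can shrink toward zero (Study 1's cases) or survive (Study 1's deaths) once covariates are partialled out.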