
    No evidence that priming analytic thinking reduces belief in conspiracy theories: A Registered Report of high-powered direct replications of Study 2 and Study 4 from Swami, Voracek, Stieger, Tran, and Furnham (2014)

    Analytic thinking is reliably associated with lower belief in conspiracy theories. However, evidence for whether increasing analytic thinking can reduce belief in conspiracies is sparse. As an exception, Swami et al. (2014) showed that priming analytic thinking through a verbal fluency task (i.e., a scrambled sentence task) or a processing fluency manipulation (i.e., difficult-to-read fonts) reduced belief in conspiracy theories. To probe the robustness of these effects, in this Registered Report we present two highly powered (i.e., 95%) direct replications of two of the original studies (Studies 2 and 4). We found no evidence that priming analytic thinking through either the scrambled sentence task (N = 302) or the difficult-to-read fonts (N = 488) elicited more analytic thinking or reduced belief in conspiracy theories. This work highlights the need for further research to identify effective ways of inducing analytic thinking in order to gauge its potential causal impact on belief in conspiracies.
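
    To make the stated design target concrete, the sketch below shows how a 95%-power sample size for a two-group comparison can be computed in Python with statsmodels. The assumed effect size is illustrative and is not taken from the original studies or the replication report.

```python
# A minimal sketch (not the authors' code) of how a 95%-power sample size for a
# two-group between-subjects comparison can be computed with statsmodels.
# The effect size assumed here (Cohen's d = 0.3) is purely illustrative and is
# not taken from Swami et al. (2014) or from the replication report.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.3,          # hypothetical Cohen's d
    power=0.95,               # target power stated in the Registered Report
    alpha=0.05,
    alternative="two-sided",
)
print(f"Required participants per group: {n_per_group:.0f}")
```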

    Tracking variations in daily questionable health behaviors and their psychological roots: a preregistered experience sampling study

    People resort to various questionable health practices to preserve or regain health: they intentionally do not adhere to medical recommendations (e.g., self-medicating or modifying prescribed therapies; iNAR), or they use traditional/complementary/alternative medicine (TCAM). As retrospective reports overestimate adherence and suffer from recall and desirability biases, we tracked variations in daily questionable health behaviors and compared them to retrospectively reported lifetime use. We also preregistered and explored their relations to a wide set of psychological predictors, both distal (personality traits and basic thinking dispositions) and proximal (different unfounded beliefs and biases grouped under the term irrational mindset). A community sample (N = 224) tracked daily engagement in iNAR and TCAM use for 14 days, resulting in 3,136 data points. We observed a high rate of questionable health practices over the 14 days; daily engagement rates roughly corresponded to lifetime ones. iNAR and TCAM use were weakly but robustly positively related. Independent of the assessment method, an irrational mindset was the most important predictor of TCAM use. For iNAR, however, psychological predictors emerged as relevant only when assessed retrospectively. Our study offers insight into questionable health behaviors from both a within- and a between-person perspective and highlights the importance of their psychological roots. © 2023, Springer Nature Limited
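
    The design above implies a long-format dataset of 224 participants × 14 daily reports = 3,136 observations. The sketch below, using simulated data and hypothetical variable names, illustrates that structure and how daily reports can be aggregated into person-level engagement rates for comparison with retrospective lifetime reports.

```python
# A minimal sketch, on simulated data, of the long-format structure implied by the
# design above (224 participants x 14 daily reports = 3,136 observations) and of
# aggregating daily reports into person-level engagement rates. Column names and
# the simulated probabilities are hypothetical, not the study's variables.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_persons, n_days = 224, 14

daily = pd.DataFrame({
    "person": np.repeat(np.arange(n_persons), n_days),
    "day": np.tile(np.arange(1, n_days + 1), n_persons),
    "inar": rng.binomial(1, 0.2, n_persons * n_days),   # any intentional non-adherence that day
    "tcam": rng.binomial(1, 0.3, n_persons * n_days),   # any TCAM use that day
})
assert len(daily) == 3136  # 224 x 14 data points, as reported

# Person-level daily engagement rates (proportion of days with the behavior),
# which could then be compared with retrospectively reported lifetime use.
rates = daily.groupby("person")[["inar", "tcam"]].mean()
print(rates.corr())  # within-sample association between the two behavior rates
```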

    Thinking Inconsistently: Development and Validation of an Instrument for Assessing Proneness to Doublethink

    People tend to simultaneously accept mutually exclusive beliefs. If they are generally prone to tolerate inconsistencies, irrespective of their content, we say they are prone to doublethink. We developed a measure to capture individual differences in this tendency and demonstrated its construct and predictive validity across two studies. In Study 1, participants (N = 240) completed the doublethink scale, the rational/intuitive inventory, and three measures of conspiratorial beliefs (conspiracy mentality and belief in specific and in contradictory conspiracies). Doublethink was meaningfully related to all measured variables and predicted all conspiratorial beliefs over and above rational/intuitive thinking styles. In Study 2 (N = 149), we added the need for cognition and preference for consistency to the predictor set alongside doublethink, while the criterion set remained the same. Once again, doublethink related in the expected way to the other measured variables and predicted belief in conspiracy theories after accounting for the effects of need for cognition and preference for consistency. We discuss the properties of the scale and how it relates to other consistency measures, and offer two ways to conceptualize doublethink: as a lack of metacognitive ability to spot inconsistencies or as a thinking style that easily accommodates inconsistent beliefs.
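
    The incremental-validity claim ('over and above rational/intuitive thinking styles') corresponds to a two-step hierarchical regression. The sketch below illustrates that logic on simulated data; the variables and effects are hypothetical, not the authors' materials or results.

```python
# A minimal sketch, on simulated data, of the incremental-validity logic described
# above: does doublethink predict conspiracy beliefs over and above rational and
# intuitive thinking styles? Variable names, coefficients, and data are hypothetical
# and are not the authors' materials or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 240
df = pd.DataFrame({
    "rational": rng.normal(size=n),
    "intuitive": rng.normal(size=n),
    "doublethink": rng.normal(size=n),
})
df["conspiracy"] = (-0.1 * df["rational"] + 0.2 * df["intuitive"]
                    + 0.3 * df["doublethink"] + rng.normal(size=n))

step1 = smf.ols("conspiracy ~ rational + intuitive", data=df).fit()
step2 = smf.ols("conspiracy ~ rational + intuitive + doublethink", data=df).fit()

# Incremental variance explained by doublethink beyond the thinking styles.
print(f"Delta R^2 = {step2.rsquared - step1.rsquared:.3f}")
```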

    HPLC behavior and hydrophobic parameters of some anilides

    The chromatographic behavior of para-substituted anilides of 2,2-dimethylpropanoic, benzoic, and α-phenylacetic acid has been studied by reversed-phase high-performance liquid chromatography (HPLC). HPLC was performed on a C-18 column with various aqueous methanol mobile phases. The influence of anilide type and of additional substituents in the molecule on retention is discussed. Several chromatographic hydrophobicity parameters (CHP) were calculated by linear correlation between the log k values of the investigated compounds and the concentration of methanol in the mobile phase. The chromatographic hydrophobicity parameters were compared with log P values calculated by Rekker's fragmental method. As the results show only moderate correlations of CHP with log P, multiple linear regressions were applied. It was found that, besides log P, the electronic effects of individual polar groups capable of hydrogen bonding are very important in the hydrophobic characterization of the molecule.
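
    Chromatographic hydrophobicity parameters of this kind are typically obtained by fitting a straight line to log k versus the organic-modifier fraction and extrapolating to pure water. The sketch below illustrates that calculation with invented retention data; none of the numbers come from the study.

```python
# A minimal sketch of the kind of calculation described above: regressing log k on
# the methanol fraction of the mobile phase and using the intercept (extrapolated
# retention in pure water, log k_w) and the slope as chromatographic hydrophobicity
# parameters. The retention values below are invented for illustration only.
import numpy as np

phi = np.array([0.50, 0.60, 0.70, 0.80])      # methanol volume fraction in the mobile phase
log_k = np.array([1.10, 0.72, 0.35, -0.02])   # hypothetical measured log k values

slope, intercept = np.polyfit(phi, log_k, 1)  # log k = log k_w - S * phi
log_kw, S = intercept, -slope
print(f"log k_w = {log_kw:.2f}, S = {S:.2f}")

# log k_w (and related CHP values) can then be correlated with log P calculated by
# a fragmental method such as Rekker's, as done in the study.
```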

    Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability

    Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on the analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
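
    Lab-level estimates like those described above are commonly aggregated by transforming each laboratory's correlation to Fisher's z and averaging with inverse-variance weights. The sketch below illustrates this generic pooling step with invented values; it is not the preregistered Many Labs 5 analysis plan.

```python
# A minimal sketch, with invented numbers, of a generic way to pool correlation
# effect sizes across laboratories: transform each lab's r to Fisher's z, average
# with inverse-variance weights, and back-transform. This illustrates the kind of
# aggregation summarized above; it is not the preregistered Many Labs 5 analysis plan.
import numpy as np

def pool_r(rs, ns):
    """Inverse-variance-weighted mean of Fisher-z correlations, returned as r."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)
    w = ns - 3.0                      # var(z) = 1 / (n - 3), so weights are n - 3
    return np.tanh(np.sum(w * z) / np.sum(w))

# Hypothetical lab-level correlations and sample sizes for one replication finding.
rs = [0.02, 0.08, 0.05, -0.01, 0.06, 0.04]
ns = [180, 250, 210, 320, 150, 169]
print(f"Pooled r across labs = {pool_r(rs, ns):.3f}")
```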

    Many Labs 2: Investigating Variation in Replicability Across Samples and Settings

    We conducted preregistered replications of 28 classic and contemporary published findings, with protocols that were peer reviewed in advance, to examine variation in effect magnitudes across samples and settings. Each protocol was administered to approximately half of 125 samples that comprised 15,305 participants from 36 countries and territories. Using the conventional criterion of statistical significance (p < .05), we found that 15 (54%) of the replications provided evidence of a statistically significant effect in the same direction as the original finding. With a strict significance criterion (p < .0001), 14 (50%) of the replications still provided such evidence, a reflection of the extremely high-powered design. Seven (25%) of the replications yielded effect sizes larger than the original ones, and 21 (75%) yielded effect sizes smaller than the original ones. The median comparable Cohen’s ds were 0.60 for the original findings and 0.15 for the replications. The effect sizes were small (< 0.20) in 16 of the replications (57%), and 9 effects (32%) were in the direction opposite to that of the original effect. Across settings, the Q statistic indicated significant heterogeneity in 11 (39%) of the replication effects, and most of those were among the findings with the largest overall effect sizes; only 1 effect that was near zero in the aggregate showed significant heterogeneity according to this measure. Only 1 effect had a tau value greater than .20, an indication of moderate heterogeneity. Eight others had tau values near or slightly above .10, an indication of slight heterogeneity. Moderation tests indicated that very little heterogeneity was attributable to the order in which the tasks were performed or whether the tasks were administered in lab versus online. Exploratory comparisons revealed little heterogeneity between Western, educated, industrialized, rich, and democratic (WEIRD) cultures and less WEIRD cultures (i.e., cultures with relatively high and low WEIRDness scores, respectively). Cumulatively, variability in the observed effect sizes was attributable more to the effect being studied than to the sample or setting in which it was studied.
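
    The Q statistic and tau reported above are standard meta-analytic heterogeneity measures. The sketch below computes Cochran's Q and a DerSimonian-Laird estimate of tau from invented per-sample correlations; it is a generic illustration, not the Many Labs 2 analysis code.

```python
# A minimal sketch, with invented data, of the heterogeneity measures referred to
# above: Cochran's Q and tau (the between-sample SD of the true effect), estimated
# here with the DerSimonian-Laird method on Fisher-z correlations. This is a generic
# illustration, not the Many Labs 2 analysis code.
import numpy as np

# Hypothetical per-sample correlations and sample sizes for one replication effect.
rs = np.array([0.12, 0.05, 0.20, 0.08, 0.15, 0.02, 0.18, 0.10])
ns = np.array([150, 220, 180, 300, 120, 260, 200, 170])

z = np.arctanh(rs)            # Fisher z transform of each sample's correlation
v = 1.0 / (ns - 3)            # sampling variance of z
w = 1.0 / v                   # fixed-effect (inverse-variance) weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)              # Cochran's Q
df = len(rs) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)                   # DerSimonian-Laird tau^2
print(f"Q = {Q:.2f} on {df} df, tau = {np.sqrt(tau2):.3f}")
```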