17 research outputs found

    The Replication Database: Documenting the Replicability of Psychological Science

    In psychological science, replicability (repeating a study with a new sample and obtaining consistent results; Parsons et al., 2022) is critical for affirming the validity of scientific findings. Despite its importance, replication efforts remain rare in psychological science, and many attempts fail to corroborate past findings. This scarcity, compounded by the difficulty of accessing replication data, jeopardizes the efficient allocation of research resources and impedes scientific advancement. To address this gap, we present the Replication Database (https://forrt-replications.shinyapps.io/fred_explorer), a novel platform hosting 1,239 original findings paired with replication findings. The infrastructure of this database allows researchers to submit, access, and engage with replication findings. The database makes replications visible and easily findable via a graphical user interface, and it tracks replication rates across factors such as publication year or journal. This will facilitate future efforts to evaluate the robustness of psychological research.
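    As an illustration of the tracking the database enables, replication rates by publication year could be computed from a tabular export along the following lines. This is a minimal sketch in Python; the file name and column names ("orig_year", "replication_outcome") are hypothetical placeholders, not the database's actual schema.

    import pandas as pd

    # Minimal sketch: replication rate per publication year of the original
    # finding, from an assumed CSV export of the Replication Database.
    # "fred_export.csv", "orig_year", and "replication_outcome" are
    # hypothetical names, not the actual FReD schema.
    df = pd.read_csv("fred_export.csv")

    # Count a finding as replicated if its outcome is coded as a success.
    df["replicated"] = df["replication_outcome"].eq("success")

    # Share of successful replications and number of pairs per year.
    rates = (
        df.groupby("orig_year")["replicated"]
          .agg(rate="mean", n="size")
          .reset_index()
    )
    print(rates)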

    The Open Anchoring Quest Dataset: Anchored Estimates from 96 Studies on Anchoring Effects

    People’s estimates are biased toward previously considered numbers (anchoring). We have aggregated all available data from anchoring studies that included at least two anchors into one large dataset. Data were standardized to one estimate per row, coded according to a wide range of variables, and are available for download and analysis online (https://metaanalyses.shinyapps.io/OpAQ/). Because the dataset includes both original data and metadata, it allows for fine-grained analyses (e.g., correlations of estimates across different tasks) as well as meta-analyses (e.g., effect sizes for anchoring effects).
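    As a worked example of the meta-analytic use described above, an anchoring effect size per task could be computed as a standardized mean difference (Cohen's d) between high- and low-anchor estimates. A minimal sketch, assuming a local copy of the dataset in long format with hypothetical column names ("task", "anchor_condition", "estimate"); consult the OpAQ codebook for the actual variable names.

    import numpy as np
    import pandas as pd

    # "opaq_data.csv" and the column names below are illustrative assumptions.
    df = pd.read_csv("opaq_data.csv")

    def cohens_d(high, low):
        """Standardized mean difference between two groups of estimates."""
        n1, n2 = len(high), len(low)
        pooled_var = ((n1 - 1) * high.var(ddof=1) +
                      (n2 - 1) * low.var(ddof=1)) / (n1 + n2 - 2)
        return (high.mean() - low.mean()) / np.sqrt(pooled_var)

    # One effect size per estimation task: higher anchors should yield
    # higher estimates, so d > 0 indicates an anchoring effect.
    effects = df.groupby("task").apply(
        lambda g: cohens_d(g.loc[g["anchor_condition"] == "high", "estimate"],
                           g.loc[g["anchor_condition"] == "low", "estimate"])
    )
    print(effects.sort_values(ascending=False))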

    Quantity Estimation Database

    No full text

    Algorithm Familiarity

    No full text
    A set of experiments investigating whether participants appreciate the advice of algorithms more when they are (more) familiar with those algorithms.

    Colored Squares Database

    No full text

    Anchoring & Adjustment / Time Pressure (ExPra 2 - WiSe 2021/22)

    No full text
    Unpublished raw data on the anchoring effect collected within the course "Experimentalpsychologisches Praktikum 2" at the University of Tübingen in the winter term 2021/22.

    Bimodal Flanker: Audio vs. Visual

    No full text

    Unexpected Receipt of Advice

    No full text
    An investigation of the effects of advice expectation on advice weighting.

    Navigating Anchor Relevance Skillfully: Expertise Reduces Susceptibility to Anchoring Effects

    No full text
    Fifty years ago, Tversky and Kahneman (1974) described anchoring as the phenomenon whereby an irrelevant numerical value influences a subsequent numerical judgment. Although expertise strongly influences the accuracy of judgments, its role in anchoring remains unclear, with findings of reduced, similar, and stronger anchoring in experts compared to novices. Moreover, three prominent theories of anchoring, i.e., the Insufficient Adjustment Model, the Selective Accessibility Model, and the Scale Distortion Theory, make different predictions regarding the influence of expertise on anchoring. To address this inconsistency and to test these theories against each other, we manipulate individuals' expertise prior to a perceptual estimation task. Additionally, we manipulate anchor relevance and anchor extremity. In two preregistered experiments, we find that experts do indeed show less anchoring than novices and that more extreme anchors lead to stronger anchoring effects. However, we do not find an effect of anchor relevance in either experiment. These results add to the growing body of literature showing that expertise reduces anchoring effects. Moreover, although no anchoring theory is clearly supported in the experiments, the results are mostly consistent with the Insufficient Adjustment Model (assuming ranges of plausible values) and with the Scale Distortion Theory. Our findings highlight the importance of expertise in judgments in general and anchoring in particular. Thus, theories of anchoring should take expertise into account as a strong inhibitor of anchoring effects.

    The advice less taken: The consequences of receiving unexpected advice

    No full text
    Although new information technologies and social networks make a wide variety of opinions and advice easily accessible, one can never be sure of receiving support on a focal judgment task. Nevertheless, participants in traditional advice taking studies are by default informed in advance about the opportunity to revise their judgment in light of advice. The expectation of advice, however, may affect the weight assigned to it. The present research therefore investigates whether the advice taking process depends on the expectation of advice in the judge-advisor system (JAS). Five preregistered experiments (total N = 2019) compared low and high levels of advice expectation. While there was no evidence for expectation effects in three experiments with a block-wise structure, we obtained support for a positive influence of advice expectation on advice weighting in two experiments implementing sequential advice taking. The paradigmatic disclosure of the full procedure to participants thus constitutes an important boundary condition for the ecological study of advice taking behavior. The results suggest that the conventional JAS procedure fails to capture a class of judgment processes where advice is unexpected and therefore weighted less.