
    Social Value Orientation, Expectations, and Cooperation in Social Dilemmas: A Meta-analysis

    Interdependent situations are pervasive in human life. In these situations, it is essential to form expectations about others’ behaviour so as to adapt one’s own behaviour, increase mutual outcomes, and avoid exploitation. Social value orientation, which describes the dispositional weights individuals attach to their own and to another person’s outcomes, predicts these expectations of cooperation in social dilemmas—an interdependent situation involving a conflict of interests. Yet the scientific evidence is inconclusive about the exact differences in expectations between prosocials, individualists, and competitors. The present meta-analytic results show that, relative to proselfs (individualists and competitors), prosocials expect more cooperation from others in social dilemmas, whereas individualists and competitors do not differ significantly in their expectations. The importance of these expectations in the decision process is further highlighted by the finding that they partially mediate the well-established relation between social value orientation and cooperative behaviour in social dilemmas. In fact, even proselfs are more likely to cooperate when they expect their partner to cooperate.
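    For readers unfamiliar with mediation, the partial-mediation claim (social value orientation predicts expected cooperation, which in turn predicts one's own cooperation) follows the standard product-of-coefficients logic. The sketch below illustrates that logic on simulated data; the variable names, effect sizes, and data are hypothetical and are not taken from the meta-analysis.

```python
# Minimal mediation sketch: svo -> expectation -> cooperation.
# All variables are simulated; names and coefficients are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
svo = rng.normal(size=n)                        # continuous proxy for prosocial orientation
expectation = 0.4 * svo + rng.normal(size=n)    # expected partner cooperation
cooperation = 0.3 * svo + 0.5 * expectation + rng.normal(size=n)
df = pd.DataFrame({"svo": svo, "expectation": expectation, "cooperation": cooperation})

total = smf.ols("cooperation ~ svo", data=df).fit()            # path c (total effect)
a = smf.ols("expectation ~ svo", data=df).fit()                # path a
b = smf.ols("cooperation ~ svo + expectation", data=df).fit()  # paths b and c' (direct effect)

indirect = a.params["svo"] * b.params["expectation"]           # mediated effect a * b
print(f"total effect c   = {total.params['svo']:.3f}")
print(f"direct effect c' = {b.params['svo']:.3f}")
print(f"indirect effect  = {indirect:.3f}")
```

    Partial mediation corresponds to the case where the indirect effect is nonzero but the direct effect c' also remains nonzero, which is the pattern the abstract reports.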

    Eye tracking: empirical foundations for a minimal reporting guideline

    In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, and participant) affect the quality of the recorded eye-tracking data and the eye-movement and gaze measures obtained. We take this review to represent the empirical foundation for reporting guidelines for any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "Empirically based minimal reporting guideline").

    Psychologists are open to change, yet wary of rules

    Psychologists must change the way they conduct and report their research—this notion has been the topic of much debate in recent years. One article published in Psychological Science, which proposed six requirements for researchers concerning data collection and reporting practices as well as four guidelines for reviewers aimed at improving the publication process, has recently received much attention (Simmons, Nelson, & Simonsohn, 2011). We surveyed 1,292 psychologists to address two questions: Do psychologists support these concrete changes to data collection, reporting, and publication practices, and if not, what are their reasons? Respondents also indicated the percentage of print and online journal space that should be dedicated to novel studies and direct replications, as well as the percentage of published psychological research that they believed would be confirmed if direct replications were conducted. We found that psychologists are generally open to change. Five requirements for researchers and three guidelines for reviewers were supported as standards of good practice, and one requirement was even supported as a condition of publication. Psychologists appear to be less in favor of mandatory conditions of publication than of standards of good practice. We conclude that the proposal made by Simmons, Nelson, and Simonsohn (2011) is a starting point for such standards.

    The replication paradox: Combining studies can decrease accuracy of effect size estimates

    Replication is often viewed as the demarcation between science and nonscience. However, contrary to this commonly held view, we show that under the current (selective) publication system replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in the estimated population effect size as a function of publication bias and the studies' sample size or power. We show analytically that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We discuss the implications of our findings for interpreting the results of published and unpublished studies, and for conducting and interpreting meta-analyses. We also discuss solutions to the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
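    A small simulation makes the paradox concrete: when only significant results reach publication, underpowered originals and equally underpowered published replications are both inflated, so pooling them does not remove the bias. The true effect, sample sizes, and significance filter below are illustrative assumptions for this sketch, not the paper's analytic model.

```python
# Illustrative simulation of effect-size overestimation under publication bias.
# The true effect, group sizes, and p < .05 filter are assumptions of this sketch only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.2                                         # true standardized effect

def published_effects(n_per_group, n_studies=200):
    """Observed Cohen's d for studies passing a significant, positive-result filter."""
    effects = []
    while len(effects) < n_studies:
        x = rng.normal(true_d, 1.0, n_per_group)
        y = rng.normal(0.0, 1.0, n_per_group)
        t, p = stats.ttest_ind(x, y)
        if p < 0.05 and t > 0:                       # only significant, positive results get "published"
            pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            effects.append((x.mean() - y.mean()) / pooled_sd)
    return np.array(effects)

originals = published_effects(n_per_group=20)        # underpowered original studies
replications = published_effects(n_per_group=20)     # published replications with the same low power
combined = np.concatenate([originals, replications])

print(f"true effect:                    {true_d:.2f}")
print(f"mean published original d:      {originals.mean():.2f}")
print(f"mean after adding replications: {combined.mean():.2f}")
```

    Under these settings the significance filter only admits large observed effects, so both means land well above the true 0.2, and averaging in the published replications leaves the overestimation essentially intact.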

    A critical meta-analysis of Lens Model Studies in human judgment and decision-making

    Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges with linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have biased the estimates (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regard to moderator variables). In the current study, we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions about the success of bootstrapping with psychometrically corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with those of a traditional meta-analysis (which corrected only for sampling error) indicated that artifact correction leads to (a) an increase in the values of the lens model components, (b) reduced heterogeneity between studies, and (c) greater success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment, and we show the success of bootstrapping.
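    For orientation, the lens model equation decomposes judgmental achievement as r_a = G·R_s·R_e + C·√(1−R_s²)·√(1−R_e²), where R_e is environmental predictability, R_s is the judge's consistency, G is the match between the two linear models, and C captures shared unmodeled variance. The sketch below estimates these components from simulated cue, judgment, and criterion data; the data, cue weights, and variable names are hypothetical and are not drawn from the 31 meta-analysed studies.

```python
# Minimal lens model decomposition on simulated data (cues, judgments, criterion).
# Cue weights and noise levels are illustrative assumptions, not estimates from the meta-analysis.
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 3                                    # observations and cues
cues = rng.normal(size=(n, k))
criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.8, size=n)
judgment = cues @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=1.0, size=n)

def linear_fit(X, y):
    """Least-squares predictions and residuals of y regressed on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ beta
    return pred, y - pred

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

pred_e, resid_e = linear_fit(cues, criterion)    # environment model
pred_s, resid_s = linear_fit(cues, judgment)     # judge model

R_e = corr(pred_e, criterion)                    # environmental predictability
R_s = corr(pred_s, judgment)                     # judge consistency (cognitive control)
G = corr(pred_e, pred_s)                         # linear knowledge (matching of the two models)
C = corr(resid_e, resid_s)                       # unmodeled knowledge

achievement = corr(judgment, criterion)
reconstructed = G * R_s * R_e + C * np.sqrt(1 - R_s**2) * np.sqrt(1 - R_e**2)
print(f"achievement r_a:      {achievement:.3f}")
print(f"lens model equation:  {reconstructed:.3f}")
```

    Because both linear models include an intercept, the decomposition reproduces the achievement correlation exactly, which is what makes the equation useful for separating task predictability from judge consistency and knowledge.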