
    The efficient measurement of individual differences in meaning motivation: The need for sense-making short form

    People differ in the extent to which they express a need for sense-making (NSM), and these individual differences are important to understand in light of meaning-making processes. To quantify this important variable, we originally proposed a need for sense-making scale. We now propose a refined, similarly reliable short version of the scale (NSM-SF). The 7-item NSM-SF was validated across a series of four studies (combined N = 1,243). The NSM-SF showed psychometric properties and correlations consistent with its longer forerunner. Additionally, results indicated that the need for sense-making was moderately positively related to the satisfaction of basic psychological needs (autonomy, relatedness, and competence), and negatively related to the frustration of these needs. The research offers a useful, brief tool for assessing the NSM construct and broadens our understanding of basic psychological motivations.

    Competition and moral behavior: A meta-analysis of forty-five crowd-sourced experimental designs

    Does competition affect moral behavior? This fundamental question has been debated among leading scholars for centuries, and more recently, it has been tested in experimental studies yielding a body of rather inconclusive empirical evidence. A potential source of ambivalent empirical results on the same hypothesis is design heterogeneity: variation in true effect sizes across various reasonable experimental research protocols. To provide further evidence on whether competition affects moral behavior and to examine whether the generalizability of a single experimental study is jeopardized by design heterogeneity, we invited independent research teams to contribute experimental designs to a crowd-sourced project. In a large-scale online data collection, 18,123 experimental participants were randomly allocated to 45 randomly selected experimental designs out of 95 submitted designs. We find a small adverse effect of competition on moral behavior in a meta-analysis of the pooled data. The crowd-sourced design of our study allows for a clean identification and estimation of the variation in effect sizes above and beyond what could be expected due to sampling variance. We find substantial design heterogeneity (estimated to be about 1.6 times as large as the average standard error of effect size estimates of the 45 research designs), indicating that the informativeness and generalizability of results based on a single experimental design are limited. Drawing strong conclusions about the underlying hypotheses in the presence of substantive design heterogeneity requires moving toward much larger data collections on various experimental designs testing the same hypothesis.
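    The heterogeneity figure in the abstract (between-design variability expressed as a multiple of the average standard error) can be illustrated with a standard random-effects estimator. The sketch below uses the DerSimonian-Laird method on made-up effect sizes and standard errors; both the estimator choice and the numbers are assumptions for illustration, not the study's actual procedure or data.

    ```python
    import math

    def dl_tau(effects, ses):
        """DerSimonian-Laird estimate of the between-study (here: between-design)
        standard deviation tau, from per-design effect estimates and standard errors."""
        w = [1.0 / se**2 for se in ses]                      # inverse-variance weights
        ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
        q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, effects))  # Cochran's Q
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # truncate at zero
        return math.sqrt(tau2)

    # Hypothetical per-design effect estimates and standard errors (illustrative only)
    effects = [-0.10, 0.05, -0.20, 0.02, -0.15]
    ses = [0.05, 0.06, 0.05, 0.07, 0.06]

    tau = dl_tau(effects, ses)
    # Heterogeneity expressed relative to the average standard error,
    # the same kind of ratio the abstract reports (~1.6 in the study)
    ratio = tau / (sum(ses) / len(ses))
    ```

    A ratio well above 1 means true effects vary across designs by more than the sampling noise of any single design, which is the abstract's argument for why one experimental design alone generalizes poorly.
    
    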
