63 research outputs found

    Knowing that you matter, matters! The Interplay of Meaning, Monetary Incentives, and Worker Recognition

    Abstract: We manipulate workers' perceived meaning of a job in a field experiment. Half of the workers are informed that their job is important; the other half are told that their job is of no relevance. Results show that workers exert more effort when meaning is high, corroborating previous findings on the relationship between meaning and work effort. We then compare the effect of meaning to the effects of monetary incentives and of worker recognition via symbolic awards, and we also examine interaction effects. While meaning outperforms monetary incentives, the latter have a robust positive effect on performance that is independent of meaning. In contrast, meaning and recognition have largely similar effects but interact negatively. Our results are in line with image-reward theory (Benabou and Tirole 2006) and suggest that meaning and worker recognition operate via the same channel, namely image seeking.

    Coordination in Networks Formation: Experimental Evidence on Learning and Salience


    Trust in a bottle


    The Terminator of Social Welfare? The Economic Consequences of Algorithmic Discrimination

    Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts to overcome the central challenge of missing counterfactuals, which generally impedes assessing the economic downstream consequences of algorithmic discrimination. This way, we are able to precisely quantify downstream efficiency and welfare ramifications, which provides a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that AI systems' capability to enhance welfare critically depends on the degree of inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are only observed if decision-makers take a particular action so that the data is selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results indicate that continued learning, by creating feedback loops, can remedy algorithmic discrimination and its associated negative effects over time.