
    On the epistemological similarities of market liberalism and standpoint theory

    In this paper, we draw attention to the epistemological assumptions of market liberalism and standpoint theory and argue that they have more in common than previously thought. We show that both traditions draw on a similar epistemological bedrock, specifically relating to the fragmentation of knowledge in society and the fact that some of this knowledge cannot easily be shared between agents. We go on to investigate how market liberals and standpoint theorists argue with recourse to these similar foundations, and sometimes diverge, primarily because of normative pre-commitments. One conclusion we draw is that market liberals ought, by their own epistemological lights, to be more attentive to various problems raised by feminist standpoint theorists, and that feminist standpoint theorists ought to be more open to various claims made by market liberals.

    Moral hazards and solar radiation management: evidence from a large-scale online experiment

    Solar radiation management (SRM) may help to reduce the negative outcomes of climate change by minimising or reversing global warming. However, many express the worry that SRM may pose a moral hazard, i.e., that information about SRM may lead to a reduction in climate change mitigation efforts. In this paper, we report a large-scale, preregistered, money-incentivised online experiment with a representative US sample (N = 2,284). We compare actual behaviour (donations to climate change charities and clicks on climate change petition links) as well as stated preferences (support for a carbon tax and self-reported intentions to reduce emissions) between participants who receive information about SRM and two control groups (a salience control that includes information about climate change generally and a content control that includes information about a different topic). Behavioural choices are made with an earned real-money endowment, and stated-preference responses are incentivised via the Bayesian Truth Serum. We fail to find a significant impact of receiving information about SRM and, based on equivalence tests, we provide evidence in favour of the absence of a meaningfully large effect. Our results thus provide evidence for the claim that there is no detectable moral hazard with respect to SRM.
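    As a brief illustration of the equivalence-test logic mentioned above, the sketch below runs a two-one-sided-tests (TOST) procedure, which asks whether a group difference lies within a pre-specified equivalence bound. The simulated donation data, the ±0.5 bound, and the use of statsmodels are illustrative assumptions, not the study's actual analysis.

```python
# Minimal TOST equivalence-test sketch on simulated donation amounts.
# The data, the +/- 0.5 equivalence bound, and the group sizes are
# illustrative assumptions, not the paper's actual preregistered analysis.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
srm_info = rng.normal(loc=5.0, scale=2.0, size=1000)  # hypothetical donations, SRM-information group
control = rng.normal(loc=5.0, scale=2.0, size=1000)   # hypothetical donations, control group

# TOST: rejecting both one-sided nulls supports equivalence,
# i.e. the difference lies inside the [-0.5, 0.5] bound.
p_overall, lower_test, upper_test = ttost_ind(srm_info, control, low=-0.5, upp=0.5)
print(f"TOST p-value: {p_overall:.4f}")  # a small p-value favours the absence of a meaningful effect
```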

    Sure-thing vs. probabilistic charitable giving: experimental evidence on the role of individual differences in risky and ambiguous charitable decision-making

    One of the authors, Philipp Schoenegger, has received research funding from the Forethought Foundation and the Centre for Effective Altruism (they do not provide grant numbers). Charities differ, among other things, in the likelihood that their interventions succeed and produce the desired outcomes, and in the extent to which that likelihood can even be articulated numerically. In this paper, we investigate what best explains charitable giving behaviour regarding charities whose interventions will succeed with a quantifiable and high probability (sure-thing charities) and charities whose interventions have only a small and hard-to-quantify probability of bringing about the desired end (probabilistic charities). We study individual differences in risk/ambiguity attitudes, empathy, numeracy, optimism, and donor type (warm-glow vs. purely altruistic donor type) as potential predictors of this choice. We conduct a money-incentivised, pre-registered experiment on Prolific with a representative UK sample (n = 1,506) to investigate participant choices (i) between these two types of charities and (ii) about one randomly selected charity. Overall, we find little to no evidence that individual differences predict choices regarding sure-thing and probabilistic charities, with the exception that a purely altruistic donor type predicts donations to probabilistic charities when participants were presented with a randomly selected charity in (ii). Conducting exploratory equivalence tests, we find that the data provide robust evidence in favour of the absence of an effect (or a negligibly small effect) where we fail to reject the null. This is corroborated by exploratory Bayesian analyses. We take this paper to contribute to the literature on charitable giving via this comprehensive null result, in pursuit of a cumulative science.
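    As a rough illustration of how individual differences might be related to the binary charity choice studied here, the sketch below fits a logistic regression on simulated data. The variable names, the simulated values, and the model specification are assumptions for illustration only and do not reproduce the paper's pre-registered measures or models.

```python
# Sketch of a logistic regression relating hypothetical individual-difference
# measures to a binary charity choice (probabilistic vs. sure-thing).
# All variables and data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1506
df = pd.DataFrame({
    "chose_probabilistic": rng.integers(0, 2, n),  # 1 = donated to the probabilistic charity
    "risk_tolerance": rng.normal(size=n),          # hypothetical individual-difference scores
    "ambiguity_tolerance": rng.normal(size=n),
    "numeracy": rng.normal(size=n),
    "pure_altruism": rng.integers(0, 2, n),        # 1 = purely altruistic donor type
})

model = smf.logit(
    "chose_probabilistic ~ risk_tolerance + ambiguity_tolerance + numeracy + pure_altruism",
    data=df,
).fit(disp=0)
print(model.summary())  # null-like coefficients here simply reflect the random placeholder data
```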

    Diminished diversity-of-thought in a standard large language model

    We test whether large language models (LLMs) can be used to simulate human participants in social-science studies. To do this, we ran replications of 14 studies from the Many Labs 2 replication project with OpenAI's text-davinci-003 model, colloquially known as GPT-3.5. Based on our pre-registered analyses, we find that among the eight studies we could analyse, our GPT sample replicated 37.5% of the original results and 37.5% of the Many Labs 2 results. However, we were unable to analyse the remaining six studies due to an unexpected phenomenon we call the "correct answer" effect: different runs of GPT-3.5 answered nuanced questions probing political orientation, economic preference, judgement, and moral philosophy with zero or near-zero variation in responses, converging on a supposedly "correct answer." In one exploratory follow-up study, we found that a "correct answer" was robust to changing the demographic details that precede the prompt. In another, we found that most but not all "correct answers" were robust to changing the order of answer choices. One of our most striking findings occurred in our replication of the Moral Foundations Theory survey, where GPT-3.5 identified as a political conservative in 99.6% of cases and as a liberal in 99.3% of cases in the reverse-order condition. However, both self-reported 'GPT conservatives' and 'GPT liberals' showed right-leaning moral foundations. Our results cast doubt on the validity of using LLMs as a general replacement for human participants in the social sciences. They also raise concerns that a hypothetical AI-led future may be subject to a diminished diversity of thought.

    Artificial intelligence in psychology research

    Large Language Models have grown vastly in capability. One potential application of such AI systems is to support data collection in the social sciences, where perfect experimental control is currently unfeasible and the collection of large, representative datasets is generally expensive. In this paper, we re-replicate 14 studies from the Many Labs 2 replication project (Klein et al., 2018) with OpenAI's text-davinci-003 model, colloquially known as GPT3.5. For the 10 studies that we could analyse, we collected a total of 10,136 responses, each of which was obtained by running GPT3.5 with the corresponding study's survey inputted as text. We find that our GPT3.5-based sample replicates 30% of the original results as well as 30% of the Many Labs 2 results, although there is heterogeneity in both these numbers (as we replicate some original findings that Many Labs 2 did not, and vice versa). We also find that, unlike the corresponding human subjects, GPT3.5 answered some survey questions with extreme homogeneity, with zero variation across different runs' responses, raising concerns that a hypothetical AI-led future may in certain ways be subject to a diminished diversity of thought. Overall, while our results suggest that Large Language Model psychology studies are feasible, their findings should not be assumed to straightforwardly generalise to the human case. Nevertheless, AI-based data collection may eventually become a viable and economically relevant method in the empirical social sciences, making an understanding of its capabilities and applications central. Comment: 28 pages, 2 visualizations (1 table and 1 figure); the preregistered OSF database is available at https://osf.io/dzp8t
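    As an illustration of the kind of data-collection loop such a study might use, the sketch below poses one survey item to a completion model repeatedly and records the answers. The prompt wording and sampling settings are assumptions, and text-davinci-003 has since been retired, so this legacy-interface code is a sketch rather than the authors' actual pipeline.

```python
# Sketch: repeatedly query a completion model with a survey item and collect
# the responses as stand-ins for independent participants. The prompt text and
# settings are illustrative; text-davinci-003 is retired, so this will not run
# against the current OpenAI API and uses the legacy (pre-1.0) openai package.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

survey_item = (
    "Please answer the following survey question.\n"
    "On a scale from 1 (strongly disagree) to 7 (strongly agree), how much do you "
    "agree with the statement: 'People should rely on their own judgement rather "
    "than on experts'?\nAnswer with a single number."
)

responses = []
for _ in range(100):  # repeated runs stand in for independent 'participants'
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=survey_item,
        max_tokens=5,
        temperature=1.0,  # non-zero temperature, so variation across runs is possible
    )
    responses.append(completion.choices[0].text.strip())

# Zero or near-zero variation across runs would mirror the extreme homogeneity noted above.
print(set(responses))
```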

    Addressing climate change with behavioral science: a global intervention tournament in 63 countries

    Effectively reducing climate change requires marked, global behavior change. However, it is unclear which strategies are most likely to motivate people to change their climate beliefs and behaviors. Here, we tested 11 expert-crowdsourced interventions on four climate mitigation outcomes: beliefs, policy support, information-sharing intention, and an effortful tree-planting behavioral task. Across 59,440 participants from 63 countries, the interventions' effectiveness was small, largely limited to nonclimate skeptics, and differed across outcomes: beliefs were strengthened mostly by decreasing psychological distance (by 2.3%), policy support by writing a letter to a member of a future generation (2.6%), and information sharing by negative emotion induction (12.1%), while no intervention increased the more effortful behavior; several interventions even reduced tree planting. Last, the effects of each intervention differed depending on people's initial climate beliefs. These findings suggest that the impact of behavioral climate interventions varies across audiences and target behaviors.
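    As a rough sketch of how per-intervention effects against a control condition might be estimated for a single outcome, the code below fits an ordinary least squares model with treatment-coded condition dummies on simulated data; the tournament's actual pre-registered, multilevel analysis is not reproduced, and all values are illustrative assumptions.

```python
# Sketch: estimate each intervention's effect on one outcome (e.g. a 0-100
# climate-belief score) relative to a control condition, using simulated data
# and plain OLS with condition dummies. Purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
conditions = ["control"] + [f"intervention_{i}" for i in range(1, 12)]
n_per = 500
df = pd.DataFrame({
    "condition": np.repeat(conditions, n_per),
    "belief": rng.normal(loc=70, scale=15, size=len(conditions) * n_per),
})

# Treatment coding with "control" as the reference category:
# each coefficient approximates that intervention's effect vs. control.
model = smf.ols("belief ~ C(condition, Treatment(reference='control'))", data=df).fit()
print(model.params.round(2))
```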
