    People see more of their biases in algorithms

    Algorithmic bias occurs when algorithms incorporate biases in the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and of algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to the bias blind spot were most likely to see more bias in algorithms than in themselves. Participants were also more likely to perceive algorithms than themselves as having been influenced by irrelevant biasing attributes (e.g., race), but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than in themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in the self and suggest how algorithms can be used to reveal and correct biased human decisions.

    Acceptability lies in the eye of the beholder: Self-other biases in GenAI collaborations

    Since the release of ChatGPT, heated discussions have focused on the acceptable uses of generative artificial intelligence (GenAI) in education, science, and business practices. A salient question in these debates pertains to perceptions of the extent to which creators contribute to the co-produced output. As the current research establishes, the answer to this question depends on the evaluation target. Nine studies (seven preregistered, total N = 4498) document that people evaluate their own contributions to co-produced outputs with ChatGPT as higher than those of others. This systematic self–other difference stems from differential inferences regarding types of GenAI usage behavior: People think that they predominantly use GenAI for inspiration, but others use it to outsource work. These self–other differences in turn have direct ramifications for GenAI acceptability perceptions, such that usage is considered more acceptable for the self than for others. The authors discuss the implications of these findings for science, education, and marketing.
