
    People see more of their biases in algorithms

    Algorithmic bias occurs when algorithms incorporate biases from the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were identical and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and of algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Participants most susceptible to the bias blind spot were also most likely to see more bias in algorithms than in themselves. Participants were likewise more likely to perceive algorithms than themselves as having been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than in themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in the self, and they suggest how algorithms can be used to reveal and correct biased human decisions.

    The least likely act: Overweighting atypical past behavior in behavioral predictions.

    When people predict the future behavior of a person, thinking of that target as an individual decreases the accuracy of their predictions. The present research examined one potential source of this bias: whether and why predictors overweight the atypical past behavior of individuals. The results suggest that predictors do indeed overweight the atypical past behavior of an individual. Atypical past behavior is more cognitively accessible than typical past behavior, which leads it to be overweighted in the impressions that serve as the basis for predictions. Predictions for group members appear less susceptible to this bias, presumably because predictors are less likely to form a coherent impression of a group than of an individual before making their predictions.

    Which social comparisons influence happiness with unequal pay?

    We examine which social comparisons most affect happiness with pay that is unequally distributed (e.g., salaries and bonuses). We find that ensemble representation, attention to statistical properties of distributions such as their range and mean, makes the proximal extreme (i.e., the maximum or minimum) and the distribution mean salient social comparison standards. Happiness with a salary or bonus is more affected by how it compares to the distribution mean and the proximal extreme than by exemplar-based properties of the payment, such as its comparison to the nearest payment or its rank in the distribution. This holds for both randomly assigned and performance-based payments. Process studies demonstrate that ensemble representations lead people to spontaneously select these statistical properties of pay distributions as comparison standards. Exogenously increasing the salience of less extreme exemplars moderates the influence of the maximum on happiness with pay, but exogenously increasing the salience of the distribution maximum does not. As with other social comparison standards, top-down information moderates their selection: happiness with a bonus payment is influenced by the largest payment made to others who solve the same math problems, for instance, but not by the largest payment made to others who solve different verbal problems. Our findings yield theoretical and practical insights about which members of groups are selected as social comparison standards, the effects of relative income on happiness, and the attentional processes involved in ensemble representation.

    More Intense Experiences, Less Intense Forecasts: Why People Overweight Probability Specifications in Affective Forecasts

    We propose that affective forecasters overestimate the extent to which experienced hedonic responses to an outcome are influenced by the probability of its occurrence. The experience of an outcome (e.g., winning a gamble) is typically more affectively intense than the simulation of that outcome (e.g., imagining winning a gamble) on which the affective forecast for it is based. We suggest that, as a result, experiencers allocate a larger share of their attention to the outcome (e.g., winning the gamble) and less to its probability specifications than do affective forecasters. Consequently, hedonic responses to an outcome are less sensitive to its probability specifications than are affective forecasts for that outcome. The results of six experiments support our theory. Affective forecasters overestimated how sensitive experiencers would be to the probability of positive and negative outcomes (Experiments 1 and 2). Consistent with our attentional account, differences in sensitivity to probability specifications disappeared when the attention of forecasters was diverted from probability specifications (Experiment 3) or when the attention of experiencers was drawn toward probability specifications (Experiment 4). Finally, differences in sensitivity to probability specifications between forecasters and experiencers were diminished when the forecasted outcome was more affectively intense (Experiments 5 and 6).

    Preference for human, not algorithm aversion

    People sometimes exhibit a costly preference for humans relative to algorithms, which is often characterized as a domain-general algorithm aversion. I propose that it is instead driven by biased evaluations of the self and other humans, and that it occurs more narrowly in domains where identity is threatened and evaluative criteria are ambiguous.

    Negativity bias in attribution of external agency

    This research investigated whether people are more likely to attribute events to external agents when those events are negative rather than neutral or positive. Participants more often believed that ultimatum game partners were humans rather than computers when the partners offered unusually unfavorable divisions than when they offered unusually favorable divisions (Experiment 1A), even when their human partners had no financial stake in the game (Experiment 1B). In subsequent experiments, participants were most likely to infer that gambles were influenced by an impartial participant when the outcomes of those gambles were losses rather than wins (Experiments 2 and 3), despite the explicitly equal probability of each outcome. The results suggest a negative agency bias: negative events are more often attributed to the influence of external agents than are similar positive and neutral events, independent of their subjective probability.

    Utility: Anticipated, Experienced, and Remembered
