5 research outputs found

    The impact of doubt on the experience of regret

    Get PDF
    Decisions often produce considerable levels of doubt and regret, yet little is known about how these experiences are related. In six sets of studies (and two pilot studies; total N = 2268), we consistently find that doubts arising after a decision (i.e., when people start questioning whether they made the correct decision) intensify regret via increased feelings of blame for having made a poor choice. These results are consistent with decision justification theory (Connolly & Zeelenberg, 2002) and regret regulation theory (Zeelenberg & Pieters, 2007), yet inconsistent with subjective expected pleasure theory (SEP; Mellers, Schwartz, & Ritov, 1999). That is, SEP would have predicted less regret, because those who already doubted their decision should be less surprised to learn that their decision could indeed have been better (compared with those who were certain that they made the correct decision). We find mixed results for the effect of post-decisional doubt on the experience of relief, and no support for a relationship between a person’s degree of doubt before a decision and the intensity of regret. Implications and future directions are discussed.

    Taking Algorithmic (Vs. Human) Advice Reveals Different Goals to Others

    Get PDF
    People are increasingly likely to obtain advice from algorithms. But what does taking advice from an algorithm (as opposed to a human) reveal to others about the advice seeker’s goals? In five studies (total N = 1927), we find that observers attribute to advice seekers the primary goal that an algorithm is designed to pursue in a situation. As a result, when explaining advice seekers’ subsequent behaviors and decisions, this goal is primarily taken into account, leaving less room for other possible motives that could account for people’s actions. Such secondary goals are, however, more readily taken into account when (the same) advice comes from human advisors, leading to different judgments about advice seekers’ motives. Specifically, advice seekers’ goals were perceived differently in terms of fairness, profit-seeking, and prosociality depending on whether the obtained advice came from an algorithm or another human. We find that these differences are driven in part by the different expectations people have about the type of information that algorithmic vs. human advisors take into account when making their recommendations. The presented work has implications for (algorithmic) fairness perceptions and human-computer interaction.

    Leader decision speed as a signal of honesty

    No full text
    Decision speed is emerging as an important topic for organizations, yet its consequences for leaders have received little research attention. The present research addresses this gap by examining how the speed with which leaders come to decisions shapes observers' perceptions and behaviors. In three incentivized experiments, participants evaluated leaders who decided whether to include or exclude followers from participating in consequential decisions. Leaders were seen as more honest when they were fast (vs. slow) to include followers, but as less honest when they were fast (vs. slow) to exclude followers from decisions. These perceptions influenced key outcomes: the willingness to reward leaders (Experiment 1) and the willingness to cooperate with leaders (Experiment 3). Consistent with a signaling perspective, these effects disappeared when observers learned that leaders were externally pressured to decide quickly or slowly (Experiment 2). The present research offers new insights into the cues that people use when judging leaders' decision-making processes and into the behavioral consequences of these judgments.

    Taking a Disagreeing Perspective Improves the Accuracy of People’s Quantitative Estimates

    No full text
    Many decisions rest on people’s ability to estimate unknown quantities. In such judgments, the aggregate estimate of a crowd of individuals is often more accurate than most individual estimates. Remarkably, similar principles apply when multiple estimates from the same person are aggregated, and a key challenge is to identify strategies that improve the accuracy of people’s aggregate estimates. Here, we present the following strategy: combine people’s first estimate with a second estimate they make from the perspective of someone they often disagree with. In five preregistered experiments (N = 6,425 adults; N = 53,086 estimates) with populations from the United States and United Kingdom, we found that this strategy produced more accurate estimates than simply making a second guess or making the second estimate from the perspective of someone they often agree with. These results suggest that disagreement, often highlighted for its negative impact, is a powerful tool for producing accurate judgments.
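
    The following is a minimal Python sketch (not the authors' materials; all data and names are hypothetical) of the within-person aggregation idea described in the abstract: average each person's first estimate with their second, disagreeing-perspective estimate and compare the error of the aggregate against the first estimates alone.

    # Sketch of within-person ("inner crowd") aggregation.
    # All estimates below are hypothetical illustration data.
    def mean_abs_error(estimates, truth):
        """Mean absolute error of a list of estimates against the true value."""
        return sum(abs(e - truth) for e in estimates) / len(estimates)

    # Hypothetical first estimates of an unknown quantity (true value: 100).
    first_guesses = [80, 95, 120, 70, 110]
    # Hypothetical second estimates, made from the perspective of someone the
    # estimator often disagrees with; these deviate more from the first guess,
    # which is what makes the within-person average informative.
    second_guesses = [105, 115, 90, 100, 85]

    # Within-person aggregate: average each person's two estimates.
    aggregates = [(f + s) / 2 for f, s in zip(first_guesses, second_guesses)]

    print(f"MAE, first estimates only: {mean_abs_error(first_guesses, 100):.1f}")  # 17.0
    print(f"MAE, first+second average: {mean_abs_error(aggregates, 100):.1f}")     # 7.0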

    Predictive maintenance for industry 5.0: behavioural inquiries from a work system perspective

    No full text
    Predictive Maintenance (PdM) solutions assist decision-makers by predicting equipment health and scheduling maintenance actions, but their implementation in industry remains problematic. Specifically, prior research repeatedly indicates that decision-makers often refuse to adopt data-driven, system-generated advice in their working procedures. In this paper, we address these acceptance issues by studying how PdM implementation changes the nature of decision-makers’ work and how these changes affect their acceptance of PdM systems. We build on the human-centric Smith-Carayon Work System model to synthesise literature from research areas where system acceptance has been explored in more detail. Consequently, we expand the maintenance literature by investigating the human, task, and organisational characteristics of PdM implementation. Following the literature review, we distil ten propositions regarding decision-making behaviour in PdM settings. Next, we verify each proposition’s relevance through in-depth interviews with experts from both academia and industry. Based on the propositions and interviews, we identify four factors that facilitate PdM adoption: trust between the decision-maker and the model (maker), control in the decision-making process, availability of sufficient cognitive resources, and proper organisational allocation of decision-making. Our results contribute to a fundamental understanding of acceptance behaviour in a PdM context and provide recommendations to increase the effectiveness of PdM implementations.