10 research outputs found

    The Accuracy-Enhancing Effect Of Biasing Cues

    Extrinsic cues such as price and irrelevant attributes have been shown to bias consumers' product judgments. Results in this article replicate those findings in pretrial judgments but show that such biasing cues can improve quality judgments at a later point in time. Initially biasing cues can even yield more accurate judgments than cues that do not bias pretrial judgments and can help consumers after a delay (e.g., at the time of repeat purchase) to determine how much they had liked a product when they tried it before. These results suggest that trying to deceive consumers with the use of biasing cues may induce trial in the short term but may come back to haunt the deceiver at the time of repeat purchase. © 2009

    Thumbs Up or Down: Consumer Reactions to Decisions by Algorithms Versus Humans

    Although companies increasingly are adopting algorithms for consumer-facing tasks (e.g., application evaluations), little research has compared consumers’ reactions to favorable decisions (e.g., acceptances) versus unfavorable decisions (e.g., rejections) about themselves that are made by an algorithm versus a human. Ten studies reveal that, in contrast to managers’ predictions, consumers react less positively when a favorable decision is made by an algorithmic (vs. a human) decision maker, whereas this difference is mitigated for an unfavorable decision. The effect is driven by distinct attribution processes: it is easier for consumers to internalize a favorable decision outcome that is rendered by a human than by an algorithm, but it is easy to externalize an unfavorable decision outcome regardless of the decision-maker type. The authors conclude by advising managers on how to limit the likelihood of less positive reactions toward algorithmic (vs. human) acceptances.

    Points of (Dis)parity: Expectation Disconfirmation from Common Attributes in Consumer Choice

    Whereas many theories of decision making predict that presenting or not presenting common features of choice alternatives should not affect choice, in this research we show that common features can be a powerful driver of choice behavior. We conjecture that consumers often hold expectations about the features the choice alternatives have in common, and demonstrate that presenting (vs. omitting) a common feature increases the choice probability of the alternative that would have been expected to perform worse on the common feature. This effect occurs because performance on the common feature is judged not at face value, but relative to an expectation about which product should perform best on that feature. The effect obtains despite the fact that performance on the common feature is clearly the same when alternatives are presented side by side. Finally, we demonstrate four boundary conditions of our effect.
