9 research outputs found

    Belief digitization in economic prediction

    Economic choices depend on our predictions of the future. Yet, at times, predictions are not based on all relevant information, but instead on the single most likely possibility, which is treated as though certainly the case—that is, digitally. Two sets of studies test whether this digitization bias occurs in higher-stakes economic contexts. When making predictions about future asset prices, participants ignored conditional probability information given relatively unlikely events and relied entirely on conditional probabilities given the more likely events. This effect was found for both financial aggregates and individual stocks, for binary predictions about direction and continuous predictions about expected values, and even when the “unlikely” event explicitly had a probability as high as 30%; further, it was not moderated by investing experience. Implications for behavioral finance are discussed.
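    The normative benchmark the abstract describes can be made concrete: a prediction about an asset's expected value should weight each conditional expectation by the probability of its event, whereas a digitized prediction conditions entirely on the single most likely event. A minimal sketch, with all numbers hypothetical rather than taken from the studies:

```python
# Hypothetical scenario: a stock's expected price depends on whether an
# earnings report is strong (70% likely) or weak (30% likely).
p_strong, p_weak = 0.70, 0.30
ev_if_strong = 120.0   # expected price given a strong report
ev_if_weak = 80.0      # expected price given a weak report

# Normative prediction: weight both conditional expectations
# by the probability of each event.
normative = p_strong * ev_if_strong + p_weak * ev_if_weak   # 108.0

# Digitized prediction: treat the most likely event as certain
# and ignore the 30% possibility entirely.
digitized = ev_if_strong   # 120.0

print(normative, digitized)
```

    The gap between the two numbers is the distortion the studies measure: the digitized forecast overshoots because the weak-report branch contributes nothing to it.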

    Belief digitization: Do we treat uncertainty as probabilities or as bits?

    Humans are often characterized as Bayesian reasoners. Here, we question the core Bayesian assumption that probabilities reflect degrees of belief. Across eight studies, we find that people instead reason in a digital manner, assuming that uncertain information is either true or false when using that information to make further inferences. Participants learned about two hypotheses, both consistent with some information but one more plausible than the other. Although people explicitly acknowledged that the less-plausible hypothesis had positive probability, they ignored this hypothesis when using the hypotheses to make predictions. This was true across several ways of manipulating plausibility (simplicity, evidence fit, explicit probabilities) and a diverse array of task variations. Taken together, the evidence suggests that digitization occurs in prediction because it circumvents processing bottlenecks surrounding people's ability to simulate outcomes in hypothetical worlds. These findings have implications for philosophy of science and for the organization of the mind.
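    The contrast between treating probabilities as degrees of belief and treating them as bits can be illustrated with a toy marginalization; all numbers below are hypothetical, not taken from the studies:

```python
# Hypothetical setup: two hypotheses, the first more plausible,
# each licensing a different prediction about an outcome.
p_h1, p_h2 = 0.7, 0.3           # explicit probabilities of the hypotheses
p_out_h1, p_out_h2 = 0.2, 0.9   # P(outcome | hypothesis)

# Bayesian (probabilistic) prediction: marginalize over both hypotheses.
p_out_bayes = p_h1 * p_out_h1 + p_h2 * p_out_h2   # ~0.41

# Digital prediction: treat the more plausible hypothesis as simply
# true and condition on it alone, discarding the 30% alternative.
p_out_digital = p_out_h1   # 0.2

print(p_out_bayes, p_out_digital)
```

    Because the less plausible hypothesis strongly predicts the outcome here, marginalizing and digitizing give very different answers, which is the kind of divergence the eight studies exploit.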

    Simplicity and complexity preferences in causal explanation: An opponent heuristic account

    People often prefer simple to complex explanations because they generally have higher prior probability. However, simpler explanations are not always normatively superior because they often do not account for the data as well as complex explanations. How do people negotiate this trade-off between prior probability (favoring simple explanations) and goodness-of-fit (favoring complex explanations)? Here, we argue that people use opponent heuristics to simplify this problem—that people use simplicity as a cue to prior probability but complexity as a cue to goodness-of-fit. Study 1 finds direct evidence for this claim. In subsequent studies, we examine factors that lead one or the other heuristic to predominate in a given context. Studies 2 and 3 find that people have a stronger simplicity preference in deterministic rather than stochastic contexts, while Studies 4 and 5 find that people have a stronger simplicity preference for physical rather than social causal systems, suggesting that people use abstract expectations about causal texture to modulate their explanatory inferences. Together, we argue that these cues and contextual moderators act as powerful constraints that can help to specify the otherwise ill-defined problem of what distributions to use in Bayesian hypothesis comparison.
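    The prior/fit trade-off the abstract describes amounts to comparing unnormalized posteriors, P(H) * P(data | H), for a simple and a complex explanation. A minimal sketch with hypothetical numbers:

```python
# Hypothetical comparison: a simple explanation (one cause) with a high
# prior but mediocre fit, vs. a complex explanation (two causes) with a
# lower prior but a better fit to the observed data.
prior_simple, fit_simple = 0.6, 0.3     # P(H), P(data | H)
prior_complex, fit_complex = 0.1, 0.9

# Unnormalized posteriors: P(H | data) is proportional to P(H) * P(data | H).
post_simple = prior_simple * fit_simple     # ~0.18
post_complex = prior_complex * fit_complex  # ~0.09

# Here the simple explanation wins, but shifting either the priors or
# the likelihoods can flip the comparison; this is the trade-off the
# opponent heuristics are proposed to approximate.
print(post_simple > post_complex)
```

    The paper's point is that the priors and likelihoods in such a comparison are rarely given, so simplicity and complexity serve as heuristic proxies for them.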

    Cognition as sense-making: An empirical enquiry

    Humans must understand their world in order to act on it. I develop this premise into a set of empirical claims concerning the organization of the mind—namely, claims about strategies that people use to bring evidence to bear on hypotheses, and to harness those hypotheses for predicting the future and making choices. By isolating these sense-making strategies, we can study which faculties of mind share common cognitive machinery.

    My object in Chapter 1 is to make specific the claim that a common logic of explanation underlies diverse cognitive functions. In this dissertation, the empirical work focuses on causal inference and categorization—the core achievements of higher-order cognition—but there are rumblings throughout psychology, hinting that sense-making processes may be far more general. I explore some of these rumblings and hints.

    In Chapters 2–4, I get into the weeds of the biases that afflict our explanatory inferences—necessary side effects of the heuristics and strategies that make such inference possible. Chapter 2 looks at the inferred evidence strategy—a way that reasoners coordinate evidence with hypotheses. Chapter 3 examines our preferences for simple and for complex explanations, arguing that there are elements in explanatory logic favoring simplicity and elements favoring complexity—opponent heuristics which are tuned depending on contextual factors. Chapter 4 studies the aftermath of explanatory inferences—how such inferences are used to predict the future. I show that these inferences are not treated probabilistically but digitally, as certainly true or false, leading to distortions in predictions.

    Chapter 5 considers the origins of these strategies. Given that children and adults are sometimes capable of sophisticated statistical intuition, might these heuristics be learned through repeated experiences with rational inference? Or might the converse be true, with our probabilistic machinery built atop an early-emerging heuristic foundation? I use the inferred evidence strategy as a case study to examine this question.

    Chapters 6 and 7 are concerned with how these processes propagate to social cognition and action. Chapter 6 studies how all three of these strategies and associated biases—inferred evidence, opponent simplicity heuristics, and digital prediction—enter into our stereotyping behavior and our mental-state inferences. Chapter 7 looks at how explanatory inferences influence our choices, again using inferred evidence as a case study. We shall find that choice contexts invoke processes that operate on top of explanatory inference, which can lead to choices that are simultaneously less biased but also more incoherent.

    In the concluding Chapter 8, I close with a meditation on the broader implications of this research program for human rationality, and for probabilistic notions of rationality in particular. Even as our efforts to make sense of things can get us into trouble, they may be our only way of coping with the kinds of uncertainty we face in the world.
