33 research outputs found

    Re-visions of rationality?

    Empirical evidence suggests proponents of the ‘adaptive toolbox’ framework of human judgment need to rethink their vision of rationality.

    Cognitive processes, models and metaphors in decision research

    Decision research in psychology has traditionally been influenced by the homo oeconomicus metaphor with its emphasis on normative models and deviations from the predictions of those models. In contrast, the principal metaphor of cognitive psychology conceptualizes humans as ‘information processors’, employing processes of perception, memory, categorization, problem solving and so on. Many of the processes described in cognitive theories are similar to those involved in decision making, and thus increasing cross-fertilization between the two areas is an important endeavour. A wide range of models and metaphors has been proposed to explain and describe ‘information processing’, and many models have been applied to decision making in ingenious ways. This special issue encourages cross-fertilization between cognitive psychology and decision research by providing an overview of current perspectives in one area that continues to highlight the benefits of the synergistic approach: cognitive modeling of multi-attribute decision making. In this introduction we discuss aspects of the cognitive system that need to be considered when modeling multi-attribute decision making (e.g., automatic versus controlled processing, learning and memory constraints, metacognition) and illustrate how such aspects are incorporated into the approaches proposed by contributors to the special issue. We end by discussing the challenges posed by the contrasting and sometimes incompatible assumptions of the models and metaphors.

    Decision by sampling

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision's context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
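    As a rough illustration of the DbS mechanism, the sketch below (my own toy construction; function and parameter names are not from the paper) computes an attribute's subjective value as its relative rank within a comparison sample assembled from the immediate context and values drawn from memory:

```python
import random

def dbs_subjective_value(target, context_values, memory_pool, n_memory=10, rng=None):
    """Toy decision-by-sampling value: the target's relative rank within a
    comparison sample, built from binary ordinal comparisons only.
    Illustrative sketch, not the original paper's implementation."""
    rng = rng or random.Random(0)
    # The comparison sample mixes the current decision's context with values
    # retrieved from long-run memory of the real-world distribution.
    sample = list(context_values) + rng.sample(memory_pool, n_memory)
    # Each binary, ordinal comparison asks only: does the target exceed this value?
    wins = sum(1 for v in sample if target > v)
    return wins / len(sample)  # rank expressed as a proportion
```

    Because the value is a rank, equal increments of the attribute buy ever smaller rank gains in a positively skewed memory sample, which is how DbS recovers concave utility without positing an underlying psychoeconomic scale.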

    Architectural process models of decision making: Towards a model database

    We present a project aimed at creating a database of detailed architectural process models of memory-based decision making. These models are implemented in the cognitive architecture ACT-R. In creating this database, we have identified commonalities and differences among various decision models in the literature. The model database can provide insights into the interrelations among decision models and can be used in future research to address debates on inferences from memory, which are hard to resolve without specifying the processing steps at the level of precision that a cognitive architecture provides.

    Concepts as Pluralistic Hybrids

    In contrast to earlier views that argued for a particular kind of concept (e.g. prototypes), several recent accounts have proposed that there are multiple distinct kinds of concepts, or that there is a plurality of concepts for each category. In this paper, I argue for a novel account of concepts as pluralistic hybrids. According to this view, concepts are pluralistic because there are several concepts for the same category whose use is heavily determined by context. In addition, concepts are hybrids because they typically link together several different kinds of information that are used in the same cognitive processes. This alternative view accounts for the available empirical data, allows for greater cognitive flexibility than Machery’s recent account, and overcomes several objections to traditional hybrid views.

    The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times

    Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process model, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects of probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples, not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
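    To make the sampling idea concrete, here is a minimal sketch (my own toy construction, far simpler than the ABS itself): hypotheses are drawn by a random-walk Metropolis chain, so successive samples are serially correlated, and a categorical choice is made by integrating the samples rather than by reading off the posterior:

```python
import math
import random

def autocorrelated_samples(log_posterior, start, n_samples, step=0.5, rng=None):
    """Toy autocorrelated hypothesis sampler: a random-walk Metropolis chain
    whose successive samples are serially correlated. Illustrative only;
    the ABS uses a more sophisticated sampling algorithm."""
    rng = rng or random.Random(0)
    x, samples = start, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        delta = log_posterior(proposal) - log_posterior(x)
        # Standard Metropolis acceptance; rejected proposals repeat the
        # current state, one source of autocorrelation in the chain.
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples

def choose(samples, threshold=0.0):
    """Categorical decision by integrating stochastic hypothesis samples:
    answer 'yes' if most sampled hypotheses exceed the threshold."""
    return sum(1 for s in samples if s > threshold) / len(samples) > 0.5
```

    In a chain like this, response variability and autocorrelation come from the sampler itself, not from noise in the input, mirroring the paper's shift from sensory to computational noise.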

    Improving and extending models of quantitative judgments

    How fast is this car approaching? What is the probability that it will rain today? How severe are the symptoms of this patient? Such quantitative judgments require inferring a continuous criterion from a number of cues or features of the judgment object (e.g., the color of the clouds). Judgments such as these are central cognitive processes that guide our decisions and behavior in everyday life. For over half a century, researchers have been investigating how people make such judgments, what information they rely on, how they combine different types of information, and how the environment or the task affects the processes underlying these judgments, using computational models of the theorized cognitive processes. The goal of my thesis is to improve and extend these models of quantitative judgments. In three articles, I implement and test improved, state-of-the-art versions of existing models, highlight and solve issues in the way these models are currently used, and extend the scope and possibilities of these models of quantitative judgments. In the first manuscript, I develop, test, and apply a hierarchical Bayesian version of the RulEx-J model, which is used to measure the relative contribution of rule- and exemplar-based processes in people’s judgments. The manuscript shows that the Bayesian RulEx-J model allows parameters to be estimated more accurately and how it can be used to test hypotheses about latent parameters. The second manuscript shows that the current practice of not differentiating between direct retrieval of a trained exemplar and genuine judgments in participants’ responses leads to biased parameter estimates and a reduced fit of exemplar models. The manuscript also presents a solution to this problem by introducing a latent-mixture extended exemplar model that integrates a direct-recall process for trained exemplars. In the third manuscript, I demonstrate how to model people’s judgments of even complex and realistic stimuli by extracting the necessary cues from pairwise similarity ratings. In sum, the results of the three manuscripts contribute to the model-based study of the cognitive processes underlying people’s judgments. By implementing state-of-the-art methods, improving upon current practices, and broadening the scope of existing research, the results reported in this thesis add to the development, testing, and application of theories of quantitative judgments.
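    For readers unfamiliar with the exemplar-based component these models share, a generic similarity-weighted exemplar judgment can be sketched as follows (this is the standard textbook form, not the RulEx-J implementation itself; names are illustrative):

```python
import math

def exemplar_judgment(probe, exemplars, sensitivity=1.0):
    """Generic exemplar model: the judgment for a probe is a similarity-
    weighted average of the criterion values of stored exemplars.
    Each exemplar is a (cue_vector, criterion) pair. Illustrative sketch."""
    weights = []
    for cues, _ in exemplars:
        # Similarity decays exponentially with city-block distance in cue space.
        dist = sum(abs(p - c) for p, c in zip(probe, cues))
        weights.append(math.exp(-sensitivity * dist))
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, exemplars)) / total
```

    At high sensitivity this rule approaches direct retrieval of the nearest trained exemplar, which is exactly the response process the second manuscript argues must be modeled separately from genuine similarity-based judgment.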

    On heuristic and linear models of judgment: Mapping the demand for knowledge

    Research on judgment and decision making presents a confusing picture of human abilities. For example, much research has emphasized the dysfunctional aspects of judgmental heuristics, and yet, other findings suggest that these can be highly effective. A further line of research has modeled judgment as resulting from “as if” linear models. This paper illuminates the distinctions in these approaches by providing a common analytical framework based on the central theoretical premise that understanding human performance requires specifying how characteristics of the decision rules people use interact with the demands of the tasks they face. Our work synthesizes the analytical tools of “lens model” research with novel methodology developed to specify the effectiveness of heuristics in different environments and allows direct comparisons between the different approaches. We illustrate with both theoretical analyses and simulations. We further link our results to the empirical literature by a meta-analysis of lens model studies and estimate both human and heuristic performance in the same tasks. Our results highlight the trade-off between linear models and heuristics. Whereas the former are cognitively demanding, the latter are simple to use. However, they require knowledge – and thus “maps” – of when and which heuristic to employ.
    Keywords: decision making; heuristics; linear models; lens model; judgmental biases
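    The trade-off can be made concrete with two toy judgment rules (generic sketches of the two model families, not the paper's formal apparatus): a weighted additive linear model that integrates every cue, and a take-the-best-style lexicographic heuristic that needs only the validity ordering of the cues:

```python
def linear_judgment(cues, weights):
    """'As if' linear model: weighted additive integration of all cues."""
    return sum(w * c for w, c in zip(weights, cues))

def take_the_best(cues_a, cues_b, validity_order):
    """Lexicographic heuristic sketch: inspect cues in order of validity
    and choose on the first cue that discriminates between the options."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return 'A' if cues_a[i] > cues_b[i] else 'B'
    return None  # no cue discriminates; the heuristic must guess
```

    The linear model demands weights for every cue and effortful integration; the heuristic stops at the first discriminating cue but presupposes knowledge of which cues are valid and in what order, i.e., a “map” of the task environment.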