
    Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human Thought

    We analyze different aspects of our quantum-theoretic approach to modeling human concepts, focusing on the quantum effects of contextuality, interference, entanglement, and emergence, and illustrating how each of them appears in specific situations in the dynamics of human concepts and their combinations. We point out how our approach, which is based on an ontology of a concept as an entity whose state changes under the influence of a context, relates to the main traditional concept theories, i.e. prototype theory, exemplar theory, and theory theory. We consider the question of why quantum theory performs so well in modeling human concepts, and shed light on it by analyzing the role of complex amplitudes, showing how they allow one to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originates in classical probability weights, with no possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all unique features of the quantum modeling, are explicitly revealed in this paper through an analysis of human concepts and their dynamics. Comment: 31 pages, 5 figures
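    The interference mechanism this abstract describes can be sketched numerically: with complex amplitudes, probabilities for two "paths" to the same outcome do not simply add, but acquire a cross term. The amplitudes and phase below are illustrative numbers, not values from the paper.

```python
# Minimal sketch of interference from complex amplitudes,
# contrasted with classical (additive) probability mixing.
# All numeric values are illustrative, not taken from the paper.
import cmath

# Two complex amplitudes for reaching the same outcome via two routes
# (e.g., two ways a concept combination can be conceived).
a1 = cmath.rect(0.6, 0.0)          # magnitude 0.6, phase 0
a2 = cmath.rect(0.5, cmath.pi / 3)  # magnitude 0.5, phase pi/3

# Classical mixing: probability weights add; no interference term.
p_classical = abs(a1) ** 2 + abs(a2) ** 2

# Quantum superposition: amplitudes add first, then the modulus is squared.
p_quantum = abs(a1 + a2) ** 2

# The difference is the interference term 2 * Re(conj(a1) * a2).
interference = 2 * (a1.conjugate() * a2).real

# p_quantum == p_classical + interference (up to rounding)
```

    Setting the relative phase to pi/2 makes the interference term vanish, recovering the classical sum; other phases make the quantum probability larger or smaller than the classical one.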

    Can quantum probability provide a new direction for cognitive modeling?

    Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard to even imagine alternative ways to formalize probabilities. Yet, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (a) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) which are hard to reconcile with CP principles; and (b) these same findings have natural and straightforward accounts with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (which are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. Yet our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings which motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
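    The order dependence mentioned in this abstract comes from non-commuting projectors: asking question A then B need not give the same joint "yes-yes" probability as B then A. The following is a minimal sketch in a 2-dimensional real Hilbert space; the angles for the questions and the initial state are illustrative assumptions, not parameters from the paper.

```python
# Sketch of quantum order effects: sequential "yes" probabilities
# computed with non-commuting rank-1 projectors (illustrative angles).
import numpy as np

def projector(theta):
    """Rank-1 projector onto the unit vector at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

P_A = projector(0.0)          # "yes" subspace for question A
P_B = projector(np.pi / 4)    # "yes" subspace for question B; P_A @ P_B != P_B @ P_A

psi = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])  # initial belief state

# Joint probability of "yes to A, then yes to B" is the squared norm
# of the state after applying the projectors in that order, and vice versa.
p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2

# In classical probability these two joint probabilities would coincide;
# with non-commuting projectors they generally differ.
```

    If P_A and P_B projected onto the same or orthogonal directions they would commute and the two orderings would agree; the 45-degree separation chosen here is what produces the asymmetry.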

    Does normal processing provide evidence of specialised semantic subsystems?

    Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd

    Feature integration in natural language concepts

    Two experiments measured the joint influence of three key sets of semantic features on the frequency with which artifacts (Experiment 1) or plants and creatures (Experiment 2) were categorized in familiar categories. For artifacts, current function outweighed both originally intended function and current appearance. For biological kinds, appearance and behavior, an inner biological function, and the appearance and behavior of offspring all had similarly strong effects on categorization. The data were analyzed to determine whether an independent-cue model or an interactive model best accounted for how the effects of the three feature sets combined. Feature integration was found to be additive for artifacts but interactive for biological kinds. In keeping with this, membership in contrasting artifact categories tended to be superadditive, indicating overlapping categories, whereas for biological kinds it was subadditive, indicating conceptual gaps between categories. It is argued that the results underline a key domain difference between artifact and biological concepts.