
    Residual vectors for Alzheimer disease diagnosis and prognostication

    Alzheimer disease (AD) is an increasingly prevalent neurodegenerative condition and a looming socioeconomic threat. A biomarker for the disease could make the process of diagnosis easier and more accurate, and accelerate drug discovery. The current work describes a method for scoring brain images that is inspired by fundamental principles from information retrieval (IR), a branch of computer science that includes the development of Internet search engines. For this research, a dataset of 254 baseline 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). For a given contrast, a subset of scans (nine of every 10) was used to compute a residual vector that typified the difference, at each voxel, between the two groups being contrasted. Scans that were not used for computing the residual vector (the remaining one of 10 scans) were then compared to the residual vector using a cosine similarity metric. This process was repeated sequentially, each time generating cosine similarity scores for 10% of the FDG-PET scans for each contrast. Statistical analysis revealed that the scores were significant predictors of functional decline as measured by the Functional Activities Questionnaire (FAQ). When logistic regression models that incorporated these scores were evaluated with leave-one-out cross-validation, cognitively normal controls were discerned from AD with sensitivity and specificity of 94.4% and 84.8%, respectively. Patients who converted from mild cognitive impairment (MCI) to AD were discerned from MCI nonconverters with sensitivity and specificity of 89.7% and 62.9%, respectively, when FAQ scores were brought into the model. Residual vectors are easy to compute and provide a simple method for scoring the similarity between an FDG-PET scan and sets of examples from a given diagnostic group. The method is readily generalizable to any imaging modality. Further interdisciplinary work between IR and clinical neuroscience is warranted.
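
    The scoring step described above can be sketched in a few lines of NumPy. The mean-difference construction of the residual vector is an assumption on our part (the abstract only says the vector typifies the per-voxel difference between the two groups), and the array names are illustrative, not from the paper.

```python
import numpy as np

def residual_vector(group_a, group_b):
    # group_a, group_b: arrays of shape (n_scans, n_voxels), one flattened
    # FDG-PET image per row. The per-voxel mean difference is assumed here
    # as one plausible way to "typify the difference" between the groups.
    return group_a.mean(axis=0) - group_b.mean(axis=0)

def cosine_score(scan, residual):
    # Cosine similarity between a held-out scan and the residual vector.
    return float(scan @ residual /
                 (np.linalg.norm(scan) * np.linalg.norm(residual)))

# In the paper's scheme, the residual is fit on nine of every ten scans and
# the remaining tenth are scored, repeating until every scan has a score.
```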

    Concepts and Their Dynamics: A Quantum-Theoretic Modeling of Human Thought

    We analyze different aspects of our quantum modeling approach to human concepts, focusing on the quantum effects of contextuality, interference, entanglement and emergence, and illustrating how each of them appears in specific situations in the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity whose state changes under the influence of a context, to the main traditional concept theories, i.e. prototype theory, exemplar theory and theory theory. We consider the question of why quantum theory performs so well in modeling human concepts, and shed light on it by analyzing the role of complex amplitudes, showing how they make it possible to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originate from classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all unique features of the quantum modeling, are explicitly revealed in this paper by analyzing human concepts and their dynamics.
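
    The interference effect mentioned above can be made concrete with a small numerical sketch. The vectors below are toy values chosen for illustration, not states or data from the paper; the point is only that complex amplitudes yield a combined-concept probability that differs from the classical average of the individual probabilities.

```python
import numpy as np

# Toy unit-norm complex amplitude vectors for two concepts A and B.
psi_a = np.array([1.0, 0.0], dtype=complex)
psi_b = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)], dtype=complex)

# Probability of the first measurement outcome for each concept alone.
p_a = abs(psi_a[0]) ** 2
p_b = abs(psi_b[0]) ** 2

# Superpose the two states for the combined concept and renormalize.
psi_ab = psi_a + psi_b
psi_ab /= np.linalg.norm(psi_ab)
p_ab = abs(psi_ab[0]) ** 2

# A classical mixture would give the plain average (p_a + p_b) / 2; the
# deviation from it is the interference term carried by the complex phases.
print(p_ab, (p_a + p_b) / 2)   # ~0.854 vs 0.75
```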

    Experimental Evidence for Quantum Structure in Cognition

    We prove a theorem showing that a collection of experimental data on membership weights of items with respect to a pair of concepts and their conjunction cannot be modeled within a classical measure-theoretic weight structure when the experimental data contain the effect called overextension. Since overextension, analogous to the well-known guppy effect for concept combinations, is abundant in all experiments testing weights of items with respect to pairs of concepts and their conjunctions, our theorem constitutes a no-go theorem for a classical measure structure for common data on membership weights of items with respect to concepts and their combinations. We put forward a simple geometric criterion that reveals the non-classicality of the membership weight structure and use membership weights estimated by subjects in experiments to illustrate this criterion. The violation of the classical weight structure is similar to the violation of the well-known Bell inequalities studied in quantum mechanics, and hence suggests that the quantum formalism, and modeling by quantum membership weights, can accomplish what classical membership weights cannot.
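
    A minimal sketch of the kind of classicality check involved is given below. The two inequalities are a standard way of expressing when conjunction membership weights admit a classical (Kolmogorovian) representation; the paper's own geometric criterion may be stated differently, and the example numbers are illustrative rather than taken from the reported experiments.

```python
def classically_representable(mu_a, mu_b, mu_ab):
    # mu_a, mu_b: membership weights of an item in concepts A and B.
    # mu_ab: membership weight of the item in the conjunction "A and B".
    no_overextension = mu_ab <= min(mu_a, mu_b)    # conjunction cannot exceed either conjunct
    within_unit_measure = mu_a + mu_b - mu_ab <= 1.0
    return no_overextension and within_unit_measure

# In the spirit of the guppy effect: an item rated a better member of the
# conjunction than of one of the concepts alone violates the classical bound.
print(classically_representable(0.7, 0.5, 0.8))   # False: overextension
```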

    Ideologies and their points of view

    It is well known that different arguments appeal to different people. We all process information in ways that are adapted to be consistent with our underlying ideologies. These ideologies can sometimes be framed in terms of particular axes or dimensions, which makes it possible to represent some aspects of an ideology as a region in the kind of vector space that is typical of many generalised quantum models. Such models can then be used to explain and predict, in broad strokes, whether a particular argument or proposal is likely to appeal to an individual with a particular ideology. The choice of suitable arguments to bring about desired actions is traditionally part of the art or science of rhetoric, and today's highly polarised society means that this skill is becoming more important than ever. This paper presents a basic model for understanding how different goals will appeal to people with different ideologies, and thus how different rhetorical positions can be adopted to promote the same desired outcome. As an example, we consider different narratives, and hence actions, with respect to the environment and climate change, an important but currently highly controversial topic.
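
    One generic way to read "an ideology as a region of a vector space" is to span the region with a few axis vectors and score an argument by how much of it lies in that subspace. The sketch below is our own projection-based reading with made-up axes, not the paper's concrete model.

```python
import numpy as np

def appeal_score(argument_vec, ideology_basis):
    # Squared length of the projection of a normalized argument vector onto
    # the subspace spanned by the ideology's basis vectors: 1.0 means the
    # argument lies entirely within the ideology's region, 0.0 entirely outside.
    basis = np.asarray(ideology_basis, dtype=float).T
    q, _ = np.linalg.qr(basis)                     # orthonormalize the basis
    v = np.asarray(argument_vec, dtype=float)
    v = v / np.linalg.norm(v)
    proj = q @ (q.T @ v)
    return float(proj @ proj)

# Toy example: two hypothetical framing axes and an argument vector expressed
# in the same 3-dimensional space (numbers are illustrative only).
ideology = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(appeal_score([0.2, 0.9, 0.4], ideology))
```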

    Quantum Mathematics in Artificial Intelligence

    In the decade since 2010, successes in artificial intelligence have been at the forefront of computer science and technology, and vector space models have solidified a position at the forefront of artificial intelligence. At the same time, quantum computers have become much more powerful, and announcements of major advances are frequently in the news. The mathematical techniques underlying both these areas have more in common than is sometimes realized. Vector spaces took a position at the axiomatic heart of quantum mechanics in the 1930s, and this adoption was a key motivation for the derivation of logic and probability from the linear geometry of vector spaces. Quantum interactions between particles are modelled using the tensor product, which is also used to express objects and operations in artificial neural networks. This paper describes some of these common mathematical areas, including examples of how they are used in artificial intelligence (AI), particularly in automated reasoning and natural language processing (NLP). Techniques discussed include vector spaces, scalar products, subspaces and implication, orthogonal projection and negation, dual vectors, density matrices, positive operators, and tensor products. Application areas include information retrieval, categorization and implication, modelling word-senses and disambiguation, inference in knowledge bases, and semantic composition. Some of these approaches can potentially be implemented on quantum hardware. Many of the practical steps in this implementation are in early stages, and some are already realized. Explaining some of the common mathematical tools can help researchers in both AI and quantum computing further exploit these overlaps, recognizing and exploring new directions along the way.
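
    As one concrete instance of "orthogonal projection and negation" from the list above, word-vector negation can be modelled by removing from a vector its component along the vector being negated. A minimal sketch with made-up two-dimensional vectors:

```python
import numpy as np

def negate(a, b):
    # "a NOT b": the component of vector a orthogonal to vector b,
    # obtained by subtracting the orthogonal projection of a onto b.
    b_hat = b / np.linalg.norm(b)
    return a - (a @ b_hat) * b_hat

# Toy example: removing an unwanted sense direction from a word vector.
bank = np.array([0.8, 0.6])
river = np.array([0.0, 1.0])
bank_not_river = negate(bank, river)
print(bank_not_river, bank_not_river @ river)   # second value is 0: orthogonal
```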

    Meaning-focused and Quantum-inspired Information Retrieval

    In recent years, quantum-based methods have been promisingly integrated with traditional procedures in information retrieval (IR) and natural language processing (NLP). Inspired by our research on the identification and application of quantum structures in cognition, more specifically our work on the representation of concepts and their combinations, we put forward a 'quantum meaning based' framework for structured query retrieval in text corpora and standardized testing corpora. This scheme for IR rests on two basic notions: (i) 'entities of meaning', e.g. concepts and their combinations, and (ii) traces of such entities of meaning, which is how documents are regarded in this approach. The meaning content of these 'entities of meaning' is reconstructed by solving an 'inverse problem' in the quantum formalism, which consists of reconstructing the full states of the entities of meaning from their collapsed states, identified as traces in relevant documents. The advantages with respect to traditional approaches, such as Latent Semantic Analysis (LSA), are discussed by means of concrete examples.
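
    For comparison, the LSA baseline mentioned above represents documents in a reduced latent space obtained from a truncated SVD of a term-document matrix. The sketch below shows only that standard baseline, with a toy matrix; it is not an implementation of the quantum meaning-based framework itself.

```python
import numpy as np

# Toy term-document count matrix (terms x documents); real corpora are
# much larger, sparse, and usually tf-idf weighted.
X = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# Truncated SVD: keep k latent dimensions as the "meaning" space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document

# Documents (and queries folded into the same space) are then compared
# with cosine similarity in this reduced space.
print(doc_vectors.shape)   # (4, 2)
```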