Modelling contextuality by probabilistic programs with hypergraph semantics
Models of a phenomenon are often developed by examining it under different
experimental conditions, or measurement contexts. The resultant probabilistic
models assume that the underlying random variables, which define a measurable
set of outcomes, can be defined independently of the measurement context. The
phenomenon is deemed contextual when this assumption fails. Contextuality is an
important issue in quantum physics. However, there has been growing speculation
that it manifests outside the quantum realm with human cognition being a
particularly prominent area of investigation. This article contributes the
foundations of a probabilistic programming language that allows convenient
exploration of contextuality in a wide range of applications relevant to
cognitive science and artificial intelligence. Specific syntax is proposed to
allow the specification of "measurement contexts". Each such context delivers a
partial model of the phenomenon based on the associated experimental condition
described by the measurement context. The probabilistic program is translated
into a hypergraph in a modular way. Recent theoretical results from the field
of quantum physics show that contextuality can be equated with the possibility
of constructing a probabilistic model on the resulting hypergraph. The use of
hypergraphs opens the door for a theoretically succinct and efficient
computational semantics sensitive to modelling both contextual and
non-contextual phenomena. Finally, this article aims to raise awareness of
contextuality beyond quantum physics and to contribute formal methods for
detecting its presence by means of hypergraph semantics.
Comment: Accepted for "Theoretical Computer Science"
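The central check the abstract describes, namely asking whether the per-context distributions admit a single global probability model, can be sketched compactly. Below is a minimal Python sketch, not the paper's proposed language or its hypergraph translation: the Bell/PR-box scenario, the variable names, and the linear-programming feasibility test are all illustrative assumptions.

```python
"""
Minimal sketch (not the paper's implementation): does an empirical model,
given as per-context distributions, admit a global probability model?
Non-existence of such a model is the hypergraph-semantics signature of
contextuality. The PR-box example and all names are assumptions.
"""
from itertools import product
import numpy as np
from scipy.optimize import linprog

# Measurement contexts: each is a tuple of observable names.
contexts = [("A", "B"), ("A", "B2"), ("A2", "B"), ("A2", "B2")]
observables = sorted({o for c in contexts for o in c})

# Empirical model: the PR box (maximally contextual). Outcomes are equal
# with probability 1 in the first three contexts, unequal in the last.
def pr_box(ctx, outcome):
    a, b = outcome
    want_equal = ctx != ("A2", "B2")
    return 0.5 if (a == b) == want_equal else 0.0

# LP columns: deterministic global assignments to all observables.
assignments = list(product([0, 1], repeat=len(observables)))

rows, rhs = [], []
for ctx in contexts:
    idx = [observables.index(o) for o in ctx]
    for outcome in product([0, 1], repeat=len(ctx)):
        rows.append([1.0 if tuple(g[i] for i in idx) == outcome else 0.0
                     for g in assignments])
        rhs.append(pr_box(ctx, outcome))
rows.append([1.0] * len(assignments))  # weights must sum to one
rhs.append(1.0)

res = linprog(c=np.zeros(len(assignments)), A_eq=np.array(rows),
              b_eq=np.array(rhs), bounds=(0, None), method="highs")
print("global model exists (non-contextual)?", res.success)  # PR box: False
```

Swapping in a classically correlated model (e.g., both outcomes always equal in every context) makes the LP feasible, which is the non-contextual case the semantics must also handle.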
Analysing Ambiguous Nouns and Verbs with Quantum Contextuality Tools
Psycholinguistic research uses eye-tracking to show that polysemous words are disambiguated differently from homonymous words, and that ambiguous verbs are disambiguated differently from ambiguous nouns. Research in Compositional Distributional Semantics uses cosine distances to show that verbs are disambiguated more efficiently in the context of their subjects and objects than on their own. Both frameworks focus on one ambiguous word at a time, and neither considers ambiguous phrases with two (or more) ambiguous words. We borrow methods and measures from Quantum Information Theory, namely the framework of Contextuality-by-Default and its degrees of contextual influence, and work with ambiguous subject-verb and verb-object phrases of English in which both the subject/object and the verb are ambiguous. We show that differences in the processing of ambiguous verbs versus ambiguous nouns, as well as between different levels of ambiguity in homonymous versus polysemous nouns and verbs, can be modelled using the averages of the degrees of their contextual influences.
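For readers unfamiliar with Contextuality-by-Default, the following is a rough Python sketch of the standard CbD criterion for cyclic systems of binary variables (the s_odd test attributed to Kujala and Dzhafarov), the family of measures from which degrees of contextual influence are derived. The input format and the toy numbers are assumptions, not the paper's data.

```python
"""
Hedged sketch of the Contextuality-by-Default criterion for cyclic systems
of binary (+/-1) variables. Input format and example values are assumed.
"""
from itertools import product

def cbd_degree(corr, exp_pairs):
    """corr[i]      = <R_i R_{i+1 mod n}> within context i (length n)
       exp_pairs[i] = expectations of variable i in its two contexts
       Returns D = s_odd(corr) - (n - 2) - Delta; D > 0 means contextual."""
    n = len(corr)
    # s_odd: max of sum(+/-corr) over sign patterns with an odd number
    # of minus signs.
    s_odd = max(sum(s * c for s, c in zip(signs, corr))
                for signs in product([1, -1], repeat=n)
                if signs.count(-1) % 2 == 1)
    # Delta quantifies direct (non-contextual) influences on each variable.
    delta = sum(abs(a - b) for a, b in exp_pairs)
    return s_odd - (n - 2) - delta

# Toy rank-2 example: two contexts sharing the same two variables, as with
# an ambiguous subject-verb phrase read under two interpretations.
corr = [0.8, -0.6]                    # product expectations per context
exp_pairs = [(0.1, 0.2), (0.0, 0.1)]  # marginal expectations across contexts
print("degree:", cbd_degree(corr, exp_pairs))  # 1.2 > 0 => contextual
```

The paper's "averages of degrees of contextual influences" would then aggregate such quantities over many phrases; this sketch only shows the per-system computation.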
Investigation and Modelling of Quantum-like User Cognitive Behaviour in Information Access and Retrieval
This thesis is fundamentally about using conceptual and mathematical constructs from the area of Quantum Theory in Information Retrieval (IR). The need and motivation for this is two-fold. Firstly, it has been increasingly shown in the decision sciences that human decision-making does not always conform to the norms of the traditional probability and logic framework; the quantum framework offers a generalised probability and logic framework which can model decisions or judgements under dynamic context and ambiguity. Secondly, there is a need in IR for theories and models which improve our understanding of user behaviour. Hence it is worth exploring the combination of the quantum framework and IR, especially focused on the user aspects of IR. The overarching research question is whether there is evidence of user behaviour in IR scenarios which warrants a quantum-based approach, by way of showing the limitations of the traditional (classical) approach. The methodology involves analysing data to detect quantum-like phenomena such as interference, contextuality and incompatibility from two common data sources in IR: standard datasets such as query log data, and crowdsourced user studies designed similarly to quantum physics or cognitive science experiments. While the evidence of quantum-like phenomena from standard datasets is not convincing, we find that some of the user studies reveal the quantum-like structure of document judgements. One of the key findings, which has implications for IR, is the dynamic interaction between different dimensions of relevance. For example, a user's judgement of the reliability of a document depends significantly on whether they found it understandable. Thus, the consideration of one relevance dimension or document feature can provide a context for another dimension, contrary to current IR models, which consider these features to be independent of each other and an objective property of the document. The quantum framework has been especially designed to deal with such scenarios, where properties of systems or objects do not exist independently of the measurement context. The thesis concludes with suggestions for incorporating quantum mathematical constructs into state-of-the-art IR algorithms.
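The incompatibility the thesis points to (judging one relevance dimension shifts the judgement of another) has a standard quantum-probability illustration: sequential measurements with non-commuting projectors produce order effects. The sketch below is illustrative only; the state, the angles, and the dimension labels are assumptions, not the thesis's fitted model.

```python
"""
Illustrative sketch: two relevance dimensions as non-commuting projectors
on a qubit, so the probability of a "reliable" judgement depends on whether
"understandable" was judged first. All parameters are assumptions.
"""
import numpy as np

def projector(theta):
    """Rank-1 projector onto the direction at angle theta in the plane."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

P_rel = projector(0.0)         # "reliable = yes"
P_und = projector(np.pi / 5)   # "understandable = yes"; does not commute

psi = np.array([np.cos(1.0), np.sin(1.0)])  # user's initial belief state

def seq_prob(P1, P2, state):
    """P(yes to first, then yes to second) = ||P2 P1 psi||^2 (Lueders rule)."""
    v = P2 @ (P1 @ state)
    return float(v @ v)

print("P(und then rel):", seq_prob(P_und, P_rel, psi))
print("P(rel then und):", seq_prob(P_rel, P_und, psi))  # differs: order effect
```

In a classical (commuting) model the two sequential probabilities would coincide, so observing a stable asymmetry in user judgements is the kind of evidence the thesis treats as quantum-like.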
Complexity: against systems
This article assumes a specific intuitive notion of complexity as a difficulty in generating and/or assessing the plausibility of models. Based on this intuitive understanding of complexity, it identifies two main causes of complexity, namely radical openness and contextuality. The former is the idea that there are no natural systems: the modeler always needs to draw artificial boundaries around phenomena to generate feasible models. Contextuality is intimately connected to the requirement to simplify models and to leave out most aspects. Complexity occurs when contextuality and radical openness cannot be contained, that is, when it is not clear where the boundaries of the system are and which abstractions are the correct ones. This concept of complexity is illustrated using a number of examples from evolution.
Second Generation General System Theory: Perspectives in Philosophy and Approaches in Complex Systems
Following the classical work of Norbert Wiener, Ross Ashby, Ludwig von Bertalanffy and many others, the concept of System has been elaborated in different disciplinary fields, allowing interdisciplinary approaches in areas such as Physics, Biology, Chemistry, Cognitive Science, Economics, Engineering, Social Sciences, Mathematics, Medicine, Artificial Intelligence, and Philosophy. The new challenge of Complexity and Emergence has made the concept of System even more relevant to the study of problems with high contextuality. This Special Issue focuses on the nature of new problems arising from the study and modelling of complexity, their possible common aspects, properties and approaches (already partially considered by different disciplines), as well as on new, possibly unitary, theoretical frameworks. It aims to introduce fresh impetus into systems research where the possible detection and correction of mistakes require the development of new knowledge. This book contains contributions presenting new approaches and results, problems and proposals. The context is an interdisciplinary framework dealing, in order, with electronic engineering problems; the problem of the observer; transdisciplinarity; problems of organised complexity; theoretical incompleteness; design of digital systems in a user-centred way; reaction networks as a framework for systems modelling; emergence of a stable system in reaction networks; emergence at the fundamental systems level; and behavioural realization of memoryless functions.
Natural Language Syntax Complies with the Free-Energy Principle
Natural language syntax yields an unbounded array of hierarchically
structured expressions. We claim that these are used in the service of active
inference in accord with the free-energy principle (FEP). While conceptual
advances alongside modelling and simulation work have attempted to connect
speech segmentation and linguistic communication with the FEP, we extend this
program to the underlying computations responsible for generating syntactic
objects. We argue that recently proposed principles of economy in language
design - such as "minimal search" criteria from theoretical syntax - adhere to
the FEP. This affords a greater degree of explanatory power to the FEP - with
respect to higher language functions - and offers linguistics a grounding in
first principles with respect to computability. We show how both tree-geometric
depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression
algorithm) can be used to accurately predict legal operations on syntactic
workspaces, directly in line with formulations of variational free energy
minimization. This is used to motivate a general principle of language design
that we term Turing-Chomsky Compression (TCC). We use TCC to align concerns of
linguists with the normative account of self-organization furnished by the FEP,
by marshalling evidence from theoretical linguistics and psycholinguistics to
ground core principles of efficient syntactic computation within active
inference.
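To make the compression-based complexity estimate concrete, here is a toy sketch under stated assumptions: zlib's DEFLATE (an LZ77 derivative) stands in for the Lempel-Ziv compressor the abstract mentions, and the bracketed strings are invented stand-ins for syntactic workspaces, not the paper's stimuli or scoring pipeline.

```python
"""
Assumption-laden stand-in (not the authors' code) for estimating the
relative complexity of syntactic workspaces with an LZ-family compressor:
a lower compressed size serves as a lower Kolmogorov complexity estimate.
"""
import zlib

def lz_complexity(workspace: str) -> int:
    """Proxy for Kolmogorov complexity: length in bytes of the DEFLATE
    (LZ77-based) compression of a bracketed workspace representation."""
    return len(zlib.compress(workspace.encode("utf-8"), 9))

# Toy bracketed workspaces: a minimal structure versus a redundant,
# deeper embedding (both strings are purely illustrative).
candidates = {
    "minimal":   "[VP [V read] [NP books]]",
    "redundant": "[VP [V read] [NP [NP books] [NP books]]]",
}

for label, ws in candidates.items():
    print(label, lz_complexity(ws))
# A minimal-search / TCC-style account would predict that the operation
# yielding the lower estimate is the one licensed by efficient computation.
```

On strings this short the compressor's fixed overhead dominates, so any serious use would score realistically sized workspaces; the sketch only shows the shape of the comparison.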