Picturing classical and quantum Bayesian inference
We introduce a graphical framework for Bayesian inference that is
sufficiently general to accommodate not just the standard case but also recent
proposals for a theory of quantum Bayesian inference wherein one considers
density operators rather than probability distributions as representative of
degrees of belief. The diagrammatic framework is stated in the graphical
language of symmetric monoidal categories and of compact structures and
Frobenius structures therein, in which Bayesian inversion boils down to
transposition with respect to an appropriate compact structure. We characterize
classical Bayesian inference in terms of a graphical property and demonstrate
that our approach eliminates some purely conventional elements that appear in
common representations thereof, such as whether degrees of belief are
represented by probabilities or entropic quantities. We also introduce a
quantum-like calculus wherein the Frobenius structure is noncommutative and
show that it can accommodate Leifer's calculus of `conditional density
operators'. The notion of conditional independence is also generalized to our
graphical setting and we make some preliminary connections to the theory of
Bayesian networks. Finally, we demonstrate how to construct a graphical
Bayesian calculus within any dagger compact category. Comment: 38 pages, lots of pictures.
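The categorical construction above generalizes ordinary Bayesian inversion. As a purely illustrative classical instance (variable names and numbers are my own, not the paper's), inverting a channel P(Y|X) against a prior P(X) can be sketched as follows:

```python
import numpy as np

# Classical Bayesian inversion: given a prior P(X) and a channel P(Y|X),
# compute the inverse channel P(X|Y) via Bayes' rule.  All numbers are
# illustrative; the abstract's categorical calculus generalizes this.
prior_x = np.array([0.7, 0.3])            # P(X)
channel = np.array([[0.9, 0.2],           # P(Y=0 | X=0), P(Y=0 | X=1)
                    [0.1, 0.8]])          # P(Y=1 | X=0), P(Y=1 | X=1)

joint = channel * prior_x                 # P(Y, X), shape (|Y|, |X|)
marginal_y = joint.sum(axis=1)            # P(Y)
inverse = (joint / marginal_y[:, None]).T # P(X | Y), shape (|X|, |Y|)

print(inverse)
```

In the diagrammatic setting this "divide by the prior-weighted marginal" step is what transposition with respect to the compact structure packages up.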
The lesson of causal discovery algorithms for quantum correlations: Causal explanations of Bell-inequality violations require fine-tuning
An active area of research in the fields of machine learning and statistics
is the development of causal discovery algorithms, the purpose of which is to
infer the causal relations that hold among a set of variables from the
correlations that these exhibit. We apply some of these algorithms to the
correlations that arise for entangled quantum systems. We show that they cannot
distinguish correlations that satisfy Bell inequalities from correlations that
violate Bell inequalities, and consequently that they cannot do justice to the
challenges of explaining certain quantum correlations causally. Nonetheless, by
adapting the conceptual tools of causal inference, we can show that any attempt
to provide a causal explanation of nonsignalling correlations that violate a
Bell inequality must contradict a core principle of these algorithms, namely,
that an observed statistical independence between variables should not be
explained by fine-tuning of the causal parameters. In particular, we
demonstrate the need for such fine-tuning for most of the causal mechanisms
that have been proposed to underlie Bell correlations, including superluminal
causal influences, superdeterminism (that is, a denial of freedom of choice of
settings), and retrocausal influences which do not introduce causal cycles. Comment: 29 pages, 28 figs. New in v2: a section presenting in detail our characterization of Bell's theorem as a contradiction arising from (i) the framework of causal models, (ii) the principle of no fine-tuning, and (iii) certain operational features of quantum theory; a section explaining why a denial of hidden variables affords even fewer opportunities for causal explanations of quantum correlations.
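The tension the abstract describes can be made concrete with the standard Popescu-Rohrlich (PR) box, a toy no-signalling correlation that violates the CHSH inequality. This sketch is my own illustration, not the paper's analysis: the observed statistical independence (no-signalling) coexists with Bell violation, which is exactly the pattern that forces fine-tuned causal explanations.

```python
# PR box: P(a, b | x, y) = 1/2 if a XOR b == x AND y, else 0.
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# CHSH value S = sum_{x,y} (-1)^{xy} E(x,y), with correlator
# E(x,y) = sum_{a,b} (-1)^{a+b} P(a, b | x, y).
S = sum((-1) ** (x * y) * sum((-1) ** (a + b) * pr_box(a, b, x, y)
                              for a in (0, 1) for b in (0, 1))
        for x in (0, 1) for y in (0, 1))
print(S)  # 4.0: above the local bound 2

# No-signalling: Alice's marginal P(a | x) is independent of Bob's setting y.
def marginal_a(a, x, y):
    return sum(pr_box(a, b, x, y) for b in (0, 1))

no_signalling = all(marginal_a(a, x, 0) == marginal_a(a, x, 1)
                    for a in (0, 1) for x in (0, 1))
print(no_signalling)  # True
```

A causal discovery algorithm sees only the independences, so it cannot distinguish this correlation from a Bell-satisfying one.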
Generalized belief change with imprecise probabilities and graphical models
We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction are derived from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. Such forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
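The inter-reducibility result of Chan and Darwiche can be sketched on a single variable (names and numbers below are my own toy example): probabilistic ("soft") evidence specifies a new marginal Q(X) directly, while the equivalent virtual evidence is a likelihood ratio lambda(x) proportional to Q(x)/P(x) whose application by Pearl's method recovers Q.

```python
# Inter-reducibility of virtual and probabilistic evidence, sketched on
# one variable X.  Illustrative numbers only.
prior = {'x0': 0.6, 'x1': 0.4}           # P(X)

# Probabilistic (soft) evidence fixes a new marginal Q(X) directly:
q = {'x0': 0.3, 'x1': 0.7}

# The equivalent virtual evidence is the likelihood ratio Q(x)/P(x):
lam = {x: q[x] / prior[x] for x in prior}

# Applying virtual evidence (multiply and renormalize) recovers Q:
unnorm = {x: prior[x] * lam[x] for x in prior}
z = sum(unnorm.values())
posterior = {x: v / z for x, v in unnorm.items()}
print(posterior)  # matches q
```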
Knowledge representation and diagnostic inference using Bayesian networks in the medical discourse
For the diagnostic inference under uncertainty Bayesian networks are
investigated. The method is based on an adequate uniform representation of the
necessary knowledge. This includes both generic and experience-based specific
knowledge, which is stored in a knowledge base. For knowledge processing, a
combination of the problem-solving methods of concept-based and case-based
reasoning is used. Concept-based reasoning is used for the diagnosis, therapy
and medication recommendation and evaluation of generic knowledge. Exceptions
in the form of specific patient cases are processed by case-based reasoning. In
addition, the use of Bayesian networks makes it possible to deal with uncertainty, fuzziness and incompleteness. Thus, the valid general concepts can be output according to their probability. To this end, various inference mechanisms are introduced and subsequently evaluated within the context of a developed prototype. Tests are employed to assess the classification of diagnoses by the network.
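Issuing candidate diagnoses in order of probability, as described above, amounts to ranking posteriors. A minimal naive sketch (condition names, symptom and all probabilities are hypothetical, not from the thesis):

```python
# Ranking candidate diagnoses by posterior probability given one observed
# symptom ("fever"), in a tiny naive-Bayes-style network.  Toy numbers.
priors = {'flu': 0.10, 'cold': 0.25, 'healthy': 0.65}     # P(D)
p_fever_given = {'flu': 0.85, 'cold': 0.30, 'healthy': 0.02}  # P(fever | D)

unnorm = {d: priors[d] * p_fever_given[d] for d in priors}
z = sum(unnorm.values())
posterior = {d: v / z for d, v in unnorm.items()}         # P(D | fever)

ranked = sorted(posterior, key=posterior.get, reverse=True)
print(ranked)  # diagnoses in decreasing order of probability
```

Real medical networks add many symptoms, therapy and medication nodes, and case-based exception handling on top of this basic computation.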
Formalizing graphical notations
The thesis describes research into graphical notations for software engineering, with a principal interest in ways of formalizing them. The research seeks to provide a theoretical basis that will help in designing both notations and the software tools that process them.
The work starts from a survey of literature on notation, followed by a review of techniques for formal description and for computational handling of notations. The survey concentrates on collecting views of the benefits and the problems attending notation use in software development; the review covers picture description languages, grammars and tools such as generic editors and visual programming environments. The main problem of notation is found to be a lack of any coherent, rigorous description methods. The current approaches to this problem are analysed as lacking in consensus on syntax specification and also lacking a clear focus on a defined concept of notated expression.
To address these deficiencies, the thesis embarks upon an exploration of semiotic, linguistic and logical theory; this culminates in a proposed formalization of semiosis in notations, using categorial model theory as a mathematical foundation. An argument about the structure of sign systems leads to an analysis of notation into a layered system of tractable theories, spanning the gap between expressive pictorial medium and subject domain. This notion of 'tectonic' theory aims to treat both diagrams and formulae together.
The research gives details of how syntactic structure can be sketched in a mathematical sense, with examples applying to software development diagrams, offering a new solution to the problem of notation specification. Based on these methods, the thesis discusses directions for resolving the harder problems of supporting notation design, processing and computer-aided generic editing. A number of future research areas are thereby opened up. For practical trial of the ideas, the work proceeds to the development and partial implementation of a system to aid the design of notations and editors. Finally the thesis is evaluated as a contribution to theory in an area which has not attracted a standard approach.
A probabilistic reasoning and learning system based on Bayesian belief networks
SIGLE. Available from the British Library Document Supply Centre (BLDSC), DSC:DX173015, United Kingdom.
Robustness in Bayesian networks
This thesis explores the robustness of large discrete Bayesian networks (BNs) when applied in decision support systems which have a pre-specified subset of target variables. We develop new methodology, underpinned by the total variation distance, to determine whether simplifications which are currently employed in the practical implementation of such systems are theoretically valid. This versatile framework enables us to study the effects of misspecification within a Bayesian network (BN), and also extend the methodology to quantify temporal effects within Dynamic BNs. Unlike current robustness analyses, our new technology can be applied throughout the construction of the BN model, enabling us to create tailored, bespoke models. For illustrative purposes we shall be applying our work to the field of Food Security and a demonstrative ecological network.
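The total variation distance underpinning this methodology is straightforward to compute for discrete distributions. A minimal sketch (the two marginals below are invented numbers, not results from the thesis):

```python
import numpy as np

# Total variation distance between two discrete distributions over the
# same target variable: d_TV(p, q) = (1/2) * sum_i |p_i - q_i|.
p = np.array([0.50, 0.30, 0.20])   # e.g. a target marginal from the full BN
q = np.array([0.55, 0.25, 0.20])   # the same marginal from a simplified BN

d_tv = 0.5 * np.abs(p - q).sum()
print(d_tv)  # 0.05
```

A small d_TV on the target variables is the sense in which a simplified network can be judged a valid stand-in for the full model.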