Symbolic Probabilistic Inference with Evidence Potential
Recent research on the Symbolic Probabilistic Inference (SPI) algorithm [2]
has focused attention on the importance of resolving general queries in
Bayesian networks. SPI applies the concept of dependency-directed backward
search to probabilistic inference, and is incremental with respect to both
queries and observations. In response to this research we have extended the
evidence potential algorithm [3] with the same features. We call the extension
symbolic evidence potential inference (SEPI). SEPI, like SPI, can handle
generic queries and is incremental with respect to queries and observations.
While in
SPI, operations are done on a search tree constructed from the nodes of the
original network, in SEPI, a clique-tree structure obtained from the evidence
potential algorithm [3] is the basic framework for recursive query processing.
In this paper, we describe the systematic query and caching procedure of SEPI.
SEPI begins by finding a clique tree from a Bayesian network, the standard
procedure of the evidence potential algorithm. With the clique tree, various
probability distributions are computed and stored in each clique. This is the
"pre-processing" step of SEPI. Once this step is done, the query can then be
computed. To process a query, a recursive process similar to the SPI algorithm
is used. The queries are directed to the root clique and decomposed into
queries for the clique's subtrees until a particular query can be answered at
the clique at which it is directed. The algorithm and the computation are
simple. The SEPI algorithm will be presented in this paper along with several
examples.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in
Artificial Intelligence (UAI1991).
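The recursive query processing sketched in the abstract can be illustrated in plain Python; the clique tree, potentials, and variable names below are illustrative inventions, not the authors' code. A query is directed to the root clique, each subtree contributes a message over its separator variables, and everything outside the query is summed out.

```python
from itertools import product

def sum_out(potential, scope, keep):
    """Sum a potential (dict: assignment tuple -> value) down to `keep`."""
    idx = [scope.index(v) for v in keep]
    out = {}
    for assign, p in potential.items():
        key = tuple(assign[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def collect(clique, children, scopes, potentials):
    """Combine a clique's potential with messages from its subtrees."""
    scope = scopes[clique]
    pot = dict(potentials[clique])
    for child in children.get(clique, []):
        sep = [v for v in scopes[child] if v in scope]
        msg = sum_out(collect(child, children, scopes, potentials),
                      scopes[child], sep)
        for assign in pot:
            key = tuple(assign[scope.index(v)] for v in sep)
            pot[assign] *= msg[key]
    return pot

def query(var, root, children, scopes, potentials):
    """Direct a single-variable query to the root clique."""
    return sum_out(collect(root, children, scopes, potentials),
                   scopes[root], [var])

# Tiny chain A -> B -> C, all binary.  Root clique "AB" holds P(A)P(B|A);
# its child "BC" holds P(C|B).
scopes = {"AB": ["A", "B"], "BC": ["B", "C"]}
children = {"AB": ["BC"]}
pA = {0: 0.6, 1: 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
pCgB = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.5, (1, 1): 0.5}
potentials = {
    "AB": {(a, b): pA[a] * pBgA[(a, b)] for a, b in product((0, 1), repeat=2)},
    "BC": dict(pCgB),
}
pB = query("B", "AB", children, scopes, potentials)
print(pB)  # approximately {(0,): 0.62, (1,): 0.38}
```

The real algorithm also caches intermediate distributions in each clique so that later queries and observations are handled incrementally; that caching is omitted here.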
Symbolic Probabilistic Inference in Large BN2O Networks
A BN2O network is a two-level belief net in which the parent interactions are
modeled using the noisy-or interaction model. In this paper we discuss
application of the SPI local expression language to efficient inference in
large BN2O networks. In particular, we show that there is significant
structure, which can be exploited to improve over the Quickscore result. We
further describe how symbolic techniques can provide information which can
significantly reduce the computation required for computing all cause posterior
marginals. Finally, we present a novel approximation technique with preliminary
experimental results.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI1994).
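The noisy-or interaction model assumed by BN2O networks is easy to state concretely; the following is a minimal sketch with illustrative parameter names, not code from the paper. Each active parent independently fails to produce the effect with its inhibitor probability, so the effect is absent only if every active cause (and the leak) fails.

```python
def noisy_or(inhibitors, active, leak=0.0):
    """P(effect | parents) under noisy-or.

    inhibitors[i] = probability that parent i alone fails to cause the effect;
    leak = probability of the effect when no parent is active.
    """
    p_all_fail = 1.0 - leak
    for i in active:
        p_all_fail *= inhibitors[i]
    return 1.0 - p_all_fail

# Two diseases with causal strengths 0.8 and 0.5 (inhibitors 0.2 and 0.5):
print(noisy_or({0: 0.2, 1: 0.5}, active=[0, 1]))  # 1 - 0.2*0.5 = 0.9
```

The structure exploited by Quickscore and by the symbolic techniques in the paper comes from this factored form: the CPT over n parents is described by n inhibitor parameters rather than 2^n entries.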
Exact Inference for Relational Graphical Models with Interpreted Functions: Lifted Probabilistic Inference Modulo Theories
Probabilistic Inference Modulo Theories (PIMT) is a recent framework that
expands exact inference on graphical models to use richer languages that
include arithmetic, equalities, and inequalities on both integers and real
numbers. In this paper, we expand PIMT to a lifted version that also processes
random functions and relations. This enhancement is achieved by adapting
Inversion, a method from Lifted First-Order Probabilistic Inference literature,
to also be modulo theories. This results in the first algorithm for exact
probabilistic inference that efficiently and simultaneously exploits random
relations and functions, arithmetic, equalities and inequalities.
Comment: Appeared in the Uncertainty in Artificial Intelligence Conference,
August 201
SIMPL: A DSL for Automatic Specialization of Inference Algorithms
Inference algorithms in probabilistic programming languages (PPLs) can be
thought of as interpreters, since an inference algorithm traverses a model
given evidence to answer a query. As with interpreters, we can improve the
efficiency of inference algorithms by compiling them once the model, evidence
and query are known. We present SIMPL, a domain specific language for inference
algorithms, which uses this idea in order to automatically specialize annotated
inference algorithms. Because SIMPL specializes annotated algorithms rather
than compiling in the traditional sense, new inference algorithms can be
added easily and still be optimized using domain-specific information. We
evaluate SIMPL and
show that partial evaluation gives a 2-6x speedup, caching provides an
additional 1-1.5x speedup, and generating C code yields an additional 13-20x
speedup, for an overall speedup of 30-150x for several inference algorithms and
models.
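The interpreter-versus-specialized-code distinction can be sketched in a few lines of plain Python (not SIMPL itself; the toy factor language here is invented for illustration). Once the model is fixed, its factors are folded into generated source code that is compiled once, rather than re-traversed on every call.

```python
def interpret(factors, x):
    """Generic 'interpreter': walks the model on every evaluation."""
    p = 1.0
    for f in factors:
        p *= eval(f, {}, {"x": x})
    return p

def specialize(factors):
    """Once the model is known, generate and compile code for it."""
    body = " * ".join(f"({f})" for f in factors)
    src = f"def compiled(x):\n    return {body}\n"
    ns = {}
    exec(src, ns)  # one-time compilation step
    return ns["compiled"]

# A toy two-factor model over a pair of binary variables:
factors = ["0.5 if x[0] else 0.5", "0.9 if x[0] == x[1] else 0.1"]
score = specialize(factors)
print(score((1, 1)))  # 0.5 * 0.9 = 0.45, same as interpret(factors, (1, 1))
```

The 2-6x partial-evaluation speedup reported in the abstract comes from eliminating exactly this kind of interpretive overhead, with caching and C code generation layered on top.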
Probability Distinguishes Different Types of Conditional Statements
The language of probability is used to define several different types of
conditional statements. There are four principal types: subjunctive, material,
existential, and feasibility. Two further types of conditionals are defined
using the propositional calculus and Boole's mathematical logic:
truth-functional and Boolean feasibility (which turn out to be special cases of
probabilistic conditionals). Each probabilistic conditional is quantified by a
fractional parameter between zero and one that says whether it is purely
affirmative, purely negative, or intermediate in its sense. Conditionals can be
specialized further by their content to express factuality and
counterfactuality, and revised or reformulated to account for exceptions and
confounding factors. The various conditionals have distinct mathematical
representations: through intermediate probability expressions and logical
formulas, each conditional is eventually translated into a set of polynomial
equations and inequalities (with real coefficients). The polynomial systems
from different types of conditionals exhibit different patterns of behavior,
concerning for example opposing conditionals or false antecedents. Interesting
results can be computed from the relevant polynomial systems using well-known
methods from algebra and computer science. Among other benefits, the proposed
framework of analysis offers paraconsistent procedures for logical deduction
that produce such familiar results as modus ponens, transitivity, disjunction
introduction, and disjunctive syllogism; all while avoiding any explosion of
consequences from inconsistent premises. Several example problems from Goodman
and Adams are analyzed. A new perspective called polylogicism is presented:
mathematical logic that respects the diversity among conditionals in particular
and logic problems in general.
Comment: Fixed a few typographical errors.
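The polynomial encoding described in the abstract can be illustrated with the smallest case; the encoding below is a hedged sketch with invented names, covering only one conditional type. A joint distribution over propositions p and q is four atom probabilities, and a probabilistic conditional "if p then q" with fractional parameter theta becomes the polynomial equation P(p & q) = theta * P(p).

```python
def satisfies(atoms, theta, tol=1e-9):
    """Check one polynomial constraint for a conditional with parameter theta.

    atoms = (P(p,q), P(p,~q), P(~p,q), P(~p,~q)) -- a joint distribution.
    """
    x_pq, x_pnq, x_npq, x_npnq = atoms
    assert all(a >= -tol for a in atoms)          # non-negativity
    assert abs(sum(atoms) - 1.0) < tol            # normalization
    p = x_pq + x_pnq                              # P(p)
    return abs(x_pq - theta * p) < tol            # polynomial equation

# theta = 1 is the purely affirmative conditional; with P(p) = 0.4 it forces
# P(p, ~q) = 0, a probabilistic form of modus ponens.
print(satisfies((0.4, 0.0, 0.3, 0.3), theta=1.0))  # True
print(satisfies((0.3, 0.1, 0.3, 0.3), theta=1.0))  # False
```

Analyzing several conditionals together means intersecting such polynomial systems, which is where the algebraic methods mentioned in the abstract come in.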
Predicting upcoming actions by observation: some facts, models and challenges
Predicting another person's upcoming action to build an appropriate response
is a regular occurrence in the domain of motor control. In this review we
discuss conceptual and experimental approaches aiming at the neural basis of
predicting and learning to predict upcoming movements by their observation.
An Empirical Evaluation of Possible Variations of Lazy Propagation
As real-world Bayesian networks continue to grow larger and more complex, it
is important to investigate the possibilities for improving the performance of
existing algorithms of probabilistic inference. Motivated by examples, we
investigate the dependency of the performance of Lazy propagation on the
message computation algorithm. We show how Symbolic Probabilistic Inference
(SPI) and Arc-Reversal (AR) can be used for computation of clique to clique
messages in addition to the traditional use of Variable Elimination (VE).
The paper also presents the results of an empirical evaluation of the
performance of Lazy propagation using VE, SPI, and AR as the message
computation algorithm. The results of the empirical evaluation show that for
most networks, the performance of inference did not depend on the choice of
message computation algorithm, but for some randomly generated networks the
choice had an impact on both space and time performance. In the cases where the
choice had an impact, AR produced the best results.
Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004).
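What "message computation algorithm" means here can be made concrete with the VE case; the following is an illustrative sketch, not the paper's implementation. A clique-to-clique message is computed by multiplying the clique's factors and then summing out, one at a time, every variable that is not in the separator.

```python
from itertools import product

def multiply(f1, s1, f2, s2):
    """Multiply two factors given as (dict over binary assignments, scope)."""
    scope = s1 + [v for v in s2 if v not in s1]
    out = {}
    for assign in product((0, 1), repeat=len(scope)):
        a = dict(zip(scope, assign))
        out[assign] = f1[tuple(a[v] for v in s1)] * f2[tuple(a[v] for v in s2)]
    return out, scope

def eliminate(f, scope, var):
    """Sum a single variable out of a factor."""
    keep = [v for v in scope if v != var]
    idx = [scope.index(v) for v in keep]
    out = {}
    for assign, p in f.items():
        key = tuple(assign[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out, keep

def ve_message(factors, separator):
    """VE-style message: combine factors, sum out non-separator variables."""
    f, scope = factors[0]
    for g, s in factors[1:]:
        f, scope = multiply(f, scope, g, s)
    for v in [v for v in scope if v not in separator]:
        f, scope = eliminate(f, scope, v)
    return f, scope

# A clique holding P(A) and P(B|A) sends a message to a neighbour over
# separator {B}: A is eliminated.
pA = {(0,): 0.6, (1,): 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
msg, scope = ve_message([(pA, ["A"]), (pBgA, ["A", "B"])], separator=["B"])
print(scope, msg)  # ['B'] with values approximately 0.62 and 0.38
```

SPI and AR produce the same message by different operation orderings, which is why the choice usually does not matter but can change space and time costs on some networks.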
Constrained Bayesian Networks: Theory, Optimization, and Applications
We develop the theory and practice of an approach to modelling and
probabilistic inference in causal networks that is suitable when
application-specific or analysis-specific constraints should inform such
inference or when little or no data for the learning of causal network
structure or probability values at nodes are available. Constrained Bayesian
Networks generalize a Bayesian Network such that probabilities can be symbolic
arithmetic expressions and the meaning of the network is constrained by
finitely many formulas from the theory of the reals. A formal semantics for
constrained Bayesian Networks over first-order logic of the reals is given,
which enables non-linear and non-convex optimisation algorithms that rely on
decision procedures for this logic, and supports the composition of several
constrained Bayesian Networks. A non-trivial case study in arms control, where
few or no data are available to assess the effectiveness of an arms inspection
process, evaluates our approach. An open-access prototype implementation of
these foundations and their algorithms uses the SMT solver Z3 as decision
procedure, extends an open-source package for Bayesian inference to symbolic
computation, and is evaluated experimentally.
Comment: 43 pages, 18 figures.
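The core idea can be sketched in pure Python with invented numbers; the paper's implementation uses the SMT solver Z3 as the decision procedure, whereas a grid search stands in for it here. A CPT entry is left as a symbolic parameter e constrained by real-valued formulas, and a query is answered as an interval by optimising over the feasible region.

```python
def query(e):
    """P(detect) as a symbolic expression in the unknown effectiveness e:
    P(detect) = P(detect | effective) * e + P(detect | ~effective) * (1 - e).
    The conditional probabilities 0.8 and 0.1 are illustrative."""
    return 0.8 * e + 0.1 * (1.0 - e)

# Analysis-specific constraint on the unknown parameter: 0.6 <= e <= 0.9.
# A grid search over the feasible region stands in for the SMT-backed
# optimisation used in the paper.
feasible = [i / 1000 for i in range(1001) if 0.6 <= i / 1000 <= 0.9]
lo = min(query(e) for e in feasible)
hi = max(query(e) for e in feasible)
print((round(lo, 3), round(hi, 3)))  # (0.52, 0.73)
```

Instead of a point probability, the query yields the interval consistent with the constraints, which is exactly what is needed when no data are available to pin the CPT entry down.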
Neural-Symbolic Learning and Reasoning: A Survey and Interpretation
The study and understanding of human behaviour is relevant to computer
science, artificial intelligence, neural computation, cognitive science,
philosophy, psychology, and several other areas. Presupposing cognition as
basis of behaviour, among the most prominent tools in the modelling of
behaviour are computational-logic systems, connectionist models of cognition,
and models of uncertainty. Recent studies in cognitive science, artificial
intelligence, and psychology have produced a number of cognitive models of
reasoning, learning, and language that are underpinned by computation. In
addition, efforts in computer science research have led to the development of
cognitive computational systems integrating machine learning and automated
reasoning. Such systems have shown promise in a range of applications,
including computational biology, fault diagnosis, training and assessment in
simulators, and software verification. This joint survey reviews the personal
ideas and views of several researchers on neural-symbolic learning and
reasoning. The article is organised in three parts: Firstly, we frame the scope
and goals of neural-symbolic computation and have a look at the theoretical
foundations. We then proceed to describe the realisations of neural-symbolic
computation, systems, and applications. Finally we present the challenges
facing the area and avenues for further research.
Comment: 58 pages, work in progress.
Embedding Vector Differences Can Be Aligned With Uncertain Intensional Logic Differences
The DeepWalk algorithm is used to assign embedding vectors to nodes in the
Atomspace weighted, labeled hypergraph that is used to represent knowledge in
the OpenCog AGI system, in the context of an application to probabilistic
inference regarding the causes of longevity based on data from biological
ontologies and genomic analyses. It is shown that vector difference operations
between embedding vectors are, in appropriate conditions, approximately
alignable with "intensional difference" operations between the hypergraph nodes
corresponding to the embedding vectors. This relationship hints at a broader
functorial mapping between uncertain intensional logic and vector arithmetic,
and opens the door for using embedding vector algebra to guide intensional
inference control.
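The alignment claim can be illustrated with a toy example using made-up vectors (not DeepWalk output): if embeddings respect a relation, the difference vectors of analogous node pairs should point in similar directions, which is measured by cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def diff(u, v):
    """Componentwise embedding-vector difference."""
    return [a - b for a, b in zip(u, v)]

# Hypothetical 2-d embeddings where the pair (gene_a, trait_a) is analogous
# to (gene_b, trait_b); the difference vectors should nearly align.
gene_a, trait_a = [1.0, 0.2], [1.4, 1.1]
gene_b, trait_b = [0.3, 0.5], [0.75, 1.35]
print(cosine(diff(trait_a, gene_a), diff(trait_b, gene_b)))  # close to 1.0
```

In the paper's setting the analogous comparison is between such embedding-space differences and "intensional difference" operations on the corresponding Atomspace nodes.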