Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics of logic programming to a
distribution semantics, a possible-world semantics with a probability
distribution, which is unconditionally applicable to arbitrary logic programs,
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, which applies to a class of parameterized
logic programs representing sequential decision processes in which each decision
is exclusive and independent. It runs on a new data structure called support
graphs, which describe the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probabilities
generalized to logic programs. The complexity analysis shows that, when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks, each of
which was developed independently in its own research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
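For illustration, the following is a minimal, hypothetical Python sketch of EM over enumerated explanations for parameterized probabilistic facts ("switches"), under the abstract's assumption that explanations are exclusive and the facts within an explanation independent. It is not the graphical EM algorithm itself, which instead propagates inside and outside probabilities over support graphs; all names and numbers are illustrative.

```python
from collections import defaultdict

def em_step(params, observations):
    """One EM step for parameterized probabilistic facts ("switches").

    params: {switch: {outcome: probability}}
    observations: one entry per observed goal, given as a list of
        explanations; each explanation is a list of (switch, outcome)
        pairs, assumed exclusive across explanations and independent
        within an explanation."""
    counts = defaultdict(lambda: defaultdict(float))
    for explanations in observations:
        # Probability of each explanation is the product of its switch choices.
        probs = []
        for expl in explanations:
            p = 1.0
            for sw, out in expl:
                p *= params[sw][out]
            probs.append(p)
        total = sum(probs)                 # probability of the observation
        if total == 0.0:
            continue
        # E-step: accumulate counts weighted by each explanation's posterior.
        for expl, p in zip(explanations, probs):
            for sw, out in expl:
                counts[sw][out] += p / total
    # M-step: renormalize expected counts per switch.
    new_params = {}
    for sw, outcomes in params.items():
        tot = sum(counts[sw].values())
        new_params[sw] = ({o: counts[sw][o] / tot for o in outcomes}
                          if tot > 0.0 else dict(outcomes))
    return new_params

# Toy data: 7 "success" observations explained by either coin landing heads,
# and 3 "failure" observations explained only by both coins landing tails.
params = {"coin_a": {"h": 0.6, "t": 0.4}, "coin_b": {"h": 0.3, "t": 0.7}}
observations = ([[[("coin_a", "h")], [("coin_b", "h")]]] * 7
                + [[[("coin_a", "t"), ("coin_b", "t")]]] * 3)
for _ in range(20):
    params = em_step(params, observations)
print(params)
```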
Bayesian quantification for coherent anti-Stokes Raman scattering spectroscopy
We propose a Bayesian statistical model for analyzing coherent anti-Stokes
Raman scattering (CARS) spectra. Our quantitative analysis includes statistical
estimation of constituent line-shape parameters, underlying Raman signal,
error-corrected CARS spectrum, and the measured CARS spectrum. As such, this
work enables extensive uncertainty quantification in the context of CARS
spectroscopy. Furthermore, we present an unsupervised method for improving the
spectral resolution of Raman-like spectra that requires little to no a priori
information. Finally, the recently proposed wavelet prism method for
correcting the experimental artefacts in CARS is enhanced by using
interpolation techniques for wavelets. The method is validated using CARS
spectra of adenosine mono-, di-, and triphosphate in water, as well as
equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide
combination, sucrose.
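The abstract does not spell out the model, so the following is only a schematic Python sketch of the kind of Bayesian line-shape estimation it describes, assuming a single Lorentzian line with Gaussian measurement noise, flat priors, and random-walk Metropolis sampling. The paper's actual model (error background, wavelet prism correction, and so on) is richer, and all names and values here are illustrative.

```python
import numpy as np

def lorentzian(nu, amplitude, center, width):
    """Single Lorentzian Raman line shape over wavenumbers nu."""
    return amplitude * width**2 / ((nu - center) ** 2 + width**2)

def log_posterior(theta, nu, y, noise_sigma=0.01):
    """Unnormalized log-posterior for a one-line model with Gaussian noise
    and flat priors restricted to physically sensible parameter values."""
    amplitude, center, width = theta
    if amplitude <= 0 or width <= 0 or not (nu.min() < center < nu.max()):
        return -np.inf                       # outside the prior support
    model = lorentzian(nu, amplitude, center, width)
    return -0.5 * np.sum((y - model) ** 2) / noise_sigma**2

# Toy data: one line at 1000 cm^-1 plus Gaussian noise.
rng = np.random.default_rng(0)
nu = np.linspace(900.0, 1100.0, 400)
y = lorentzian(nu, 1.0, 1000.0, 5.0) + rng.normal(0.0, 0.01, nu.size)

# Random-walk Metropolis over (amplitude, center, width).
theta = np.array([0.8, 990.0, 8.0])
lp = log_posterior(theta, nu, y)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.02, 0.5, 0.2])
    lp_prop = log_posterior(prop, nu, y)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print(np.mean(samples[1000:], axis=0))       # posterior means after burn-in
```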
Declarative Modeling and Bayesian Inference of Dark Matter Halos
Probabilistic programming allows specification of probabilistic models in a
declarative manner. Recently, several new software systems and languages for
probabilistic programming have been developed on the basis of newly developed
and improved methods for approximate inference in probabilistic models. In this
contribution, a probabilistic model for an idealized dark matter localization
problem is described. We first derive the probabilistic model for the inference
of dark matter locations and masses, and then show how this model can be
implemented using BUGS and Infer.NET, two software systems for probabilistic
programming. Finally, the different capabilities of both systems are discussed.
The presented dark matter model mainly includes non-conjugate factors; thus, it
is difficult to implement this model with Infer.NET.
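The abstract does not give the model's details, so the sketch below is a purely hypothetical, simplified Python version of a dark matter localization likelihood: one halo with unknown position and mass pulling on galaxies at known positions, with Gaussian and log-normal priors. It stands in for the kind of model one would write declaratively in BUGS or Infer.NET and is not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pull(galaxy_pos, halo_pos, halo_mass):
    """Gravitational-style pull vectors exerted by one halo on each galaxy."""
    d = halo_pos - galaxy_pos                       # vectors toward the halo
    r2 = np.sum(d**2, axis=-1, keepdims=True)
    return halo_mass * d / np.maximum(r2, 1e-6) ** 1.5   # inverse-square law

def log_posterior(halo_pos, log_mass, galaxies, observed, noise=0.05):
    """Unnormalized log-posterior for halo position and log-mass."""
    # Priors: position ~ N(0, 10^2 I), log-mass ~ N(0, 1).
    lp = -0.5 * np.sum(halo_pos**2) / 10.0**2 - 0.5 * log_mass**2
    # Likelihood: observed pull vectors with isotropic Gaussian noise.
    model = pull(galaxies, halo_pos, np.exp(log_mass))
    return lp - 0.5 * np.sum((observed - model) ** 2) / noise**2

# Simulate observations from a "true" halo, then compare the posterior
# density at the truth and at a wrong guess.
galaxies = rng.uniform(-5, 5, size=(30, 2))
observed = pull(galaxies, np.array([1.0, -2.0]), 3.0) + rng.normal(0, 0.05, (30, 2))
print(log_posterior(np.array([1.0, -2.0]), np.log(3.0), galaxies, observed))
print(log_posterior(np.array([-3.0, 4.0]), 0.0, galaxies, observed))
```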
Probabilistic Logic Programming with Beta-Distributed Random Variables
We enable aProbLog---a probabilistic logic programming approach---to reason
in the presence of uncertain probabilities represented as Beta-distributed
random variables. We achieve the same performance as state-of-the-art
algorithms for highly specified and engineered domains, while maintaining the
flexibility offered by aProbLog in handling complex relational domains. Our
motivation is that faithfully capturing the distribution of probabilities is
necessary to compute an expected utility for effective decision making under
uncertainty: unfortunately, these probability distributions can be highly
uncertain due to sparse data. To understand and accurately manipulate such
probability distributions we need a well-defined theoretical framework that is
provided by the Beta distribution, which specifies a distribution of
probabilities representing all the possible values of a probability when the
exact value is unknown.
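As a rough illustration of reasoning with Beta-distributed probabilities (not aProbLog's algebraic machinery), the following Python sketch represents each uncertain probability learned from sparse data as a Beta distribution and propagates it through a query by Monte Carlo sampling; the names and counts are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def beta_from_counts(successes, failures):
    """Beta(1 + s, 1 + f): an uncertain probability estimated from sparse data."""
    return (1.0 + successes, 1.0 + failures)

def monte_carlo(query, betas, n=100_000):
    """Push Beta-distributed probabilities through a query by sampling.

    betas: {name: (alpha, beta)} for independent probabilistic facts.
    query: function mapping {name: sampled probability} -> query probability.
    Returns the mean and a 95% credible interval of the query probability."""
    samples = {k: rng.beta(a, b, n) for k, (a, b) in betas.items()}
    q = query(samples)
    return q.mean(), np.percentile(q, [2.5, 97.5])

# Two independent facts whose probabilities are only known from sparse data.
betas = {"burglary": beta_from_counts(1, 9), "earthquake": beta_from_counts(2, 8)}

# P(alarm) = 1 - (1 - p_b)(1 - p_e): the alarm rings if either event occurs.
mean, ci = monte_carlo(lambda s: 1 - (1 - s["burglary"]) * (1 - s["earthquake"]),
                       betas)
print(f"P(alarm) ~ {mean:.3f}, 95% credible interval {ci.round(3)}")
```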
Probabilistic Programming Concepts
A multitude of different probabilistic programming languages exists today,
all extending a traditional programming language with primitives to support
modeling of complex, structured probability distributions. Each of these
languages employs its own probabilistic primitives, and comes with a particular
syntax, semantics and inference procedure. This makes it hard to understand the
underlying programming concepts and appreciate the differences between the
different languages. To obtain a better understanding of probabilistic
programming, we identify a number of core programming concepts underlying the
primitives used by various probabilistic languages, discuss the execution
mechanisms that they require, and use these to position state-of-the-art
probabilistic languages and their implementations. While doing so, we focus on
probabilistic extensions of logic programming languages such as Prolog, which
have been developed for more than 20 years.
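As a concrete illustration of one such core concept, the Python sketch below implements probabilistic facts with exhaustive possible-world enumeration, roughly the semantics underlying ProbLog-style languages. It is a toy illustration, not any particular system's implementation.

```python
from itertools import product

def success_probability(facts, query):
    """Probability that `query` holds, enumerating all possible worlds.

    facts: {name: probability of the probabilistic fact being true}
    query: deterministic function({name: bool}) -> bool encoding the
           logical rules of the program."""
    names = list(facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        # Weight of this possible world under independent probabilistic facts.
        weight = 1.0
        for n, v in world.items():
            weight *= facts[n] if v else 1.0 - facts[n]
        if query(world):
            total += weight
    return total

# ProbLog-style toy program: 0.1::burglary. 0.2::earthquake.
# alarm :- burglary.  alarm :- earthquake.
facts = {"burglary": 0.1, "earthquake": 0.2}
print(success_probability(facts, lambda w: w["burglary"] or w["earthquake"]))
# 0.28 = 1 - 0.9 * 0.8
```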
Model Checking Finite-Horizon Markov Chains with Probabilistic Inference
We revisit the symbolic verification of Markov chains with respect to finite
horizon reachability properties. The prevalent approach iteratively computes
step-bounded state reachability probabilities. By contrast, recent advances in
probabilistic inference suggest symbolically representing all horizon-length
paths through the Markov chain. We ask whether this perspective advances the
state of the art in probabilistic model checking. First, we formally describe
both approaches in order to highlight their key differences. Then, using these
insights we develop Rubicon, a tool that transpiles Prism models to the
probabilistic inference tool Dice. Finally, we demonstrate better scalability
compared to probabilistic model checkers on selected benchmarks. Altogether,
our results suggest that probabilistic inference is a valuable addition to the
probabilistic model checking portfolio, with Rubicon as a first step towards
integrating both perspectives.
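For contrast with Rubicon's path-based symbolic encoding, the following Python sketch shows the prevalent iterative approach the abstract describes: computing step-bounded reachability probabilities by repeatedly pushing the initial distribution through the transition matrix with target states made absorbing. The chain and numbers are illustrative only.

```python
import numpy as np

def finite_horizon_reachability(P, target, horizon, init):
    """P(reach a target state within `horizon` steps) of a Markov chain.

    P: (n, n) row-stochastic transition matrix.
    target: boolean array marking target states (made absorbing below).
    init: initial distribution over states."""
    P = P.copy()
    P[target, :] = np.eye(len(P))[target]    # target states absorb all mass
    dist = np.asarray(init, dtype=float)
    for _ in range(horizon):
        dist = dist @ P                       # one step of the chain
    return dist[target].sum()

# Toy 3-state chain: 0 -> {0, 1}, 1 -> {1, 2}, state 2 is the target.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
target = np.array([False, False, True])
print(finite_horizon_reachability(P, target, horizon=4, init=[1.0, 0.0, 0.0]))
# 0.6875
```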