Bit-Vector Model Counting using Statistical Estimation
Approximate model counting for bit-vector SMT formulas (generalizing #SAT)
has many applications such as probabilistic inference and quantitative
information-flow security, but it is computationally difficult. Adding random
parity constraints (XOR streamlining) and then checking satisfiability is an
effective approximation technique, but it requires a prior hypothesis about the
model count to produce useful results. We propose an approach inspired by
statistical estimation to continually refine a probabilistic estimate of the
model count for a formula, so that each XOR-streamlined query yields as much
information as possible. We implement this approach, with an approximate
probability model, as a wrapper around an off-the-shelf SMT solver or SAT
solver. Experimental results show that the implementation is faster than the
most similar previous approaches, which used simpler refinement strategies. The
technique also lets us model count formulas over floating-point constraints,
which we demonstrate with an application to a vulnerability in differential
privacy mechanisms.
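The XOR-streamlining primitive the abstract builds on can be sketched in a few lines: add k random parity constraints, each of which a given model survives with probability 1/2, and find the largest k at which the streamlined formula usually remains satisfiable, giving a count of roughly 2^k. A minimal illustrative sketch (all names are hypothetical, and brute-force enumeration over a toy 8-bit "formula" stands in for the SMT/SAT solver the paper wraps):

```python
import random

def random_parity(n_bits):
    """A random XOR constraint: the parity of a random subset of bits
    must equal a random target bit."""
    mask = random.getrandbits(n_bits)
    target = random.getrandbits(1)
    return lambda x: bin(x & mask).count("1") % 2 == target

def streamlined_sat(models, constraints):
    """Does any model survive all parity constraints?  This stands in
    for one satisfiability query to an SMT or SAT solver."""
    return any(all(c(m) for c in constraints) for m in models)

def estimate_log2_count(models, n_bits, trials=30):
    """Largest k such that the formula usually stays satisfiable under
    k random parity constraints; each constraint halves the expected
    number of surviving models, so the count is roughly 2**k."""
    best = 0
    for k in range(1, n_bits + 1):
        sat = sum(streamlined_sat(models, [random_parity(n_bits) for _ in range(k)])
                  for _ in range(trials))
        if sat * 2 >= trials:
            best = k
        else:
            break
    return best

random.seed(0)
# Toy "formula" over an 8-bit vector: x < 64, so the true count is 2**6.
models = [x for x in range(256) if x < 64]
estimate = estimate_log2_count(models, 8)
print(estimate)
```

The paper's contribution replaces the fixed majority-vote refinement used here with a probabilistic estimate of the count that is updated after every streamlined query; this sketch shows only the underlying primitive.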
Dynamic Importance Sampling in Bayesian Networks Based on Probability Trees
In this paper we introduce a new dynamic importance sampling propagation algorithm for Bayesian networks. Importance sampling is based on using an auxiliary sampling distribution from which a set of configurations of the variables in the network is drawn, and the performance of the algorithm depends on the variance of the weights associated with the simulated configurations. The basic idea of dynamic importance sampling is to use the simulation of a configuration to modify the sampling distribution in order to improve its quality and thereby reduce the variance of future weights. The paper shows that this can be achieved with low computational effort. The experiments carried out show that the final results can be very good even when the initial sampling distribution is far from the optimum.
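The core loop can be illustrated on a deliberately small example: draw configurations from a proposal q, weight each by target/proposal, and refit q after every draw so that later weights have lower variance. A hedged sketch (the toy unnormalized score table stands in for the network's posterior mass; the paper's actual update operates on probability trees, not on this flat refit):

```python
import random

# Toy target: unnormalized scores over 4 configurations, standing in for
# posterior mass in a small Bayesian network.  True normalizer Z = 15.
score = {0: 8.0, 1: 4.0, 2: 2.0, 3: 1.0}

def sample(q):
    """Draw one configuration from the discrete proposal q."""
    r, acc = random.random(), 0.0
    for c, p in q.items():
        acc += p
        if r <= acc:
            return c
    return c  # guard against floating-point rounding

def dynamic_is(n_samples=4000, mix=0.1):
    """Importance sampling whose proposal is refitted after each
    simulated configuration, reducing the variance of future weights."""
    q = {c: 1.0 / len(score) for c in score}  # start from a uniform proposal
    seen, weights = {}, []
    for _ in range(n_samples):
        c = sample(q)
        weights.append(score[c] / q[c])  # unbiased weight under the current q
        seen[c] = score[c]               # the simulation reveals this score
        # Dynamic update: refit q to the scores seen so far, mixed with a
        # uniform floor so every configuration stays reachable.
        s = sum(seen.values())
        for k in q:
            q[k] = (1 - mix) * seen.get(k, 0.0) / s + mix / len(score)
    return sum(weights) / len(weights)   # estimates the normalizer Z

random.seed(1)
z_hat = dynamic_is()
print(z_hat)
```

Each weight is computed under the proposal in force when its configuration was drawn, so the running average remains unbiased for Z even as q changes; once q approaches the normalized target, the weights concentrate near Z and their variance collapses.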
Learning and Type Compatibility in Signaling Games
Which equilibria will arise in signaling games depends on how the receiver
interprets deviations from the path of play. We develop a micro-foundation for
these off-path beliefs, and an associated equilibrium refinement, in a model
where equilibrium arises through non-equilibrium learning by populations of
patient and long-lived senders and receivers. In our model, young senders are
uncertain about the prevailing distribution of play, so they rationally send
out-of-equilibrium signals as experiments to learn about the behavior of the
population of receivers. Differences in the payoff functions of the types of
senders generate different incentives for these experiments. Using the Gittins
index (Gittins, 1979), we characterize which sender types use each signal more
often, leading to a constraint on the receiver's off-path beliefs based on
"type compatibility" and hence a learning-based equilibrium selection.
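The role the Gittins index plays here, ranking how attractive experimentation with each signal is given current uncertainty, can be illustrated numerically. A hedged sketch, not the paper's model: the index of a Bernoulli "signal" whose success rate with receivers has a Beta(a, b) posterior, computed by calibrating against a safe standard arm via bisection over a truncated dynamic program:

```python
from functools import lru_cache

def gittins_index(a, b, beta=0.9, horizon=60, tol=1e-4):
    """Gittins index (Gittins, 1979) of a Bernoulli arm whose success
    probability has a Beta(a, b) posterior, with discount factor beta.
    Bisection finds the per-period payoff lam of a deterministic
    standard arm that makes the decision-maker indifferent, using a
    finite-horizon truncation of the optimal-stopping problem."""
    def advantage(lam):
        @lru_cache(maxsize=None)
        def V(a, b, t):
            # Value from period t onward when the agent may either keep
            # playing the risky arm or retire to the standard arm forever.
            if t == horizon:
                return 0.0
            p = a / (a + b)
            cont = (p * (1 + beta * V(a + 1, b, t + 1))
                    + (1 - p) * beta * V(a, b + 1, t + 1))
            stop = lam * (1 - beta ** (horizon - t)) / (1 - beta)
            return max(stop, cont)
        # Positive iff playing the risky arm at least once beats retiring now.
        return V(a, b, 0) - lam * (1 - beta ** horizon) / (1 - beta)
    lo, hi = a / (a + b), 1.0  # index lies between the posterior mean and 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if advantage(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Same posterior mean (0.5), different uncertainty: the less certain arm
# carries the higher index, i.e. the stronger incentive to experiment.
g_uncertain = gittins_index(1, 1)
g_confident = gittins_index(10, 10)
print(g_uncertain, g_confident)
```

The comparison at the end mirrors the abstract's mechanism: sender types whose payoffs make a signal's outcome more uncertain, in the Gittins sense, experiment with it more often, which is what disciplines the receiver's off-path beliefs.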
Probabilistic Bag-Of-Hyperlinks Model for Entity Linking
Many fundamental problems in natural language processing rely on determining
what entities appear in a given text. Commonly referenced as entity linking,
this step is a fundamental component of many NLP tasks such as text
understanding, automatic summarization, semantic search or machine translation.
Name ambiguity, word polysemy, context dependencies and a heavy-tailed
distribution of entities contribute to the complexity of this problem.
We here propose a probabilistic approach that makes use of an effective
graphical model to perform collective entity disambiguation. Input mentions
(i.e., linkable token spans) are disambiguated jointly across an entire
document by combining a document-level prior of entity co-occurrences with
local information captured from mentions and their surrounding context. The
model is based on simple sufficient statistics extracted from data and thus
has few parameters to learn.
Our method does not require extensive feature engineering, nor an expensive
training procedure. We use loopy belief propagation to perform approximate
inference. The low complexity of our model makes this step sufficiently fast
for real-time usage. We demonstrate the accuracy of our approach on a wide
range of benchmark datasets, showing that it matches, and in many cases
outperforms, existing state-of-the-art methods.
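The collective-disambiguation step can be made concrete with a toy pairwise model: each mention carries local scores over its candidate entities, every pair of mentions is coupled by an entity co-occurrence prior, and sum-product loopy belief propagation fuses the two. A minimal illustrative sketch (the numbers and the fully connected topology are invented, not the paper's learned statistics):

```python
import math

# Toy collective disambiguation: 3 mentions, 2 candidate entities each.
# local[i][e] ~ context compatibility of entity e for mention i.
# pair[e][f]  ~ document-level co-occurrence prior between entities.
local = [[0.9, 0.1], [0.5, 0.5], [0.4, 0.6]]
pair = [[0.8, 0.2], [0.2, 0.8]]  # "agreeing" entities co-occur more often

def loopy_bp(local, pair, iters=20):
    """Sum-product loopy BP on a fully connected pairwise model."""
    n, k = len(local), len(local[0])
    # msg[(i, j)][e]: message from mention i to mention j about j's entity e.
    msg = {(i, j): [1.0] * k for i in range(n) for j in range(n) if i != j}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            out = []
            for ej in range(k):
                s = 0.0
                for ei in range(k):
                    incoming = math.prod(msg[(t, i)][ei]
                                         for t in range(n) if t not in (i, j))
                    s += local[i][ei] * pair[ei][ej] * incoming
                out.append(s)
            z = sum(out)
            new[(i, j)] = [v / z for v in out]  # normalize for stability
        msg = new
    beliefs = []
    for j in range(n):
        b = [local[j][e] * math.prod(msg[(i, j)][e] for i in range(n) if i != j)
             for e in range(k)]
        z = sum(b)
        beliefs.append([v / z for v in b])
    return beliefs

beliefs = loopy_bp(local, pair)
picks = [max(range(2), key=lambda e, b=b: b[e]) for b in beliefs]
print(picks)
```

Note how the coupling resolves the ambiguous mentions: mention 1 has no local preference and mention 2 locally leans the other way, yet both are pulled toward the entity that coheres with the confident mention 0, which is exactly the collective effect the abstract describes.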
Default reasoning using maximum entropy and variable strength defaults
The thesis presents a computational model for reasoning with partial information
which uses default rules or information about what normally happens. The idea is
to provide a means of filling the gaps in an incomplete world view with the most
plausible assumptions while allowing for the retraction of conclusions should they
subsequently turn out to be incorrect. The model can be used both to reason from
a given knowledge base of default rules, and to aid in the construction of such
knowledge bases by allowing their designer to compare the consequences of his
design with his own default assumptions. The conclusions supported by the proposed
model are justified by the use of a probabilistic semantics for default rules
in conjunction with the application of a rational means of inference from incomplete
knowledge: the principle of maximum entropy (ME). The thesis develops
both the theory and algorithms for the ME approach and argues that it should be
considered as a general theory of default reasoning.
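The ME principle itself is easy to demonstrate on a small constrained-inference problem: among all distributions consistent with what is known, choose the one of maximum entropy; with a mean constraint the solution is an exponential family, and its multiplier can be found by bisection. A self-contained illustration (a classic biased-die example, not the thesis's default-rule machinery):

```python
import math

def maxent(values, target_mean, lo=-5.0, hi=5.0, iters=200):
    """Maximum-entropy distribution over `values` subject to a mean
    constraint.  The ME solution has the form p_i proportional to
    exp(lam * v_i); bisection finds the lam whose implied mean
    matches the constraint."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:  # mean is increasing in lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# A die whose average roll is known to be 4.5: the ME distribution tilts
# smoothly toward the high faces rather than making any ad hoc choice.
p = maxent([1, 2, 3, 4, 5, 6], 4.5)
print([round(x, 3) for x in p])
```

The same logic underlies the thesis's probabilistic semantics for defaults: the ME distribution commits to nothing beyond the stated constraints, which is why its conclusions can be defended as the most plausible assumptions rather than design choices.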
The argument supporting the thesis has two main threads. Firstly, the ME approach
is tested on the benchmark examples required of nonmonotonic behaviour,
and it is found to handle them appropriately. Moreover, these patterns of commonsense
reasoning emerge as consequences of the chosen semantics rather than
being design features. It is argued that this makes the ME approach more objective,
and its conclusions more justifiable, than other default systems. Secondly, the
ME approach is compared with two existing systems: the lexicographic approach
(LEX) and system Z+. It is shown that the former can be equated with ME under
suitable conditions, making it strictly less expressive, while the latter is too crude to
perform the subtle resolution of default conflict which the ME approach allows. Finally,
a program called DRS is described which implements all systems discussed
in the thesis and provides a tool for testing their behaviours.
Engineering and Physical Sciences Research Council (EPSRC)