A General Framework for Anytime Approximation in Probabilistic Databases
Anytime approximation algorithms that compute the probabilities of queries
over probabilistic databases can be of great use to statistical learning tasks.
Such approaches have so far been based on either (i) sampling or (ii)
branch-and-bound with model-based bounds. Here we present a more general
branch-and-bound framework that extends the possible bounds by using
'dissociation', which yields tighter bounds.
Comment: 3 pages, 2 figures, submitted to StarAI 2018 Workshop
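To make the anytime branch-and-bound idea concrete, here is a minimal sketch (not the paper's algorithm): Shannon expansion over independent Boolean variables, exploring the largest-probability-mass leaf first, where every open leaf contributes the trivial interval [0, mass]. Dissociation-style bounds would replace those trivial leaf bounds with tighter ones, shrinking the anytime interval faster. The name `anytime_prob` and its signature are illustrative assumptions.

```python
import heapq
from itertools import count

def anytime_prob(f, probs, steps=200):
    """Anytime branch-and-bound sketch: bound P[f = True] for
    independent Boolean variables via Shannon expansion.

    f     -- predicate over a complete {var: bool} assignment
    probs -- {var: P[var = True]}
    Returns (low, high) with low <= P[f = True] <= high; the interval
    tightens monotonically as the step budget grows.
    """
    order = list(probs)
    tie = count()                       # heap tie-breaker
    low, high = 0.0, 1.0                # one open leaf of mass 1
    heap = [(-1.0, next(tie), 0, {})]   # (-mass, tie, depth, assignment)
    while heap and steps > 0:
        steps -= 1
        neg_mass, _, depth, assign = heapq.heappop(heap)
        mass = -neg_mass
        if depth == len(order):         # complete assignment: resolve leaf
            if f(assign):
                low += mass             # this mass certainly satisfies f
            else:
                high -= mass            # this mass certainly falsifies f
            continue
        var = order[depth]
        p = probs[var]
        for value, m in ((True, mass * p), (False, mass * (1 - p))):
            child = dict(assign)
            child[var] = value
            heapq.heappush(heap, (-m, next(tie), depth + 1, child))
    return low, high

# e.g. P[x or y] with P[x] = P[y] = 0.5 converges to (0.75, 0.75)
print(anytime_prob(lambda a: a["x"] or a["y"], {"x": 0.5, "y": 0.5}))
```

Stopping the loop early always yields a sound interval, which is what makes the scheme anytime.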
On the Complexity and Approximation of Binary Evidence in Lifted Inference
Lifted inference algorithms exploit symmetries in probabilistic models to
speed up inference. They show impressive performance when calculating
unconditional probabilities in relational models, but often resort to
non-lifted inference when computing conditional probabilities. The reason is
that conditioning on evidence breaks many of the model's symmetries, which can
preempt standard lifting techniques. Recent theoretical results show, for
example, that conditioning on evidence which corresponds to binary relations is
#P-hard, suggesting that no lifting is to be expected in the worst case. In
this paper, we balance this negative result by identifying the Boolean rank of
the evidence as a key parameter for characterizing the complexity of
conditioning in lifted inference. In particular, we show that conditioning on
binary evidence with bounded Boolean rank is efficient. This opens up the
possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
Comment: To appear in Advances in Neural Information Processing Systems 26
(NIPS), Lake Tahoe, USA, December 2013
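As a concrete illustration of the Boolean rank notion, the smallest r such that M = A ∘ B under the OR-of-ANDs matrix product, here is a toy brute-force sketch. The helper `boolean_rank` is invented for illustration and only usable on tiny matrices; it is not the paper's low-rank approximation method, and the exhaustive search reflects the fact that computing Boolean rank is NP-hard in general.

```python
from itertools import product

def boolean_product(A, B):
    """(A . B)[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    r, m = len(B), len(B[0])
    return [[any(row[k] and B[k][j] for k in range(r))
             for j in range(m)] for row in A]

def boolean_rank(M, max_r=3):
    """Smallest r with M = A . B for Boolean n-by-r and r-by-m factors.
    Exhaustive search over all factor pairs; toy-sized inputs only."""
    n, m = len(M), len(M[0])
    target = [[bool(x) for x in row] for row in M]
    if not any(any(row) for row in target):
        return 0                                    # zero matrix
    for r in range(1, max_r + 1):
        for bits_a in product((False, True), repeat=n * r):
            A = [list(bits_a[i * r:(i + 1) * r]) for i in range(n)]
            for bits_b in product((False, True), repeat=r * m):
                B = [list(bits_b[k * m:(k + 1) * m]) for k in range(r)]
                if boolean_product(A, B) == target:
                    return r
    return None                                     # rank exceeds max_r

# the 2x2 identity has Boolean rank 2; the all-ones matrix has rank 1
print(boolean_rank([[1, 0], [0, 1]]), boolean_rank([[1, 1], [1, 1]]))
```

The paper's use case is the converse direction: approximating an evidence matrix by a factorization with small r, so that conditioning stays efficient.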
Knowledge Compilation of Logic Programs Using Approximation Fixpoint Theory
To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of
ICLP 2015
Recent advances in knowledge compilation introduced techniques to compile
\emph{positive} logic programs into propositional logic, essentially exploiting
the constructive nature of the least fixpoint computation. This approach has
several advantages over existing approaches: it maintains logical equivalence,
does not require (expensive) loop-breaking preprocessing or the introduction of
auxiliary variables, and significantly outperforms existing algorithms.
Unfortunately, this technique is limited to \emph{negation-free} programs. In
this paper, we show how to extend it to general logic programs under the
well-founded semantics.
We develop our work in approximation fixpoint theory, an algebraic
framework that unifies the semantics of different logics. As such, our
algebraic results are also applicable to autoepistemic logic, default logic,
and abstract dialectical frameworks.
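To illustrate the least-fixpoint compilation idea for the negation-free case, here is a minimal sketch under an assumed toy encoding: each atom is mapped to a DNF over fact labels (a set of 'proofs'), and the immediate-consequence operator T_P is iterated to its least fixpoint. The representation and the names `compile_positive`, `rules`, `facts` are illustrative; the paper targets compact propositional circuits and extends the construction to negation via approximation fixpoint theory, which this sketch does not cover.

```python
from itertools import product

def compile_positive(rules, facts):
    """Least-fixpoint compilation sketch for a *positive* program.

    rules -- list of (head, [body atoms])
    facts -- {atom: propositional label}, e.g. {'e(a,b)': 'x1'}
    Each atom is mapped to a set of proofs (frozensets of fact labels);
    T_P is applied until nothing changes. Termination is guaranteed
    because proofs are subsets of a finite label set and sets only grow.
    """
    dnf = {atom: {frozenset([label])} for atom, label in facts.items()}
    changed = True
    while changed:                       # least fixpoint of T_P
        changed = False
        for head, body in rules:
            if not all(b in dnf for b in body):
                continue
            # combine one proof of each body atom into a proof of head
            for combo in product(*(dnf[b] for b in body)):
                proof = frozenset().union(*combo)
                if proof not in dnf.setdefault(head, set()):
                    dnf[head].add(proof)
                    changed = True
    return dnf

# reachability from a over two labelled edge facts
facts = {"e(a,b)": "x1", "e(b,c)": "x2"}
rules = [("r(b)", ["e(a,b)"]),
         ("r(c)", ["e(b,c)", "r(b)"])]
print(compile_positive(rules, facts)["r(c)"])
# {frozenset({'x1', 'x2'})}: r(c) holds iff both edge facts hold
```

Because the construction only ever adds proofs, it mirrors the constructive least-fixpoint computation the abstract refers to, with no loop-breaking or auxiliary variables needed.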
Parallel statistical model checking for safety verification in smart grids
By using small computing devices deployed at user premises, Autonomous Demand Response (ADR) adapts users' electricity consumption to given time-dependent electricity tariffs. This allows end-users to save on their electricity bills and Distribution System Operators to optimise management of the electric grid (through suitable time-dependent tariffs) by avoiding demand peaks.
Unfortunately, even with ADR, users' power consumption may deviate from the expected (minimum-cost) profile, e.g., because ADR devices fail to correctly forecast energy needs at user premises. As a result, the aggregated power demand may present undesirable peaks.
In this paper we address this problem by presenting methods, and a software tool (APD-Analyser) implementing them, that enable Distribution System Operators to effectively verify that a given time-dependent electricity tariff achieves the desired goals even when end-users deviate from their expected behaviour.
We show the feasibility of the proposed approach through a realistic scenario from a medium-voltage Danish distribution network.
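Statistical model checking of this kind reduces to Monte Carlo estimation over independent simulated traces, which parallelises trivially. The following sketch assumes a toy demand model: `simulate_day` and its parameters are invented for illustration and are not APD-Analyser's model; only the Chernoff-Hoeffding sample-size bound and the parallel structure carry over.

```python
import random
from math import ceil, log
from concurrent.futures import ProcessPoolExecutor

def simulate_day(seed, n_users=100, threshold=62.0):
    """Toy demand model: each user's hourly load deviates randomly
    around its ADR schedule; return 1 if the aggregated demand ever
    exceeds `threshold` (i.e. the safety property fails on this trace)."""
    rng = random.Random(seed)
    for _hour in range(24):
        load = sum(0.5 + 0.2 * rng.random() for _ in range(n_users))
        if load > threshold:
            return 1
    return 0

def smc_violation_prob(eps=0.01, delta=0.01, workers=4):
    """Statistical model checking by Monte Carlo: the Chernoff-Hoeffding
    bound says n >= ln(2/delta) / (2*eps^2) i.i.d. traces estimate the
    violation probability within eps, with confidence 1 - delta.
    Traces are independent, so they are simulated in parallel."""
    n = ceil(log(2 / delta) / (2 * eps ** 2))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        failures = sum(pool.map(simulate_day, range(n), chunksize=512))
    return failures / n

if __name__ == "__main__":   # guard required when using process pools
    print(smc_violation_prob())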
Structure identification in relational data
This paper presents several investigations into the prospects for identifying meaningful structures in empirical data, namely, structures that permit effective organization of the data to meet the requirements of future queries. We propose a general framework in which the notion of identifiability is given a precise formal definition similar to that of learnability. Using this framework, we then explore whether a tractable procedure exists for deciding if a given relation is decomposable into a constraint network or a CNF theory with desirable topology and, if the answer is positive, for identifying the desired decomposition. Finally, we address the problem of expressing a given relation as a Horn theory and, if this is impossible, of finding the best k-Horn approximation to the given relation. We show that both problems can be solved in time polynomial in the length of the data.
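The Horn-expressibility question has a classical polynomial-time characterization that fits the "polynomial in the length of the data" claim: a relation is the model set of some Horn theory iff it is closed under componentwise AND of its tuples. A minimal sketch of that check follows; `is_horn_expressible` is an illustrative name, and finding the best k-Horn approximation requires more machinery than this test.

```python
from itertools import combinations

def is_horn_expressible(models):
    """A relation (set of Boolean tuples) is the model set of some Horn
    theory iff it is closed under componentwise AND. Checking all pairs
    suffices and is polynomial in the size of the relation.

    Returns (True, None) or (False, witness_pair)."""
    ms = {tuple(m) for m in models}
    for a, b in combinations(ms, 2):
        meet = tuple(x & y for x, y in zip(a, b))
        if meet not in ms:
            return False, (a, b)        # their meet is not a model
    return True, None

# models of (x -> y) are Horn; models of (x XOR y) are not
print(is_horn_expressible([(0, 0), (0, 1), (1, 1)]))  # (True, None)
print(is_horn_expressible([(0, 1), (1, 0)]))          # (False, ...)
```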