The Libra Toolkit for Probabilistic Models
The Libra Toolkit is a collection of algorithms for learning and inference
with discrete probabilistic models, including Bayesian networks, Markov
networks, dependency networks, and sum-product networks. Compared to other
toolkits, Libra places a greater emphasis on learning the structure of
tractable models in which exact inference is efficient. It also includes a
variety of algorithms for learning graphical models in which inference is
potentially intractable, and for performing exact and approximate inference.
Libra is released under a 2-clause BSD license to encourage broad use in
academia and industry.
Maximum A Posteriori Inference in Sum-Product Networks
Sum-product networks (SPNs) are a class of probabilistic graphical models
that allow tractable marginal inference. However, maximum a posteriori (MAP)
inference in SPNs is NP-hard. We investigate MAP inference in SPNs from both
theoretical and algorithmic perspectives. For the theoretical part, we reduce
general MAP inference to its special case without evidence and hidden
variables; we also show that it is NP-hard to approximate the MAP problem to
$2^{n^{\epsilon}}$ for fixed $0 \leq \epsilon < 1$, where $n$ is the input size.
For the algorithmic part, we first present an exact MAP solver that runs
reasonably fast and could handle SPNs with up to 1k variables and 150k arcs in
our experiments. We then present a new approximate MAP solver with a good
balance between speed and accuracy, and our comprehensive experiments on
real-world datasets show that it has better overall performance than existing
approximate solvers.
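To make concrete what a MAP solver for an SPN computes, the following minimal Python sketch runs the standard max-product heuristic on a made-up two-variable SPN: an upward pass replaces sum nodes by max, and a downward pass follows each sum node's maximizing child. This is the common baseline that exact and approximate MAP solvers are compared against, not a solver proposed in the paper.

    # Max-product MAP decoding in a tiny, made-up SPN over two binary variables.
    # Upward pass: replace sums by max.  Downward pass: follow maximizing children.

    class Bernoulli:                       # leaf distribution over one variable
        def __init__(self, var, p):
            self.var, self.p = var, p
        def max_value(self):
            return max(self.p, 1.0 - self.p)
        def decode(self, assignment):
            assignment[self.var] = 1 if self.p >= 0.5 else 0

    class Product:                         # product node over disjoint variables
        def __init__(self, children):
            self.children = children
        def max_value(self):
            out = 1.0
            for c in self.children:
                out *= c.max_value()
            return out
        def decode(self, assignment):
            for c in self.children:
                c.decode(assignment)

    class Sum:                             # sum node: weighted mixture of children
        def __init__(self, weighted_children):
            self.weighted_children = weighted_children
        def max_value(self):
            return max(w * c.max_value() for w, c in self.weighted_children)
        def decode(self, assignment):
            _, best = max(self.weighted_children,
                          key=lambda wc: wc[0] * wc[1].max_value())
            best.decode(assignment)

    # P(X1, X2) as a mixture of two fully factorized components.
    spn = Sum([(0.6, Product([Bernoulli("X1", 0.9), Bernoulli("X2", 0.3)])),
               (0.4, Product([Bernoulli("X1", 0.2), Bernoulli("X2", 0.6)]))])
    assignment = {}
    spn.decode(assignment)
    print(assignment, spn.max_value())     # {'X1': 1, 'X2': 0}, max-product score ~0.378

The max-product score is only a lower bound on the true MAP probability and the decoded assignment may be suboptimal, which is why dedicated exact and approximate solvers such as those studied in the paper are needed.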
Knowledge Compilation of Logic Programs Using Approximation Fixpoint Theory
To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of
ICLP 2015
Recent advances in knowledge compilation introduced techniques to compile
\emph{positive} logic programs into propositional logic, essentially exploiting
the constructive nature of the least fixpoint computation. This approach has
several advantages over existing approaches: it maintains logical equivalence,
does not require (expensive) loop-breaking preprocessing or the introduction of
auxiliary variables, and significantly outperforms existing algorithms.
Unfortunately, this technique is limited to \emph{negation-free} programs. In
this paper, we show how to extend it to general logic programs under the
well-founded semantics.
We develop our work in approximation fixpoint theory, an algebraic
framework that unifies the semantics of different logics. As such, our algebraic
results are also applicable to autoepistemic logic, default logic, and abstract
dialectical frameworks.
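The least-fixpoint idea behind this compilation can be illustrated in a few lines for the negation-free case the technique starts from. The sketch below (a toy, not the paper's algorithm; the program and atom names are invented) unfolds the derived atoms of a positive propositional program into DNF formulas over the "open" atoms by iterating the immediate consequence operator symbolically.

    # Toy fixpoint-style compilation of a positive propositional logic program:
    #   wet :- rain.      wet :- sprinkler.      slippery :- wet.
    # Each derived atom ends up as a DNF formula over the open atoms.

    open_atoms = {"rain", "sprinkler"}
    rules = {                        # head -> list of bodies (sets of atoms)
        "wet": [{"rain"}, {"sprinkler"}],
        "slippery": [{"wet"}],
    }

    # A DNF formula: a set of clauses, each clause a frozenset of open atoms.
    def atom_formula(atom, formulas):
        if atom in open_atoms:
            return {frozenset([atom])}
        return formulas.get(atom, set())          # empty DNF == false

    formulas = {}
    for _ in range(len(rules)):                   # |derived atoms| rounds suffice
        new = {}
        for head, bodies in rules.items():
            dnf = set()
            for body in bodies:                   # conjoin the body atoms' formulas
                conj = {frozenset()}
                for b in body:
                    conj = {c | d for c in conj for d in atom_formula(b, formulas)}
                dnf |= conj                       # disjoin over the rules for head
            new[head] = dnf
        formulas = new

    print(formulas["slippery"])   # {frozenset({'rain'}), frozenset({'sprinkler'})}

Negation is exactly what breaks this simple constructive picture, and it is this gap that the paper's extension to the well-founded semantics via approximation fixpoint theory addresses.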
Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting
Weighted model counting (WMC) has emerged as a prevalent approach for
probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF
counting (weighted #DNF) is a special case for which approximations with
probabilistic guarantees can be obtained in O(nm) time, where n denotes the
number of variables and m the number of clauses of the input DNF; even so, this
is not scalable in practice. In this paper, we propose a neural model counting
approach for weighted #DNF that combines approximate model counting with deep
learning, and accurately approximates model counts in linear time when width is
bounded. We conduct experiments to validate our method, and show that our model
learns and generalizes very well to large-scale #DNF instances.
Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI-20). Code and data available at:
https://github.com/ralphabb/NeuralDNF
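For context on the O(nm) guarantee mentioned above, the sketch below gives a Karp-Luby-style Monte Carlo estimator for weighted #DNF in plain Python; it is the classical randomized baseline, not the paper's neural model, and the toy DNF, variable weights, and sample budget are made up.

    # Karp-Luby-style estimator for weighted #DNF.  Variables are independent
    # Bernoullis with P(var = 1) = p[var]; a clause maps variables to required values.
    import random

    p = {1: 0.5, 2: 0.3, 3: 0.8}
    clauses = [{1: 1, 2: 0}, {2: 1, 3: 1}, {1: 0}]   # (x1&!x2) | (x2&x3) | (!x1)

    def clause_weight(c):                            # P(clause satisfied)
        w = 1.0
        for v, val in c.items():
            w *= p[v] if val == 1 else 1.0 - p[v]
        return w

    def estimate(num_samples=100_000):
        W = [clause_weight(c) for c in clauses]
        total = sum(W)
        hits = 0
        for _ in range(num_samples):
            # Sample a clause proportional to its weight, then an assignment from
            # the variable distribution conditioned on that clause being true.
            j = random.choices(range(len(clauses)), weights=W)[0]
            x = {v: (1 if random.random() < p[v] else 0) for v in p}
            x.update(clauses[j])
            # Count the pair only if j is the first clause x satisfies (avoids
            # double-counting assignments covered by several clauses).
            first = next(k for k, c in enumerate(clauses)
                         if all(x[v] == val for v, val in c.items()))
            hits += (first == j)
        return total * hits / num_samples            # unbiased estimate of P(DNF)

    print(estimate())   # ~0.97 (the exact weighted count for this toy DNF)

Each sample touches every variable and every clause, which is where the O(nm) per-sample cost, and hence the practical scalability limit, comes from.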
Logical Abstractions for Noisy Variational Quantum Algorithm Simulation
Due to the unreliability and limited capacity of existing quantum computer
prototypes, quantum circuit simulation continues to be a vital tool for
validating next generation quantum computers and for studying variational
quantum algorithms, which are among the leading candidates for useful quantum
computation. Existing quantum circuit simulators do not address the common
traits of variational algorithms, namely: 1) their ability to work with noisy
qubits and operations, 2) their repeated execution of the same circuits but
with different parameters, and 3) the fact that they sample from circuit final
wavefunctions to drive a classical optimization routine. We present a quantum
circuit simulation toolchain based on logical abstractions targeted for
simulating variational algorithms. Our proposed toolchain encodes quantum
amplitudes and noise probabilities in a probabilistic graphical model, and it
compiles the circuits to logical formulas that support efficient repeated
simulation of and sampling from quantum circuits for different parameters.
Compared to state-of-the-art state vector and density matrix quantum circuit
simulators, our simulation approach offers greater performance when sampling
from noisy circuits with at least eight to 20 qubits and with around 12
operations on each qubit, making the approach ideal for simulating near-term
variational quantum algorithms. And for simulating noise-free shallow quantum
circuits with 32 qubits, our simulation approach offers a reduction
in sampling cost versus quantum circuit simulation techniques based on tensor
network contraction.
Comment: ASPLOS '21, April 19-23, 2021, Virtual, US
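The workload targeted here can be pictured with the baseline style of simulator the paper compares against: repeatedly simulating the same parameterized, noisy circuit for different parameter values and sampling measurement outcomes to drive a classical search. The minimal numpy sketch below does this with a density matrix for a made-up single-qubit circuit; the paper's graphical-model encoding and logical-formula compilation are not shown.

    # Density-matrix simulation of a parameterized, noisy single-qubit circuit,
    # repeated for many parameter values with sampling from the final state.
    import numpy as np

    rng = np.random.default_rng(0)

    def run_circuit(theta, p_depol=0.05, shots=200):
        ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                       [np.sin(theta / 2),  np.cos(theta / 2)]])
        rho = ry @ np.array([[1.0, 0.0], [0.0, 0.0]]) @ ry.T   # RY(theta) on |0>
        rho = (1 - p_depol) * rho + p_depol * np.eye(2) / 2    # depolarizing noise
        probs = np.real(np.diag(rho))                          # measurement distribution
        return rng.choice(2, size=shots, p=probs)              # sampled outcomes

    # Toy variational loop: sweep the parameter and keep the one maximizing P(|1>).
    thetas = np.linspace(0, np.pi, 21)
    estimates = [(run_circuit(t) == 1).mean() for t in thetas]
    best = thetas[int(np.argmax(estimates))]
    print(f"best theta ~ {best:.2f}, estimated P(1) ~ {max(estimates):.2f}")

Each new parameter value forces a full re-simulation here; the compiled logical representation described above is what lets the same circuit be re-evaluated and re-sampled cheaply for different parameters.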
Conditional Sum-Product Networks: Imposing Structure on Deep Probabilistic Architectures
Probabilistic graphical models are a central tool in AI; however, they are
generally not as expressive as deep neural models, and inference is notoriously
hard and slow. In contrast, deep probabilistic models such as sum-product
networks (SPNs) capture joint distributions in a tractable fashion, but still
lack the expressive power of intractable models based on deep neural networks.
Therefore, we introduce conditional SPNs (CSPNs), conditional density
estimators for multivariate and potentially hybrid domains which allow
harnessing the expressive power of neural networks while still maintaining
tractability guarantees. One way to implement CSPNs is to use an existing SPN
structure and condition its parameters on the input, e.g., via a deep neural
network. This approach, however, might misrepresent the conditional
independence structure present in data. Consequently, we also develop a
structure-learning approach that derives both the structure and parameters of
CSPNs from data. Our experimental evidence demonstrates that CSPNs are
competitive with other probabilistic models and yield superior performance on
multilabel image classification compared to mean field and mixture density
networks. Furthermore, they can successfully be employed as building blocks for
structured probabilistic models, such as autoregressive image models.
Comment: 13 pages, 6 figures
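The first construction mentioned above, keeping a fixed SPN structure and letting a neural network emit its parameters from the input, can be sketched in a few lines. The network sizes, the two-component structure over binary targets Y1 and Y2, and the random weights below are purely illustrative and are not the paper's models.

    # A tiny gating-style CSPN: a neural network maps the input x to the parameters
    # of a fixed SPN (a mixture of two fully factorized Bernoulli components).
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)   # hidden layer for a 3-dim input
    W2, b2 = rng.normal(size=(6, 8)), np.zeros(6)   # 2 mixture weights + 4 leaf params

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cspn_log_likelihood(x, y):
        h = np.tanh(W1 @ x + b1)                    # neural conditioning on x
        out = W2 @ h + b2
        mix = np.exp(out[:2]) / np.exp(out[:2]).sum()        # sum-node weights
        leaf_p = sigmoid(out[2:]).reshape(2, 2)              # rows: components, cols: P(Y_k = 1)
        comp = np.prod(np.where(y == 1, leaf_p, 1 - leaf_p), axis=1)  # product nodes
        return np.log(mix @ comp)                   # root sum node: log p(y | x)

    x = np.array([0.2, -1.0, 0.5])
    print(cspn_log_likelihood(x, np.array([1, 0])))

Because the structure remains a valid SPN for every input, marginals and other tractable queries stay available conditionally on x, which is the tractability guarantee the abstract refers to.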
Bayesian inference implemented on FPGA with stochastic bitstreams for an autonomous robot