Spintronics based Stochastic Computing for Efficient Bayesian Inference System
Bayesian inference is an effective approach for solving statistical learning
problems especially with uncertainty and incompleteness. However, inference
efficiencies are physically limited by the bottlenecks of conventional
computing platforms. In this paper, an emerging Bayesian inference system is
proposed by exploiting spintronics-based stochastic computing. A stochastic
bitstream generator is realized as the kernel component by leveraging the
inherent randomness of spintronic devices. The proposed system is evaluated on
typical applications of data fusion and Bayesian belief networks. Simulation
results indicate that the proposed approach can achieve significant
improvements in inference efficiency, in terms of both power consumption and
inference speed. Comment: accepted by the ASPDAC 2018 conference
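The kernel idea, encoding a probability as a random bitstream so that a single AND of two streams multiplies probabilities, can be sketched in software. This is a Python stand-in for the spintronic bitstream generator, and all probability values are illustrative:

```python
import random

def bitstream(p, n, rng):
    """Generate a stochastic bitstream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def estimate(bits):
    """Decode a bitstream back to a probability estimate."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = bitstream(0.8, n, rng)   # stream encoding P(A) = 0.8
b = bitstream(0.5, n, rng)   # independent stream encoding P(B | A) = 0.5
# Bitwise AND of independent streams multiplies probabilities:
# P(A) * P(B | A) = 0.4
joint = [x & y for x, y in zip(a, b)]
print(round(estimate(joint), 2))
```

In hardware the Bernoulli bits come from device randomness rather than a pseudorandom generator, which is where the claimed power and speed advantages originate.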
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics
In order to interact intelligently with objects in the world, animals must
first transform neural population responses into estimates of the dynamic,
unknown stimuli which caused them. The Bayesian solution to this problem is
known as a Bayes filter, which applies Bayes' rule to combine population
responses with the predictions of an internal model. In this paper we present a
method for learning to approximate a Bayes filter when the stimulus dynamics
are unknown. To do this we use the inferential properties of probabilistic
population codes to compute Bayes' rule, and train a neural network to compute
approximate predictions by the method of maximum likelihood. In particular, we
perform stochastic gradient descent on the negative log-likelihood with a novel
approximation of the gradient. We demonstrate our methods on a finite-state, a
linear, and a nonlinear filtering problem, and show how the hidden layer of the
neural network develops tuning curves which are consistent with findings in
experimental neuroscience. Comment: This is the final version, and has been accepted for publication in Neural Computation
Hardware implementation of Bayesian network building blocks with stochastic spintronic devices
Bayesian networks are powerful statistical models to understand causal
relationships in real-world probabilistic problems such as diagnosis,
forecasting, computer vision, etc. For systems that involve complex causal
dependencies among many variables, the complexity of the associated Bayesian
networks becomes computationally intractable. As a result, direct hardware
implementation of these networks is one promising approach to reducing power
consumption and execution time. However, the few hardware implementations of
Bayesian networks presented in literature rely on deterministic CMOS devices
that are not efficient in representing the inherently stochastic variables in a
Bayesian network. This work presents an experimental demonstration of a
Bayesian network building block implemented with naturally stochastic
spintronic devices. These devices are based on nanomagnets with perpendicular
magnetic anisotropy, initialized to their hard axes by the spin orbit torque
from a heavy metal under-layer utilizing the giant spin Hall effect, enabling
stochastic behavior. We construct an electrically interconnected network of two
stochastic devices and manipulate the correlations between their states by
changing connection weights and biases. By mapping given conditional
probability tables to the circuit hardware, we demonstrate that any two-node
Bayesian network can be implemented by our stochastic network. We then present
a stochastic simulation of an example four-node Bayesian network
using our proposed device, with parameters taken from the experiment. We view
this work as a first step towards the large scale hardware implementation of
Bayesian networks. Comment: 9 pages, 4 figures
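What the coupled stochastic devices realize electrically can be mimicked in software by ancestral sampling from a conditional probability table. A sketch with a hypothetical two-node CPT:

```python
import random

def sample_two_node(p_a, p_b_given_a, n, rng):
    """Ancestral sampling of a two-node network A -> B from its CPT,
    mimicking what coupled stochastic devices realize in hardware.
    Returns the sampled marginals (P(A=1), P(B=1))."""
    count_a = count_b = 0
    for _ in range(n):
        a = rng.random() < p_a
        b = rng.random() < p_b_given_a[a]
        count_a += a
        count_b += b
    return count_a / n, count_b / n

rng = random.Random(1)
# Hypothetical CPT: P(A=1)=0.3, P(B=1|A=0)=0.2, P(B=1|A=1)=0.9
pa, pb = sample_two_node(0.3, {False: 0.2, True: 0.9}, 200_000, rng)
# Exact marginal: P(B=1) = 0.7 * 0.2 + 0.3 * 0.9 = 0.41
```

In the experiment the CPT is set by connection weights and biases rather than by an explicit table lookup; the sampling statistics are what the two representations share.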
QMDP-Net: Deep Learning for Planning under Partial Observability
This paper introduces the QMDP-net, a neural network architecture for
planning under partial observability. The QMDP-net combines the strengths of
model-free learning and model-based planning. It is a recurrent policy network,
but it represents a policy for a parameterized set of tasks by connecting a
model with a planning algorithm that solves the model, thus embedding the
solution structure of planning in a network learning architecture. The QMDP-net
is fully differentiable and allows for end-to-end training. We train a QMDP-net
on different tasks so that it can generalize to new ones in the parameterized
task set and "transfer" to other similar tasks beyond the set. In preliminary
experiments, QMDP-net showed strong performance on several robotic tasks in
simulation. Interestingly, while QMDP-net encodes the QMDP algorithm, it
sometimes outperforms the QMDP algorithm in the experiments, as a result of
end-to-end learning. Comment: NIPS 2017 camera-ready
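The embedded QMDP algorithm itself is simple to state: solve the fully observable MDP by value iteration, then score each action by its Q-value averaged under the current belief. A sketch on a toy MDP, with transition and reward numbers invented for illustration:

```python
import numpy as np

def qmdp_policy(T, R, gamma, belief, iters=100):
    """QMDP approximation: value iteration on the underlying MDP, then
    pick the action maximizing the belief-averaged Q-value.
    T: transitions, shape (A, S, S); R: rewards, shape (S, A)."""
    A, S, _ = T.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum("ast,t->sa", T, V)  # Q(s, a)
        V = Q.max(axis=1)                             # Bellman backup
    return Q, int(np.argmax(belief @ Q))              # belief-weighted action

# Toy 2-state, 2-action MDP (hypothetical numbers)
T = np.array([[[1.0, 0.0], [1.0, 0.0]],   # action 0: always go to state 0
              [[0.0, 1.0], [0.0, 1.0]]])  # action 1: always go to state 1
R = np.array([[0.0, 1.0],                 # action 1 pays reward 1
              [0.0, 1.0]])
Q, a = qmdp_policy(T, R, 0.9, belief=np.array([0.5, 0.5]))
print(a)  # action 1 maximizes belief-averaged Q
```

The QMDP-net replaces these hand-specified model tensors with learned ones and backpropagates through the value-iteration recursion, which is what makes end-to-end training possible.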
A building block for hardware belief networks
Belief networks represent a powerful approach to problems involving
probabilistic inference, but much of the work in this area is software-based,
running on standard deterministic hardware built from the transistor, which
provides the gain and directionality needed to interconnect billions of devices
into useful networks. This paper proposes a transistor-like device that could
provide an analogous building block for probabilistic networks. We present two
proof-of-concept examples of belief networks, one reciprocal and one
non-reciprocal, implemented using the proposed device which is simulated using
experimentally benchmarked models. Comment: Keywords: stochastic, sigmoid, phase transition, spin glass,
frustration, reduced frustration, Ising model, Bayesian network, Boltzmann
machine. 23 pages, 9 figures
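The proposed building block behaves like a stochastic binary unit whose mean output is a sigmoidal function of its input. A software model of such a unit; the gain constant and input value here are illustrative assumptions, not device parameters:

```python
import math, random

def p_bit(input_current, rng):
    """Stochastic binary unit: outputs +1 or -1 with a sigmoidal mean
    response to its input, the probabilistic analogue of a transistor's
    gain described above. Mean output equals tanh(input_current)."""
    p_up = 1.0 / (1.0 + math.exp(-2.0 * input_current))
    return 1 if rng.random() < p_up else -1

rng = random.Random(42)
n = 100_000
mean = sum(p_bit(0.5, rng) for _ in range(n)) / n
# Expected mean output: tanh(0.5) ≈ 0.462
```

Wiring many such units together with weighted inputs yields the Boltzmann-machine-style networks the abstract alludes to; directionality of the connections is what distinguishes the reciprocal and non-reciprocal examples.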
The impact of random actions on opinion dynamics
Opinion dynamics have fascinated researchers for centuries. The ability of
societies to learn, as well as the emergence of irrational herding, are
equally evident. The simplest example is that of agents that have to determine
a binary action, under peer pressure coming from the decisions observed. By
modifying several popular models for opinion dynamics so that agents
internalize actions rather than smooth estimates of what other people think, we
are able to prove that, almost surely, the actions' final outcome remains random,
even though actions can be consensual or polarized depending on the model. This
is a theoretical confirmation that the mechanism that leads to the emergence of
irrational herding behavior lies in the loss of nuanced information regarding
the privately held beliefs behind individuals' decisions. Comment: 23 pages; 7 figures
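The mechanism described, agents internalizing coarse binary actions rather than graded beliefs, can be illustrated with a classic sequential-cascade toy model. This is a generic illustration with invented parameters, not one of the paper's specific models:

```python
import random

def action_cascade(true_state, signal_acc, n_agents, rng):
    """Agents act in sequence, observing only predecessors' binary actions
    (not their beliefs) plus one private signal. A clear majority of
    observed actions overrides the private signal: herding on a coarse
    record of decisions."""
    actions = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_acc else 1 - true_state
        votes = sum(actions) - (len(actions) - sum(actions))  # +1/-1 tally
        if votes > 1:
            actions.append(1)       # herd on action 1
        elif votes < -1:
            actions.append(0)       # herd on action 0
        else:
            actions.append(signal)  # tally inconclusive: follow own signal
    return actions

rng = random.Random(3)
runs = [action_cascade(1, 0.7, 50, rng)[-1] for _ in range(500)]
# Cascades lock in, and a nonzero fraction of them lock in on the
# wrong action, even though every private signal is 70% accurate:
print(1 - sum(runs) / 500)
```

The randomness of the final outcome across runs is exactly the phenomenon the abstract proves: once actions replace beliefs, the lost nuance cannot be recovered by adding more agents.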
Incremental Dynamic Construction of Layered Polytree Networks
Certain classes of problems, including perceptual data understanding,
robotics, discovery, and learning, can be represented as incremental,
dynamically constructed belief networks. These automatically constructed
networks can be dynamically extended and modified as evidence of new
individuals becomes available. The main result of this paper is the incremental
extension of the singly connected polytree network in such a way that the
network retains its singly connected polytree structure after the changes. The
algorithm is deterministic, and the complexity of a single node addition is
guaranteed to be at most linear in the number of nodes (i.e., the size) of the
network. Additional speed-up can be achieved by maintaining the
path information. Despite its incremental and dynamic nature, the algorithm can
also be used for probabilistic inference in belief networks in a fashion
similar to other exact inference algorithms. Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI 1994)
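The invariant being maintained, that the network stays singly connected as nodes are added, can be illustrated with a union-find sketch that rejects any link that would close a loop. This is a simplified illustration of the invariant only, not the paper's algorithm:

```python
def add_node(parent_of, node, candidate_links):
    """Attach a new node, accepting only links that keep the network
    singly connected (a polytree has at most one undirected path between
    any two nodes). Union-find detects when a candidate link would
    connect two nodes already in the same component, i.e. close a loop."""
    def root(x):
        while parent_of[x] != x:
            parent_of[x] = parent_of[parent_of[x]]  # path halving
            x = parent_of[x]
        return x
    parent_of.setdefault(node, node)
    accepted = []
    for other in candidate_links:
        parent_of.setdefault(other, other)
        ra, rb = root(node), root(other)
        if ra != rb:                # joins two components: stays loop-free
            parent_of[ra] = rb
            accepted.append(other)
        # else: both ends already connected, link would close a loop
    return accepted

comp = {}
add_node(comp, "A", [])
add_node(comp, "B", ["A"])              # accepted: first link
print(add_node(comp, "C", ["A", "B"]))  # C-A accepted, C-B rejected
```

The paper's contribution goes further, restructuring the network so that useful links need not be rejected, but the loop-freeness check is the property every step must preserve.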
Loopy Belief Propagation for Approximate Inference: An Empirical Study
Recently, researchers have demonstrated that loopy belief propagation - the
use of Pearl's polytree algorithm in a Bayesian network with loops - can
perform well in the context of error-correcting codes. The most dramatic
instance of this is the near Shannon-limit performance of Turbo Codes, codes
whose decoding algorithm is equivalent to loopy belief propagation in a
chain-structured Bayesian network. In this paper we ask: is there something
special about the error-correcting code context, or does loopy propagation
work as an approximate inference scheme in a more general setting? We compare
the marginals computed using loopy propagation to the exact ones in four
Bayesian network architectures, including two real-world networks: ALARM and
QMR. We find that the loopy beliefs often converge, and when they do, they
give a good approximation to the correct marginals. However, on the QMR
network, the loopy beliefs oscillated and had no obvious relationship to the
correct posteriors. We present some initial investigations into the cause of
these oscillations, and show that some simple methods of preventing them lead
to the wrong results. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in
Artificial Intelligence (UAI 1999)
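Loopy belief propagation is just the sum-product message updates run on a graph with cycles. A minimal sketch on a three-node loop with hypothetical pairwise potentials (not the ALARM or QMR networks studied in the paper):

```python
import numpy as np

def loopy_bp(unary, pair, edges, iters=50):
    """Sum-product loopy belief propagation on a pairwise binary model.
    unary[i]: (2,) local evidence; pair[(i, j)]: (2, 2) potential indexed
    [x_i, x_j]; edges: undirected edge list. On a loopy graph the fixed
    point, if reached, is only an approximation to the true marginals."""
    msgs = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # product of local evidence and all incoming messages except j's
            prod = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    prod = prod * msgs[(k, l)]
            P = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
            m = P.T @ prod            # marginalize out x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for (k, l) in msgs:
            if l == i:
                b = b * msgs[(k, l)]
        beliefs[i] = b / b.sum()
    return beliefs

# Three-node cycle with attractive couplings (hypothetical numbers)
same = np.array([[2.0, 1.0], [1.0, 2.0]])
unary = {0: np.array([0.9, 0.1]), 1: np.array([0.5, 0.5]), 2: np.array([0.5, 0.5])}
pair = {(0, 1): same, (1, 2): same, (0, 2): same}
b = loopy_bp(unary, pair, [(0, 1), (1, 2), (0, 2)])
print(b[2])  # node 2 is pulled toward node 0's strong evidence for state 0
```

On this small attractive model the messages converge; the QMR-style oscillations the abstract reports arise on much larger, more strongly frustrated networks.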
Consensus in the Presence of Multiple Opinion Leaders: Effect of Bounded Confidence
The problem of analyzing the performance of networked agents exchanging
evidence in a dynamic network has recently grown in importance. This problem
has relevance in signal and data fusion network applications and in studying
opinion and consensus dynamics in social networks. Due to its capability of
handling a wider variety of uncertainties and ambiguities associated with
evidence, we use the framework of Dempster-Shafer (DS) theory to capture the
opinion of an agent. We then examine the consensus among agents in dynamic
networks in which an agent can utilize either a cautious or receptive updating
strategy. In particular, we examine the case of bounded confidence updating
where an agent exchanges its opinion only with neighboring nodes possessing
'similar' evidence. In a fusion network, this captures the case in which nodes
only update their state based on evidence consistent with the node's own
evidence. In opinion dynamics, this captures the notions of Social Judgment
Theory (SJT) in which agents update their opinions only with other agents
possessing opinions closer to their own. Focusing on the two special DS
theoretic cases where an agent state is modeled as a Dirichlet body of evidence
and a probability mass function (p.m.f.), we utilize results from matrix
theory, graph theory, and networks to prove the existence of consensus agent
states in several time-varying network cases of interest. For example, we show
the existence of a consensus in which a subset of network nodes achieves a
consensus that is adopted by follower network nodes. Of particular interest is
the case of multiple opinion leaders, where we show that the agents do not
reach a consensus in general, but rather converge to 'opinion clusters'.
Simulation results are provided to illustrate the main findings. Comment: IEEE Transactions on Signal and Information Processing Over Networks,
to appear
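Bounded confidence updating can be illustrated with scalar opinions standing in for the DS-theoretic states: each agent averages only the opinions within a confidence radius of its own, and the population settles into clusters. A Hegselmann-Krause-style sketch with invented parameters, not the paper's DS-theoretic model:

```python
import numpy as np

def bounded_confidence_step(opinions, eps):
    """One synchronous bounded-confidence update: each agent averages the
    opinions of all agents (itself included) within distance eps of its
    own opinion, ignoring everyone whose view is too dissimilar."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        close = opinions[np.abs(opinions - x) <= eps]
        new[i] = close.mean()
    return new

ops = np.linspace(0.0, 1.0, 11)  # hypothetical initial opinion spread
for _ in range(60):
    ops = bounded_confidence_step(ops, eps=0.25)
clusters = np.unique(np.round(ops, 2))
print(clusters)  # opinions collapse to a small number of clusters
```

The clustering rather than global consensus mirrors the paper's finding for multiple opinion leaders: once agents stop listening across a dissimilarity gap, the gap never closes.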
Second Order Probabilities for Uncertain and Conflicting Evidence
In this paper the elicitation of probabilities from human experts is
considered as a measurement process, which may be disturbed by random
'measurement noise'. Using Bayesian concepts a second order probability
distribution is derived reflecting the uncertainty of the input probabilities.
The algorithm is based on an approximate sample representation of the basic
probabilities. This sample is continuously modified by a stochastic simulation
procedure, the Metropolis algorithm, such that the sequence of successive
samples corresponds to the desired posterior distribution. The procedure is
able to combine inconsistent probabilities according to their reliability and
is applicable to general inference networks with arbitrary structure.
Dempster-Shafer probability mass functions may be included using specific
measurement distributions. The properties of the approach are demonstrated by
numerical experiments. Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in
Artificial Intelligence (UAI 1990)
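The described procedure, Metropolis sampling of a second-order distribution over probabilities, can be sketched for a single probability under a simplified Gaussian measurement model. The noise model and all numbers here are assumptions for illustration, not the paper's exact formulation:

```python
import math, random

def metropolis_posterior(reports, noise_sd, n_samples, rng, step=0.05):
    """Metropolis sampling of a second-order distribution over a
    probability p, treating expert reports as noisy measurements of p
    (Gaussian measurement noise, uniform prior on [0, 1])."""
    def log_post(p):
        if not 0.0 < p < 1.0:
            return -math.inf  # outside the prior's support
        return -sum((r - p) ** 2 for r in reports) / (2 * noise_sd ** 2)
    p, samples = 0.5, []
    for _ in range(n_samples):
        prop = p + rng.gauss(0.0, step)         # random-walk proposal
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, log_post(prop) - log_post(p))):
            p = prop
        samples.append(p)
    return samples

rng = random.Random(7)
# Two conflicting expert probabilities (hypothetical values): 0.3 and 0.7
samples = metropolis_posterior([0.3, 0.7], noise_sd=0.1, n_samples=20_000, rng=rng)
mean = sum(samples) / len(samples)
print(round(mean, 2))  # the posterior centers between the conflicting reports
```

The spread of the samples, not just their mean, is the point: it quantifies how much the conflict between the experts should widen the second-order uncertainty.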