Causal Effect Inference with Deep Latent-Variable Models
Learning individual-level causal effects from observational data, such as
inferring the most effective medication for a specific patient, is a problem of
growing importance for policy makers. The most important aspect of inferring
causal effects from observational data is the handling of confounders, factors
that affect both an intervention and its outcome. A carefully designed
observational study attempts to measure all important confounders. However,
even if one does not have direct access to all confounders, there may exist
noisy and uncertain measurements of proxies for confounders. We build on recent
advances in latent variable modeling to simultaneously estimate the unknown
latent space summarizing the confounders and the causal effect. Our method is
based on Variational Autoencoders (VAEs), which follow the causal structure of
inference with proxies. We show that our method is significantly more robust than
existing methods, and matches the state-of-the-art on previous benchmarks
focused on individual treatment effects.
Comment: Published as a conference paper at NIPS 2017
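As a rough illustration of the architecture described above, the sketch below builds a toy VAE whose generative networks follow the proxy structure z -> x, z -> t, (z, t) -> y, and estimates an individual treatment effect by contrasting the outcome head under t = 1 and t = 0 over posterior samples of z. This is a minimal sketch under simplifying assumptions (binary treatment, unit-variance Gaussian proxies and outcome, a single encoder q(z | x, t, y) in place of the paper's full inference network); all names and dimensions are illustrative and it is not the authors' implementation.

```python
# Minimal CEVAE-style sketch (illustrative only, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCEVAE(nn.Module):
    def __init__(self, x_dim=25, z_dim=5, h=64):
        super().__init__()
        # Inference network q(z | x, t, y) -> mean and log-variance of z
        self.enc = nn.Sequential(nn.Linear(x_dim + 2, h), nn.ELU(), nn.Linear(h, 2 * z_dim))
        # Generative networks: the latent confounder z explains proxies, treatment, and outcome
        self.dec_x = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, x_dim))
        self.dec_t = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, 1))
        self.dec_y = nn.Sequential(nn.Linear(z_dim + 1, h), nn.ELU(), nn.Linear(h, 1))

    def elbo(self, x, t, y):
        mu, logvar = self.enc(torch.cat([x, t, y], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterized sample
        rec_x = -0.5 * ((x - self.dec_x(z)) ** 2).sum(1)              # Gaussian log-lik (unit var)
        rec_t = -F.binary_cross_entropy_with_logits(self.dec_t(z), t, reduction="none").sum(1)
        rec_y = -0.5 * ((y - self.dec_y(torch.cat([z, t], dim=1))) ** 2).sum(1)
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(1)       # KL(q(z|x,t,y) || N(0, I))
        return (rec_x + rec_t + rec_y - kl).mean()

    def ite(self, x, t, y, n_samples=100):
        # Individual treatment effect: E[y | z, t=1] - E[y | z, t=0],
        # averaged over posterior samples of the latent confounder z.
        mu, logvar = self.enc(torch.cat([x, t, y], dim=1)).chunk(2, dim=1)
        diffs = []
        for _ in range(n_samples):
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            y1 = self.dec_y(torch.cat([z, torch.ones_like(t)], dim=1))
            y0 = self.dec_y(torch.cat([z, torch.zeros_like(t)], dim=1))
            diffs.append(y1 - y0)
        return torch.stack(diffs).mean(0)

# Illustrative usage on random data
model = ToyCEVAE()
x, t, y = torch.randn(32, 25), torch.bernoulli(torch.full((32, 1), 0.5)), torch.randn(32, 1)
loss = -model.elbo(x, t, y)    # maximize the ELBO by minimizing its negative
```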
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used
training algorithm known as Contrastive Divergence (CD) are based on discrete
updates and exact arithmetic, which do not directly map onto a dynamical neural
substrate. Here, we present an event-driven variation of CD to train an RBM
constructed with Integrate & Fire (I&F) neurons, constrained by the
limitations of existing and near future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
Comment: (Under review)
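To make the training scheme concrete, here is a rough, time-binned sketch of the idea: stochastic binary units spike with probability given by a sigmoid of their net input (a crude stand-in for neural sampling with leaky I&F neurons), a clamped data phase and a free-running reconstruction phase replace the discrete CD steps, and the weights move by the difference of spike coincidences between the two phases (a rate-based proxy for the paper's online STDP update). All sizes, durations, and rates below are illustrative assumptions, not values from the paper.

```python
# Toy, time-binned approximation of event-driven CD on an RBM (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

n_vis, n_hid = 784, 100                        # e.g. MNIST pixels -> hidden units
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
lr, T = 1e-3, 50                               # learning rate, time bins per phase

def run_phase(v_init, clamp, steps):
    """Sample spikes for `steps` bins; the visible layer stays clamped when `clamp` is True."""
    v, coincidence = v_init.copy(), np.zeros_like(W)
    for _ in range(steps):
        h = (rng.random(n_hid) < sigmoid(v @ W + b_h)).astype(float)      # hidden spikes
        if not clamp:
            v = (rng.random(n_vis) < sigmoid(W @ h + b_v)).astype(float)  # visible spikes
        coincidence += np.outer(v, h)          # accumulate pre/post spike coincidences
    return coincidence / steps, v

def train_step(x):
    """One event-driven-CD-style update for a single binary input pattern x."""
    global W
    data_stats, v_end = run_phase(x, clamp=True, steps=T)      # data phase (visible clamped)
    model_stats, _ = run_phase(v_end, clamp=False, steps=T)    # free-running model phase
    W += lr * (data_stats - model_stats)                       # Hebbian / anti-Hebbian update

# Illustrative usage on a random binary "image"
train_step((rng.random(n_vis) < 0.2).astype(float))
```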