Bayesian Inference of Recursive Sequences of Group Activities from Tracks
We present a probabilistic generative model for inferring a description of
coordinated, recursively structured group activities at multiple levels of
temporal granularity based on observations of individuals' trajectories. The
model accommodates: (1) hierarchically structured groups, (2) activities that
are temporally and compositionally recursive, (3) component roles assigning
different subactivity dynamics to subgroups of participants, and (4) a
nonparametric Gaussian Process model of trajectories. We present an MCMC
sampling framework for performing joint inference over recursive activity
descriptions and assignment of trajectories to groups, integrating out
continuous parameters. We demonstrate the model's expressive power in several
simulated and complex real-world scenarios from the VIRAT and UCLA Aerial Event
video data sets.
Comment: 10 pages, 6 figures, in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16), Phoenix, AZ, 2016
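The abstract's trajectory component can be illustrated with a minimal sketch: the marginal likelihood of an observed track under a Gaussian Process prior, with the latent smooth trajectory integrated out in closed form, in the spirit of the model's integration over continuous parameters. Everything below (the RBF kernel, the hyperparameter values, the 1-D toy tracks) is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

def rbf_kernel(t1, t2, length_scale=1.0, variance=1.0):
    # Squared-exponential covariance between two sets of time points.
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_trajectory_loglik(times, xs, length_scale=1.0, variance=1.0, noise=0.1):
    """Marginal log-likelihood of a 1-D track under a zero-mean GP.

    The latent trajectory is integrated out analytically, leaving a
    multivariate Gaussian over the observed positions.
    """
    K = rbf_kernel(times, times, length_scale, variance)
    K += noise ** 2 * np.eye(len(times))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, xs))
    return (-0.5 * xs @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(times) * np.log(2 * np.pi))

# Toy comparison: a smooth track vs. an erratic one under the same prior.
t = np.linspace(0.0, 1.0, 20)
smooth = np.sin(2 * np.pi * t)
jagged = np.random.default_rng(0).normal(size=20)
smooth_ll = gp_trajectory_loglik(t, smooth, length_scale=0.25)
jagged_ll = gp_trajectory_loglik(t, jagged, length_scale=0.25)
print(smooth_ll > jagged_ll)
```

In the full model, scores like these would be combined with priors over group structure and activity descriptions inside the MCMC sweep, rather than used in isolation.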
Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks
Recurrent neural networks (RNNs) are widely used in computational
neuroscience and machine learning applications. In an RNN, each neuron computes
its output as a nonlinear function of its integrated input. While the
importance of RNNs, especially as models of brain processing, is undisputed, it
is also widely acknowledged that the computations in standard RNN models may be
an over-simplification of what real neuronal networks compute. Here, we suggest
that the RNN approach may be made both neurobiologically more plausible and
computationally more powerful by its fusion with Bayesian inference techniques
for nonlinear dynamical systems. In this scheme, we use an RNN as a generative
model of dynamic input caused by the environment, e.g. of speech or kinematics.
Given this generative RNN model, we derive Bayesian update equations that can
decode its output. Critically, these updates define a 'recognizing RNN' (rRNN),
in which neurons compute and exchange prediction and prediction error messages.
The rRNN has several desirable features that a conventional RNN does not have,
for example, fast decoding of dynamic stimuli and robustness to initial
conditions and noise. Furthermore, it implements a predictive coding scheme for
dynamic inputs. We suggest that the Bayesian inversion of recurrent neural
networks may be useful both as a model of brain function and as a machine
learning tool. We illustrate the use of the rRNN by an application to the
online decoding (i.e., recognition) of human kinematics.
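The generative-model-plus-Bayesian-inversion idea can be sketched in a few lines: a toy RNN generates noisy observations, and a recognition loop tracks the latent state by propagating a top-down prediction and correcting it with the bottom-up prediction error. The specific update rule here (a simple gradient-style correction with gain `lr`), the dimensions, and the random weights are all illustrative assumptions, not the paper's derived Bayesian update equations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4                                          # latent and observed dims
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # recurrent weights
C = rng.standard_normal((m, n)) / np.sqrt(n)         # linear readout

def generate(x0, T, noise=0.01):
    # Generative RNN: nonlinear latent dynamics plus a noisy linear readout.
    xs, ys = [x0], []
    for _ in range(T):
        xs.append(np.tanh(W @ xs[-1]))
        ys.append(C @ xs[-1] + noise * rng.standard_normal(m))
    return np.array(xs[1:]), np.array(ys)

def recognize(ys, x_hat, lr=0.5):
    # Recognition loop: each step exchanges a prediction and a
    # prediction error, in the style of predictive coding.
    errs = []
    for y in ys:
        x_pred = np.tanh(W @ x_hat)       # top-down prediction
        err = y - C @ x_pred              # bottom-up prediction error
        x_hat = x_pred + lr * C.T @ err   # error-driven correction
        errs.append(np.linalg.norm(err))
    return x_hat, errs

x_true, ys = generate(rng.standard_normal(n), T=50)
_, errs = recognize(ys, x_hat=np.zeros(n))  # deliberately wrong initial state
print(errs[-1] < errs[0])                   # error shrinks despite the bad init
```

The shrinking error despite a wrong initial state is the toy analogue of the robustness to initial conditions claimed for the rRNN.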
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that advances the field on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a means of testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
Fast, scalable, Bayesian spike identification for multi-electrode arrays
We present an algorithm to identify individual neural spikes observed on
high-density multi-electrode arrays (MEAs). Our method can distinguish large
numbers of distinct neural units, even when spikes overlap, and accounts for
intrinsic variability of spikes from each unit. As MEAs grow larger, it is
important to find spike-identification methods that are scalable, that is, the
computational cost of spike fitting should scale well with the number of units
observed. Our algorithm accomplishes this goal, and is fast, because it
exploits the spatial locality of each unit and the basic biophysics of
extracellular signal propagation. Human intervention is minimized and
streamlined via a graphical interface. We illustrate our method on data from a
mammalian retina preparation and document its performance on simulated data
consisting of spikes added to experimentally measured background noise. The
algorithm is highly accurate.
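The scalability argument — fit each unit only against the few electrodes where its extracellular signal is non-negligible — can be sketched with a toy normalized template matcher. The waveform shape, channel layout, and detection threshold below are invented for illustration; the actual algorithm is considerably more sophisticated (it handles overlapping spikes and per-unit variability).

```python
import numpy as np

rng = np.random.default_rng(2)

def detect_unit(signal, template, channels, threshold=0.8):
    """Match one unit's spatially local template against a few channels.

    Restricting the fit to `channels` (the electrodes where this unit is
    visible) keeps the cost per unit roughly constant as the array grows.
    """
    local = signal[channels]        # (k, T) local view of the array
    tpl = template[channels]        # (k, L) local template
    L = tpl.shape[1]
    tpl_norm = np.linalg.norm(tpl)
    hits = []
    for t in range(signal.shape[1] - L + 1):
        window = local[:, t:t + L]
        score = np.sum(window * tpl) / (tpl_norm * np.linalg.norm(window) + 1e-12)
        if score > threshold:
            hits.append(t)
    return hits

# Toy 16-channel recording: one unit visible on channels 2 and 3 only.
L, n_ch, T = 20, 16, 200
tt = np.arange(L)
waveform = -np.exp(-tt / 6.0) * np.sin(np.pi * tt / 10.0)  # biphasic spike shape
template = np.zeros((n_ch, L))
template[2], template[3] = waveform, 0.5 * waveform

signal = 0.05 * rng.standard_normal((n_ch, T))  # background noise
signal[:, 40:40 + L] += template                # one spike at t = 40

hits = detect_unit(signal, template, channels=[2, 3])
print(hits)
```

The detector reports candidate onset times near the true spike at t = 40; in a real pipeline these candidates would then be refined, and overlapping spikes from different units fit jointly.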
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of “biologically basic to socially specific” information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.