Universal neural field computation
Turing machines and G\"odel numbers are important pillars of the theory of
computation. Thus, any computational architecture needs to show how it could
relate to Turing machines and how stable implementations of Turing computation
are possible. In this chapter, we implement universal Turing computation in a
neural field environment. To this end, we employ the canonical symbologram
representation of a Turing machine obtained from a Gödel encoding of its
symbolic repertoire and generalized shifts. The resulting nonlinear dynamical
automaton (NDA) is a piecewise affine-linear map acting on the unit square that
is partitioned into rectangular domains. Instead of looking at point dynamics
in phase space, we then consider the functional dynamics of probability
distribution functions (p.d.f.s) over phase space. This is generally described
by a Frobenius-Perron integral transformation that can be regarded as a neural
field equation over the unit square as feature space of a dynamic field theory
(DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with
rectangular support are again mapped onto uniform p.d.f.s with rectangular
support. We call the resulting representation a dynamic field automaton.
Comment: 21 pages; 6 figures. arXiv admin note: text overlap with arXiv:1204.546
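As an illustration of the construction described in this abstract, the sketch below implements a toy piecewise affine-linear map on the unit square, partitioned into rectangular cells, and pushes a uniform p.d.f. with rectangular support through one step. The 2 x 2 partition and the affine coefficients are hypothetical placeholders, not values from the paper. The corresponding functional dynamics is the standard Frobenius-Perron transformation for a map \Phi, namely \rho_{t+1}(x) = \int \delta(x - \Phi(x')) \, \rho_t(x') \, dx'.

```python
# Minimal sketch of a nonlinear dynamical automaton (NDA):
# a piecewise affine-linear map on the unit square, partitioned into
# rectangular cells. All cell parameters below are illustrative
# placeholders, not taken from the paper.

# Partition of [0,1]^2 into a 2 x 2 grid of cells (hypothetical choice).
x_bounds = [0.0, 0.5, 1.0]
y_bounds = [0.0, 0.5, 1.0]

# For each cell (i, j): affine parameters (a, lam, b, mu) so that
# Phi(x, y) = (a + lam * x, b + mu * y) on that cell.
params = {
    (0, 0): (0.0, 2.0, 0.0, 0.5),
    (1, 0): (0.0, 0.5, 0.5, 0.5),
    (0, 1): (0.5, 0.5, 0.0, 0.5),
    (1, 1): (0.5, 0.5, 0.5, 0.5),
}

def cell_index(x, y):
    """Return the (i, j) index of the rectangular cell containing (x, y)."""
    i = 0 if x < x_bounds[1] else 1
    j = 0 if y < y_bounds[1] else 1
    return i, j

def nda_step(x, y):
    """One iteration of the piecewise affine-linear map (point dynamics)."""
    a, lam, b, mu = params[cell_index(x, y)]
    return a + lam * x, b + mu * y

def push_uniform_rectangle(rect):
    """Push a uniform p.d.f. with rectangular support through one step.

    Assumes the rectangle lies inside a single cell; the affine map then
    sends it to another rectangle carrying a uniform p.d.f.
    """
    (x0, x1), (y0, y1) = rect
    a, lam, b, mu = params[cell_index(x0, y0)]
    return ((a + lam * x0, a + lam * x1), (b + mu * y0, b + mu * y1))

# A uniform p.d.f. supported on [0.1, 0.2] x [0.3, 0.4] maps to another
# uniform p.d.f. with rectangular support:
print(push_uniform_rectangle(((0.1, 0.2), (0.3, 0.4))))
```

Because each cell map only rescales and translates the two coordinates independently, axis-aligned rectangles map to axis-aligned rectangles, which is exactly why uniform p.d.f.s with rectangular support are closed under these dynamics.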
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar