Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.

Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level, allowing the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models) to be tested and constrained.
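The hierarchical scheme described above can be illustrated with a toy sketch (an illustration of the general predictive coding idea, not a model from the paper): a level sends a top-down prediction, receives the bottom-up prediction error, and nudges its internal estimate to reduce future error.

```python
# Toy predictive coding update (illustrative names, not from the paper):
# prediction error = input minus top-down prediction; the internal
# estimate is corrected in proportion to that error.

def predictive_coding_step(signal, estimate, lr=0.1):
    """One update: compute the prediction error and correct the estimate."""
    prediction = estimate               # top-down prediction
    error = signal - prediction         # bottom-up prediction error
    estimate = estimate + lr * error    # update the internal model
    return estimate, error

estimate = 0.0
for signal in [1.0, 1.0, 1.0, 1.0]:
    estimate, error = predictive_coding_step(signal, estimate)
# the estimate climbs toward the repeated input, and the successive
# prediction errors shrink as the predictions improve
```

With a constant input, each step leaves a fraction (1 - lr) of the remaining error, so the estimate converges geometrically toward the signal.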
Reward prediction error and declarative memory
Learning based on reward prediction error (RPE) was originally proposed in the context of nondeclarative memory. We postulate that RPE may support declarative memory as well. Indeed, recent years have witnessed a number of independent empirical studies reporting effects of RPE on declarative memory. We provide a brief overview of these studies, identify emerging patterns, and discuss open issues such as the role of signed versus unsigned RPEs in declarative learning.
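The signed/unsigned distinction raised above can be made concrete with the standard temporal-difference form of the RPE (a minimal sketch of the textbook delta rule; the function and variable names are illustrative, not from the paper):

```python
# Temporal-difference reward prediction error:
#   delta = r + gamma * V(s') - V(s)
# The signed RPE carries direction (better vs. worse than expected);
# the unsigned RPE |delta| carries only the magnitude of the surprise.

def rpe(reward, value_next, value_current, gamma=0.9):
    """Signed reward prediction error for one transition."""
    return reward + gamma * value_next - value_current

# An unexpected reward (positive delta) vs. an omitted one (negative delta):
positive_surprise = rpe(reward=1.0, value_next=0.0, value_current=0.5)
negative_surprise = rpe(reward=0.0, value_next=0.0, value_current=0.5)
# both events have the same unsigned RPE, abs(delta) = 0.5, but opposite signs
```

The open issue the abstract flags is precisely which of these two signals, the signed delta or its magnitude, modulates declarative memory formation.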
The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning
In this paper, we address the problem of creating believable agents (virtual
characters) in video games. We consider only one meaning of believability,
"giving the feeling of being controlled by a player", and outline the problem
of its evaluation. We present several models for agents in games which can
produce believable behaviours, both from industry and research. For a high
level of believability, learning, and especially imitation learning, seems to
be the way to go. We give a quick overview of different approaches to making
video games' agents learn from players. To conclude, we propose a two-step
method for developing new models of believable agents. First, we must find the
criteria for believability for our application and define an evaluation
method. Then the model and the learning algorithm can be designed.
A Blueprint for the Hard Problem of Consciousness
A Blueprint for the Hard Problem of Consciousness addresses the fundamental mechanism that allows physical events to transcend into subjective experiences, termed the Hard Problem of Consciousness.
Consciousness is made available as the abstract product of self-referent realization of information by strange loops through the levels of processing of the brain. Readers are introduced to the concept of the Hard Problem of Consciousness and related concepts followed by a critical discourse of different theories of consciousness.
Next, the author identifies the fundamental flaw of the Integrated Information Theory (IIT) and proposes an alternative that avoids the cryptic intelligent design and panpsychism of the IIT. This author also demonstrates how something can be created out of nothing without resorting to quantum theory, while pointing out neurobiological alternatives to the bottom-up approach of quantum theories of consciousness.
The book then delves into the philosophy of qualia in different physiological knowledge networks (spatial, temporal and olfactory, cortical signals, for example) to explain an action-based model consistent with the generational principles of Predictive Coding, which maps prediction and predictive-error signals for perceptual representations supporting integrated goal-directed behaviors. Conscious experiences are considered the outcome of abstractions realized out of map overlays and provided by sustained oscillatory activity.
The key feature of this blueprint is that it offers a perspective of the Hard Problem of Consciousness from the point of view of the subject; the experience of "being the subject" is predicted to be the realization of inference inversely mapped out of hidden causes of global integrated actions.
The author explains the consistencies of his blueprint with ideas of the Global Neuronal Workspace and the Adaptive Resonance Theory of consciousness, as well as with the empirical evidence supporting the Integrated Information Theory. A Blueprint for the Hard Problem of Consciousness offers a unique perspective to readers interested in scientific philosophy and cognitive neuroscience theory in relation to models of consciousness.