Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end on tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Building artificial neural circuits for domain-general cognition: a primer on brain-inspired systems-level architecture
There is a concerted effort to build domain-general artificial intelligence
in the form of universal neural network models with sufficient computational
flexibility to solve a wide variety of cognitive tasks but without requiring
fine-tuning on individual problem spaces and domains. To do this, models need
appropriate priors and inductive biases, such that trained models can
generalise to out-of-distribution examples and new problem sets. Here we
provide an overview of the hallmarks endowing biological neural networks with
the functionality needed for flexible cognition, in order to establish which
features might also be important to achieve similar functionality in artificial
systems. We specifically discuss the role of system-level distribution of
network communication and recurrence, in addition to the role of short-term
topological changes for efficient local computation. As machine learning models
become more complex, these principles may provide valuable directions in an
otherwise vast space of possible architectures. In addition, testing these
inductive biases within artificial systems may help us to understand the
biological principles underlying domain-general cognition.
Comment: This manuscript is part of the AAAI 2023 Spring Symposium on the
Evaluation and Design of Generalist Systems (EDGeS).
Representation Internal-Manipulation (RIM): A Neuro-Inspired Computational Theory of Consciousness
Many theories, based on neuroscientific and psychological empirical evidence
and on computational concepts, have been elaborated to explain the emergence of
consciousness in the central nervous system. These theories propose key
fundamental mechanisms to explain consciousness, but they only partially
connect such mechanisms to the possible functional and adaptive role of
consciousness. Recently, some cognitive and neuroscientific models have tried
to close this gap by linking consciousness to various aspects of goal-directed
behaviour, the pivotal cognitive process that allows mammals to act flexibly in
challenging environments. Here we propose the Representation
Internal-Manipulation (RIM) theory of consciousness, a theory that links the
main elements of consciousness theories to components and functions of
goal-directed behaviour, ascribing a central role for consciousness to the
goal-directed manipulation of internal representations. This manipulation
relies on four specific computational operations to perform the flexible
internal adaptation of all key elements of goal-directed computation, from the
representations of objects to those of goals, actions, and plans. Finally, we
propose the concept of 'manipulation agency', relating the sense of agency to
the internal manipulation of representations. This allows us to propose that
the subjective experience of consciousness is associated with the human capacity
to generate and control a simulated internal reality that is vividly perceived
and felt through the same perceptual and emotional mechanisms used to tackle
the external world.
Comment: 16 pages, 5 figures, preprint.
Probabilistic Meta-Representations Of Neural Networks
Existing Bayesian treatments of neural networks are typically characterized
by weak prior and approximate posterior distributions according to which all
the weights are drawn independently. Here, we consider a richer prior
distribution in which units in the network are represented by latent variables,
and the weights between units are drawn conditionally on the values of the
collection of those variables. This allows rich correlations between related
weights, and can be seen as realizing a function prior with a Bayesian
complexity regularizer ensuring simple solutions. We illustrate the resulting
meta-representations and representations, elucidating the power of this prior.
Comment: Presented at the UAI 2018 Uncertainty in Deep Learning Workshop (UDL,
Aug. 2018).
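The prior described in this abstract can be illustrated with a minimal numpy sketch. This is a hypothetical toy version, not the paper's actual model: each unit gets a latent embedding, and each weight's mean is a function (here, an inner product) of the embeddings of the two units it connects, so weights that share a unit become correlated through that unit's latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical weight prior (illustrative assumption, not the
# paper's exact construction): units carry latent embeddings, and
# weights are drawn conditionally on the embeddings of their endpoints.
n_in, n_out, d = 4, 3, 2              # layer sizes and latent dimension

z_in = rng.normal(size=(n_in, d))     # latent variables for input units
z_out = rng.normal(size=(n_out, d))   # latent variables for output units

# Weight mean depends on both endpoint latents, so all weights touching
# the same unit are coupled through that unit's latent variable.
mean = z_in @ z_out.T                 # shape (n_in, n_out)
W = rng.normal(loc=mean, scale=0.1)   # conditional draw of the weights

print(W.shape)  # (4, 3)
```

Sampling many such `W` matrices (holding the latents' prior fixed) would show the correlations between related weights that an independent Gaussian prior over weights cannot express.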
Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.
Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.
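The core idea behind MVPA as used in this abstract is that a stimulus feature can be decoded from the distributed pattern across voxels even when no single region shows a univariate effect. A minimal sketch with simulated data and a leave-one-out nearest-centroid classifier (a deliberately simple stand-in, not the study's actual pipeline or parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxel patterns for two stimulus classes (rise vs. fall).
# These numbers are illustrative assumptions, not the study's data.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)        # 0 = rise, 1 = fall
signal = rng.normal(size=(2, n_voxels))          # class-specific pattern
X = signal[labels] + rng.normal(size=(n_trials, n_voxels))  # add noise

correct = 0
for i in range(n_trials):                        # leave-one-out CV
    train = np.ones(n_trials, dtype=bool)
    train[i] = False
    # Mean training pattern per class, then assign the held-out trial
    # to the nearest class centroid in voxel space.
    centroids = np.stack([X[train & (labels == c)].mean(axis=0)
                          for c in (0, 1)])
    pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
    correct += pred == labels[i]

accuracy = correct / n_trials
print(accuracy)  # well above the 0.5 chance level for this toy data
```

Above-chance decoding accuracy is the MVPA signature of feature-specific encoding; real analyses would add proper cross-validation over runs and permutation tests for significance.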