Consciousness is learning: predictive processing systems that learn by binding may perceive themselves as conscious
Machine learning algorithms have achieved superhuman performance in specific
complex domains. Yet learning online from few examples and generalizing
efficiently across domains remain elusive. In humans, such learning proceeds
via declarative memory formation and is closely associated with consciousness.
Predictive processing has been advanced as a principled Bayesian inference
framework for understanding the cortex as implementing deep generative
perceptual models for both sensory data and action control. However, predictive
processing offers little direct insight into fast compositional learning or the
mystery of consciousness. Here we propose that through implementing online
learning by hierarchical binding of unpredicted inferences, a predictive
processing system may flexibly generalize in novel situations by forming
working memories for perceptions and actions from single examples, which can
become short- and long-term declarative memories retrievable by associative
recall. We argue that the contents of such working memories are unified yet
differentiated, can be maintained by selective attention and are consistent
with observations of masking, postdictive perceptual integration, and other
paradigm cases of consciousness research. We describe how the brain could have
evolved to use perceptual value prediction for reinforcement learning of
complex action policies simultaneously implementing multiple survival and
reproduction strategies. 'Conscious experience' is how such a learning system
perceptually represents its own functioning, suggesting an answer to the
meta-problem of consciousness. Our proposal naturally unifies feature binding,
recurrent processing, and predictive processing with global workspace theory
and, to a lesser extent, higher-order theories of consciousness.

Comment: This version adds 5 figures (new) and only modifies the text to
reference the figures.
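The abstract's claim that single examples can become declarative memories "retrievable by associative recall" has a classical minimal analogue: one-shot Hebbian binding in a linear associative memory. The sketch below is a generic illustration under that assumption, not the hierarchical binding mechanism the authors propose; the dimensions and pairings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 256, 10

# Hypothetical one-shot memories: each (key, value) pair is bound by a
# single Hebbian outer-product update -- no gradient descent, one example each.
keys = rng.standard_normal((n, d))
values = rng.standard_normal((n, d))
M = values.T @ keys  # sum of outer products value_i key_i^T

# Associative recall: a degraded cue still retrieves its bound content.
cue = keys[0] + 0.3 * rng.standard_normal(d)
recalled = M @ cue

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sims = [cosine(recalled, v) for v in values]
print(int(np.argmax(sims)))  # 0: the memory bound to the cued key wins
```

In high dimensions the stored patterns are nearly orthogonal, so crosstalk from the other nine memories stays small and recall succeeds from a single binding event.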
The Relational Bottleneck as an Inductive Bias for Efficient Abstraction
A central challenge for cognitive science is to explain how abstract concepts
are acquired from limited experience. This effort has often been framed in
terms of a dichotomy between empiricist and nativist approaches, most recently
embodied by debates concerning deep neural networks and symbolic cognitive
models. Here, we highlight a recently emerging line of work that suggests a
novel reconciliation of these approaches, by exploiting an inductive bias that
we term the relational bottleneck. We review a family of models that employ
this approach to induce abstractions in a data-efficient manner, emphasizing
their potential as candidate models for the acquisition of abstract concepts in
the human mind and brain.
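A deliberately minimal sketch of the relational-bottleneck idea: if downstream computation can see only a relation between inputs (here, their cosine similarity) rather than the raw feature vectors, then an abstract rule such as same/different transfers to entirely novel items. The function names and threshold below are hypothetical, not drawn from the reviewed models.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64

def relation(x, y):
    # The "bottleneck": downstream computation receives only this scalar
    # relation between inputs, never the item-specific feature vectors.
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def same_or_different(x, y, threshold=0.5):
    # An abstract rule defined purely over the relation; because it never
    # touches raw features, it applies to items never seen before.
    return "same" if relation(x, y) > threshold else "different"

# Entirely novel items: the rule transfers with no retraining.
a, b = rng.standard_normal(d), rng.standard_normal(d)
print(same_or_different(a, a))  # same
print(same_or_different(a, b))  # different (random vectors are near-orthogonal)
```

Restricting the rule to relations is what makes it data-efficient: nothing about any particular item needs to be learned for the abstraction to generalize.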
A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence
This review aims to contribute to the quest for artificial general
intelligence by examining neuroscience and cognitive psychology methods for
potential inspiration. Despite the impressive advancements achieved by deep
learning models in various domains, they still have shortcomings in abstract
reasoning and causal understanding. Such capabilities should be ultimately
integrated into artificial intelligence systems in order to surpass data-driven
limitations and support decision making in a way more similar to human
intelligence. This work is a vertical review that attempts a wide-ranging
exploration of brain function, spanning from lower-level biological neurons,
spiking neural networks, and neuronal ensembles to higher-level concepts such
as brain anatomy, vector symbolic architectures, cognitive and categorization
models, and cognitive architectures. The hope is that these concepts may offer
insights for solutions in artificial general intelligence.

Comment: 143 pages, 49 figures, 244 references
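Among the concepts surveyed, vector symbolic architectures are concrete enough to sketch. Below is a minimal holographic-reduced-representation example: circular convolution binds role and filler vectors, superposition stores several bindings in one vector, and approximate unbinding recovers a filler. The symbols ("color", "red", and so on) are invented for illustration and not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1024

def bind(a, b):
    # Circular convolution: the binding operator of holographic
    # reduced representations, computed via FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Approximate inverse: convolve with the involution of a.
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

def vec():
    return rng.standard_normal(d) / np.sqrt(d)

# Hypothetical symbols: bind role "color" to filler "red" and role "shape"
# to filler "square", then superpose both bindings into a single vector.
color, red, shape, square = vec(), vec(), vec(), vec()
scene = bind(color, red) + bind(shape, square)

# Query the compound: unbinding with "color" lands closest to "red".
guess = unbind(scene, color)
sims = {name: float(guess @ v / (np.linalg.norm(guess) * np.linalg.norm(v)))
        for name, v in [("red", red), ("square", square), ("shape", shape)]}
print(max(sims, key=sims.get))  # red
```

The compound vector has the same dimensionality as its parts, which is what lets such architectures represent nested symbolic structure in fixed-width neural codes.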
A Defense of Pure Connectionism
Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the most rich and distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production.
Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again on the basis mostly of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
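Strand (b), compositional vector space semantics, can be illustrated with a toy model in which phrase meanings are sums of word vectors and semantic similarity is vector proximity, the core principle the dissertation defends. The dimensions and values below are hand-invented stand-ins, not learned embeddings.

```python
import numpy as np

# Toy 4-dimensional "semantic space"; the axes and values are invented
# purely for illustration (real models learn embeddings from corpora).
#             animal feline canine size
words = {
    "cat":    np.array([1.0, 1.0, 0.0, 0.2]),
    "kitten": np.array([1.0, 1.0, 0.0, 0.1]),
    "dog":    np.array([1.0, 0.0, 1.0, 0.4]),
    "small":  np.array([0.0, 0.0, 0.0, -0.5]),
}

def compose(*ws):
    # Additive composition: the simplest compositional vector space model.
    return sum(words[w] for w in ws)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Proximity tracks similarity of meaning: the composed phrase "small cat"
# lies closer to "kitten" than to "dog".
phrase = compose("small", "cat")
print(cosine(phrase, words["kitten"]) > cosine(phrase, words["dog"]))  # True
```

Even this crude additive scheme exhibits the property at issue: systematically composed representations land near semantically related ones.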
A new class of neural architectures to model episodic memory: computational studies of distal reward learning
A computational cognitive neuroscience model is proposed, which models episodic memory based on the mammalian brain. A computational neural architecture instantiates the proposed model and is tested on a particular task of distal reward learning. Categorical Neural Semantic Theory informs the architecture design. To experiment upon the computational brain model, an embodiment and an environment in which the embodiment exists are simulated. This simulated environment realizes the Morris Water Maze task, a well-established biological experimental test of distal reward learning. The embodied neural architecture is treated as a virtual rat and the environment it acts in as a virtual water tank. Performance levels of the neural architectures are evaluated through analysis of embodied behavior in the distal reward learning task. Comparison is made to biological rat experimental data, as well as to other published models. In addition, differences in performance are compared between the normal and categorically informed versions of the architecture.
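The distal reward problem the task probes (reward arriving many steps after the states responsible for it) is classically bridged with eligibility traces. The sketch below is a generic tabular TD(lambda) illustration on a corridor with a fixed policy; it is not the episodic-memory architecture described above, and all constants are arbitrary.

```python
import numpy as np

# Minimal tabular TD(lambda) on a corridor: reward arrives only at the far
# end, yet eligibility traces propagate credit back to early states.
n_states, alpha, gamma, lam = 10, 0.1, 0.95, 0.9
V = np.zeros(n_states + 1)  # value per state; the last state is terminal

for _ in range(200):  # episodes
    e = np.zeros_like(V)  # eligibility traces
    s = 0
    while s < n_states:
        s_next = s + 1  # fixed policy: walk toward the reward
        r = 1.0 if s_next == n_states else 0.0
        delta = r + gamma * V[s_next] - V[s]  # TD error
        e[s] += 1.0
        V += alpha * delta * e  # every recently visited state shares credit
        e *= gamma * lam
        s = s_next

print(V[0] > 0.3)  # True: the distal reward now informs the start state
```

Without the trace (lam = 0), the same reward signal would crawl back only one state per episode; traces are one simple answer to the temporal credit assignment problem the virtual water maze poses.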