Towards a Cognitively Realistic Representation of Word Associations
The ability to associate words is an important cognitive skill. In this study we investigate different methods for representing word associations in the brain, using the Remote Associates Test (RAT) as a task. We explore representations derived from free association norms and statistical n-gram data. Although n-gram representations yield better performance on the test, a closer match with human performance is obtained with representations derived from free associations. We propose that word association strengths derived from free associations play an important role in the process of RAT solving. Furthermore, we show that this model can be implemented in spiking neurons, and estimate the number of biologically realistic neurons that would suffice for an accurate representation.
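As a rough illustration of the proposal above, a RAT solver based on free-association strengths can score each candidate word by summing its association strength from the three cue words and return the best-scoring candidate. The association table and values below are invented for illustration; they are not the actual free-association norms used in the study.

```python
# Toy free-association strengths: assoc[cue][word] = strength.
# All words and values are hypothetical, chosen so the classic
# RAT item (cottage, swiss, cake -> cheese) works out.
assoc = {
    "cottage": {"cheese": 0.50, "house": 0.30},
    "swiss":   {"cheese": 0.40, "alps": 0.35},
    "cake":    {"cheese": 0.10, "birthday": 0.45},
}

def solve_rat(cues):
    # Candidates: every word associated with at least one cue.
    candidates = {word for cue in cues for word in assoc.get(cue, {})}
    # Score a candidate by its summed association strength from all cues;
    # the highest-scoring candidate is the model's response.
    def score(word):
        return sum(assoc.get(cue, {}).get(word, 0.0) for cue in cues)
    return max(candidates, key=score)

print(solve_rat(["cottage", "swiss", "cake"]))  # prints "cheese"
```

A summed (rather than multiplicative) score keeps candidates alive even when one cue contributes nothing, which is one simple way to model partial associative support.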
A Spiking Neuron Model of Word Associations for the Remote Associates Test
Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT) is a task, originally used in creativity research, that is heavily dependent on generating associations in a search for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that it is possible for spiking neurons to be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes that are involved in solving the RAT: one process generates potential responses and a second process filters the responses.
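The two-process account suggested by these data can be sketched in a few lines: one process samples candidate responses in proportion to association strength, and a second process filters out responses that have already been produced. The cue word and association values here are purely illustrative, not taken from the model itself.

```python
import random

# Hypothetical association strengths from a single cue word.
assoc = {
    "paper": {"pencil": 0.4, "news": 0.3, "tree": 0.2, "cut": 0.1},
}

def generate(cue, n, rng):
    # Process 1: sample potential responses, with replacement,
    # in proportion to association strength.
    words = list(assoc[cue])
    weights = [assoc[cue][w] for w in words]
    return rng.choices(words, weights=weights, k=n)

def filter_responses(responses, already_said):
    # Process 2: pass only novel responses; words produced
    # earlier in the sequence are suppressed.
    out = []
    for word in responses:
        if word not in already_said:
            already_said.add(word)
            out.append(word)
    return out

rng = random.Random(0)
sequence = filter_responses(generate("paper", 10, rng), set())
print(sequence)
```

Because repeats are suppressed, the produced sequence is short relative to the number of generation attempts, which is one simple way such a generate-and-filter pair can yield the bounded response sequences seen in human data.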
Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour
Linguistic communication is a unique characteristic of intelligent behaviour
that distinguishes humans from non-human animals. Natural language is
a structured, complex communication system supported by a variety of cognitive
functions, realized by hundreds of millions of neurons in the brain. Artificial
neural networks typically used in natural language processing (NLP) are often
designed to focus on benchmark performance, where one of the main goals is
reaching state-of-the-art performance on a set of language tasks. Although
the advances in NLP have been tremendous in the past decade, such networks
provide only limited insights into biological mechanisms underlying linguistic
processing in the brain.
In this thesis, we propose an integrative approach to the study of
computational mechanisms underlying fundamental language processes, spanning
biologically plausible neural networks and the learning of basic communicative
abilities through environmentally grounded behaviour. In doing so, we argue for
the usage-based approach to language, where language is supported by a variety
of cognitive functions and learning mechanisms. Thus, we focus on the following
three questions: How are basic linguistic units, such as words, represented
in the brain? Which neural mechanisms operate on those representations in
cognitive tasks? How can aspects of such representations, such as associative
similarity and structure, be learned in a usage-based framework?
To answer the first two questions, we build novel, biologically realistic
models of neural function that perform different semantic processing tasks: the
Remote Associates Test (RAT) and the semantic fluency task. Both tasks have
been used in experimental and clinical environments to study organizational
principles and retrieval mechanisms from semantic memory. The models we propose
realize the mental lexicon and cognitive retrieval processes operating on that
lexicon using associative mechanisms in a biologically plausible manner. We
argue that these are the first and only biologically plausible models that both
propose specific mechanisms and reproduce a wide range of human behavioural
data on those tasks, further corroborating their plausibility.
To address the last question, we use an interactive, collaborative agent-based
reinforcement learning setup in a navigation task where agents learn to
communicate to solve the task. We argue that agents in such a setup learn to
jointly coordinate their actions and develop a communication protocol that is
often optimal for task performance, while exhibiting some core properties of
language, such as representational similarity structure and compositionality,
which are essential for the associative mechanisms underlying cognitive
representations.