681 research outputs found

    Combinatorial structures and processing in Neural Blackboard Architectures

    We discuss and illustrate Neural Blackboard Architectures (NBAs) as the basis for variable binding and combinatorial processing in the brain. We focus on the NBA for sentence structure. NBAs are based on the notion that conceptual representations are in situ, hence cannot be copied or transported. Novel combinatorial structures can be formed with these representations by embedding them in NBAs. We discuss and illustrate the main characteristics of this form of combinatorial processing. We also illustrate the NBA for sentence structures by simulating neural activity as found in recently reported intracranial brain observations. Furthermore, we show how the NBA can account for ambiguity resolution and garden path effects in sentence processing.

    Neural blackboard architectures of combinatorial structures in cognition

    Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception.
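    The core idea above, binding in situ word representations to structural roles without copying them, can be sketched in a few lines. This is an illustration only: the class name, role labels, and data structures are assumptions for the sketch, not the architecture's actual neural circuitry.

```python
# Toy sketch of temporary binding in a neural blackboard (assumed names).
# Word "assemblies" stay in place; the blackboard only holds bindings
# from structural roles to those assemblies, never copies of them.

class Blackboard:
    def __init__(self):
        self.bindings = {}  # role -> in situ word assembly

    def bind(self, role, word_assembly):
        # in the NBA a gating circuit would open a connection path here
        self.bindings[role] = word_assembly

    def retrieve(self, role):
        # querying a role reactivates the bound assembly where it sits
        return self.bindings.get(role)

# "cat chases mouse": assemblies are fixed, only the bindings vary
lexicon = {w: f"assembly<{w}>" for w in ["cat", "chases", "mouse"]}

bb = Blackboard()
bb.bind("agent", lexicon["cat"])
bb.bind("verb", lexicon["chases"])
bb.bind("theme", lexicon["mouse"])

# roles preserve sentence structure: "cat chases mouse" is
# distinguishable from "mouse chases cat" without duplicating anything
assert bb.retrieve("agent") == "assembly<cat>"
```

    Because the same fixed assemblies can enter arbitrary new role bindings, novel sentences reuse familiar representations, which is the combinatorial property the paper targets.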

    Linking neural and symbolic representation and processing of conceptual structures

    We compare and discuss representations in two cognitive architectures aimed at representing and processing complex conceptual (sentence-like) structures. First is the Neural Blackboard Architecture (NBA), which aims to account for representation and processing of complex and combinatorial conceptual structures in the brain. Second is IDyOT (Information Dynamics of Thinking), which derives sentence-like structures by learning statistical sequential regularities over a suitable corpus. Although IDyOT is designed at a more abstract level than the neural, making it a model of cognitive function rather than of neural processing, there are strong similarities between the composite structures developed in IDyOT and the NBA. We hypothesize that these similarities form the basis of a combined architecture in which the individual strengths of each architecture are integrated. We outline and discuss the characteristics of this combined architecture, emphasizing the representation and processing of conceptual structures.

    The role of recurrent networks in neural architectures of grounded cognition: learning of control

    Recurrent networks have been used as neural models of language processing, with mixed results. Here, we discuss the role of recurrent networks in a neural architecture of grounded cognition. In particular, we discuss how the control of binding in this architecture can be learned. We trained a simple recurrent network (SRN) and a feedforward network (FFN) for this task. The results show that information from the architecture is needed as input for these networks to learn control of binding. Thus, both control systems are recurrent. We found that the recurrent system consisting of the architecture and an SRN or an FFN as a "core" can learn basic (but recursive) sentence structures. Problems with control of binding arise when the system with the SRN is tested on a number of new sentence structures. In contrast, control of binding for these structures succeeds with the FFN. Yet, for some structures with (unlimited) embeddings, difficulties arise due to dynamical binding conflicts in the architecture itself. In closing, we discuss potential future developments of the architecture presented here.
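    The control loop described above, in which a network receives architecture feedback as input and emits binding-control signals, can be sketched as a minimal Elman-style SRN. All sizes, weights, and the stand-in feedback signal below are illustrative assumptions; the point is only the recurrent structure: input concatenates the word code with feedback from the architecture, and the hidden state feeds back into itself.

```python
# Minimal untrained Elman-style SRN sketch (numpy); illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_word, n_feedback, n_hidden, n_control = 5, 3, 8, 4

W_in = rng.normal(size=(n_hidden, n_word + n_feedback))
W_rec = rng.normal(size=(n_hidden, n_hidden))
W_out = rng.normal(size=(n_control, n_hidden))

def srn_step(word, feedback, hidden):
    """One time step: the hidden state mixes the new input with its
    own previous value, making the controller recurrent."""
    x = np.concatenate([word, feedback])
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    control = W_out @ hidden          # binding-control signal
    return control, hidden

hidden = np.zeros(n_hidden)
for t in range(3):                    # a three-word "sentence"
    word = np.eye(n_word)[t]          # one-hot word code
    feedback = rng.normal(size=n_feedback)  # stand-in for architecture feedback
    control, hidden = srn_step(word, feedback, hidden)

assert control.shape == (n_control,)
```

    Replacing the `W_rec @ hidden` term with zeros yields the FFN variant discussed in the abstract: the network itself is then feedforward, and the only recurrence is the loop through the architecture's feedback.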

    Training neural networks to encode symbols enables combinatorial generalization

    Combinatorial generalization - the ability to understand and produce novel combinations of already familiar elements - is considered to be a core capacity of the human mind and a major challenge to neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for the purpose of representing symbols. In this paper we introduce a novel way of representing symbolic structures in connectionist terms - the vectors approach to representing symbols (VARS), which allows training standard neural architectures to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that neural networks not only can learn to produce VARS representations, but in doing so they achieve combinatorial generalization in their symbolic and non-symbolic output. This adds to other recent work that has shown improved combinatorial generalization under specific training conditions, and raises the question of whether specific mechanisms or training routines are needed to support symbolic processing.
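    The key move above is giving a standard network an explicit vector target for symbolic structure at its output layer. A hedged sketch of that idea (the encoding scheme, vocabularies, and function names below are assumptions for illustration, not the paper's code): represent a role-filler structure as concatenated one-hot slots, so that even a never-seen combination of familiar elements has a well-defined output vector.

```python
# Toy role-filler output encoding in the spirit of VARS (assumed scheme).
fillers = ["circle", "square", "triangle"]
roles = ["left-of", "right-of"]

def one_hot(item, vocab):
    return [1.0 if v == item else 0.0 for v in vocab]

def encode(filler_a, role, filler_b):
    # output target: filler slot + role slot + filler slot, each one-hot
    return (one_hot(filler_a, fillers)
            + one_hot(role, roles)
            + one_hot(filler_b, fillers))

# a combination absent from training still has a valid, unique code,
# which is what makes combinatorial generalization measurable
novel = encode("triangle", "left-of", "circle")
assert len(novel) == 2 * len(fillers) + len(roles)
assert sum(novel) == 3.0  # exactly one active unit per slot
```

    Training a conventional network to produce such targets is then an ordinary supervised task, with no symbol-specific machinery inside the network itself.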

    From unified to specific theories of cognition

    Abstract: This article discusses the unity of cognitive science that seemed to emerge in the 1950s, based on the computational view of cognition. This unity would entail that there is a single set of mechanisms (i.e. algorithms) for all cognitive behavior, in particular at the level of productive human cognition as exemplified in language and reasoning. In turn, this would imply that theories in psychology, and in cognitive science in general, would consist of algorithms based on symbol manipulation as found in digital computing. However, a number of developments in recent decades cast doubt on this unity of cognitive science. There are also fundamental problems with the claim that cognitive theories are just algorithms. This article discusses some of these problems and suggests that, instead of unified theories of cognition, specific mechanisms for cognitive behavior in specific cognitive domains could be needed, with architectures that are tailor-made for specific forms of implementation. A sketch of such an architecture for language is presented, based on modifiable connection paths in small-world-like network structures.
    Keywords: Connection Paths; Control of Activation; Small-world Networks; Symbol Manipulation; Unity of Cognition
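    The small-world network structures mentioned for the sketched language architecture can be illustrated with a Watts-Strogatz-style construction: a regular ring lattice whose edges are occasionally rewired into long-range shortcuts. This is a generic illustration of that graph family under assumed parameters, not the article's model.

```python
# Watts-Strogatz-style small-world graph (illustrative sketch).
import random

def small_world(n, k, p, seed=0):
    """Ring lattice of n nodes, k neighbours per side, with each edge
    rewired to a random long-range target with probability p."""
    rng = random.Random(seed)
    lattice = [(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)]
    edges = set()
    for a, b in lattice:
        if rng.random() < p:
            b = rng.randrange(n)      # candidate long-range shortcut
        while b == a or (a, b) in edges or (b, a) in edges:
            b = rng.randrange(n)      # avoid self-loops and duplicate edges
        edges.add((a, b))
    return edges

g = small_world(20, 2, 0.1)
assert len(g) == 20 * 2               # one edge per (node, neighbour) pair
```

    Mostly-local connectivity with a few shortcuts gives short paths between any two nodes while keeping wiring cheap, which is the property that makes such structures attractive as a substrate for modifiable connection paths.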