
    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the richest and most distinctively human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena.
My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again mostly on the basis of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and between inference and association.
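The core principle invoked in this abstract, that nearby vectors in a semantic space carry similar meanings, can be illustrated with a minimal sketch. The three-dimensional embeddings below are invented toy values; actual vector space semantic models learn high-dimensional vectors from corpus data:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; values near 1.0 mean the
    vectors point in almost the same direction, i.e. are semantically close."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings (hand-made for illustration only).
embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "cat":   [0.85, 0.75, 0.20],
    "truck": [0.10, 0.20, 0.90],
}

# Semantically related words sit close together in the vector space,
# unrelated ones far apart.
print(cosine_similarity(embeddings["dog"], embeddings["cat"]))
print(cosine_similarity(embeddings["dog"], embeddings["truck"]))
```

On these toy vectors the dog/cat similarity is close to 1.0 while dog/truck is much lower, which is the geometric regularity the dissertation's semantic account relies on.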

    Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks

    Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.

    Second Workshop on Modelling of Objects, Components and Agents

    This report contains the proceedings of the workshop Modelling of Objects, Components, and Agents (MOCA'02), August 26-27, 2002. The workshop is organized by the 'Coloured Petri Net' Group at the University of Aarhus, Denmark, and the 'Theoretical Foundations of Computer Science' Group at the University of Hamburg, Germany. The homepage of the workshop is: http://www.daimi.au.dk/CPnets/workshop02

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) features and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    The Representation of Objects in the Brain, and Its Link with Semantic Memory and Language: a Conceptual Theory with the Support of a Neurocomputational Model

    Recognition of objects, their representation and retrieval in memory, and the link of this representation with words is a hard cognitive problem, which can be summarized with the term “lexico-semantic memory”. Several recent cognitive theories suggest that the semantic representation of objects is a distributed process, which engages different brain areas in the sensory and motor regions. A further common hypothesis is that each region is organized by conceptual features that are highly correlated and neurally contiguous. These theories may be useful to explain the results of clinical tests on patients with brain lesions, who exhibit deficits in recognizing objects from words or in evoking words from objects, or to explain the use of appropriate words in bilingual subjects. The study of the cognitive aspects of lexico-semantic memory representation may benefit from the use of mathematical models and computer simulations. The aim of this chapter is to describe a theoretical model of the lexico-semantic system, which can be used by cognitive neuroscientists to summarize conceptual theories into a rigorous quantitative framework, to test the ability of these theories to reproduce real pieces of behavior in healthy and pathological subjects, and to suggest new hypotheses for subsequent testing. The chapter is structured as follows: first, the basic assumptions on cognitive aspects of the lexico-semantic memory model are clearly presented; the same aspects are subsequently illustrated via the results of computer simulations using abstract object representations as input to the model. Equations are then reported in an Appendix for readers interested in mathematical issues.
The model is based on the following main assumptions: i) an object is represented as a collection of features, topologically ordered according to a similarity principle in different brain areas; ii) the features belonging to the same object are linked together via a Hebbian process during a phase in which objects are presented individually; iii) features are described via neural oscillators in the gamma band; as a consequence, different object representations can be maintained simultaneously in memory via synchronization of the corresponding features (the binding and segmentation problem); iv) words are represented in a lexical area devoted to recognition of words from phonemes; v) words in the lexical area and the features representing objects are linked together via a Hebbian mechanism during a learning phase in which a word is presented together with the corresponding object; vi) the same object representation can be associated with two alternative words (for instance, to represent bilingualism); in this case, the two words are connected via inhibitory synapses, to implement a competition between them; vii) the choice of words is further selected by an external inhibitory control system, which suppresses words that do not correspond to the present objective (for instance, to choose between alternative languages). Several examples of the model's capabilities are presented, using abstract words. These examples include: the ability to retrieve objects and words even from incomplete or corrupted information on object features; the ability to establish a semantic link between words with superimposed features; and the process of learning a second language (L2) with the support of a previously known language (L1), representing neurocognitive aspects of bilingualism.
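Assumptions (ii) and (v) above, Hebbian linking of co-active units followed by retrieval from degraded input, can be caricatured in a few lines. This is a toy sketch under invented parameters (feature count, learning rate, number of pairings), not the chapter's actual oscillator-based model:

```python
# Hebbian association between an object's feature units and a single word
# unit, then retrieval from an incomplete feature pattern.
N_FEATURES = 20
LEARNING_RATE = 0.1

def hebbian_step(weights, features, word_active=1.0):
    """Strengthen each feature-to-word link in proportion to co-activity
    (the word unit and the feature unit firing together)."""
    return [w + LEARNING_RATE * word_active * f for w, f in zip(weights, features)]

def word_input(weights, features):
    """Total input the word unit receives from the currently active features."""
    return sum(w * f for w, f in zip(weights, features))

# A toy object: the first 8 of 20 features are active.
obj = [1.0] * 8 + [0.0] * 12

# Learning phase: the word is repeatedly presented together with the object.
weights = [0.0] * N_FEATURES
for _ in range(10):
    weights = hebbian_step(weights, obj)

# Retrieval with corrupted input: half of the object's features are missing.
partial = list(obj)
for i in range(0, 8, 2):
    partial[i] = 0.0

full = word_input(weights, obj)          # input from the intact object
degraded = word_input(weights, partial)  # reduced but still substantial
```

Because the learned weights concentrate on the object's features, the word unit still receives roughly half its normal input from the half-deleted pattern, which is the kind of pattern-completion behavior the chapter demonstrates for incomplete or corrupted object information.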

    Mental content: consequences of the embodied mind paradigm

    The central difference between objectivist cognitivist semantics and embodied cognition consists in the fact that the latter is, in contrast to the former, mindful of binding meaning to context-sensitive mental systems. According to Lakoff and Johnson's experientialism, conceptual structures arise from preconceptual kinesthetic image-schematic and basic-level structures. Gallese and Lakoff introduced the notion of exploiting sensorimotor structures for higher-level cognition. Three different types of X-schemas realise three types of environmentally embedded simulation: areas that control movements in peri-personal space; canonical neurons of the ventral premotor cortex that fire when a graspable object is represented; and the firing of mirror neurons while perceiving certain movements of conspecifics. …

    Semantic Memory

    How is it that we know what a dog and a tree are, or, for that matter, what knowledge is? Our semantic memory consists of knowledge about the world, including concepts, facts and beliefs. This knowledge is essential for recognizing entities and objects, and for making inferences and predictions about the world. In essence, our semantic knowledge determines how we understand and interact with the world around us. In this chapter, we examine semantic memory from cognitive, sensorimotor, cognitive neuroscientific, and computational perspectives. We consider the cognitive and neural processes (and biases) that allow people to learn and represent concepts, and discuss how and where in the brain sensory and motor information may be integrated to allow for the perception of a coherent “concept”. We suggest that our understanding of semantic memory can be enriched by considering how semantic knowledge develops across the lifespan within individuals.

    Breakdown of category-specific word representations in a brain-constrained neurocomputational model of semantic dementia

    The neurobiological nature of semantic knowledge, i.e., the encoding and storage of conceptual information in the human brain, remains a poorly understood and hotly debated subject. Clinical data on semantic deficits and neuroimaging evidence from healthy individuals have suggested multiple cortical regions to be involved in the processing of meaning. These include semantic hubs (most notably, the anterior temporal lobe, ATL) that take part in semantic processing in general, as well as sensorimotor areas that process specific aspects/categories according to their modality. Biologically inspired neurocomputational models can help elucidate the exact roles of these regions in the functioning of the semantic system and, importantly, in its breakdown in neurological deficits. We used a neuroanatomically constrained computational model of frontotemporal cortices implicated in word acquisition and processing, and adapted it to simulate and explain the effects of semantic dementia (SD) on word processing abilities. SD is a devastating, yet insufficiently understood progressive neurodegenerative disease, characterised by semantic knowledge deterioration that is hypothesised to be specifically related to neural damage in the ATL. The behaviour of our brain-based model is in full accordance with clinical data—namely, word comprehension performance decreases as SD lesions in the ATL progress, whereas word repetition abilities remain less affected. Furthermore, our model makes predictions about lesion- and category-specific effects of SD: our simulation results indicate that word processing should be more impaired for object- than for action-related words, and that degradation of white matter should produce more severe consequences than the same proportion of grey matter decay. In sum, the present results provide a neuromechanistic explanatory account of cortical-level language impairments observed during the onset and progress of semantic dementia.
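The progressive-lesion idea described in this abstract, that retrieval of a stored pattern weakens as more of its supporting connections are destroyed, can be sketched very simply. Everything here (pattern size, one-to-one weights, lesion fractions) is an invented caricature for illustration, not the authors' brain-constrained model:

```python
# Progressive "lesioning" of the connections supporting a stored pattern.
N = 30

# A stored word-meaning pattern and idealised one-to-one supporting weights.
pattern = [1.0 if i % 3 == 0 else 0.0 for i in range(N)]
weights = list(pattern)

def retrieval_strength(ws, pat):
    """Summed support the pattern receives from the surviving connections."""
    return sum(w * p for w, p in zip(ws, pat))

strengths = []
for fraction in (0.0, 0.25, 0.5, 0.75):
    lesioned = list(weights)
    for i in range(int(fraction * N)):  # deterministic, progressive damage
        lesioned[i] = 0.0
    strengths.append(retrieval_strength(lesioned, pattern))

# strengths falls monotonically as the simulated lesion grows, mirroring
# the graded comprehension decline reported for progressing ATL damage.
print(strengths)
```

Even this toy version shows the qualitative signature the model reproduces: performance degrades gradually rather than all at once, because the stored pattern is supported by many redundant connections.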