7 research outputs found

    A mathematical neural process model of language comprehension, from syllable to sentence

    Human beings effortlessly perceive structural meaning from a biophysical signal such as speech or sign. An explanation of the processes underlying this phenomenon needs to account for both the properties of the signal and those of the neural architecture deploying linguistic knowledge. This article approaches the question from a mathematical perspective, providing a neurophysiologically grounded explanation of the process underlying linguistic structure building in the brain. To define the properties of the signal, we rely on the mathematical linguistics of DisCoCat (Coecke et al. 2010), a syntactically sensitive formalism of distributional meaning that makes no claims about the neural processing underlying sentence comprehension. The neuroscientific architecture derives from a cue-integration model of language comprehension (Martin 2020), in which the brain infers the latent structure of a cue or signal based on knowledge of the language, through a process of neural coordinate transform. In this work, we first demonstrate how the DisCoCat formalism can interface with the neurophysiological process model and describe how the resulting incremental process model can return a formal description at each timestep. Second, we present an extension showing how structure building from phonological segments to syllables can be modelled within a categorial grammar setup, and integrate it with our process model. Third, we introduce a temporal metric interpretation of the transformations occurring within the extended DisCoCat formalism at each level of representation (also known as categorical enrichment). As a result of this specification, we obtain a mechanistic account of neural oscillatory readouts during language comprehension.
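    As a minimal illustration of the DisCoCat composition step that the abstract builds on (the example sentence and notation are mine, not taken from the paper): with pregroup types n for nouns and n^r s n^l for a transitive verb, the grammatical reduction n · (n^r s n^l) · n → s induces a linear map on the corresponding vector spaces, and the sentence meaning is obtained by contraction:

        \[
        \overrightarrow{\text{Alice likes Bob}}
          \;=\; (\epsilon_N \otimes 1_S \otimes \epsilon_N)
                \bigl(\overrightarrow{\text{Alice}} \otimes \overline{\text{likes}} \otimes \overrightarrow{\text{Bob}}\bigr),
        \qquad \overline{\text{likes}} \in N \otimes S \otimes N,
        \]

    where \(\epsilon_N : N \otimes N \to \mathbb{R}\) is the evaluation map that contracts the verb's subject and object wires. The abstract's contribution is to interface this kind of static composition with an incremental, neurally grounded process model.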

    Inferring the nature of linguistic computations in the brain

    Sentences contain structure that determines their meaning beyond that of individual words. An influential study by Ding and colleagues (2016) used frequency tagging of phrases and sentences to show that the human brain is sensitive to structure, finding peaks of neural power at the rates at which structures were presented. Since then, there has been a rich debate on how best to explain this pattern of results, with profound impact on the language sciences. Models that use hierarchical structure building, as well as models based on associative sequence processing, can predict the neural response, creating an inferential impasse as to which class of models explains the nature of the linguistic computations reflected in the neural readout. In the current manuscript, we discuss pitfalls and common fallacies in the conclusions drawn in the literature, illustrated with various simulations. We conclude that these neural data, and any like them, are on their own insufficient for inferring the neural operations underlying sentence processing. We discuss how best to evaluate models and how to approach the modeling of neural readouts of sentence processing in a manner that remains faithful to cognitive, neural, and linguistic principles.
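    To make the frequency-tagging logic concrete, here is a toy simulation (mine, not one of the models analysed in the paper): a response that tracks only the syllabic rhythm predicts a single spectral peak at the syllable rate, whereas any response that additionally marks phrase and sentence boundaries, whether driven by hierarchical structure building or by learned sequence statistics, adds peaks at the slower tagged rates.

        # Toy frequency-tagging simulation (illustrative only; assumes the
        # 1/2/4 Hz sentence/phrase/syllable rates of Ding et al., 2016).
        import numpy as np

        fs = 100.0                      # sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)    # 60 s of stimulation

        sent_rate, phrase_rate, syll_rate = 1.0, 2.0, 4.0

        # Response 1: tracks the syllabic rhythm only.
        syllable_only = np.cos(2 * np.pi * syll_rate * t)

        # Response 2: additionally responds at phrase and sentence closures,
        # a stand-in for either structure building or chunk statistics.
        structured = (np.cos(2 * np.pi * syll_rate * t)
                      + 0.5 * np.cos(2 * np.pi * phrase_rate * t)
                      + 0.5 * np.cos(2 * np.pi * sent_rate * t))

        def power_at(x, f0):
            """Power of signal x at frequency f0 (Hz)."""
            spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            return spec[np.argmin(np.abs(freqs - f0))]

        for label, resp in [("syllable-only", syllable_only),
                            ("structured", structured)]:
            peaks = {f: round(power_at(resp, f), 1)
                     for f in (sent_rate, phrase_rate, syll_rate)}
            print(label, peaks)

    Both classes of model discussed in the paper can generate the second kind of response, which is exactly the inferential impasse the abstract describes.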

    Hierarchical structure in language and action: A formal comparison

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasing size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this paper, we compare the conceptual and formal properties of hierarchy in language and action using tools from category theory. We show that the strong compositionality of language requires a formalism that describes the mapping between sentences and their syntactic structures as an order-embedded Galois connection, while the weak compositionality of actions only requires a monotonic mapping between action sequences and their goals, which we model as a monotone Galois connection. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences, and discuss the implications for the way both systems are represented in the brain.
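    For reference, the standard order-theoretic notions the comparison relies on (the formulation below is mine, condensed from common usage rather than quoted from the paper): a monotone Galois connection between posets \((P, \leq)\) and \((Q, \sqsubseteq)\) is a pair of monotone maps \(f : P \to Q\) and \(g : Q \to P\) such that

        \[
        f(p) \sqsubseteq q \;\iff\; p \leq g(q) \qquad \text{for all } p \in P,\; q \in Q.
        \]

    Such a connection is order-embedded when, in addition, \(g \circ f = \mathrm{id}_P\), so that \(p \leq p' \iff f(p) \sqsubseteq f(p')\) and no distinctions among the ordered elements are collapsed by the mapping. On the paper's analysis, the sentence-to-structure mapping of language demands this stronger, embedding property, while the action-sequence-to-goal mapping needs only the monotone connection.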