4 research outputs found

    A systematic review on artifact removal and classification techniques for enhanced MEG-based BCI systems

    Patients with neurological diseases may be completely paralyzed and unable to move, yet still able to think; their brain activity is then the only means by which they can interact with their environment. Brain-Computer Interface (BCI) research attempts to create tools that support subjects with disabilities, and the field has expanded rapidly over the past few decades owing to interest in creating a new kind of human-to-machine communication. Because magnetoencephalography (MEG) offers better spatial and temporal resolution than other approaches, it is used to measure brain activity non-invasively. The recorded signal includes signals related to brain activity as well as noise and artifacts from numerous sources. MEG can have a low signal-to-noise ratio because the magnetic fields generated by cortical activity are small compared with those of other artifacts and ambient noise. By using the right techniques for noise and artifact detection and removal, the signal-to-noise ratio can be increased. This article analyses various methods for removing artifacts as well as classification strategies. Additionally, it offers a study of the influence of Deep Learning models on BCI systems. Furthermore, the various challenges in collecting and analyzing MEG signals, as well as possible research directions in MEG-based BCI, are examined.
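The abstract's core claim, that suppressing noise and artifacts raises the signal-to-noise ratio of a weak cortical signal, can be illustrated with a toy sketch. This is not any method from the review; it uses a plain moving-average low-pass filter on a simulated 10 Hz rhythm buried in broadband noise, with all signal parameters chosen arbitrarily for illustration:

```python
import math
import random

def moving_average(signal, window):
    """Simple low-pass filter: average each sample over a sliding window."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def snr_db(clean, observed):
    """SNR in dB of `observed` against the known clean signal."""
    sig = sum(c * c for c in clean)
    noise = sum((o - c) ** 2 for o, c in zip(observed, clean))
    return 10 * math.log10(sig / noise)

random.seed(0)
fs = 1000  # hypothetical sampling rate, Hz
t = [i / fs for i in range(fs)]
clean = [math.sin(2 * math.pi * 10 * x) for x in t]   # 10 Hz "cortical" rhythm
noisy = [c + random.gauss(0, 0.8) for c in clean]     # broadband noise/artifact
denoised = moving_average(noisy, 21)

print(f"SNR before: {snr_db(clean, noisy):.1f} dB")
print(f"SNR after:  {snr_db(clean, denoised):.1f} dB")
```

Real MEG pipelines use far more sophisticated techniques (e.g., ICA or signal-space projection), but the SNR bookkeeping is the same idea.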

    Language and action in Broca’s area: Computational differentiation and cortical segregation

    Actions have been proposed to follow hierarchical principles similar to those hypothesized for language syntax. These structural similarities are claimed to be reflected in the common involvement of certain neural populations of Broca’s area, in the Inferior Frontal Gyrus (IFG). In this position paper, we follow an influential hypothesis in linguistic theory to introduce the syntactic operation Merge and the corresponding motor/conceptual interfaces. We argue that action hierarchies do not follow the same principles that rule language syntax. We propose that hierarchy in the action domain lies in predictive processing mechanisms mapping sensory inputs onto statistical regularities of action-goal relationships. At the cortical level, distinct subregions of Broca’s area appear to support different types of computations across the two domains. We argue that anterior BA44 is a major hub for the implementation of the syntactic operation Merge. On the other hand, posterior BA44 is recruited in selecting premotor mental representations based on the information provided by contextual signals. This functional distinction is corroborated by a recent meta-analysis (Papitto, Friederici, & Zaccarella, 2020). We conclude by suggesting that action and language can meet only where the interfaces transfer abstract computations either to the external world or to the internal mental world.
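In minimalist syntax, Merge is standardly defined as unordered set formation: Merge(X, Y) = {X, Y}, applied recursively to build hierarchy. A minimal sketch (the example phrase and the `depth` helper are illustrative choices, not from the paper):

```python
def merge(x, y):
    """Syntactic Merge: combine two objects into an unordered set {X, Y}."""
    return frozenset([x, y])

def depth(obj):
    """Tree-geometric depth of a syntactic object built by Merge."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(m) for m in obj)
    return 0  # lexical item

# Build {read, {the, book}} bottom-up (labels omitted for simplicity).
dp = merge("the", "book")
vp = merge("read", dp)
print(vp)
print(depth(vp))  # depth 2: one Merge nested inside another
```

The use of `frozenset` captures the claim that Merge output is unordered; linear order is imposed only at the sensorimotor interface.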

    Neocortical activity tracks the hierarchical linguistic structures of self-produced speech during reading aloud

    How the human brain uses self-generated auditory information during speech production is rather unsettled. Current theories of language production consider a feedback monitoring system, which monitors the auditory consequences of speech output, and an internal monitoring system, which makes predictions about the auditory consequences of speech before its production. To gain novel insights into the underlying neural processes, we investigated the coupling between neuromagnetic activity and the temporal envelope of the heard speech sounds (i.e., cortical tracking of speech) in a group of adults who 1) read a text aloud, 2) listened to a recording of their own speech (i.e., playback), and 3) listened to another speech recording. Reading aloud was used here as a particular form of speech production that shares various processes with natural speech. During reading aloud, the reader's brain tracked the slow temporal fluctuations of the speech output. Specifically, auditory cortices tracked phrases (<1 Hz), but to a lesser extent than during the two speech listening conditions. Also, the tracking of words (2–4 Hz) and syllables (4–8 Hz) occurred at the parietal opercula during reading aloud and at auditory cortices during listening. Directionality analyses were then used to gain insight into the monitoring systems involved in the processing of self-generated auditory information. These analyses revealed that the cortical tracking of speech at <1 Hz, 2–4 Hz, and 4–8 Hz is dominated by speech-to-brain directional coupling during both reading aloud and listening; i.e., the cortical tracking of speech during reading aloud mainly entails auditory feedback processing. Nevertheless, brain-to-speech directional coupling at 4–8 Hz was enhanced during reading aloud compared with listening, likely reflecting the establishment of predictions about the auditory consequences of speech before production. These data bring novel insights into how auditory verbal information is tracked by the human brain during perception and self-generation of connected speech. (Peer reviewed; funded under EC H2020 grant 743562, project DysTrack.)
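The central measure here, cortical tracking of the speech envelope, can be sketched in miniature: extract the slow amplitude envelope of a "speech" signal and quantify its coupling to a "neural" trace. This toy uses plain Pearson correlation rather than the band-limited coherence and directionality measures the study actually employs, and all signals and parameters are simulated assumptions:

```python
import math
import random

def envelope(signal, window=50):
    """Crude temporal envelope: rectify, then smooth with a moving average."""
    rect = [abs(s) for s in signal]
    half = window // 2
    return [sum(rect[max(0, i - half):i + half + 1])
            / (min(len(rect), i + half + 1) - max(0, i - half))
            for i in range(len(rect))]

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

random.seed(1)
fs = 1000
t = [i / fs for i in range(2 * fs)]
# Toy "speech": a 200 Hz carrier modulated at a syllable-like 5 Hz rate.
modulation = [0.5 * (1 + math.sin(2 * math.pi * 5 * x)) for x in t]
speech = [m * math.sin(2 * math.pi * 200 * x) for m, x in zip(modulation, t)]
# Toy "neural" trace that tracks the envelope, plus noise.
neural = [m + random.gauss(0, 0.3) for m in modulation]

env = envelope(speech)
print(f"envelope-neural correlation: {correlation(env, neural):.2f}")
```

A real analysis would compute coherence per frequency band (<1 Hz, 2–4 Hz, 4–8 Hz) and use directed measures to separate speech-to-brain from brain-to-speech coupling.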

    Natural Language Syntax Complies with the Free-Energy Principle

    Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design - such as "minimal search" criteria from theoretical syntax - adhere to the FEP. This affords a greater degree of explanatory power to the FEP - with respect to higher language functions - and offers linguistics a grounding in first principles with respect to computability. We show how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align concerns of linguists with the normative account of self-organization furnished by the FEP, by marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.
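The Lempel-Ziv-based complexity estimate mentioned in the abstract can be illustrated with the classic LZ76 phrase-counting measure, which upper-bounds Kolmogorov complexity: a sequence is parsed left to right into the smallest number of phrases not previously seen, so repetitive (compressible) sequences score low and irregular ones score high. This is a generic sketch, not the authors' implementation, and the example strings are arbitrary:

```python
import random

def lz76_complexity(s):
    """Lempel-Ziv (1976) complexity: number of phrases in an exhaustive
    left-to-right parse, where each phrase is extended until it is no
    longer a substring of everything preceding its last symbol."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1  # one new phrase closed
        i += k
    return c

periodic = "abab" * 10            # highly compressible: parses as a | b | abab...
random.seed(2)
irregular = "".join(random.choice("ab") for _ in range(40))
print(lz76_complexity(periodic), lz76_complexity(irregular))
```

On this reading, a "cheap" syntactic derivation is one whose workspace description stays compressible; comparing complexity scores across candidate operations is one way such an estimate could discriminate legal from illegal steps.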