3 research outputs found

    M/EEG source localization with multi-scale time-frequency dictionaries

    Magnetoencephalography (MEG) and electroencephalography (EEG) source localization is an ill-posed problem due to the small number of sensors measuring brain activity. This results in a non-unique source estimate. To identify an appropriate solution out of an infinite set of possible candidates, the problem requires setting certain constraints depending on the assumptions or a priori knowledge about the source distribution. Different constraints have been proposed so far, mainly ones that impose sparsity on the source reconstruction in both the standard and time-frequency domains. Source localization in the time-frequency domain has already been investigated using a Gabor dictionary in both a convex (TF-MxNE) and a non-convex way (iterative reweighted TF-MxNE). The iterative reweighted (ir)TF-MxNE solver has been shown to outperform TF-MxNE in both source recovery and amplitude bias. However, the choice of an optimal dictionary remains unsolved. Because the measured data mix short transient signals (right after the stimulus onset) with slower brain waves, it is difficult for a single dictionary to explain both signal types sparsely at the same time. In this work, we introduce a method to improve the source estimation by relying on a multi-scale dictionary, i.e. multiple dictionaries with different scales concatenated to fit short transients and slow waves simultaneously. We compare our results with irTF-MxNE on realistic simulations; we then use somatosensory data to demonstrate the benefits of the approach in terms of reduced leakage (mixed time courses), temporal smoothness, and detection of both signal types.
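    The multi-scale dictionary described above can be pictured as the concatenation of Gabor (windowed-Fourier) dictionaries built at different window lengths: short windows for transients, long windows for slow waves. A minimal NumPy sketch with hypothetical window and step sizes, not the authors' implementation (in MNE-Python, the related tf_mixed_norm solver accepts tuples of window sizes and steps to the same effect):

```python
import numpy as np

def gabor_dictionary(n_times, wsize, tstep):
    """Build a (n_atoms, n_times) dictionary of Gaussian-windowed
    complex exponentials (Gabor atoms) at one window scale."""
    starts = np.arange(0, n_times - wsize + 1, tstep)
    freqs = np.arange(wsize // 2 + 1)  # non-negative DFT bins of the window
    t = np.arange(wsize)
    window = np.exp(-0.5 * ((t - wsize / 2) / (wsize / 4)) ** 2)
    atoms = []
    for s in starts:
        for f in freqs:
            atom = np.zeros(n_times, dtype=complex)
            atom[s:s + wsize] = window * np.exp(2j * np.pi * f * t / wsize)
            atoms.append(atom / np.linalg.norm(atom))
    return np.array(atoms)

n_times = 256
# Short windows capture transients; long windows capture slow waves
D_short = gabor_dictionary(n_times, wsize=16, tstep=8)
D_long = gabor_dictionary(n_times, wsize=64, tstep=32)
# Concatenation of the two scales gives the multi-scale dictionary
D_multi = np.vstack([D_short, D_long])
```

    A sparse solver run against D_multi can then pick short atoms for the post-stimulus transient and long atoms for the slow wave, which is the intuition behind the leakage and smoothness improvements reported above.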

    A novel EEG based linguistic BCI

    As long as a human being can think coherently, physical limitations, no matter how severe, should never become disabling. Thinking and cognition are performed and expressed through language, which is the most natural form of human communication. The use of covert speech tasks for BCIs has been successfully achieved for invasive and non-invasive systems. In this work, by incorporating the most recent discoveries on the spatial, temporal, and spectral signatures of word production, a novel system is designed that is custom-built for linguistic tasks. Other than paying attention and waiting for the onset cue, this BCI requires no cognitive effort from the user: it operates on automatic linguistic functions of the brain in the first 312 ms post onset, which are completely outside the user's control and immune to inconsistencies. With four classes, this online BCI achieves a classification accuracy of 82.5%. Each word produces a signature as unique as its phonetic structure, and the number of covert speech tasks used in this work is limited by computational power. We demonstrated that this BCI can successfully use wireless dry-electrode EEG systems, which are becoming as capable as traditional laboratory-grade systems. This frees the potential user from the confounds of the lab, facilitating real-world application. Considering that the number of words used in daily life does not exceed 2000, the number of words used by this type of novel BCI may indeed reach this number in the future, with no need to change the current system design or experimental protocol. As a promising step towards noninvasive synthetic telepathy, this system has the potential not only to help those in desperate need, but to completely change the way we communicate with our computers, as covert speech is much easier than any form of manual communication and control.
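    The pipeline implied above, cropping each epoch to the first 312 ms after the onset cue and classifying it into one of four word classes, can be sketched on synthetic data. This is an illustrative toy (nearest-class-mean classifier, made-up sampling rate, channel count, and noise level), not the system actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250.0                        # hypothetical sampling rate (Hz)
n_keep = int(round(0.312 * sfreq))   # samples in the first 312 ms post onset
n_ch, n_trials = 8, 40               # hypothetical montage size / trials per class
n_full = int(round(0.5 * sfreq))     # hypothetical full epoch length
n_classes = 4                        # four covert-speech word classes

# Synthetic epochs: each class gets its own spatio-temporal template plus noise
templates = rng.standard_normal((n_classes, n_ch, n_full))
X = np.concatenate([templates[c] + 0.5 * rng.standard_normal((n_trials, n_ch, n_full))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_trials)

# Keep only the first 312 ms of each epoch, then flatten channels x time
X = X[..., :n_keep].reshape(len(y), -1)

# Split odd/even trials, classify each test trial by its nearest class mean
train = np.arange(len(y)) % 2 == 0
means = np.stack([X[train & (y == c)].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(((X[~train][:, None] - means) ** 2).sum(axis=-1), axis=1)
acc = float((pred == y[~train]).mean())
```

    The point of the sketch is the fixed early analysis window: because the discriminative activity is confined to the first 312 ms, the classifier never sees, and cannot depend on, anything the user does after that window.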

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrate of semantic representations, many questions are left unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated? In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to the understanding of where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by the words that are perceived through the senses) and conceptual ones (i.e., the information that is built via a complex integration of multiple perceptual features). In the second part, I present the results of the studies I conducted in order to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. 
First, I show how the representational spaces retrieved with different behavioral and corpus-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, thanks to a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of an early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space through different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the signal feature encoding them, while sharing a similar temporal evolution.
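    The MEG result above contrasts two measures derived from the same complex time-frequency coefficients: inter-trial phase coherence (phase consistency across trials) and spectral power (magnitude across trials). A minimal synthetic NumPy sketch of how each is computed, with made-up sampling rate, trial counts, and a single frequency of interest (not the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 60, 200
sfreq, f0 = 100.0, 10.0   # hypothetical sampling rate and frequency of interest
t = np.arange(n_times) / sfreq

# Synthetic trials: a phase-locked 10 Hz component buried in noise
trials = np.cos(2 * np.pi * f0 * t) + rng.standard_normal((n_trials, n_times))

# Complex time-frequency coefficients at f0 via a Morlet-like wavelet
n_cycles = 5
wt = np.arange(-1.0, 1.0, 1.0 / sfreq)
sigma = n_cycles / (2 * np.pi * f0)
wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-0.5 * (wt / sigma) ** 2)
tfr = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

# Inter-trial phase coherence: length of the mean unit phase vector (0..1);
# high when the phase at f0 is consistent across trials
itc = np.abs(np.mean(tfr / np.abs(tfr), axis=0))

# Spectral power: mean squared magnitude across trials, phase-insensitive
power = np.mean(np.abs(tfr) ** 2, axis=0)
```

    Because ITC discards amplitude and power discards phase, the two measures can dissociate, which is what makes the reported split (phase coherence carrying perceptual dimensions, power changes carrying conceptual ones) interpretable.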