
    Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-sample Generalization

    Brain imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple comparisons problem, and limited scaling to currently growing data repositories. Yet the ever-increasing information granularity of neuroimaging data repositories has launched a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present paper portrays commonalities and differences between long-standing classical inference and upcoming generalization inference relevant for conducting neuroimaging research.
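    The mass-univariate approach described above can be sketched in a few lines. This is a minimal illustration with simulated data and hypothetical dimensions, not any specific published pipeline: one independent one-sample t-test per voxel, followed by a Bonferroni adjustment for the multiple comparisons problem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000          # hypothetical dimensions
# Simulated per-subject contrast values; a true effect in the first 100 voxels only
data = rng.normal(size=(n_subjects, n_voxels))
data[:, :100] += 2.0

# Classical mass-univariate inference: one independent one-sample t-test per voxel
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)

# Bonferroni correction addresses the resulting multiple comparisons problem
alpha = 0.05
significant = p_vals < alpha / n_voxels
```

    Testing each voxel in isolation is what lets this scale despite high dimensionality, at the cost of the severe corrected threshold (here 0.05 / 5000 per voxel).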

    RFNet: Riemannian Fusion Network for EEG-based Brain-Computer Interfaces

    This paper presents the novel Riemannian Fusion Network (RFNet), a deep neural architecture for learning spatial and temporal information from electroencephalogram (EEG) signals for a range of EEG-based brain-computer interface (BCI) tasks and applications. The spatial information relies on spatial covariance matrices (SCMs) of multi-channel EEG, whose space forms a Riemannian manifold due to their symmetric positive definite structure. We exploit a Riemannian approach to map the spatial information onto feature vectors in Euclidean space. The temporal information, characterized by features based on differential entropy and logarithmic power spectral density, is extracted from different windows through time. Our network then learns the temporal information by employing a deep long short-term memory network with a soft attention mechanism, whose output serves as the temporal feature vector. To fuse spatial and temporal information effectively, we use a fusion strategy that learns attention weights applied to embedding-specific features for decision making. We evaluate our proposed framework on four public datasets from three popular fields of BCI, namely emotion recognition, vigilance estimation, and motor imagery classification, covering binary classification, multi-class classification, and regression tasks. RFNet approaches the state of the art on one dataset (SEED) and outperforms other methods on the other three (SEED-VIG, BCI-IV 2A, and BCI-IV 2B), setting new state-of-the-art values and demonstrating the robustness of our framework for EEG representation learning.
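    The core Riemannian step, mapping SPD covariance matrices to Euclidean feature vectors, can be sketched via the tangent-space log map. This is a generic illustration with random data and hypothetical dimensions, not RFNet's actual implementation; RFNet's exact reference point and feature layout may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 10, 4, 256   # hypothetical EEG dimensions

# Spatial covariance matrix (SCM) per trial: X @ X.T / n_samples is SPD for full-rank X
trials = rng.normal(size=(n_trials, n_channels, n_samples))
scms = np.array([x @ x.T / n_samples for x in trials])

def _eig_fun(m, fun):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(fun(w)) @ v.T

# Reference point on the manifold (arithmetic mean here; the geometric mean is also common)
ref = scms.mean(axis=0)
ref_isqrt = _eig_fun(ref, lambda w: w ** -0.5)

def tangent_features(scm):
    """Log map at `ref`: projects an SPD matrix into the (Euclidean) tangent space."""
    s = _eig_fun(ref_isqrt @ scm @ ref_isqrt, np.log)
    return s[np.triu_indices_from(s)]   # upper triangle as a feature vector

features = np.array([tangent_features(c) for c in scms])
```

    The resulting vectors live in a flat space, so any standard Euclidean model (or a deep network, as in RFNet) can consume them; the reference point itself maps to the zero vector.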

    Sparsity enables estimation of both subcortical and cortical activity from MEG and EEG

    Subcortical structures play a critical role in brain function. However, options for assessing electrophysiological activity in these structures are limited. Electromagnetic fields generated by neuronal activity in subcortical structures can be recorded noninvasively, using magnetoencephalography (MEG) and electroencephalography (EEG). However, these subcortical signals are much weaker than those generated by cortical activity. In addition, we show here that it is difficult to resolve subcortical sources because distributed cortical activity can explain the MEG and EEG patterns generated by deep sources. We then demonstrate that if the cortical activity is spatially sparse, both cortical and subcortical sources can be resolved with M/EEG. Building on this insight, we develop a hierarchical sparse inverse solution for M/EEG. We assess the performance of this algorithm on realistic simulations and auditory evoked response data, and show that thalamic and brainstem sources can be correctly estimated in the presence of cortical activity. Our work provides alternative perspectives and tools for characterizing electrophysiological activity in subcortical structures in the human brain.
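    The role of sparsity in making an underdetermined inverse problem solvable can be illustrated with a generic L1-penalized source estimate. This is a toy sketch with a random "leadfield" and hypothetical dimensions, solved by ISTA; it is not the hierarchical algorithm of the paper, only the underlying principle that sparsity lets far fewer sensors than sources still pin down the active ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 200                 # hypothetical leadfield dimensions
L = rng.normal(size=(n_sensors, n_sources))    # gain (leadfield) matrix
s_true = np.zeros(n_sources)
s_true[[5, 120]] = [3.0, -2.0]                 # two sparse active sources
y = L @ s_true + 0.01 * rng.normal(size=n_sensors)

# ISTA: iterative soft-thresholding for min 0.5 * ||y - L s||^2 + lam * ||s||_1
lam = 0.5
step = 1.0 / np.linalg.norm(L, 2) ** 2         # 1 / Lipschitz constant of the gradient
s = np.zeros(n_sources)
for _ in range(5000):
    z = s - step * (L.T @ (L @ s - y))
    s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

recovered = np.argsort(np.abs(s))[-2:]         # indices of the two largest estimates
```

    With only 30 measurements of 200 unknowns, an unregularized least-squares solution would smear activity across many sources; the L1 penalty concentrates it on the truly active ones.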

    Open science in psychophysiology: An overview of challenges and emerging solutions

    The present review is the result of a one-day workshop on open science, held at the Annual Meeting of the Society for Psychophysiological Research in Washington, DC, September 2019. The contributors represent psychophysiological researchers at different career stages and from a wide spectrum of institutions. The state of open science in psychophysiology is discussed from different perspectives, highlighting key challenges, potential benefits, and emerging solutions that are intended to facilitate open science practices. Three domains are emphasized: data sharing, preregistration, and multi-site studies. In the context of these broader domains, we present potential implementations of specific open science procedures suitable for psychophysiological research, such as data format harmonization, power analysis, and the sharing of data, presentation code, and analysis pipelines. Practical steps are discussed that may be taken to facilitate the adoption of open science practices in psychophysiology. These steps include (1) promoting broad and accessible training in the skills needed to implement open science practices, such as collaborative research and computational reproducibility initiatives, (2) establishing mechanisms that provide practical assistance in sharing processing pipelines, presentation code, and data in an efficient way, and (3) improving the incentive structure for open science approaches. Throughout the manuscript, we provide references and links to available resources for those interested in adopting open science practices in their research. © 2021. This work was supported by grants from the National Institutes of Health (R01 MH097320 and R01 MH112558) to AK.

    Simple acoustic features can explain phoneme-based predictions of cortical responses to speech

    When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
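    The encoding-model logic above, predicting recorded brain responses from stimulus features and scoring on held-out data, can be sketched with ridge regression. Everything here is simulated with hypothetical dimensions; the paper's actual feature sets (acoustic edges, Gabor-filtered spectrograms) and evaluation pipeline are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times, n_feats, n_channels = 1000, 8, 5     # hypothetical dimensions

X = rng.normal(size=(n_times, n_feats))                   # stimulus features over time
W_true = rng.normal(size=(n_feats, n_channels))
Y = X @ W_true + rng.normal(size=(n_times, n_channels))   # simulated sensor responses

# Fit a ridge-regularized linear encoding model on the first 800 time samples
Xtr, Xte, Ytr, Yte = X[:800], X[800:], Y[:800], Y[800:]
lam = 1.0
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feats), Xtr.T @ Ytr)

# Evaluate by correlating predicted and observed held-out responses per channel
pred = Xte @ W
r = [float(np.corrcoef(pred[:, c], Yte[:, c])[0, 1]) for c in range(n_channels)]
```

    Comparing such held-out correlations across feature sets (acoustic-only vs. acoustic plus linguistic) is what lets one test whether linguistic annotations add predictive power beyond the acoustics.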

    Time-Lagged Multidimensional Pattern Connectivity (TL-MDPC): An EEG/MEG Pattern Transformation Based Functional Connectivity Metric

    Functional and effective connectivity methods are essential to study the complex information flow in brain networks underlying human cognition. Only recently have connectivity methods begun to emerge that make use of the full multidimensional information contained in patterns of brain activation, rather than unidimensional summary measures of these patterns. To date, these methods have mostly been applied to fMRI data, and no method allows vertex-to-vertex transformations with the temporal specificity of EEG/MEG data. Here, we introduce time-lagged multidimensional pattern connectivity (TL-MDPC) as a novel bivariate functional connectivity metric for EEG/MEG research. TL-MDPC estimates the vertex-to-vertex transformations among multiple brain regions and across different latency ranges. It determines how well patterns in one ROI at a given time point can linearly predict patterns of another ROI at a later time point. In the present study, we use simulations to demonstrate TL-MDPC's increased sensitivity to multidimensional effects compared to a unidimensional approach across realistic choices of trial numbers and signal-to-noise ratios. We applied TL-MDPC, as well as its unidimensional counterpart, to an existing dataset varying the depth of semantic processing of visually presented words by contrasting a semantic decision and a lexical decision task. TL-MDPC detected significant effects beginning very early on, and showed stronger task modulations than the unidimensional approach, suggesting that it is capable of capturing more information. With TL-MDPC only, we observed rich connectivity between core semantic representation areas (left and right anterior temporal lobes) and semantic control areas (inferior frontal gyrus and posterior temporal cortex) under greater semantic demands. TL-MDPC is a promising approach to identify multidimensional connectivity patterns, typically missed by unidimensional approaches.
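    The central idea, scoring how well a linear transformation maps one region's multivariate pattern onto another's, can be sketched with a cross-validated ridge mapping. This is a generic illustration on simulated patterns with hypothetical dimensions, not the published TL-MDPC implementation, whose exact estimator and statistics differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_verts = 100, 10   # hypothetical trials and vertices per ROI

# Patterns of ROI A at an earlier latency and ROI B at a later latency;
# B depends linearly (and multidimensionally) on A plus noise
A = rng.normal(size=(n_trials, n_verts))
T = rng.normal(size=(n_verts, n_verts))
B = A @ T + 0.5 * rng.normal(size=(n_trials, n_verts))

def pattern_dependence(A, B, lam=1.0, n_folds=5):
    """Cross-validated explained variance of a ridge mapping from A's patterns
    to B's patterns, in the spirit of a time-lagged multidimensional metric."""
    idx = np.arange(len(A))
    scores = []
    for k in range(n_folds):
        te = idx[k::n_folds]
        tr = np.setdiff1d(idx, te)
        W = np.linalg.solve(A[tr].T @ A[tr] + lam * np.eye(A.shape[1]),
                            A[tr].T @ B[tr])
        resid = B[te] - A[te] @ W
        scores.append(1.0 - resid.var() / B[te].var())
    return float(np.mean(scores))

linked = pattern_dependence(A, B)                       # true linear dependence
unrelated = pattern_dependence(A, rng.normal(size=B.shape))  # no dependence
```

    Cross-validation is what keeps the metric honest: a full-rank linear map can fit any training patterns, so only held-out prediction distinguishes genuine pattern transformations from overfitting.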

    Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.

    Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain–computer interface (BCI) systems, in nearly all source localization methods for spatial whitening, as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero-mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as the empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low-rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation, all of these models can be tuned and compared based on the Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset, as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types, and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals.
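    The failure mode of the empirical covariance in the few-samples regime, and the fix of selecting a regularized estimator by likelihood on unseen data, can be sketched as follows. The shrinkage family and candidate grid here are illustrative choices on simulated data, not the paper's full model zoo (which also includes PPCA and FA).

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_train, n_test = 50, 60, 500   # few samples relative to dimension

# Hypothetical ground-truth covariance with a realistic spread of eigenvalues
Q, _ = np.linalg.qr(rng.normal(size=(n_channels, n_channels)))
true_cov = Q @ np.diag(np.linspace(0.1, 5.0, n_channels)) @ Q.T
Xtr = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=n_train)
Xte = rng.multivariate_normal(np.zeros(n_channels), true_cov, size=n_test)

def gauss_loglik(C, X):
    """Average zero-mean Gaussian log-likelihood of the rows of X under covariance C."""
    _, logdet = np.linalg.slogdet(C)
    maha = np.einsum('ij,jk,ik->i', X, np.linalg.inv(C), X)
    return float(-0.5 * (logdet + maha.mean() + len(C) * np.log(2 * np.pi)))

ec = Xtr.T @ Xtr / n_train                  # empirical covariance (maximum likelihood)

# Shrink toward a scaled identity; pick the amount by likelihood on unseen data
alphas = [0.01, 0.05, 0.1, 0.2, 0.4]
shrunk = {a: (1 - a) * ec + a * (np.trace(ec) / n_channels) * np.eye(n_channels)
          for a in alphas}
best_a = max(alphas, key=lambda a: gauss_loglik(shrunk[a], Xte))
```

    With 60 samples in 50 dimensions, the EC's smallest eigenvalues are badly underestimated, which inflates its inverse; even modest shrinkage improves the held-out likelihood, which is exactly the criterion the paper uses to automate model selection.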