
    Single-trial analysis of EEG during rapid visual discrimination: enabling cortically-coupled computer vision

    We describe our work using linear discrimination of multi-channel electroencephalography for single-trial detection of neural signatures of visual recognition events. We demonstrate the approach as a methodology for relating neural variability to response variability, describing studies of response accuracy and response latency during visual target detection. We then show how the approach can be used to construct a novel type of brain-computer interface, which we term cortically-coupled computer vision. In this application, a large database of images is triaged using the detected neural signatures. We show how ‘cortical triaging’ improves image search over a strictly behavioral response.
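    The core step of such an approach can be illustrated with a short sketch. The following is not the authors' code but a minimal, hypothetical example of single-trial linear discrimination of multi-channel EEG epochs on simulated data, using scikit-learn's linear discriminant analysis; all dimensions and the injected "signature" are assumptions for illustration.

```python
# Hypothetical sketch: single-trial linear discrimination of EEG epochs
# (simulated data, not the authors' pipeline or parameters).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 64       # assumed epoch dimensions

# Simulated epochs: target trials carry a small deflection on half the channels.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)                    # 0 = distractor, 1 = target
X[y == 1, :16, 20:40] += 0.3                        # toy "neural signature"

# Spatio-temporal features: average each epoch within short time windows,
# then flatten channels x windows into one feature vector per trial.
windows = np.array_split(np.arange(n_samples), 8)
feats = np.stack([X[:, :, w].mean(axis=2) for w in windows], axis=2)
feats = feats.reshape(n_trials, -1)

# Regularized linear discriminant, evaluated by cross-validated ROC AUC.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
auc = cross_val_score(clf, feats, y, cv=5, scoring="roc_auc").mean()
print(f"Single-trial detection AUC: {auc:.2f}")
```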

    Neural population coding: combining insights from microscopic and mass signals

    Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches are beginning to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.

    Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process, as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). Here we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only), and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of the AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing AV and VA ERP maps. Spatial cross-correlation matrices were built from the real data to index the similarity between AV and VA maps at each time point (500 ms post-stimulus window) and then correlated with two alternative similarity model matrices: AVmaps=VAmaps vs. AVmaps≠VAmaps. The tRSA results favored the AVmaps≠VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate the different information parsing strategies of the auditory and visual sensory systems.
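    As an illustration of the analysis logic, the sketch below is a simplified, hypothetical version of a time-resolved topographical RSA: it builds a spatial cross-correlation matrix between simulated AV and VA scalp maps and compares it against a simple AVmaps=VAmaps model and its complement. The data, map dimensions, and model matrices are assumptions, and the sketch does not reproduce the statistical testing of the original study.

```python
# Simplified, hypothetical sketch of a time-resolved topographical RSA
# comparing AV- and VA-leading ERP scalp maps (simulated data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_channels, n_times = 64, 100                  # assumed sensor count and time grid

# Simulated scalp maps (channels x time) for the two leading-sense conditions.
av_maps = rng.standard_normal((n_channels, n_times))
va_maps = rng.standard_normal((n_channels, n_times))

# Spatial cross-correlation matrix: similarity between the AV map at time i
# and the VA map at time j, computed across channels for every (i, j) pair.
cross_corr = np.corrcoef(av_maps.T, va_maps.T)[:n_times, n_times:]

# A simple "AVmaps = VAmaps" model predicts maximal similarity on the diagonal
# (same topography at the same latency); its complement stands in for
# "AVmaps != VAmaps". The observed matrix is compared with each model.
model_same = np.eye(n_times)
r_same, _ = spearmanr(cross_corr.ravel(), model_same.ravel())
r_diff, _ = spearmanr(cross_corr.ravel(), (1.0 - model_same).ravel())
print(f"fit to AV=VA model: {r_same:+.3f}   fit to AV!=VA model: {r_diff:+.3f}")
```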

    Oscillatory dynamics of perceptual to conceptual transformations in the ventral visual pathway

    Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. While this process depends on the ventral visual pathway (VVP), we lack an incremental account of how low-level inputs are transformed into semantic representations, and of the mechanistic details of these dynamics. Here we combine computational models of vision and semantics, and test the output of the incremental model against patterns of neural oscillations recorded with MEG in humans. Representational Similarity Analysis showed that visual information was represented in alpha activity throughout the VVP, and semantic information was represented in theta activity. Furthermore, informational connectivity showed that visual information travels through feedforward connections, while visual information is transformed into semantic representations through feedforward and feedback activity, centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
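    The central RSA comparison described above can be sketched as follows. This is illustrative only, with random stand-ins for the vision-model, semantic-model, and MEG band-power patterns; object counts, sensor counts, and the distance metric are assumptions rather than details taken from the study.

```python
# Illustrative sketch only: correlating vision-model and semantic-model RDMs
# with a neural RDM built from band-limited MEG power patterns (random data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_objects, n_sensors = 50, 204                 # assumed: 50 objects, 204 sensors

# Model RDMs: pairwise dissimilarities predicted by a vision model and a
# semantic (feature-based) model; random stand-ins here.
visual_rdm = pdist(rng.standard_normal((n_objects, 100)), metric="correlation")
semantic_rdm = pdist(rng.standard_normal((n_objects, 30)), metric="correlation")

# Neural RDM for one frequency band (e.g. alpha or theta power averaged over a
# time window): one sensor pattern per object, then pairwise dissimilarity.
band_power = rng.standard_normal((n_objects, n_sensors))
neural_rdm = pdist(band_power, metric="correlation")

# Second-order (Spearman) correlation between each model RDM and the neural RDM.
for name, model_rdm in [("visual model", visual_rdm), ("semantic model", semantic_rdm)]:
    rho, p = spearmanr(model_rdm, neural_rdm)
    print(f"{name}: rho = {rho:+.3f}, p = {p:.3f}")
```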

    Steady-state movement-related potentials for brain–computer interfacing

    An approach to brain-computer interfacing (BCI) based on the analysis of steady-state movement-related potentials (ssMRPs) produced during rhythmic finger movements is proposed in this paper. The neurological background of ssMRPs is briefly reviewed. Averaged ssMRPs represent the development of a lateralized rhythmic potential, and the energy of the EEG signals at the finger-tapping frequency can be used for single-trial ssMRP classification. The proposed ssMRP-based BCI approach is tested using the classic Fisher's linear discriminant classifier. Moreover, the influence of the current source density transform on the performance of the BCI system is investigated. The averaged correct classification rates (CCRs) as well as averaged information transfer rates (ITRs) for different sliding time windows are reported. Reliable single-trial classification rates of 88%-100% accuracy are achievable at relatively high ITRs. Furthermore, we have been able to achieve CCRs of up to 93% in classification of ssMRPs recorded during imagined rhythmic finger movements. The merit of this approach lies in the use of rhythmic cues for BCI, the relatively simple recording setup, and the straightforward computations that make real-time implementation plausible.
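    A minimal sketch of the underlying feature extraction and classification is given below, assuming simulated signals: EEG energy at the finger-tapping frequency is taken as the per-channel feature and classified with Fisher's linear discriminant. The sampling rate, tapping frequency, window length, and simulated lateralization are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: tapping-frequency energy features + Fisher's LDA
# on simulated lateralized rhythmic EEG (not the paper's data or parameters).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs, tap_freq = 256, 2.0                     # assumed sampling rate and tapping rate (Hz)
n_trials, n_channels, n_samples = 120, 16, 2 * fs   # 2 s sliding windows
t = np.arange(n_samples) / fs

# Simulated trials: an oscillation at the tapping frequency, stronger over
# one hemisphere's channels depending on the tapped hand.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)            # 0 = left hand, 1 = right hand
X[y == 0, :8] += 0.5 * np.sin(2 * np.pi * tap_freq * t)
X[y == 1, 8:] += 0.5 * np.sin(2 * np.pi * tap_freq * t)

# Feature: signal energy at the tapping frequency per channel (FFT bin power).
spec = np.abs(np.fft.rfft(X, axis=2)) ** 2
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
bin_idx = np.argmin(np.abs(freqs - tap_freq))
feats = spec[:, :, bin_idx]                 # shape: (n_trials, n_channels)

clf = LinearDiscriminantAnalysis()          # Fisher's linear discriminant
ccr = cross_val_score(clf, feats, y, cv=5).mean()
print(f"Cross-validated correct classification rate: {ccr:.2f}")
```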

    Data Augmentation for Deep-Learning-Based Electroencephalography

    Background: Data augmentation (DA) has recently been shown to yield considerable performance gains for deep learning (DL), including increased accuracy and stability and reduced overfitting. Some electroencephalography (EEG) tasks suffer from a low samples-to-features ratio, severely reducing DL effectiveness. DA with DL thus holds transformative promise for EEG processing, much as DL has revolutionized computer vision and related fields. New method: We review trends in and approaches to DA for DL in EEG to address three questions: Which DA approaches exist, and which are common for which EEG tasks? What input features are used? And what kind of accuracy gain can be expected? Results: DA for DL on EEG began 5 years ago and is steadily being used more. We grouped DA techniques (noise addition, generative adversarial networks, sliding windows, sampling, Fourier transform, recombination of segmentation, and others) and EEG tasks (seizure detection, sleep stages, motor imagery, mental workload, emotion recognition, motor tasks, and visual tasks). DA efficacy varied considerably across techniques. Noise addition and sliding windows provided the highest accuracy boost, and mental workload benefitted most from DA. Sliding window, noise addition, and sampling methods were most common for seizure detection, mental workload, and sleep stages, respectively. Comparison with existing methods: The percentage of decoding accuracy explained by DA beyond unaugmented accuracy varied between 8% for recombination of segmentation and 36% for noise addition, and from 14% for motor imagery to 56% for mental workload, with an average of 29%. Conclusions: DA is increasingly used and considerably improves DL decoding accuracy on EEG. Additional publications, if adhering to our reporting guidelines, will facilitate more detailed analyses.
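    Two of the augmentation families covered by the review, noise addition and sliding windows, can be sketched as follows for epoched EEG data. The function names, parameters, and toy data are illustrative assumptions, not an implementation from any of the reviewed studies.

```python
# Illustrative sketch (not from any reviewed study): two common EEG
# augmentation families, Gaussian noise addition and overlapping sliding
# windows, applied to epochs shaped (trials, channels, samples).
import numpy as np

def augment_with_noise(epochs, labels, noise_std=0.1, n_copies=2, seed=0):
    """Append noisy copies of each epoch; labels are repeated accordingly."""
    rng = np.random.default_rng(seed)
    copies = [epochs + noise_std * rng.standard_normal(epochs.shape)
              for _ in range(n_copies)]
    aug_x = np.concatenate([epochs, *copies], axis=0)
    aug_y = np.concatenate([labels] * (n_copies + 1), axis=0)
    return aug_x, aug_y

def sliding_window_crops(epochs, labels, win_len, step):
    """Cut each epoch into overlapping windows; each crop inherits its label."""
    n_trials, n_channels, n_samples = epochs.shape
    starts = range(0, n_samples - win_len + 1, step)
    crops = np.stack([epochs[:, :, s:s + win_len] for s in starts], axis=1)
    crop_labels = np.repeat(labels, len(starts))        # trial-major ordering
    return crops.reshape(-1, n_channels, win_len), crop_labels

# Toy usage on random "epochs":
X = np.random.default_rng(4).standard_normal((10, 8, 256))
y = np.arange(10) % 2
Xn, yn = augment_with_noise(X, y)                           # (30, 8, 256)
Xc, yc = sliding_window_crops(X, y, win_len=128, step=64)   # (30, 8, 128)
print(Xn.shape, Xc.shape)
```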