
    Brain-Computer Interface

    Brain-computer interfacing (BCI), aided by advanced artificial-intelligence signal identification, is a rapidly growing technology that allows the brain to silently command devices ranging from smartphones to articulated robotic arms when physical control is not possible. A BCI can be viewed as a collaboration between the brain and a device via the direct passage of electrical signals from neurons to an external system. The book provides a comprehensive summary of conventional and novel methods for processing brain signals. The chapters cover a range of topics, including noninvasive and invasive signal acquisition, signal-processing methods, deep learning approaches, and the application of BCI to experimental problems.

    Towards improved visual stimulus discrimination in an SSVEP BCI

    This dissertation investigated the influence of stimulus characteristics, electroencephalographic (EEG) electrode location, and three signal processing methods on the spectral signal-to-noise ratio (SNR) of steady-state visual evoked potentials (SSVEPs), with a view to their use in brain-computer interfaces (BCIs). It was hypothesised that the new spectral baseline processing method introduced here, termed the 'activity baseline', would improve the SNR.
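
    The abstract does not spell out the 'activity baseline' computation, so the sketch below instead shows the conventional spectral SNR for an SSVEP as a point of reference: power at the stimulation frequency divided by the mean power of neighbouring frequency bins. The stimulation frequency, window length, and bin counts are chosen purely for illustration.

    import numpy as np
    from scipy.signal import welch

    def ssvep_snr(eeg, fs, stim_freq, n_neighbors=10, skip=1):
        """Spectral SNR of an SSVEP: power at the stimulation frequency
        divided by the mean power of the surrounding (baseline) bins.

        eeg         : 1-D array, single-channel EEG time series
        fs          : sampling rate in Hz
        stim_freq   : visual stimulation frequency in Hz
        n_neighbors : baseline bins taken on each side of the target bin
        skip        : bins adjacent to the target excluded (leakage)
        """
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)  # 0.25 Hz bins
        target = np.argmin(np.abs(freqs - stim_freq))
        idx = np.r_[target - skip - n_neighbors : target - skip,
                    target + skip + 1 : target + skip + 1 + n_neighbors]
        return psd[target] / psd[idx].mean()

    # Illustrative call: 30 s of synthetic EEG with a 12 Hz SSVEP in noise
    fs = 250
    t = np.arange(0, 30, 1 / fs)
    eeg = 0.5 * np.sin(2 * np.pi * 12 * t) + np.random.randn(t.size)
    print(f"SNR at 12 Hz: {ssvep_snr(eeg, fs, 12):.1f}")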

    Rapid processing of neutral and angry expressions within ongoing facial stimulus streams: Is it all about isolated facial features?

    Our visual system extracts the emotional meaning of human facial expressions rapidly and automatically. Novel paradigms using fast periodic stimulation have provided insights into the electrophysiological processes underlying emotional content extraction: the regular occurrence of specific identities and/or emotional expressions alone can drive diagnostic brain responses. Consistent with a processing advantage for social cues of threat, we expected angry facial expressions to drive larger responses than neutral expressions. In a series of four EEG experiments, we studied the potential boundary conditions of such an effect: (i) we piloted emotional cue extraction using 9 facial identities and a fast presentation rate of 15 Hz (N = 16); (ii) we reduced the facial identities from 9 to 2, to assess whether (low or high) variability across emotional expressions would modulate brain responses (N = 16); (iii) we slowed the presentation rate from 15 Hz to 6 Hz (N = 31), the optimal presentation rate for facial feature extraction; (iv) we tested whether passive viewing instead of a concurrent task at fixation would play a role (N = 30). We consistently observed neural responses reflecting the rate of regularly presented emotional expressions (5 Hz and 2 Hz at presentation rates of 15 Hz and 6 Hz, respectively). Intriguingly, neutral expressions consistently produced stronger responses than angry expressions, contrary to the predicted processing advantage for threat-related stimuli. Our findings highlight the influence of physical differences across facial identities and emotional expressions.
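
    The analysis pipeline is not detailed in the abstract; a minimal sketch of a standard frequency-tagging quantification could look as follows: the amplitude spectrum of an EEG epoch is computed, and the response is summed over the tag frequency and its harmonics after subtracting the mean amplitude of neighbouring bins. All parameter values are illustrative rather than taken from the study.

    import numpy as np

    def tagged_response(eeg, fs, tag_freq, n_harmonics=4, n_neighbors=10):
        """Baseline-corrected amplitude of a frequency-tagged response,
        summed over the tag frequency and its harmonics.

        eeg      : 1-D array, single-channel EEG epoch
        fs       : sampling rate in Hz
        tag_freq : tagging frequency in Hz (e.g. 5 Hz for emotion changes
                   embedded in a 15 Hz face stream)
        """
        amp = np.abs(np.fft.rfft(eeg)) / eeg.size
        freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
        total = 0.0
        for h in range(1, n_harmonics + 1):
            k = np.argmin(np.abs(freqs - h * tag_freq))
            # Baseline: surrounding bins, excluding the two bins on
            # either side of the target (spectral leakage).
            idx = np.r_[k - n_neighbors - 2 : k - 2,
                        k + 3 : k + 3 + n_neighbors]
            total += amp[k] - amp[idx].mean()
        return total

    # Illustrative call: a 20 s epoch at 250 Hz, tagged at 5 Hz
    fs = 250
    eeg = np.random.randn(20 * fs)  # stands in for a recorded epoch
    print(tagged_response(eeg, fs, tag_freq=5.0))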

    Functional Source Separation for EEG-fMRI Fusion: Application to Steady-State Visual Evoked Potentials

    Neurorobotics is one of the most ambitious fields in robotics, driving the integration of interdisciplinary data and knowledge. One of the most productive areas of interdisciplinary research in this area has been the implementation of biologically inspired mechanisms in the development of autonomous systems. Specifically, enabling such systems to display adaptive behavior, such as learning from good and bad outcomes, has been achieved by quantifying and understanding the neural mechanisms of the brain networks mediating adaptive behaviors in humans and animals. For example, associative learning from aversive or dangerous outcomes is crucial for an autonomous system to avoid dangerous situations in the future. A body of neuroscience research has suggested that the neurocomputations in the human brain during associative learning involve the re-shaping of sensory responses. The nature of these adaptive changes in sensory processing during learning is, however, not yet well enough understood to be readily implemented in on-board algorithms for robotics applications. Toward this overall goal, we recorded simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) to characterize one candidate mechanism: large-scale brain oscillations. The present report examines the use of Functional Source Separation (FSS) as an optimization step in EEG-fMRI fusion that harnesses timing information to constrain the solutions to those satisfying physiological assumptions. We applied this approach to the voxel-wise correlation of steady-state visual evoked potential (ssVEP) amplitude and the blood-oxygen-level-dependent (BOLD) signal across both time series. The results showed the benefit of FSS for the extraction of robust ssVEP signals during simultaneous EEG-fMRI recordings. Applied to data from a 3-phase aversive conditioning paradigm, the correlation maps across the three phases (habituation, acquisition, extinction) showed converging results, notably major overlapping areas in both primary and extended visual cortical regions, including the calcarine sulcus, lingual cortex, and cuneus. In addition, during the acquisition phase, when aversive learning occurs, we observed additional correlations between ssVEP and BOLD in the anterior cingulate cortex (ACC), as well as in the precuneus and superior temporal gyrus.
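
    At its core, the fusion step described here is a voxel-wise correlation of two time series. A minimal sketch under simplifying assumptions (the FSS-extracted ssVEP amplitude is already resampled to the fMRI repetition time; hemodynamic convolution and nuisance regression are omitted) might be:

    import numpy as np

    def voxelwise_correlation(bold, ssvep_amp):
        """Pearson correlation between an ssVEP amplitude time course
        and every voxel's BOLD time series.

        bold      : array of shape (n_voxels, n_timepoints)
        ssvep_amp : array of shape (n_timepoints,), ssVEP amplitude per TR
        """
        b = bold - bold.mean(axis=1, keepdims=True)
        s = ssvep_amp - ssvep_amp.mean()
        num = b @ s
        denom = np.sqrt((b ** 2).sum(axis=1) * (s ** 2).sum())
        return num / denom  # shape (n_voxels,): one r value per voxel

    # Illustrative call with random data standing in for a real session
    rng = np.random.default_rng(0)
    r_map = voxelwise_correlation(rng.standard_normal((5000, 300)),
                                  rng.standard_normal(300))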

    Audio-visual synchrony and feature-selective attention co-amplify early visual processing

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to cope efficiently with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to attend selectively to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. The pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation, and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
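
    A minimal sketch of how such steady-state responses are typically read off the amplitude spectrum is shown below; the tagging frequencies (3.14 and 3.63 Hz) come from the abstract, while the sampling rate, epoch length, and synthetic data are illustrative.

    import numpy as np

    def ssr_amplitude(eeg, fs, freq):
        """Amplitude of the steady-state response at one tagging frequency."""
        amp = np.abs(np.fft.rfft(eeg)) / eeg.size
        freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
        return amp[np.argmin(np.abs(freqs - freq))]

    # Synthetic 100 s epoch carrying both tagged responses plus noise.
    # The epoch length gives 0.01 Hz resolution, so 3.14 and 3.63 Hz
    # fall exactly on FFT bins (a sine of amplitude A yields A/2 here).
    fs = 256
    t = np.arange(0, 100, 1 / fs)
    eeg = (0.8 * np.sin(2 * np.pi * 3.14 * t)
           + 0.5 * np.sin(2 * np.pi * 3.63 * t)
           + np.random.randn(t.size))
    print(ssr_amplitude(eeg, fs, 3.14), ssr_amplitude(eeg, fs, 3.63))

    Condition-wise amplitudes obtained this way (attended vs. unattended, in-sync vs. out-of-sync tone) can then be compared to test whether the attention and synchrony gains combine additively, as reported.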