
    Interaction Paradigms for Brain-Body Interfaces for Computer Users with Brain Injuries

    In comparison to all types of injury, those to the brain are among the most likely to result in death or permanent disability. Some brain-injured people cannot communicate, recreate, or control their environment due to severe motor impairment. This group of individuals with severe head injury has received limited help from assistive technology. Brain-Computer Interfaces have opened up a spectrum of assistive technologies that are particularly appropriate for people with traumatic brain injury, especially those who suffer from “locked-in” syndrome. The research challenge here is to develop novel interaction paradigms that suit brain-injured individuals, who could then use them for everyday communication. The developed interaction paradigms should require minimal training, be reconfigurable, and demand minimal effort to use. This thesis reports on the development of novel interaction paradigms for Brain-Body Interfaces to help brain-injured people communicate better, recreate, and control their environment using computers, despite the severity of their brain injury. The investigation was carried out in three phases. Phase one was an exploratory study in which a first novel interaction paradigm was developed and evaluated with able-bodied and disabled participants. The results obtained fed into the next phase of the investigation. Phase two was carried out with able-bodied participants, who acted as the development group for the second novel interaction paradigm. This second interaction paradigm was evaluated with non-verbal participants with severe brain injury in phase three. An iterative design research methodology was chosen to develop the interaction paradigms. A non-invasive assistive technology device named Cyberlink™ was chosen as the Brain-Body Interface. This research improved on previous work in this area by developing the new interaction paradigms of personalised tiling and discrete acceleration in Brain-Body Interfaces. The research hypothesis of this study, ‘that the performance of the Brain-Body Interface can be improved by the use of novel interaction paradigms’, was successfully demonstrated.

    Uncovering Multisensory Processing through Non-Invasive Brain Stimulation

    Most current knowledge about the mechanisms of multisensory integration of environmental stimuli by the human brain derives from neuroimaging experiments. However, neuroimaging studies do not always provide conclusive evidence about the causal role of a given area in multisensory interactions, since these techniques mainly derive correlations between brain activations and behavior. Conversely, techniques of non-invasive brain stimulation (NIBS) represent a unique and powerful approach to inform models of causal relations between specific brain regions and individual cognitive and perceptual functions. Although NIBS has been widely used in cognitive neuroscience, its use in the study of multisensory processing in the human brain is a relatively novel field of research. In this paper, we review and discuss recent studies that have used two techniques of NIBS, namely transcranial magnetic stimulation and transcranial direct current stimulation, to investigate the causal involvement of unisensory and heteromodal cortical areas in multisensory processing, the effects of multisensory cues on cortical excitability in unisensory areas, and the putative functional connections among different cortical areas subserving multisensory interactions. The emerging view is that NIBS is an essential tool for neuroscientists seeking causal relationships between a given area or network and multisensory processes. With its already large and fast-increasing usage, future work using NIBS in isolation, as well as in conjunction with different neuroimaging techniques, could substantially improve our understanding of multisensory processing in the human brain.

    Speech Processes for Brain-Computer Interfaces

    Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or a general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to speak out loud. These interfaces could provide a voice to otherwise mute people. This dissertation investigates BCIs based on speech processes using functional Near-Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities on opposing ends of an invasiveness scale. Brain activity data have low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, techniques from the areas of machine learning, neuroscience, and Automatic Speech Recognition are combined in this dissertation to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes. fNIRS is an imaging modality based on cerebral blood flow. It only requires affordable hardware and can be set up within minutes in a day-to-day environment; it is therefore ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, and the technology therefore offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for BCIs based on fNIRS. ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface. High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or to issue commands through a BCI. While decoding a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, can be provided. For this purpose, a second system is presented, which directly synthesizes neural signals into audible speech and could enable conversation with friends and family through a BCI. To date, both systems, Brain-to-Text and the synthesis system, operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech processes, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates the use of speech processes as a paradigm for BCIs for the first time. Speech processes offer a fast and natural interaction paradigm that will help patients and healthy users alike to communicate with computers and with friends and family efficiently through BCIs.
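    The core decoding idea described above (neural features mapped to speech units, then assembled into text) can be illustrated with a toy sketch. This is not the dissertation's actual pipeline: real Brain-to-Text systems use high-density ECoG recordings and full ASR decoders with acoustic and language models, whereas this sketch only maps synthetic "high-gamma band power" windows to phone-like class labels with a linear classifier; all sizes and the classifier choice are illustrative assumptions.

```python
# Toy stand-in for ECoG-based speech decoding (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_channels, n_classes = 600, 32, 4  # assumed sizes, not from the thesis

# Synthetic band-power features per analysis window; each "phone" class
# shifts the mean activity of the channels in a class-specific way.
labels = rng.integers(0, n_classes, size=n_windows)
class_means = rng.normal(0.0, 1.0, size=(n_classes, n_channels))
X = class_means[labels] + rng.normal(0.0, 1.0, size=(n_windows, n_channels))

# Cross-validated linear decoding of the speech-unit label per window.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"toy decoding accuracy: {accuracy:.2f}")  # well above chance (0.25) here
```

In a full ASR-style decoder, these per-window class scores would feed a sequence model (e.g. a language model over phone/word sequences) rather than being read out independently.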

    Error Signals from the Brain: 7th Mismatch Negativity Conference

    The 7th Mismatch Negativity Conference presents the state of the art in methods, theory, and application (basic and clinical research) of the MMN and related error signals of the brain. Moreover, there will be two pre-conference workshops: one on the design of MMN studies and the analysis and interpretation of MMN data, and one on the visual MMN (with 20 presentations). There will be more than 40 presentations on hot topics of the MMN, grouped into thirteen symposia, and about 130 poster presentations. Keynote lectures by Kimmo Alho, Angela D. Friederici, and Israel Nelken will round off the program by covering topics related to and beyond the MMN.

    Guidelines for the recording and evaluation of pharmaco-EEG data in man: the International Pharmaco-EEG Society (IPEG)

    The International Pharmaco-EEG Society (IPEG) presents updated guidelines summarising the requirements for the recording and computerised evaluation of pharmaco-EEG data in man. Since the publication of the first pharmaco-EEG guidelines in 1982, technical and data-processing methods have advanced steadily, thus enhancing data quality and expanding the palette of tools available to investigate the action of drugs on the central nervous system (CNS), determine the pharmacokinetic and pharmacodynamic properties of novel therapeutics, and evaluate the CNS penetration or toxicity of compounds. However, a review of the literature reveals inconsistent operating procedures from one study to another. While this fact does not invalidate results per se, the lack of standardisation constitutes a regrettable shortcoming, especially in the context of drug development programmes. Moreover, this shortcoming hampers reliable comparisons between outcomes of studies from different laboratories and hence also prevents the pooling of data, which is a requirement for sufficiently powering the validation of novel analytical algorithms and EEG-based biomarkers. The present updated guidelines reflect the consensus of a global panel of EEG experts and are intended to assist investigators using pharmaco-EEG in clinical research by providing clear and concise recommendations, thereby enabling standardisation of methodology and facilitating the comparability of data across laboratories.

    Identifying Individual Differences in the Neural Correlates of Language Processing Using fMRI

    Mapping language functions in the brain is of profound theoretical and clinical interest. The aim of the current Ph.D. project was to develop an fMRI paradigm that assesses different language processes (i.e., phonological, semantic, and sentence processing) and modalities (listening, reading, repetition) in a stimulus-driven manner, keeping non-linguistic task demands to a minimum. Cortical activations and functional connectivity patterns were largely in line with previous research, validating the suitability of the paradigm for localizing different language processes. The first empirical chapter of the thesis investigated sentence comprehension in listening and reading, which elicited largely overlapping activations for the two modalities and for semantic and syntactic integration in the left anterior temporal lobe (ATL). Functional connectivity of the left ATL with other parts of the cortical language network differed between the modalities and processes. The second empirical chapter explored individual differences in brain activity in relation to verbal ability. Results supported the notion of more extended as well as stronger activations during language processing in individuals with higher verbal ability, possibly reflecting enhanced processing. The third empirical chapter further investigated individual differences in brain activity, focusing on lateralization of activity as a fundamental principle of how language processing is functionally organized in the brain. Degrees of left-lateralization differed significantly between language processes and were positively related to behaviorally assessed language lateralization. Furthermore, the results provided new evidence supporting a positive relationship between left-lateralization and verbal ability. The thesis concludes with a discussion of the significance of the results with regard to general principles of brain functioning and outlines potential clinical implications.
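    The degree of left-lateralization mentioned above is commonly quantified with a laterality index over activation in homologous left- and right-hemisphere regions. The sketch below shows the standard LI = (L − R) / (L + R) formula; the thesis's exact thresholds and regions are not specified here, and the voxel counts are hypothetical.

```python
# Minimal sketch of a standard fMRI laterality index (illustrative only).
import numpy as np

def laterality_index(left_activation, right_activation):
    """LI = (L - R) / (L + R); +1 = fully left-lateralized, -1 = fully right."""
    l = float(np.sum(left_activation))
    r = float(np.sum(right_activation))
    return (l - r) / (l + r)

# Hypothetical suprathreshold voxel counts in homologous language regions.
li = laterality_index(left_activation=[420], right_activation=[180])
print(f"LI = {li:.2f}")  # 0.40: moderately left-lateralized
```

Comparing such indices across tasks (e.g. phonological vs. semantic) and correlating them with behavioral measures is one common way to relate lateralization to verbal ability.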

    Functional MRI investigations of cortical mechanisms of auditory spatial attention

    In everyday settings, spatial attention helps listeners isolate and understand individual sound sources. However, the neural mechanisms of auditory spatial attention (ASpA) are only partially understood. This thesis uses within-subject analysis of functional magnetic resonance imaging (fMRI) data to address fundamental questions regarding the cortical mechanisms supporting ASpA by applying novel multi-voxel pattern analysis (MVPA) and resting-state functional connectivity (rsFC) approaches. A series of fMRI studies of ASpA were conducted in which subjects performed a one-back task, attending to one of two spatially separated streams. Attention modulated blood oxygenation level-dependent (BOLD) activity in multiple areas of the prefrontal, temporal, and parietal cortex, including non-visuotopic intraparietal sulcus (IPS), but not the visuotopic maps in IPS. No spatial bias was detected in any cortical area using standard univariate analysis; however, MVPA revealed that activation patterns in a number of areas, including the auditory cortex, predicted the attended direction. Furthermore, we explored how cognitive task demands and the sensory modality of the inputs influenced activity, using a visual one-back task and a visual multiple object tracking (MOT) task. Activity from the visual and auditory one-back tasks overlapped along the fundus of the IPS and lateral prefrontal cortex (lPFC). However, there was minimal overlap of activity in the lPFC between the visual MOT task and the two one-back tasks. Finally, we endeavored to identify visual and auditory networks using rsFC. We identified a dorsal visual attention network reliably within individual subjects using visuotopic seeds. Using auditory seeds, we found a prefrontal area nested between segments of the dorsal visual attention network. These findings mark fundamental progress towards elucidating the cortical network controlling ASpA. Our results suggest that similar lPFC structures support both ASpA and its visual counterpart during a spatial one-back task, but that ASpA does not drive visuotopic IPS in the parietal cortex. Furthermore, rsFC reveals that visual and auditory seed regions are functionally connected with non-overlapping lPFC regions, possibly reflecting spatial and temporal cognitive processing biases, respectively. While we find no evidence for a spatiotopic map, the auditory cortex is sensitive to the direction of attention in its patterns of activation.
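    The contrast drawn above, no bias in standard univariate analysis yet successful MVPA decoding, can be demonstrated on synthetic data. The sketch below is illustrative only: the sizes, noise levels, and classifier are assumptions, not the study's actual pipeline. It shows two conditions (attend left vs. attend right) with identical mean activity but a condition-specific fine-grained voxel pattern that a classifier can pick up.

```python
# Illustrative MVPA sketch: patterns carry information that means do not.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n_trials_per_cond, n_voxels = 60, 100

# Both conditions share the same overall activation level; only a
# fine-grained pattern (added vs. subtracted) distinguishes them.
pattern = rng.normal(0.0, 0.5, size=n_voxels)
attend_left = rng.normal(1.0, 1.0, size=(n_trials_per_cond, n_voxels)) + pattern
attend_right = rng.normal(1.0, 1.0, size=(n_trials_per_cond, n_voxels)) - pattern
X = np.vstack([attend_left, attend_right])
y = np.array([0] * n_trials_per_cond + [1] * n_trials_per_cond)

# Univariate view: mean activity is nearly identical across conditions...
mean_diff = abs(attend_left.mean() - attend_right.mean())
# ...but a cross-validated linear classifier recovers the attended side.
accuracy = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
print(f"mean-activity difference: {mean_diff:.3f}, decoding accuracy: {accuracy:.2f}")
```

This is the same logic by which auditory-cortex activation patterns can predict the attended direction even when no univariate spatial bias is detectable.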