30 research outputs found

    Speech Processes for Brain-Computer Interfaces

    Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or the general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to actually speak out loud, and could provide a voice to otherwise mute people.
    This dissertation investigates BCIs based on speech processes using functional Near-Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities on opposing ends of the invasiveness scale. Brain activity data have low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, techniques from machine learning, neuroscience, and Automatic Speech Recognition (ASR) are combined in this dissertation to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes.
    fNIRS is an imaging modality based on cerebral blood flow. It requires only affordable hardware and can be set up within minutes in a day-to-day environment, making it ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, so the technology offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for BCIs based on fNIRS.
    ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface. High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While the decoding of a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, can be conveyed. For this purpose, a second system is presented that directly synthesizes neural signals into audible speech, which could enable conversation with friends and family through a BCI.
    Up to now, both systems, Brain-to-Text and the synthesis system, operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech processes, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates, for the first time, the use of speech processes as a paradigm for BCIs. Speech processes offer a fast and natural interaction paradigm that will help patients and healthy users alike to communicate with computers and with friends and family efficiently through BCIs.
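    To make the ASR-style decoding concrete, below is a minimal, illustrative sketch of the acoustic-model stage such a Brain-to-Text pipeline could use: framing ECoG into overlapping windows, extracting broadband-gamma log power per channel, and scoring each frame with a simple classifier. All shapes, band limits, and the choice of an LDA classifier are assumptions for illustration, not the dissertation's actual implementation; a full system would further combine the frame scores with a pronunciation dictionary and language model in a Viterbi search.

```python
# Illustrative sketch only, NOT the dissertation's pipeline: per-frame
# broadband-gamma features from ECoG, classified with LDA as a stand-in
# for the acoustic-model stage of an ASR-style Brain-to-Text decoder.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gamma_log_power(ecog, fs, lo=70.0, hi=170.0):
    """Band-pass each channel in the broadband-gamma range and return log power.
    ecog: array of shape (n_samples, n_channels)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecog, axis=0)
    return np.log(filtered ** 2 + 1e-12)

def frame_features(power, fs, win_s=0.05, hop_s=0.01):
    """Average log power in overlapping windows -> (n_frames, n_channels)."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    starts = range(0, power.shape[0] - win + 1, hop)
    return np.stack([power[s:s + win].mean(axis=0) for s in starts])

# Hypothetical usage: real training data would be ECoG aligned to phone
# labels obtained by forced alignment of the simultaneously recorded audio.
fs = 1000                                             # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
train_ecog = rng.standard_normal((fs * 10, 64))       # 10 s, 64 channels (synthetic)
feats = frame_features(gamma_log_power(train_ecog, fs), fs)
labels = rng.integers(0, 5, size=feats.shape[0])      # 5 phone classes (synthetic)
clf = LinearDiscriminantAnalysis().fit(feats, labels)
frame_posteriors = clf.predict_proba(feats)           # per-frame phone scores
# A full Brain-to-Text system would decode word sequences from these scores
# with a lexicon and language model, rather than reading frames in isolation.
```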

    Low-frequency pressure wave propagation in liquid-filled, flexible tubes. (A)


    Improving human-computer interaction and social presence with physiological computing

    This thesis explores how physiological computing can contribute to human-computer interaction (HCI) and foster new communication channels among the general public. We investigated how physiological sensors, such as electroencephalography (EEG), could be employed to assess the mental state of users, and how they relate to other evaluation methods. We created the first brain-computer interface that could sense visual comfort during the viewing of stereoscopic images, and shaped a framework that could help to assess the overall user experience by monitoring workload, attention, and error recognition. To lower the barrier between end users and physiological sensors, we participated in the software integration of a low-cost, open-hardware EEG device; used off-the-shelf webcams to measure heart rate remotely; and crafted wearables that users can quickly put on so that electrocardiography, electrodermal activity, or EEG may be measured during public exhibitions. We envisioned new usages for our sensors that would increase social presence. In a study of human-agent interaction, participants tended to prefer virtual avatars that mirrored their own internal state. A follow-up study focused on interactions between users, using a board game to describe how physiological monitoring could alter our relationships. Advances in HCI enabled us to seamlessly integrate biofeedback into the physical world. We developed Teegi, a puppet that lets novices discover their own brain activity by themselves. Finally, with Tobe, a toolkit that encompasses more sensors and gives more freedom in how they are visualized, we explored how such a proxy shifts our representations, of ourselves as well as of others.
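    As an illustration of the remote heart-rate measurement mentioned above, the sketch below implements a common webcam-based photoplethysmography (rPPG) approach: band-pass the mean green-channel intensity of a face region over time and take the dominant spectral peak as the pulse frequency. The function name, frame rate, and signal model are assumptions for illustration; the thesis does not specify its exact method here.

```python
# Minimal rPPG sketch (assumed method, not necessarily the thesis's):
# estimate heart rate from a mean green-channel trace of a face region.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_trace, fps):
    """Return heart rate in BPM from a mean green-channel time series."""
    x = green_trace - green_trace.mean()
    # Band-pass to the plausible heart-rate range (0.7-4 Hz, i.e. 42-240 BPM).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)
    # Dominant spectral peak in that band -> pulse frequency.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    pulse_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * pulse_hz

# Synthetic check: a 72 BPM pulse (1.2 Hz) buried in noise, 30 fps, 20 s clip.
fps, dur = 30.0, 20.0
t = np.arange(0, dur, 1.0 / fps)
trace = (0.05 * np.sin(2 * np.pi * 1.2 * t)
         + np.random.default_rng(1).normal(0, 0.02, t.size))
print(round(estimate_heart_rate(trace, fps), 1))  # ~72.0
```

    In practice, the green channel is used because hemoglobin absorption makes the pulse signal strongest there; averaging over a face region suppresses per-pixel sensor noise before the spectral analysis.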