25 research outputs found

    Psychophysical Responses Comparison in Spatial Visual, Audiovisual, and Auditory BCI-Spelling Paradigms

    The paper presents a pilot study of spatial visual, audiovisual, and auditory brain-computer interface (BCI) speller paradigms. The psychophysical experiments were conducted with healthy subjects in order to evaluate task difficulty and possible response accuracy variability. We also present preliminary EEG results in offline BCI mode. The obtained results support the thesis that the spatial auditory-only paradigm performs as well as the traditional visual and audiovisual speller BCI tasks. Comment: The 6th International Conference on Soft Computing and Intelligent Systems and The 13th International Symposium on Advanced Intelligent Systems, 201

    Head-related Impulse Response Cues for Spatial Auditory Brain-computer Interface

    This study provides a comprehensive test of head-related impulse response (HRIR) cues for a spatial auditory brain-computer interface (saBCI) speller paradigm, together with a comparison against the conventional virtual-sound, headphone-based spatial auditory modality. We propose and optimize three types of sound spatialization settings using variable elevation in order to evaluate the HRIR's efficacy for the saBCI. Three experienced and seven naive BCI users participated in the three experimental setups based on ten presented Japanese syllables. The obtained EEG auditory evoked potentials (AEP) yielded encouragingly good and stable P300 responses in online BCI experiments. Our case study indicated that users could perceive elevation in the saBCI experiments generated using an HRIR measured from a general head model. The saBCI accuracy and information transfer rate (ITR) scores improved compared to the classical horizontal-plane-based virtual spatial sound reproduction modality, at least for the healthy users in the current pilot study. Comment: 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright

    Novel Virtual Moving Sound-based Spatial Auditory Brain-Computer Interface Paradigm

    This paper reports on a study in which a novel virtual moving sound-based spatial auditory brain-computer interface (BCI) paradigm is developed. Classic auditory BCIs rely on spatially static stimuli, which are often boring and difficult to perceive when subjects have non-uniform spatial hearing perception characteristics. The concept of moving sound proposed and tested in the paper allows for the creation of a P300 oddball paradigm with the necessary target and non-target auditory stimuli, which are more interesting and easier to distinguish. We present a study of seven healthy subjects which demonstrates the usability of moving sound stimuli for a novel BCI. Online BCI classification results in the static and moving sound paradigms yield similar accuracies. The subject preference reports suggest that the proposed moving sound protocol is more comfortable and easier to discriminate with the online BCI. Comment: 4 pages (in conference proceedings original version); 6 figures, accepted at 6th International IEEE EMBS Conference on Neural Engineering, November 6-8, 2013, Sheraton San Diego Hotel & Marina, San Diego, CA; paper ID 465; to be available at IEEE Xplore; IEEE Copyright 201
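The P300 oddball structure used in spellers like the one above can be sketched in a few lines: every stimulus (here, a virtual sound direction) is presented equally often in randomized order, so the single direction the user attends to is the rare target and all others are non-targets. This is only a minimal illustrative sketch; the direction and trial counts are arbitrary choices, not parameters from the paper.

```python
import random

def oddball_sequence(n_per_direction: int, n_directions: int,
                     seed: int = 0) -> list[int]:
    """Randomised presentation order for a spatial-sound oddball run.

    Every direction appears n_per_direction times, so whichever single
    direction the user attends to is the rare 'target' (probability
    1/n_directions) and the rest are non-targets.
    """
    rng = random.Random(seed)
    seq = [d for d in range(n_directions) for _ in range(n_per_direction)]
    rng.shuffle(seq)
    return seq

# five virtual sound directions, 20 presentations each
seq = oddball_sequence(n_per_direction=20, n_directions=5)
print(len(seq), seq.count(0))  # 100 20
```

Averaging the EEG epochs time-locked to the rare target stimuli is what makes the P300 response stand out against the non-target responses.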

    A Novel Audiovisual P300-Speller Paradigm Based on Cross-Modal Spatial and Semantic Congruence

    Objective: Although many studies have attempted to improve the performance of the visual-based P300-speller system, its performance is still not satisfactory. The current system has limitations for patients with neurodegenerative diseases, in whom muscular control of the eyes may be impaired or deteriorate over time. Some studies have shown that audiovisual stimuli with spatial and semantic congruence elicit larger event-related potential (ERP) amplitudes than do unimodal visual stimuli. Therefore, this study proposed a novel multisensory P300-speller based on audiovisual spatial and semantic congruence. Methods: We designed a novel audiovisual P300-speller paradigm (AV spelling paradigm) in which the pronunciation and visual presentation of characters were matched in spatial position and semantics. We analyzed the ERP waveforms elicited in the AV spelling paradigm and the visual-based spelling paradigm (V spelling paradigm) and compared the classification accuracies between these two paradigms. Results: ERP analysis revealed significant differences in ERP amplitudes between the two paradigms in the following areas (AV > V): the frontal area at 60–140 ms, frontal–central–parietal area at 360–460 ms, frontal area at 700–800 ms, right temporal area at 380–480 and 700–780 ms, and left temporal area at 500–780 ms. Offline classification results showed that accuracies were significantly higher in the AV spelling paradigm than in the V spelling paradigm after 1, 2, 5, 6, 9, and 10 superpositions (P < 0.05), with trends toward improvement at 3, 4, 7, and 8 superpositions (P = 0.06). Similar results were found for the information transfer rate between the V and AV spelling paradigms at 1, 2, 5, 6, and 10 superpositions (P < 0.05). Significance: The proposed audiovisual P300-speller paradigm significantly improved classification accuracies compared with the visual-based P300-speller paradigm. Our novel paradigm combines spatial and semantic features of two sensory modalities, and the present findings provide valuable insights into the development of multimodal ERP-based BCI paradigms.
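The information transfer rate reported by speller studies like those above is conventionally computed with the Wolpaw formula from the number of selectable characters N, the selection accuracy P, and the time per selection. A minimal sketch follows; the 36-character matrix and 10 s selection time are illustrative assumptions, not figures from any of these papers.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_secs: float) -> float:
    """Information transfer rate in bits/min (Wolpaw definition)."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)           # perfect selection carries log2(N) bits
    elif p <= 1.0 / n:
        bits = 0.0                    # at or below chance, no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_secs   # bits per selection -> bits per minute

# e.g. a 36-character speller at 90% accuracy, 10 s per selection
print(round(wolpaw_itr(36, 0.90, 10.0), 2))  # 25.13
```

Because the formula is concave in P, even small accuracy gains from fewer stimulus superpositions translate into disproportionately large ITR gains, which is why both accuracy and ITR are reported.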

    Towards the recognition of the emotions of people with visual disabilities through brain-computer interfaces

    This article belongs to the Section Intelligent Sensors. A brain–computer interface is an alternative means of communication between people and computers, based on the acquisition and analysis of brain signals. Research in this field has focused on serving people with different types of motor, visual, or auditory disabilities. Affective computing, in turn, studies and extracts information about the emotional state of a person in certain situations, an important aspect of human-computer interaction. In particular, this manuscript considers people with visual disabilities and their need for personalized systems that take into account their disability and the degree to which it affects them. In this article, a review of the state of the art is presented, discussing the importance of studying the emotions of people with visual disabilities and the possibility of representing those emotions through a brain–computer interface and affective computing. Finally, the authors propose a framework to study and evaluate the possibility of representing and interpreting the emotions of people with visual disabilities, in order to improve their experience with technology and their integration into today's society. This work was supported by the Consejo Nacional de Ciencia y Tecnología (CONACyT) through grant number 709656, and by the Research Program of the Ministry of Economy and Competitiveness, Government of Spain (DeepEMR project TIN2017-87548-C2-1-R)

    Using novel stimuli and alternative signal processing techniques to enhance BCI paradigms

    A Brain-Computer Interface (BCI) is a device that uses the brain activity of a person as an input to select desired outputs on a computer. BCIs that use surface electroencephalogram (EEG) recordings as their input are the least invasive but also suffer from a very low signal-to-noise ratio (SNR) due to the very low amplitude of the person’s brain activity and the presence of many signal artefacts and background noise. This can be compensated for by subjecting the signals to extensive signal processing, and by using stimuli to trigger a large but consistent change in the signal – these changes are called evoked potentials. The method used to stimulate the evoked potential, and introduce an element of conscious selection in order to allow the user’s intent to modify the evoked potential produced, is called the BCI paradigm. However, even with these additions the performance of BCIs used for assistive communication and control is still significantly below that of other assistive solutions, such as keypads or eye-tracking devices. This thesis examines the paradigm and signal processing components of BCIs and puts forward several methods meant to enhance BCIs’ performance and efficiency. Firstly, two novel signal processing methods based on Empirical Mode Decomposition (EMD) were developed and evaluated. EMD is a technique that divides any oscillating signal into groups of frequency harmonics, called Intrinsic Mode Functions (IMFs). Furthermore, by using Takens’ theorem, a single channel of EEG can be converted into a multi-temporal channel signal by transforming the channel into multiple snapshots of its signal content in time using a series of delay vectors. This signal can then be decomposed into IMFs using a multi-channel variation of EMD, called Multi-variate EMD (MEMD), which uses the spatial information from the signal’s neighbouring channels to inform its decomposition. 
In the case of a multi-temporal channel signal, this allows the temporal dynamics of the signal to be incorporated into the IMFs. This is called Temporal MEMD (T-MEMD). The second signal processing method based on EMD decomposed both the spatial and temporal channels simultaneously, allowing both spatial and temporal dynamics to be incorporated into the resulting IMFs. This is called Spatio-temporal MEMD (ST-MEMD). Both methods were applied to a large pre-recorded Motor Imagery BCI dataset along with EMD and MEMD for comparison. These results were also compared to those from other studies in the literature that had used the same dataset. T-MEMD performed with an average classification accuracy of 70.2%, performing on a par with EMD that had an average classification accuracy of 68.9%. Both ST-MEMD and MEMD outperformed them with ST-MEMD having an average classification accuracy of 73.6%, and MEMD having an average classification accuracy of 75.3%. The methods containing spatial dynamics, i.e. MEMD and ST-MEMD, outperformed those with only temporal dynamics, i.e. EMD and T-MEMD. The two methods with temporal dynamics each performed on a par with the non-temporal method that had the same level of spatial dynamics. This shows that only the presence of spatial dynamics resulted in a performance increase. This was concluded to be because the differences between the classes of motor-imagery are inherently spatial in nature, not temporal. Next a novel BCI paradigm was developed based on the standard Steady-state Somatosensory Evoked Potential (SSSEP) BCI paradigm. This paradigm uses a tactile stimulus applied to the skin at a certain frequency, generating a resonance signal in the brain’s activity. If two stimuli of different frequency are applied, two resonance signals will be present. However, if the user attends one stimulus over the other, its corresponding SSSEP will increase in amplitude. Unfortunately these changes in amplitude can be very minute. 
    To counter this, a stimulus generator was constructed that could alter the amplitude and frequency of the vibrotactile stimuli. It was hypothesised that if the stimuli were of the same frequency, but one's amplitude was just below the user's conscious level of perception and the other's was above it, the changes in the SSSEP between classes would be the same as those between a generated SSSEP and neutral EEG, with differences in α activity between the low-amplitude SSSEP and neutral activity due to the differences in the user's level of concentration when attending the low-amplitude stimulus. The novel SSSEP BCI paradigm performed on a par with the standard paradigm, with an average classification accuracy of 61.8% over 16 participants compared to 63.3% for the standard paradigm, indicating that the hypothesis was false. However, the large amount of electromagnetic interference (EMI) present in the EEG recordings may have compromised the data. Many different noise suppression methods were applied to the stimulus device and the data; whilst the EMI artefacts were reduced in magnitude, they were not eliminated completely. Even with the noise, the standard SSSEP stimulus paradigm performed on a par with studies that used the same paradigm, indicating that the results may not have been invalidated by the EMI. Overall the thesis shows that motor-imagery signals are inherently spatial in difference, and that the novel methods of T-MEMD and ST-MEMD may yet outperform the existing methods of EMD and MEMD if applied to signals that are temporal in nature, such as functional Magnetic Resonance Imaging (fMRI). Whilst the novel SSSEP paradigm did not result in an increase in performance, it highlighted the impact of EMI from stimulus equipment on EEG recordings and potentially confirmed that the amplitude of SSSEP stimuli is a minor factor in a BCI paradigm
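The Takens'-theorem step described in this abstract, converting a single EEG channel into multiple delayed "temporal channels" before applying MEMD, can be sketched as follows. This is a minimal illustration only; the embedding dimension and delay used here are arbitrary choices, not the thesis's parameters.

```python
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Convert a single-channel signal into a multi-temporal-channel
    matrix of delayed snapshots (Takens delay embedding).

    Row i is the signal shifted by i*tau samples, so each column is a
    delay vector capturing the signal's local temporal dynamics.
    Returns an array of shape (dim, len(x) - (dim - 1) * tau).
    """
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)])

# a 1000-sample toy 'EEG' channel embedded into 4 temporal channels
x = np.sin(np.linspace(0, 20 * np.pi, 1000))
emb = delay_embed(x, dim=4, tau=5)
print(emb.shape)  # (4, 985)
```

The resulting matrix can then be handed to a multivariate decomposition (MEMD in the thesis) as if its rows were separate channels, which is what lets temporal dynamics enter the resulting IMFs.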

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. 
The conference is organized by ISPR, the International Society for Presence Research and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London