7 research outputs found

    Investigating the impact of visual perspective in a motor imagery-based brain-robot interaction: A pilot study with healthy participants

    Introduction: Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) have gained attention for their use in rehabilitation therapies, since they allow an external device to be controlled through brain activity, thereby promoting brain plasticity mechanisms that could lead to motor recovery. Specifically, rehabilitation robotics can provide precision and consistency for movement exercises, while embodied robotics can provide sensory feedback that helps patients improve their motor skills and coordination. However, it is still not clear whether different types of visual feedback affect the elicited brain response and hence the effectiveness of MI-BCI for rehabilitation.
    Methods: In this paper, we compare two visual feedback strategies based on controlling the movement of robotic arms through an MI-BCI system: 1) first-person perspective, in which users view the robot arms from their own point of view; and 2) third-person perspective, in which subjects observe the robot from an external point of view. We studied 10 healthy subjects over three consecutive sessions. Electroencephalographic (EEG) signals were recorded and evaluated in terms of the power of the sensorimotor rhythms, as well as their lateralization and spatial distribution.
    Results: Our results show that both feedback perspectives can elicit motor-related brain responses, but without any significant differences between them. Moreover, the evoked responses remained consistent across all sessions, with no significant differences between the first and the last session.
    Discussion: Overall, these results suggest that the type of perspective may not influence brain responses during an MI-BCI task based on robotic feedback, although, due to the limited sample size, more evidence is required. Finally, this study produced 180 labeled MI EEG datasets, publicly available for research purposes.
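The sensorimotor-rhythm power and lateralization analysis described in the Methods can be sketched as follows. This is a minimal illustration, assuming single-channel C3/C4 traces and a mu band of 8-12 Hz; the function names and band edges are hypothetical, not taken from the study:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band=(8.0, 12.0)):
    """Mean PSD within a frequency band (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def lateralization_index(c3, c4, fs):
    """Normalized mu-band power difference between C3 and C4.

    Positive values mean more mu power over the left hemisphere;
    the index is bounded in [-1, 1].
    """
    p3, p4 = band_power(c3, fs), band_power(c4, fs)
    return (p3 - p4) / (p3 + p4)
```

A contralateral decrease of mu power during imagery (event-related desynchronization) would show up as an asymmetry of this index between left- and right-hand MI trials.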

    Domain-Specific Processing Stage for Estimating Single-Trial Evoked Potential Improves CNN Performance in Detecting Error Potential

    We present a novel architecture designed to enhance the detection of Error Potential (ErrP) signals during ErrP stimulation tasks. In the context of predicting ErrP presence, conventional Convolutional Neural Networks (CNNs) typically accept a raw EEG signal as input, encompassing both the information associated with the evoked potential and the background activity, which can diminish predictive accuracy. Our approach applies advanced Single-Trial (ST) ErrP enhancement techniques to the raw EEG signals in the first stage, followed by CNNs for discerning between ErrP and non-ErrP segments in the second stage. We tested different combinations of methods and CNNs. For ST ErrP estimation, we examined various methods, including subspace regularization techniques, the Continuous Wavelet Transform, and ARX models. For the classification stage, we evaluated the performance of EEGNet, a CNN, and a Siamese Neural Network. A comparative analysis against directly applying CNNs to raw EEG signals revealed the advantages of our architecture: leveraging subspace regularization yielded the largest improvement in classification metrics, of up to 14% in balanced accuracy and 13.4% in F1-score.
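The two-stage idea can be sketched as a simple composition: a single-trial enhancement front end followed by any classifier. The sketch below uses a low-pass filter as a crude stand-in for the ST estimation stage (the paper's actual techniques are subspace regularization, CWT, and ARX models); cutoff and filter order are arbitrary placeholders:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def enhance_single_trial(epoch, fs, cutoff=10.0):
    """Stage 1 stand-in: low-pass filter an epoch to suppress
    broadband background EEG and keep the slow evoked component."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, epoch)

def two_stage_predict(epoch, fs, classifier):
    """Stage 2: feed the enhanced epoch to any classifier callable
    (an EEGNet-style CNN in the paper; any function here)."""
    return classifier(enhance_single_trial(epoch, fs))
```

The point of the architecture is that the classifier never sees the raw mixture of evoked potential and background activity, only the enhanced estimate.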

    ARX-based EEG data balancing for error potential BCI

    No full text
    Objective. Deep learning algorithms employed in brain-computer interfaces (BCIs) need large electroencephalographic (EEG) datasets to be trained. These datasets are usually unbalanced, particularly in error potential (ErrP) experiments, where ErrP epochs are much rarer than non-ErrP ones. To address this imbalance of rare epochs, this paper presents a novel data balancing method based on ARX modelling.
    Approach. AutoRegressive with eXogenous input (ARX) models are identified on the EEG data of the 'Monitoring error-related potentials' dataset of BNCI Horizon 2020 and then employed to generate new synthetic data for the minority class of ErrP epochs. The balanced dataset is used to train a classifier of non-ErrP vs. ErrP epochs based on EEGNet.
    Main results. Compared to classical data balancing techniques (e.g. class weights, CW), the new method outperforms the others in terms of resulting accuracy (ARX: 91.5% vs. CW: 88.3%).
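A toy version of the model-based augmentation idea, reduced to a plain AR model without the exogenous input for brevity (so this is an illustrative simplification of the paper's ARX approach, not the method itself): fit autoregressive coefficients to real minority-class epochs, then drive the fitted model with white noise to produce new synthetic epochs.

```python
import numpy as np

def fit_ar(x, order=6):
    """Least-squares fit of AR coefficients: x[t] ~ sum_k a_k * x[t-k].
    Returns the coefficients and the residual (innovation) std."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    noise_std = (x[order:] - X @ a).std()
    return a, noise_std

def synthesize_epoch(a, noise_std, n, rng):
    """Drive the fitted AR model with white noise to produce one
    synthetic epoch with similar spectral content to the training data."""
    order = len(a)
    y = np.zeros(n + order)
    for t in range(order, n + order):
        y[t] = a @ y[t - order : t][::-1] + rng.normal(0.0, noise_std)
    return y[order:]
```

Repeating `synthesize_epoch` until the two classes have equal counts gives a balanced training set without duplicating real epochs.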

    Comparison of different emotion stimulation modalities: an EEG signal analysis

    No full text
    Emotion processing is a complex mechanism that involves different physiological systems. In particular, the Central Nervous System (CNS) is considered to play a key role in this mechanism, and one of the main modalities for studying CNS activity is the electroencephalographic (EEG) signal. To elicit emotions, different kinds of stimuli can be used, e.g. audio, visual, or a combination of the two. Studies in the literature focus mostly on the correct classification of the different types of emotions, or on which kind of stimulation gives the best performance in terms of classification accuracy. However, it is still unclear how the different stimuli elicit emotions and what the resulting brain activity is. In this paper, we analysed and compared EEG signals recorded while eliciting emotions using audio stimuli, visual stimuli, or a combination of the two. Data were collected during experiments conducted in our laboratories using the IAPS and IADS datasets. Our study confirmed physiological findings from the literature on emotions, highlighting higher brain activity in the frontal and central regions and in the δ and θ bands for each kind of stimulus. However, audio stimulation was found to produce higher responses than the other two stimulation modalities in almost all the comparisons performed. Higher values of the δ/β ratio, an index related to negative emotions, were achieved when using only sounds as stimuli. Moreover, the same type of stimuli resulted in higher δ-β coupling, suggesting better attention control. We concluded that stimulating subjects without letting them see what is actually happening may give a higher perception of emotions, even if this mechanism remains highly subjective. Clinical Relevance: This paper suggests that audio stimuli may give a higher perception of the elicited emotion, resulting in higher brain activity in the physiological areas and more focused subjects. Thus, using only audio in emotion-related studies may give more reliable and consistent results.
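The δ/β ratio used above as a negative-emotion index can be computed from Welch band powers. A minimal sketch, assuming conventional band edges (δ: 0.5-4 Hz, β: 13-30 Hz) rather than the study's exact definitions:

```python
import numpy as np
from scipy.signal import welch

# assumed band edges in Hz; the study may define them differently
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0), "beta": (13.0, 30.0)}

def band_powers(x, fs):
    """Mean Welch PSD per band for a single EEG channel."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    return {name: pxx[(f >= lo) & (f <= hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def delta_beta_ratio(x, fs):
    """Ratio of slow (delta) to fast (beta) activity."""
    p = band_powers(x, fs)
    return p["delta"] / p["beta"]
```

A ratio above 1 indicates that slow-wave activity dominates over beta activity in the analysed channel.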

    Analysis of the skin conductance and pupil signals for evaluation of emotional elicitation by images and sounds

    No full text
    Many studies in the literature attempt recognition of emotions through the use of videos or images, but very few have explored the role that sounds play in evoking emotions. In this study we devised an experimental protocol for the elicitation of emotions by using, separately and jointly, images and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. During the experiments we recorded the skin conductance and pupillary signals and processed them with the goal of extracting indices linked to the autonomic nervous system, thus revealing specific patterns of behavior depending on the different stimulation modalities. Our results show that skin conductance helps discriminate emotions along the arousal dimension, whereas features derived from the pupillary signal are able to discriminate states along both the valence and arousal dimensions. In particular, the pupillary diameter was found to be significantly greater with increasing arousal and during elicitation of negative emotions in the phases of viewing images and images with sounds. In the sound-only phase, on the other hand, the power calculated in the high and very-high frequency bands of the pupillary diameter was significantly greater at higher valence (valence ratings > 5). Clinical Relevance: This study demonstrates the ability of physiological signals to assess specific emotional states by providing different activation patterns depending on stimulation through images, sounds, and images with sounds. The approach has high clinical relevance, as it could be extended to the evaluation of mood disorders (e.g. depression, bipolar disorder, or simply stress), or to use the physiological patterns found for sounds to study whether hearing aids can lead to increased emotional perception.
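As a rough illustration of the kind of autonomic indices involved, the sketch below counts skin-conductance responses and summarizes pupil diameter. The amplitude threshold, detrending, and minimum peak spacing are arbitrary placeholders, not values from the study:

```python
import numpy as np
from scipy.signal import find_peaks

def scr_peak_count(eda, fs, min_amp=0.05):
    """Count skin-conductance responses: local maxima of the
    median-detrended signal above a minimum amplitude, >= 1 s apart."""
    phasic = eda - np.median(eda)  # crude removal of the tonic level
    peaks, _ = find_peaks(phasic, height=min_amp, distance=int(fs))
    return len(peaks)

def pupil_summary(pupil):
    """Mean diameter and dilation range as simple arousal correlates."""
    return float(np.mean(pupil)), float(np.ptp(pupil))
```

Comparing these indices across the image-only, sound-only, and combined phases is the kind of contrast the protocol above is designed to expose.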