5,006 research outputs found

    EEG Based Emotion Monitoring Using Wavelet and Learning Vector Quantization

    Get PDF
    Emotion identification is necessary, for example, in Brain Computer Interface (BCI) applications and in emotional therapy and medical rehabilitation. Some emotional states, such as excited, relaxed, and sad, can be characterized in the frequency content of the EEG signal, and the signal extracted at those frequencies is useful for distinguishing the three emotional states. Real-time classification of the EEG signal depends on extraction methods that increase class distinction and on identification methods with fast computation. This paper proposes real-time human emotion monitoring using the Wavelet transform and Learning Vector Quantization (LVQ). Before machine learning, training data were collected from 10 subjects, 10 trials, 3 classes, and 16 segments (equal to 480 data sets). Each data set was processed in 10 seconds and decomposed into Alpha, Beta, and Theta waves using the Wavelet transform. These waves then become the input to an LVQ system that identifies the three emotional states: excited, relaxed, and sad. The results showed that using the Wavelet improved accuracy from 72% to 87%, and that increasing the amount of training data increased accuracy further. The system was integrated with a wireless EEG to monitor the emotional state in real time, updating every 10 seconds. Classification takes 0.44 seconds, which is not significant relative to the 10-second window
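    As a rough illustration of the pipeline this abstract describes, the sketch below extracts per-band wavelet energies from a 10-second epoch and classifies them with a minimal LVQ1 implementation. The sampling rate, the 'db4' wavelet, the RMS-energy features, and the one-prototype-per-class setup are all assumptions for illustration, not the paper's actual configuration.

    ```python
    # Hedged sketch: wavelet band extraction + LVQ1 classification of EEG epochs.
    import numpy as np
    import pywt

    FS = 128  # assumed sampling rate (Hz)

    def band_features(epoch):
        """RMS energy of the detail coefficients that roughly cover the
        beta (cD2 ~16-32 Hz), alpha (cD3 ~8-16 Hz), theta (cD4 ~4-8 Hz) bands."""
        cA4, cD4, cD3, cD2, cD1 = pywt.wavedec(epoch, 'db4', level=4)
        return np.array([np.sqrt(np.mean(c ** 2)) for c in (cD2, cD3, cD4)])

    def train_lvq(X, y, n_classes, lr=0.05, epochs=50, seed=0):
        """Minimal LVQ1: one prototype per class, initialised at class means."""
        rng = np.random.default_rng(seed)
        protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                k = np.argmin(np.linalg.norm(protos - X[i], axis=1))
                sign = 1.0 if k == y[i] else -1.0  # attract same class, repel others
                protos[k] += sign * lr * (X[i] - protos[k])
            lr *= 0.95  # decay the learning rate each epoch
        return protos

    def predict(protos, X):
        return np.argmin(np.linalg.norm(protos[None] - X[:, None], axis=2), axis=1)

    # Toy usage mirroring the abstract's 480 sets: 10-s epochs, 3 emotion classes
    X = np.array([band_features(e) for e in np.random.randn(480, FS * 10)])
    y = np.random.randint(0, 3, 480)           # placeholder labels
    print(predict(train_lvq(X, y, n_classes=3), X[:5]))
    ```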

    A real time classification algorithm for EEG-based BCI driven by self-induced emotions

    Get PDF
    Background and objective: The aim of this paper is to provide an efficient, parametric, general, and completely automatic real-time classification method for electroencephalography (EEG) signals obtained from self-induced emotions. The particular characteristics of the considered low-amplitude signals (a self-induced emotion produces a signal whose amplitude is about 15% of a really experienced emotion) require exploring and adapting strategies like the Wavelet Transform, Principal Component Analysis (PCA), and the Support Vector Machine (SVM) for signal processing, analysis, and classification. Moreover, the method is intended for use in a multi-emotion Brain Computer Interface (BCI) and, for this reason, ad hoc design choices are made. Method: The peculiarity of the brain activation requires ad hoc signal processing by wavelet decomposition, and the definition of a set of features for signal characterization in order to discriminate different self-induced emotions. The proposed method is a two-stage, completely parameterized algorithm aiming at multi-class classification and may be considered in the framework of machine learning. The first stage, the calibration, is off-line and is devoted to signal processing, the determination of the features, and the training of a classifier. The second stage, the real-time one, is the test on new data. PCA is applied to avoid redundancy in the set of features, whereas the classification of the selected features, and therefore of the signals, is obtained by the SVM. Results: Experimental tests have been conducted on EEG signals for a binary BCI based on the self-induced disgust produced by remembering an unpleasant odor. Since the literature has shown that this emotion mainly involves the right hemisphere, and in particular the T8 channel, the classification procedure is tested using just T8, though the average accuracy is also calculated and reported for the whole set of measured channels. Conclusions: The obtained classification results are encouraging, with a success rate that is, on average over the whole set of examined subjects, above 90%. An ongoing work is the application of the proposed procedure to map a large set of emotions with EEG and to establish an EEG headset with the minimal number of channels to allow the recognition of a significant range of emotions, both in the field of affective computing and in the development of auxiliary communication tools for subjects affected by severe disabilities
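    The two-stage scheme described here (offline calibration, then real-time testing) maps naturally onto a feature-extraction + PCA + SVM pipeline. The sketch below is a minimal version of that chain under assumed parameters; the wavelet family, feature set, kernel, and variance threshold are placeholders, not the paper's settings.

    ```python
    # Hedged sketch: wavelet features -> PCA (redundancy removal) -> SVM.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def wavelet_features(epoch, wavelet='db4', level=4):
        """Per-subband energy and standard deviation as a simple feature set."""
        coeffs = pywt.wavedec(epoch, wavelet, level=level)
        return np.concatenate([[np.sum(c ** 2), np.std(c)] for c in coeffs])

    # Stage 1: calibration (offline). X_cal could be epochs from the T8 channel.
    X_cal = np.random.randn(200, 1024)         # placeholder epochs
    y_cal = np.random.randint(0, 2, 200)       # disgust vs. rest (binary BCI)
    F_cal = np.array([wavelet_features(e) for e in X_cal])

    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=0.95),  # keep 95% of variance
                        SVC(kernel='rbf', C=1.0))
    clf.fit(F_cal, y_cal)

    # Stage 2: real-time test on a new epoch.
    new_epoch = np.random.randn(1024)
    label = clf.predict(wavelet_features(new_epoch)[None, :])[0]
    print('self-induced disgust' if label else 'rest')
    ```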

    The impact of temporal synchronisation imprecision on TRF analyses

    Get PDF
    Human sensory perception requires our brains to extract, encode, and process multiple properties of the sensory input. In the context of continuous sensory signals, such as speech and music, the measured electrical neural activity synchronises to properties such as the acoustic envelope, a phenomenon referred to as neural tracking. The ability to measure neural tracking with non-invasive neurophysiology constitutes an exciting new opportunity for applied research. For example, it enables the objective assessment of cognitive functions in challenging cohorts and environments by using pleasant, everyday tasks, such as watching videos. However, neural tracking has mostly been studied in controlled laboratory environments guaranteeing precise synchronisation between the neural signal and the corresponding labels (e.g., the speech envelope). Various challenges could impact this temporal precision in out-of-lab scenarios, including the technology (e.g., wireless data acquisition), mobility requirements (e.g., clinical scenarios), and the task (e.g., imagery). Aiming to address this type of challenge, we focus on the predominant scenario of continuous sensory experiments involving listening to speech and music. First, a temporal response function (TRF) analysis is presented on two different datasets to assess the impact of trigger imprecision. Second, a proof-of-concept re-alignment methodology is proposed to detect potential issues with the temporal synchronisation. Finally, a use-case study is presented that demonstrates neural tracking measurements in a challenging scenario involving older individuals with neurocognitive decline in care homes. Significance Statement: Human cognitive functions can be studied by measuring neural tracking with non-invasive neurophysiology as participants perform pleasant, everyday tasks, such as listening to music. However, while recent work has encouraged the use of this approach in applied research, it remains unclear how robust neural tracking measurements can be under the methodological constraints of applied scenarios. This study determines the impact of a key factor for the measurement of neural tracking: the temporal precision of the neural recording. The results provide clear guidelines for future research, indicating what level of imprecision can be tolerated for measuring neural tracking with speech and music listening tasks in both laboratory and applied settings. Furthermore, the study provides a strategy to assess the impact of imprecision in the synchronisation of the neural recording, thus developing new tools for applied neuroscience
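    For readers unfamiliar with the core analysis, a forward temporal response function (TRF) is a regularised linear mapping from a lagged stimulus feature (e.g., the speech envelope) to the neural signal. The sketch below shows a minimal ridge-regression TRF estimate; the lag range, sampling rate, and regularisation value are assumptions, not the paper's settings. A constant trigger offset between stimulus and EEG shifts the whole TRF along the lag axis, which is why synchronisation imprecision matters.

    ```python
    # Hedged sketch: forward TRF via ridge regression on a lagged envelope.
    import numpy as np

    def lagged_matrix(stim, lags):
        """Design matrix whose columns are the stimulus shifted by each lag."""
        X = np.zeros((len(stim), len(lags)))
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stim[:len(stim) - lag]
            else:
                X[:lag, j] = stim[-lag:]
        return X

    def fit_trf(stim, eeg, fs=64, tmin=-0.1, tmax=0.4, lam=1e2):
        lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
        X = lagged_matrix(stim, lags)
        # ridge solution: w = (X'X + lam*I)^(-1) X'y
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
        return lags / fs, w

    fs = 64
    stim = np.abs(np.random.randn(fs * 60))    # placeholder speech envelope
    eeg = np.convolve(stim, np.hanning(10), mode='same') \
          + np.random.randn(len(stim))         # placeholder neural response
    times, trf = fit_trf(stim, eeg, fs=fs)
    print(times[np.argmax(np.abs(trf))])       # latency of the TRF peak
    ```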

    Fractals in the Nervous System: conceptual Implications for Theoretical Neuroscience

    Get PDF
    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review

    Analysis of cross-correlations in electroencephalogram signals as an approach to proactive diagnosis of schizophrenia

    Full text link
    We apply flicker-noise spectroscopy (FNS), a time series analysis method operating on structure functions and power spectrum estimates, to study the clinical electroencephalogram (EEG) signals recorded in children/adolescents (11 to 14 years of age) with diagnosed schizophrenia-spectrum symptoms at the National Center for Psychiatric Health (NCPH) of the Russian Academy of Medical Sciences. The EEG signals for these subjects were compared with the signals for a control sample of chronically depressed children/adolescents. The purpose of the study is to look for diagnostic signs of subjects' susceptibility to schizophrenia in the FNS parameters for specific electrodes and in cross-correlations between the signals simultaneously measured at different points on the scalp. Our analysis of EEG signals from scalp-mounted electrodes at locations F3 and F4, which are symmetrically positioned in the left and right frontal areas of the cerebral cortex, respectively, demonstrates an essential role of frequency-phase synchronization, a phenomenon representing specific correlations between the characteristic frequencies and phases of excitations in the brain. We introduce quantitative measures of frequency-phase synchronization and systematize the values of FNS parameters for the EEG data. The comparison of our results with the medical diagnoses for 84 subjects performed at NCPH makes it possible to group the EEG signals into 4 categories corresponding to different risk levels of subjects' susceptibility to schizophrenia. We suggest that the introduced quantitative characteristics and classification of cross-correlations may be used for the diagnosis of schizophrenia at the early stages of its development.
    Comment: 36 pages, 6 figures, 2 tables; to be published in Physica A
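    As a generic illustration of two quantities this abstract builds on, the sketch below computes a second-order structure function for one channel and a lag-dependent cross-correlation between two channels (e.g., F3 and F4). This is not the FNS parameterisation used in the paper, only a minimal stand-in under assumed signals.

    ```python
    # Hedged sketch: structure function and lagged cross-correlation for EEG.
    import numpy as np

    def structure_function(x, max_lag):
        """phi(tau) = <[x(t+tau) - x(t)]^2> for tau = 1..max_lag samples."""
        return np.array([np.mean((x[tau:] - x[:-tau]) ** 2)
                         for tau in range(1, max_lag + 1)])

    def lagged_xcorr(a, b, max_lag):
        """Normalised cross-correlation of a[t] with b[t + lag]."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return np.array([np.mean(a[max(0, -l):len(a) - max(0, l)] *
                                 b[max(0, l):len(b) - max(0, -l)])
                         for l in range(-max_lag, max_lag + 1)])

    f3 = np.random.randn(5000)                   # placeholder F3 recording
    f4 = 0.6 * f3 + 0.8 * np.random.randn(5000)  # correlated placeholder F4
    print(structure_function(f3, 5))
    print(lagged_xcorr(f3, f4, 3))
    ```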

    Independent Component Analysis of Event-Related Electroencephalography During Speech and Non-Speech Discrimination: Implications for the Sensorimotor µ Rhythm in Speech Processing

    Get PDF
    Background: The functional significance of sensorimotor integration in acoustic speech processing is unclear despite more than three decades of neuroimaging research. Constructivist theories have long speculated that listeners make predictions about articulatory goals that function to weight sensory analysis toward expected acoustic features (e.g., analysis-by-synthesis; internal models). Direct-realist accounts posit that sensorimotor integration is achieved via a direct match between incoming acoustic cues and articulatory gestures. A method capable of favoring one account over the other requires an ongoing, high-temporal-resolution measure of sensorimotor cortical activity prior to and following acoustic input. Although scalp-recorded electroencephalography (EEG) provides a measure of cortical activity on a millisecond time scale, it has low spatial resolution due to the blurring or mixing of cortical signals on the scalp surface. Recently proposed solutions to the low spatial resolution of EEG, known as blind source separation (BSS) algorithms, have made the identification of distinct cortical signals possible. The µ rhythm of the EEG is known to briefly suppress (i.e., decrease in spectral power) over the sensorimotor cortex during the performance, imagination, and observation of biological movements, suggesting that it may provide a sensitive index of sensorimotor integration during speech processing. Neuroimaging studies have traditionally investigated speech perception in two-alternative forced-choice designs in which participants discriminate between pairs of speech and nonspeech control stimuli. As such, this classical design was employed in the current dissertation work to address the following specific aims: 1) isolate independent components with traditional EEG signatures within the dorsal sensorimotor stream network; 2) identify components with features of the sensorimotor µ rhythm; and 3) investigate changes in time-frequency activation of the µ rhythm relative to stimulus type, onset, and discriminability (i.e., perceptual performance). In light of constructivist predictions, it was hypothesized that the µ rhythm would show significant suppression for syllable stimuli prior to and following stimulus onset, with significant differences between correct discrimination trials and those discriminated at chance levels. Methods: The current study employed millisecond-temporal-resolution EEG to measure ongoing decreases and increases in spectral power (event-related spectral perturbations; ERSPs) prior to, during, and after the onset of acoustic speech and tone-sweep stimuli embedded in white noise. Sixteen participants were asked to passively listen to or actively identify speech and tone signals in a two-alternative forced-choice same/different discrimination task. To investigate the role of ERSPs in perceptual identification performance, high signal-to-noise ratios (SNRs) in which speech and tone identification was significantly better than chance (+4 dB) and low SNRs in which performance was below chance (-6 dB and -18 dB) were compared to a baseline of passive noise. Independent component analysis (ICA) of the EEG was used to reduce artifact and source mixing due to volume conduction. Independent components were clustered using measure product methods and cortical source modeling, including spectra, scalp distribution, equivalent current dipole estimation (ECD), and standardized low-resolution tomography (sLORETA). 
Results: Data analysis revealed six component clusters consistent with a bilateral dorsal-stream sensorimotor network, including component clusters localized to the precentral and postcentral gyri, cingulate cortex, supplementary motor area, and posterior temporal regions. Time-frequency analysis of the left- and right-lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13-30 Hz) prior to, during, and following stimulus onset. No significant differences from baseline were found for passive listening conditions. Tone discrimination differed from passive noise in the time period following stimulus onset only. No significant differences were found for correct relative to chance tone stimuli. For both left- and right-lateralized clusters, early suppression (i.e., prior to stimulus onset) compared to the passive noise baseline was found for the syllable discrimination task only. Significant differences between correct trials and trials identified at chance level were found for the time period following stimulus offset for the syllable discrimination task in the left-lateralized cluster. Conclusions: As this is the first study to employ BSS methods to isolate components of the EEG during acoustic speech and non-speech discrimination, the findings have important implications for the functional role of sensorimotor integration in speech processing. Consistent with expectations, the current study revealed component clusters associated with source models within the sensorimotor dorsal stream network. Beta suppression of the µ component clusters in both the left and right hemispheres is consistent with activity in the precentral gyrus prior to and following acoustic input. As early suppression of the µ was found prior to stimulus onset in the syllable discrimination task, the present findings favor internal model concepts of speech processing over mechanisms proposed by direct realists. Significant differences between correct and chance syllable discrimination trials are also consistent with internal model concepts, suggesting that sensorimotor integration is related to perceptual performance at the point in time when initial articulatory hypotheses are compared with acoustic input. The relatively inexpensive, noninvasive EEG methodology used in this study may have translational value in the future as a brain computer interface (BCI) approach. As deficits in sensorimotor integration are thought to underlie cognitive-communication impairments in a number of communication disorders, the development of neuromodulatory feedback approaches may provide a novel avenue for augmenting current therapeutic protocols
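    The analysis chain in this dissertation (blind source separation, then event-related spectral perturbations) can be illustrated compactly. The sketch below unmixes multichannel EEG with FastICA and computes an ERSP in dB relative to a pre-stimulus baseline for one candidate µ component; the channel count, sampling rate, baseline window, and 13-30 Hz beta limits are assumptions, and FastICA stands in for whatever ICA variant the study used.

    ```python
    # Hedged sketch: ICA unmixing + ERSP (dB vs. baseline) for one component.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import FastICA

    FS = 256
    eeg = np.random.randn(32, FS * 60)           # placeholder: 32 ch, 60 s

    # 1) Blind source separation: channels -> independent components.
    ica = FastICA(n_components=32, random_state=0)
    components = ica.fit_transform(eeg.T).T      # (components, samples)

    # 2) ERSP of one candidate mu component.
    f, t, Sxx = spectrogram(components[0], fs=FS, nperseg=FS // 2,
                            noverlap=FS // 4)
    baseline = Sxx[:, t < 1.0].mean(axis=1, keepdims=True)  # first second
    ersp_db = 10 * np.log10(Sxx / baseline)

    # 3) Beta-band (13-30 Hz) power over time; negative values indicate
    #    suppression relative to baseline.
    beta = (f >= 13) & (f <= 30)
    print(ersp_db[beta].mean(axis=0))
    ```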

    Plug-in to fear: game biosensors and negative physiological responses to music

    Get PDF
    The games industry is beginning to embark on an ambitious journey into the world of biometric gaming in search of more exciting and immersive gaming experiences. Whether or not biometric game technologies hold the key to unlocking the “ultimate gaming experience” hinges not only on technological advancements but also on the game industry’s understanding of physiological responses to stimuli of different kinds, and its ability to interpret physiological data in terms of indicative meaning. With reference to horror genre games and music in particular, this article reviews some of the scientific literature relating to specific physiological responses induced by “fearful” or “unpleasant” musical stimuli, and considers some of the challenges facing the games industry in its quest for the ultimate “plugged-in” experience

    Recognizing Human Emotion patterns by applying Fast Fourier Transform based on Brainwave Features

    Get PDF
    The natural ability of humans to receive messages from the surrounding environment comes through the senses, which respond to stimuli received under various conditions, including emotional conditions. Psychologically, human emotions can be recognized directly from several criteria, such as facial expressions, voice, or body movements. This research aims to analyze human emotions from the biomedical side through brainwave signals captured with EEG sensors. The acquired EEG signal is extracted using the Fast Fourier Transform and first-order statistical features. The EEG signals are monitored and grouped according to four emotional conditions (normal, focus, sadness, and shock). The results of this research are expected to help users know their mental state accurately, and this kind of emotional analysis has the potential for wide application in future environments. The results show and compare the frequency stimuli of the normal, sadness, focus, and shock emotions in a variety of situations
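    The two feature families this abstract names, FFT band power and first-order statistics, can be sketched in a few lines. The band edges, sampling rate, and epoch length below are assumptions; only the four-condition grouping (normal, focus, sadness, shock) comes from the abstract.

    ```python
    # Hedged sketch: FFT band powers + first-order statistics for one epoch.
    import numpy as np

    FS = 256
    BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}

    def fft_band_powers(epoch, fs=FS):
        """Mean power in each frequency band of the FFT magnitude spectrum."""
        spec = np.abs(np.fft.rfft(epoch)) ** 2
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
        return {name: spec[(freqs >= lo) & (freqs < hi)].mean()
                for name, (lo, hi) in BANDS.items()}

    def first_order_stats(epoch):
        """Mean, standard deviation, skewness, and excess kurtosis."""
        m, s = epoch.mean(), epoch.std()
        z = (epoch - m) / s
        return {'mean': m, 'std': s,
                'skew': np.mean(z ** 3), 'kurtosis': np.mean(z ** 4) - 3}

    epoch = np.random.randn(FS * 10)             # placeholder 10-s recording
    features = {**fft_band_powers(epoch), **first_order_stats(epoch)}
    print(features)                              # one feature vector per epoch
    ```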