
    Evaluation of Data Processing and Artifact Removal Approaches Used for Physiological Signals Captured Using Wearable Sensing Devices during Construction Tasks

    Wearable sensing devices (WSDs) have enormous promise for monitoring construction worker safety. They can track workers and send safety-related information in real time, allowing for more effective and preventative decision making. WSDs are particularly useful on construction sites since they can track workers’ health, safety, and activity levels, among other metrics that could help optimize their daily tasks. WSDs may also assist workers in recognizing health-related safety risks (such as physical fatigue) and taking appropriate action to mitigate them. The data produced by these WSDs, however, are highly noisy and contaminated with artifacts that may have been introduced by the surroundings, the experimental apparatus, or the subject’s physiological state. These artifacts are often strong and frequently encountered during field experiments, and when many artifacts are present, signal quality drops. Recent developments in signal processing have greatly improved artifact removal. This review therefore aims to provide an in-depth analysis of the approaches currently used to analyze data and remove artifacts from physiological signals obtained via WSDs during construction-related tasks. First, this study provides an overview of the physiological signals that are likely to be recorded from construction workers to monitor their health and safety. Second, this review identifies the most prevalent artifacts that have the most detrimental effect on the utility of the signals. Third, a comprehensive review of existing artifact-removal approaches is presented. Fourth, each identified artifact detection and removal approach is analyzed for its strengths and weaknesses. Finally, this review offers suggestions for future research on improving the quality of captured physiological signals for monitoring the health and safety of construction workers using artifact removal approaches.
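
    As a rough illustration of the kind of preprocessing this review surveys, the Python sketch below band-pass filters and despikes a synthetic wearable PPG trace; the signal, sampling rate, and cut-off frequencies are illustrative assumptions rather than values taken from the reviewed studies.

    # Minimal artifact-handling sketch: band-pass filtering plus median despiking
    # of a synthetic wearable PPG signal (assumed 64 Hz sampling rate).
    import numpy as np
    from scipy.signal import butter, filtfilt, medfilt

    def clean_ppg(ppg: np.ndarray, fs: float = 64.0) -> np.ndarray:
        """Suppress baseline drift, high-frequency noise, and short motion spikes."""
        # A 0.5-8 Hz band-pass keeps the cardiac component of the PPG signal.
        b, a = butter(N=4, Wn=[0.5, 8.0], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, ppg)
        # A short median filter removes motion-induced spikes without smearing peaks.
        return medfilt(filtered, kernel_size=5)

    fs = 64.0
    t = np.arange(0, 10, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)  # synthetic 1.2 Hz pulse
    print(clean_ppg(ppg, fs).shape)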

    Development of new devices and algorithms for the ambulatory monitoring of people with epilepsy

    Epilepsy is a chronic disease with an enormous social and healthcare impact. Although a wide range of antiepileptic drugs is available today, along with more selective treatments such as surgery or brain stimulation, a considerable percentage of patients are not controlled and continue to have epileptic seizures. These people often live constrained by the possibility of an epileptic seizure and its potential consequences, such as accidents, injuries, or even sudden unexplained death. In this context, a device capable of monitoring their state of health and warning of a possible epileptic seizure would help improve their quality of life. This doctoral thesis focuses on the development of a novel ambulatory monitoring system that can identify and predict epileptic seizures. The system comprises different sensors capable of recording several biomedical signals in a synchronized manner. Using supervised machine learning techniques, several predictive models have been developed that classify the state of the person with epilepsy as normal, preictal (before the seizure), or ictal (during the seizure).
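
    As a hedged sketch of the kind of supervised seizure-state classification described above, the following Python snippet trains an off-the-shelf classifier on windowed features labeled normal, preictal, or ictal; the synthetic features, labels, and choice of random forest are illustrative assumptions, not the thesis's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_windows, n_features = 600, 24          # e.g. per-channel variance, line length, band power
    X = rng.normal(size=(n_windows, n_features))
    y = rng.integers(0, 3, size=n_windows)   # 0 = normal, 1 = preictal, 2 = ictal (synthetic labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["normal", "preictal", "ictal"]))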

    Complexity Science in Human Change

    This reprint encompasses fourteen contributions that offer avenues towards a better understanding of complex systems in human behavior. The phenomena studied here are generally pattern formation processes that originate in social interaction and psychotherapy. Several accounts are also given of the coordination in body movements and in physiological, neuronal, and linguistic processes. A common denominator of such pattern formation is that the complexity and entropy of the respective systems become reduced spontaneously, which is the hallmark of self-organization. The various methodological approaches to modeling such processes are presented in some detail, and results from the various methods are systematically compared and discussed. Among these approaches are algorithms for the quantification of synchrony by cross-correlational statistics, surrogate control procedures, recurrence mapping, and network models. This volume offers an informative and sophisticated resource for scholars of human change, as well as for students at advanced levels, from graduate to postdoctoral. The reprint is multidisciplinary in nature, binding together the fields of medicine, psychology, physics, and neuroscience.
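
    As a minimal, assumption-laden sketch of one approach the volume surveys, the Python snippet below quantifies synchrony between two time series via the peak of their normalized cross-correlation and tests it against circular-shift surrogates; the signals, surrogate count, and threshold-free p-value are arbitrary illustrative choices.

    import numpy as np

    def max_xcorr(x: np.ndarray, y: np.ndarray) -> float:
        """Peak of the normalized cross-correlation between two signals."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return np.max(np.correlate(x, y, mode="full")) / len(x)

    def surrogate_p_value(x, y, n_surrogates=100, seed=0):
        """Fraction of circularly shifted surrogates with synchrony >= observed."""
        rng = np.random.default_rng(seed)
        observed = max_xcorr(x, y)
        shifts = rng.integers(1, len(y) - 1, size=n_surrogates)
        null = [max_xcorr(x, np.roll(y, s)) for s in shifts]
        return float(np.mean([v >= observed for v in null]))

    t = np.linspace(0, 30, 1500)
    a = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.random.randn(t.size)
    b = np.sin(2 * np.pi * 0.2 * t + 0.3) + 0.5 * np.random.randn(t.size)
    print("surrogate p-value:", surrogate_p_value(a, b))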

    STGATE: Spatial-temporal graph attention network with a transformer encoder for EEG-based emotion recognition

    Electroencephalography (EEG) is a crucial and widely utilized technique in neuroscience research. In this paper, we introduce a novel graph neural network, the spatial-temporal graph attention network with a transformer encoder (STGATE), to learn graph representations of emotion EEG signals and improve emotion recognition performance. In STGATE, a transformer encoder captures time-frequency features, which are fed into a spatial-temporal graph attention module for emotion classification. Using a dynamic adjacency matrix, the proposed STGATE adaptively learns intrinsic connections between different EEG channels. To evaluate cross-subject emotion recognition performance, leave-one-subject-out experiments are carried out on three public emotion recognition datasets, i.e., SEED, SEED-IV, and DREAMER. The proposed STGATE model achieved state-of-the-art EEG-based emotion recognition accuracies of 90.37%, 76.43%, and 76.35% on the SEED, SEED-IV, and DREAMER datasets, respectively. The experiments demonstrate the effectiveness of the proposed STGATE model for cross-subject EEG emotion recognition and its potential for graph-based neuroscience research.
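
    The core idea named in the abstract, graph attention over EEG channels with a learned rather than fixed adjacency, can be sketched roughly as below; the layer sizes, the softmax-normalized adjacency, and the channel count are illustrative assumptions, and this is not the authors' STGATE implementation.

    import torch
    import torch.nn as nn

    class DynamicGraphAttention(nn.Module):
        def __init__(self, n_channels: int, in_dim: int, out_dim: int):
            super().__init__()
            # Learnable adjacency logits: one weight per pair of EEG channels.
            self.adj_logits = nn.Parameter(torch.zeros(n_channels, n_channels))
            self.proj = nn.Linear(in_dim, out_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, n_channels, in_dim) per-channel feature vectors.
            adj = torch.softmax(self.adj_logits, dim=-1)   # each row sums to 1
            mixed = adj @ x                                 # aggregate neighboring channels
            return torch.relu(self.proj(mixed))

    features = torch.randn(8, 62, 128)          # e.g. 62 channels, 128-dim features per channel
    layer = DynamicGraphAttention(62, 128, 64)
    print(layer(features).shape)                # torch.Size([8, 62, 64])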

    Leveraging EEG-based speech imagery brain-computer interfaces

    Speech Imagery Brain-Computer Interfaces (BCIs) provide an intuitive and flexible way of interaction via brain activity recorded during imagined speech. Imagined speech can be decoded in the form of syllables or words and captured even with non-invasive measurement methods such as electroencephalography (EEG). Over the last decade, research in this field has made tremendous progress, and prototypical implementations of EEG-based Speech Imagery BCIs are numerous. However, most work is still conducted in controlled laboratory environments with offline classification and does not find its way to real online scenarios. Within this thesis we identify three main reasons for these circumstances, namely, the mentally and physically exhausting training procedures, insufficient classification accuracies, and cumbersome EEG setups with usually high-resolution headsets. We furthermore elaborate on possible solutions to overcome the aforementioned problems and present and evaluate new methods in each of the domains. In detail, we introduce two new training concepts for imagined speech BCIs, one based on EEG activity during silent reading and the other recorded while overtly speaking certain words. Insufficient classification accuracies are addressed by introducing the concept of a Semantic Speech Imagery BCI, which classifies the semantic category of an imagined word prior to the word itself to increase the performance of the system. Finally, we investigate different techniques for electrode reduction in Speech Imagery BCIs and aim at finding a suitable subset of electrodes for EEG-based imagined speech detection, thereby simplifying the cumbersome setups. All of our presented results, together with general remarks on experiences and best practice for study setups concerning imagined speech, are summarized and are intended to act as guidelines for further research in the field, thereby leveraging Speech Imagery BCIs towards real-world application.
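
    As a rough illustration of the electrode-reduction idea (a simplification for this listing, not the thesis pipeline), the Python sketch below ranks channels by how well a single-channel log-variance feature separates two imagined words and keeps the top-scoring subset; the synthetic epochs, labels, and feature choice are assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_samples = 120, 64, 256
    epochs = rng.normal(size=(n_trials, n_channels, n_samples))   # synthetic EEG epochs
    labels = rng.integers(0, 2, size=n_trials)                    # two imagined words

    def channel_score(ch: int) -> float:
        """Cross-validated accuracy using only one channel's log-variance feature."""
        feat = np.log(epochs[:, ch, :].var(axis=1, keepdims=True))
        return cross_val_score(LogisticRegression(), feat, labels, cv=5).mean()

    scores = np.array([channel_score(c) for c in range(n_channels)])
    top_k = np.argsort(scores)[::-1][:8]     # keep the 8 highest-scoring electrodes
    print("selected channels:", top_k)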

    Measuring spectrally resolved information processing in neural data

    Background: The human brain, an incredibly complex biological system comprising billions of neurons and trillions of synapses, possesses remarkable capabilities for information processing and distributed computation. Neurons, the fundamental building blocks, perform elementary operations on their inputs and collaborate extensively to execute intricate computations, giving rise to cognitive functions and behavior. Notably, distributed information processing in the brain relies heavily on rhythmic neural activity characterized by synchronized oscillations at specific frequencies. These oscillations play a crucial role in coordinating brain activity and facilitating communication between different neural circuits [1], effectively acting as temporal windows that enable efficient information exchange within specific frequency ranges. To understand distributed information processing in neural systems, it can be helpful to break it down into its components, i.e., information transfer, storage, and modification, but this requires precise mathematical definitions for each component. Thankfully, these definitions have recently become available [2]. Information theory is a natural choice for measuring information processing, as it offers a mathematically complete description of the concepts of information and communication. The fundamental information-processing operations are considered essential prerequisites for achieving universal information processing in any system [3]. By quantifying and analyzing these operations, we gain valuable insights into the brain’s complex computations and cognitive abilities. As information processing in the brain is intricately tied to rhythmic behavior, there is a need to establish a connection between information theoretic measures and frequency components. Previous attempts to achieve frequency-resolved information theoretic measures have mostly relied on narrowband filtering [4], which comes with several known issues such as phase shifting and high false-positive rates [5], or on simplifying the computation to a few variables [6], which might result in missing important information in the analysed brain signals. Therefore, the current work aims to establish a frequency-resolved measure of two crucial components of information processing: information transfer and information storage. By proposing methodological advancements, this research seeks to shed light on the role of neural oscillations in information processing within the brain. Furthermore, a more comprehensive investigation was carried out on the communication between two critical brain regions responsible for motor inhibition in the frontal cortex, the right inferior frontal gyrus (rIFG) and the pre-supplementary motor cortex (pre-SMA). Here, neural oscillations in the beta band (12-30 Hz) have been proposed to play a pivotal role in response inhibition. A long-standing question in the field has been to disentangle which of these two brain areas first signals the stopping process and drives the other [7]. Furthermore, it was hypothesized that beta oscillations carry the information transfer between these regions. The present work addresses these methodological problems and investigates spectral information processing in neural data in three studies. Study 1 focuses on the critical role of information transfer, measured by transfer entropy, in distributed computation. Understanding the patterns of information transfer is essential for unraveling the computational algorithms in complex systems such as the brain.
    As many natural systems rely on rhythmic processes for distributed computations, a frequency-resolved measure of information transfer becomes highly valuable. To address this, a novel algorithm is presented that efficiently identifies the frequencies responsible for sending and receiving information in a network. The approach utilizes the invertible maximal overlap discrete wavelet transform (MODWT) to create surrogate data for computing transfer entropy, eliminating issues associated with phase shifts and filtering. However, measuring frequency-resolved information transfer poses a partial information decomposition problem [8] that is yet to be fully resolved. The algorithm’s performance is validated using simulated data and applied to human magnetoencephalography (MEG) recordings and ferret local field potential (LFP) recordings. In human MEG, the study unveils a complex spectral configuration of cortical information transmission, showing top-down information flow from very high frequencies (above 100 Hz) to both similarly high frequencies and frequencies around 20 Hz in the temporal cortex. Contrary to the current assumption, the findings suggest that low frequencies do not solely send information to high frequencies. In the ferret LFP recordings, the prefrontal cortex transmits information at low frequencies, specifically within the range of 4-8 Hz, while on the receiving end, V1 exhibits a preference for operating at very high frequencies (above 125 Hz). The spectrally resolved transfer entropy promises to deepen our understanding of rhythmic information exchange in natural systems, shedding light on the computational role of oscillations in cognitive functions. Study 2 focuses on the second fundamental aspect of information processing: active information storage (AIS). The AIS estimates how much information in the next measurements of a process can be predicted by examining its past state. In processes that either produce little information (low entropy) or are highly unpredictable, the AIS is low, whereas processes that are predictable but visit many different states with equal probabilities exhibit high AIS [9]. Within this context, we introduced a novel spectrally resolved AIS. Utilizing intracortical recordings of neural activity in anesthetized ferrets before and after loss of consciousness (LOC), the study reveals that the modulation of AIS by anesthesia is highly specific to different frequency bands, cortical layers, and brain regions. The effects of anesthesia on AIS are most prominent in the supragranular layers for the high/low gamma band, while the alpha/beta band exhibits the strongest decrease in AIS in the infragranular layers, in accordance with predictive coding theory. Additionally, isoflurane impacts local information processing in a frequency-specific manner: for instance, increases in isoflurane concentration lead to a decrease in AIS in the alpha frequency range but to an increase in AIS in the delta frequency range (< 2 Hz). In sum, analyzing spectrally resolved AIS provides valuable insights into changes in cortical information processing under anesthesia. With rhythmic neural activity playing a significant role in biological neural systems, the introduction of frequency-specific components in active information storage allows a deeper understanding of local information processing in different brain areas and under various conditions.
    In study 3, to further verify the pivotal role of neural oscillations in information processing, we investigated the neural network mechanisms underlying response inhibition. A long-standing debate has centered around identifying the cortical initiator of response inhibition in the beta band, with two main regions proposed: the rIFG and the pre-SMA. This third study aimed to determine which of these regions is activated first and potentially drives information exchange with the other. Using high-temporal-resolution magnetoencephalography (MEG) and a relatively large cohort of subjects, a significant breakthrough is achieved by demonstrating that the rIFG is activated significantly earlier than the pre-SMA. The onset of beta band activity in the rIFG occurred at around 140 ms after the STOP signal. Further analyses showed that the beta-band activity in the rIFG was crucial for successful stopping, as evidenced by its predictive value for stopping performance. Connectivity analysis revealed that the rIFG sends information in the beta band to the pre-SMA but not vice versa, emphasizing the rIFG’s dominance in the response inhibition process. The results provide strong support for the hypothesis that the rIFG initiates stopping and utilizes beta-band oscillations for this purpose. These findings have significant implications, suggesting the possibility of spatially localized, oscillation-based interventions for response inhibition. Conclusion: The present work proposes a novel algorithm for uncovering the frequencies at which information is transferred between sources and targets in the brain, providing valuable insights into the computational dynamics of neural processes. The spectrally resolved transfer entropy was successfully applied to experimental neural data from intracranial recordings in ferrets and MEG recordings in humans. Furthermore, the study of active information storage (AIS) under anesthesia revealed that spectrally resolved AIS offers unique insights beyond traditional spectral power analysis. By examining changes in neural information processing, the study demonstrates how AIS analysis can deepen the understanding of anesthesia’s effects on cortical information processing. Moreover, the third study’s findings provide strong evidence supporting the critical role of beta oscillations in information processing, particularly in response inhibition. The research successfully demonstrates that beta oscillations in the rIFG function as the key initiator of the response inhibition process, acting as a top-down control mechanism. The identification of beta oscillations as a crucial factor in information processing opens possibilities for further research and targeted interventions in neurological disorders. Taken together, the current work highlights the role of spectrally resolved information processing in neural systems by not only introducing novel algorithms but also successfully applying them to experimental oscillatory neural activity in relation to low-level cortical information processing (anesthesia) as well as high-level processes (cognitive response inhibition).
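
    As a hedged toy illustration of the active information storage (AIS) concept from study 2, the snippet below estimates the mutual information between a signal's next sample and its immediate past using coarse binning; the single-sample history and histogram estimator are simplifying assumptions, far cruder than the estimators used in the thesis.

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def ais_history1(x: np.ndarray, n_bins: int = 8) -> float:
        """AIS in nats for a one-sample history, via histogram mutual information."""
        edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
        symbols = np.digitize(x, edges)
        past, future = symbols[:-1], symbols[1:]
        return mutual_info_score(past, future)

    rng = np.random.default_rng(2)
    noise = rng.normal(size=5000)            # unpredictable process: low AIS
    ar1 = np.zeros(5000)
    for t in range(1, 5000):                 # predictable AR(1) process: higher AIS
        ar1[t] = 0.9 * ar1[t - 1] + rng.normal()
    print(f"AIS(noise) = {ais_history1(noise):.3f} nats, AIS(AR1) = {ais_history1(ar1):.3f} nats")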

    Motor Imagery EEG Classification Based on a Weighted Multi-branch Structure Suitable for Multisubject Data

    Objective: Electroencephalogram (EEG) signal recognition based on deep learning technology requires the support of sufficient data. However, training data scarcity usually occurs in subject-specific motor imagery tasks unless multisubject data can be used to enlarge the training set. Unfortunately, because of the large discrepancies between data distributions from different subjects, model performance can only be improved marginally, or may even worsen, by simply training on multisubject data. Method: This paper proposes a novel weighted multi-branch (WMB) structure for handling multisubject data to solve this problem, in which each branch is responsible for fitting a pair of source-target subject data and adaptive weights are used to integrate all branches, or to select the branches with the largest weights, to make the final decision. The proposed WMB structure was applied to six well-known deep learning models (EEGNet, Shallow ConvNet, Deep ConvNet, ResNet, MSFBCNN, and EEG_TCNet) and comprehensive experiments were conducted on the EEG datasets BCICIV-2a, BCICIV-2b, the high gamma dataset (HGD), and two supplementary datasets. Result: Superior results against state-of-the-art models demonstrate the efficacy of the proposed method in subject-specific motor imagery EEG classification. For example, the proposed WMB_EEGNet achieved classification accuracies of 84.14%, 90.23%, and 97.81% on BCICIV-2a, BCICIV-2b, and HGD, respectively. Conclusion: The proposed WMB structure is able to make good use of multisubject data with large distribution discrepancies for subject-specific EEG classification.
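
    A rough sketch of the weighted multi-branch idea (an assumption-laden simplification, not the paper's code): one small branch network per source subject, with learnable softmax weights combining the branch outputs into the final decision.

    import torch
    import torch.nn as nn

    class WeightedMultiBranch(nn.Module):
        def __init__(self, n_branches: int, in_dim: int, n_classes: int):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
                 for _ in range(n_branches)]
            )
            self.branch_logits = nn.Parameter(torch.zeros(n_branches))  # adaptive branch weights

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            weights = torch.softmax(self.branch_logits, dim=0)            # weights sum to 1
            outputs = torch.stack([b(x) for b in self.branches], dim=0)   # (branches, batch, classes)
            return (weights[:, None, None] * outputs).sum(dim=0)          # weighted combination

    model = WeightedMultiBranch(n_branches=8, in_dim=128, n_classes=4)
    print(model(torch.randn(16, 128)).shape)   # torch.Size([16, 4])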

    Temporal-frequency-phase feature classification using 3D-convolutional neural networks for motor imagery and movement

    Recently, convolutional neural networks (CNNs) have been widely applied in brain-computer interfaces (BCIs) based on electroencephalogram (EEG) signals. Due to the subject-specific nature of EEG signal patterns and the multi-dimensionality of EEG features, it is necessary to employ appropriate feature representation methods to enhance the decoding accuracy of EEG. In this study, we proposed a method for representing EEG temporal, frequency, and phase features, aiming to preserve the multi-domain information of EEG signals. Specifically, we generated EEG temporal segments using a sliding window strategy. Then, temporal, frequency, and phase features were extracted from different temporal segments and stacked into 3D feature maps, namely temporal-frequency-phase features (TFPF). Furthermore, we designed a compact 3D-CNN model to extract these multi-domain features efficiently. Considering the inter-individual variability in EEG data, we conducted individual testing for each subject. The proposed model achieved average accuracies of 89.86%, 78.85%, and 63.55% for the 2-class, 3-class, and 4-class motor imagery (MI) classification tasks, respectively, on the PhysioNet dataset. On the GigaDB dataset, the average accuracy for 2-class MI classification was 91.91%. For the comparison between MI and real movement (ME) tasks, the average accuracies for the 2-class task were 87.66% and 80.13% on the PhysioNet and GigaDB datasets, respectively. Overall, the method presented in this paper has obtained good results in MI/ME tasks and has good application prospects for the development of BCI systems based on MI/ME.
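
    As a hedged sketch of stacking temporal, frequency, and phase features from sliding windows into a multi-dimensional map, the snippet below computes per-window variance, mu-band power, and phase variability for each channel; the window length, frequency band, and feature choices are illustrative assumptions, not the paper's exact TFPF construction.

    import numpy as np
    from scipy.signal import hilbert, welch

    def tfpf_map(eeg: np.ndarray, fs: float = 160.0, win: int = 160, step: int = 80) -> np.ndarray:
        """eeg: (n_channels, n_samples) -> (n_windows, n_channels, 3) feature map."""
        maps = []
        for start in range(0, eeg.shape[1] - win + 1, step):
            seg = eeg[:, start:start + win]
            temporal = seg.var(axis=1)                              # time-domain energy
            freqs, psd = welch(seg, fs=fs, nperseg=win)
            mu_band = (freqs >= 8) & (freqs <= 13)
            frequency = psd[:, mu_band].mean(axis=1)                # mu-band power
            phase = np.angle(hilbert(seg, axis=1)).std(axis=1)      # phase variability
            maps.append(np.stack([temporal, frequency, phase], axis=1))
        return np.stack(maps)

    eeg = np.random.randn(64, 640)             # 64 channels, 4 s at 160 Hz (synthetic)
    print(tfpf_map(eeg).shape)                 # (n_windows, 64, 3)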