Classification of Spike-wave Discharge with STFT Approach
Spike-Wave Discharge (STD) is an abnormal brainwave pattern that can give rise to an epileptic seizure. Such brainwaves can be recorded with an electroencephalogram (EEG) device. The purpose of this paper is to classify STDs occurring in epilepsy patients using k-Nearest Neighbor (kNN) with a Short-Time Fourier Transform (STFT) approach. The EEG signals were downloaded from an established website and comprised epilepsy and non-epilepsy samples. Artifact removal was performed to ensure that the EEG signals and the resulting STFTs were clean. Energy was then extracted from the STFT for four bands, namely the Delta, Theta, Alpha and Beta bands. The experimental results showed that kNN was able to classify the STD waves with 100% accuracy for the tested 80:20 training-to-testing ratio.
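The pipeline above (STFT band energies fed to a kNN vote) can be sketched as follows; the sampling rate, window parameters and synthetic signals are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

FS = 173.61  # assumed sampling rate (that of a widely used public EEG dataset)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_energies(x, fs=FS, win=256, hop=128):
    """STFT via windowed FFT, then total spectral energy per EEG band."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectrogram
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return np.array([spec[:, (freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS.values()])

def knn_predict(x_feat, X_train, y_train, k=3):
    """Plain k-Nearest-Neighbor vote on Euclidean distance."""
    d = np.linalg.norm(X_train - x_feat, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

A ~3 Hz oscillation (the classic spike-wave rate) concentrates its energy in the Delta band, so the four-band energy vector separates it cleanly from, say, a 10 Hz Alpha rhythm.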
Classification of EEG Signal for Body Earthing Application
Stress is the way our body reacts to threats and demands of any kind. Stress occurs when the nervous system releases stress hormones, including adrenaline and cortisol, that trigger the body's emergency response. The body earthing technique is used to address this problem: body earthing is a method of neutralizing positive and negative charge in the human body by connecting it to the earth. EEG signals can be used to verify the positive effect of body earthing, and this project focuses on the classification of EEG signals for the body earthing application. First, EEG brainwave signals were recorded with an Emotiv EPOC headset from 30 subjects, before and after body earthing. The Alpha and Beta bands were filtered using a Butterworth band-pass filter. After filtering, a signal-amplitude threshold of -100 μV to 100 μV was applied to remove noise and artifacts. For feature extraction, the Short-Time Fourier Transform (STFT) and the Continuous Wavelet Transform (CWT) were used. Lastly, an Artificial Neural Network (ANN) model was employed to classify the EEG signals taken from the samples before and after body earthing. The number of neurons chosen for this project was 55, with a mean square error of 0.0023738. The results showed that Alpha-band signals were lower before body earthing than after, whereas Beta-band signals were higher before body earthing than after. The increased Alpha-band signal indicates that the subjects were in a relaxed state, while the decreased Beta-band signal indicates a stressed state. These results hold for both the STFT and CWT features. Based on the confusion matrix, the ANN classification yields 86.7% accuracy.
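A minimal sketch of the band-pass filtering and amplitude-threshold artifact rejection described above, assuming the Emotiv EPOC's nominal 128 Hz sampling rate and SciPy's Butterworth design:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128.0  # nominal Emotiv EPOC sampling rate (assumption)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass, e.g. Alpha (8-13 Hz) or Beta (13-30 Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def reject_artifacts(x, limit_uv=100.0):
    """Keep only samples whose amplitude lies within the +/-100 uV threshold."""
    return x[np.abs(x) <= limit_uv]
```

`filtfilt` runs the filter forward and backward so the filtered band stays phase-aligned with the raw signal, which matters when comparing before/after recordings.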
Developing Machine Learning Algorithms for Behavior Recognition from Deep Brain Signals
Parkinson's disease (PD) is a neurodegenerative condition and movement disorder that presents with symptoms such as tremor, muscle rigidity and slowness of movement. Deep brain stimulation (DBS) is an FDA-approved surgical therapy for essential tremor and PD. Although DBS substantially alleviates the motor signs of PD, it can cause cognitive side effects and speech malfunction, mainly because the stimulation signal is neither adaptive nor optimal with respect to the patient's current state. A behavior-adapted closed-loop DBS system may reduce side effects and power consumption by adjusting the stimulation parameters to the patient's needs.
Behavior recognition based on physiological feedback plays a key role in designing the next generation of closed-loop DBS systems. Hence, this dissertation concentrates on: (1) investigating the capability of local field potential (LFP) signals recorded from the subthalamic nucleus (STN) to identify behavioral activities; (2) developing advanced machine learning algorithms to recognize behavioral activities using LFP signals; and (3) investigating the effects of medication and stimulation pulses on the behavior recognition task as well as on the characteristics of the LFP signal.
STN-LFP is a strong physiological signal candidate since the stimulation device itself can record it, eliminating the need for additional sensors. The continuous wavelet transform is utilized for time-frequency analysis of STN-LFPs. Experimental results demonstrate that different behaviors create different modulation patterns in the STN within the beta frequency range.
A hierarchical classification structure is proposed to perform the behavior classification through a multi-level framework. The beta frequency components of STN-LFPs recorded from all contacts of DBS leads are combined through an MKL-based SVM classifier for behavior classification. Alternatively, the inter-hemispheric synchronization of the LFP signals measured by an FFT-based synchronization approach is utilized to pair up the LFP signals from left and right STNs. Using these rearranged LFP signals reduces the computational cost significantly while keeping the classification ability almost unchanged.
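The FFT-based pairing step might look like the following; this normalized beta-band cross-spectrum is an assumed stand-in for the dissertation's synchronization measure, not its exact formulation:

```python
import numpy as np

def beta_synchrony(x, y, fs, band=(13.0, 30.0)):
    """FFT-based synchronization score in [0, 1] between two LFP channels:
    the magnitude of the beta-band cross-spectrum, normalized by the
    beta-band powers of each channel."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    m = (f >= band[0]) & (f < band[1])
    num = np.abs(np.sum(X[m] * np.conj(Y[m])))
    den = np.sqrt(np.sum(np.abs(X[m]) ** 2) * np.sum(np.abs(Y[m]) ** 2))
    return float(num / den)
```

Scoring every left/right contact pair with such a measure and keeping only the best-synchronized pairs is one way the channel count, and hence the computational cost, could be reduced before classification.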
LFP-Net, a customized deep convolutional neural network (CNN) approach to behavior classification, is also proposed. The CNNs learn different feature maps based on the beta power patterns associated with different behaviors. The features extracted by the CNNs are passed through fully connected layers and then to a softmax layer for classification.
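A toy forward pass in the spirit of that architecture (convolution to feature maps, pooling, a fully connected layer, then softmax); the layer sizes are arbitrary and this NumPy sketch omits training entirely:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid-mode 1-D convolutions: one feature map per kernel."""
    return np.array([np.convolve(x, k[::-1], mode="valid") for k in kernels])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lfp_net_forward(x, kernels, W, b):
    """Toy forward pass: conv -> ReLU -> global average pool -> dense -> softmax."""
    fmaps = np.maximum(conv1d(x, kernels), 0)   # ReLU feature maps
    pooled = fmaps.mean(axis=1)                 # global average pooling
    return softmax(W @ pooled + b)              # class probabilities
```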
The effect of medication and of stimulation "off/on" conditions on the characteristics of LFP signals and on behavior classification performance is studied. The beta power of LFP signals under different stimulation and medication paradigms is investigated. Experimental results confirm that beta power is suppressed significantly when patients receive medication or therapeutic stimulation. The results also show that behavior classification performance is not impacted by different medication or stimulation conditions.
Identifying human behavioral activities from physiological signals is a stepping stone toward adaptive closed-loop DBS systems. To design such systems, however, there are other open questions that need to be addressed, which are beyond the scope of this dissertation, such as developing event-related biomarkers, customizing the parameters of the DBS system based on the patient's current state, and investigating the power consumption and computational complexity of the behavior recognition algorithms.
Optimization of Electroencephalograph-Based Classification for Imaginary Motion Brain Computer Interface Study
Using electroencephalography (EEG) to detect imaginary motions from brain waves in order to interface humans and computers is a nascent and challenging field that has developed rapidly over the past few decades. The technique involves electrodes attached to the scalp of the patient, and the signals generated by the brain during the patient's thought process are captured and recorded on a computer. This technique of human-computer interfacing is termed a Brain Computer Interface (BCI). Disability is a serious problem in our nation, and BCI is therefore extremely important for people who are incapable of communicating due to spinal cord injury. The technique uses brain signals to make decisions, control objects and communicate with the world by integrating the brain with peripheral devices and systems, which requires some intelligence to classify these motions. Neural networks have been used as a means to classify motions; however, the accuracy of classification for certain motions was limited. The novelty of the proposed approach lies in using a majority-vote system over a network of artificial neural networks (ANNs) to optimally classify imaginary motions performed by multiple subjects. Three kinds of imaginary motions were classified: imaginary left-hand movement, imaginary right-hand movement, and imagination of words starting with the same letter. Using an optimized set of electrodes, classification accuracy was optimized for the three users both as a group and individually. The optimization procedure was conducted by ranking the electrodes according to their individual classification accuracy and then eliminating the electrodes with the lowest accuracies. The group optimization of the 3 subjects together resulted in an electrode structure consisting of 15 electrodes with a relatively high classification accuracy of almost 80%.
The individual optimization for each subject resulted in an electrode structure of 20 electrodes for subjects 1 and 3, with classification accuracies of 63.63% and 84.33% respectively, and a single-electrode structure for subject 2, with an accuracy of 94.01%. The overall average classification accuracy across all users with individually optimized electrodes was as high as 82.32%.
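The majority-vote combination of the per-network predictions can be sketched as follows (ties go to the lower class label, an arbitrary but deterministic choice):

```python
import numpy as np

def majority_vote(predictions):
    """Plurality vote across ANNs: `predictions` has shape
    (n_networks, n_samples); returns one label per sample."""
    predictions = np.asarray(predictions)
    n_classes = int(predictions.max()) + 1
    counts = np.stack([np.bincount(col, minlength=n_classes)
                       for col in predictions.T])   # (n_samples, n_classes)
    return counts.argmax(axis=1)
```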
Evaluating EEG-EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation
Musculoskeletal disorders are the biggest cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices.
One alternative is to incorporate bioelectrical-signal-based machine learning into the system, allowing simpler controller designs to be augmented by supplemental brain (electroencephalography, EEG) and muscle (electromyography, EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to extend the capabilities of EEG-EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG-EMG fusion and to develop a novel control system based on the incorporation of EEG-EMG fusion classifiers.
A dataset of EEG and EMG signals was collected during dynamic elbow flexion-extension motions and used to develop EEG-EMG fusion models that classify task weight as well as motion intention. A variety of fusion methods were investigated, such as Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG-EMG fusion can classify more indirect tasks.
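Decision-level Weighted Average fusion reduces to averaging the two classifiers' class-probability vectors before taking the argmax; the 0.4/0.6 weighting below is illustrative, not the thesis's tuned value:

```python
import numpy as np

def fuse_decisions(p_eeg, p_emg, w_eeg=0.4):
    """Weighted-average decision-level fusion of two classifiers'
    class-probability vectors (rows are samples, columns are classes)."""
    p = w_eeg * np.asarray(p_eeg) + (1.0 - w_eeg) * np.asarray(p_emg)
    return p.argmax(axis=-1)
```

Note that fusion can overturn one modality: a confident EMG vote can outweigh a weak EEG vote for the same sample, which is the point of combining them.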
A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach, dictated by external load estimations from an EEG-EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. The performance of the TWSC was evaluated using a purpose-built upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control; however, the results did demonstrate the feasibility of prediction debouncing, showing that it provided smoother device motion. Continued development of the TWSC and of EEG-EMG fusion techniques will ultimately result in wearable devices that adapt to changing loads more effectively, improving the user experience during operation.
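Prediction debouncing, as described, accepts a new class label only after it persists across several consecutive predictions; a sketch with an assumed hold count of three:

```python
def debounce(predictions, hold=3):
    """Emit the last accepted label until a new label has appeared
    `hold` times in a row, suppressing transient misclassifications."""
    out = []
    current, candidate, streak = predictions[0], None, 0
    for p in predictions:
        if p == current:                      # agreement resets the challenger
            candidate, streak = None, 0
        elif p == candidate:                  # challenger repeats
            streak += 1
            if streak >= hold:                # held long enough: switch
                current, candidate, streak = p, None, 0
        else:                                 # new challenger appears
            candidate, streak = p, 1
        out.append(current)
    return out
```

The trade-off is latency: a genuine change in task weight is only reflected `hold` predictions after it begins, in exchange for smoother gain-schedule switching.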
A framework to measure human behaviour whilst reading
The brain is the most complex object in the known universe: it gives humans a sense of being and characterises human behaviour. Building models of brain functions is perhaps the most fascinating scientific challenge of the 21st century. Reading is a significant cognitive process in the human brain that plays a critical role in the vital process of learning and in performing some daily activities. The study of human behaviour during reading has long been an area of interest for researchers in different fields of science. This thesis provides a novel framework, called ARSAT (Assisting Researchers in the Selection of Appropriate Technologies), for measuring the behaviour of humans while they read text. The ARSAT framework aims to assist researchers in selecting and applying appropriate technologies to measure the behaviour of a person who is reading text. It will assist researchers who investigate the reading process and find it difficult to select appropriate theories, metrics, data collection methods and data analytics techniques. The framework enhances its users' ability to select appropriate metrics indicating the factors that affect the characterisation of different aspects of human behaviour during the reading process. As will be shown in this research study, human behaviour is characterised by a complicated interplay of action, cognition and emotion. The ARSAT framework also facilitates the selection of appropriate sensory technologies that can be used to monitor and collect data for the metrics. Moreover, this research study introduces BehaveNet, a novel Deep Learning modelling approach that can be used for training Deep Learning models of human behaviour from the sensory data collected. In this thesis, a comprehensive literature study is presented that was conducted to acquire adequate knowledge for designing the ARSAT framework.
In order to identify the contributing factors that affect the reading process, an overview of some existing theories of reading is provided. Furthermore, a number of sensory technologies and techniques that can be applied to monitoring changes in the metrics indicating those factors are also demonstrated. Only technologies that are commercially available on the market are recommended by the ARSAT framework. A variety of Machine Learning techniques were investigated when designing BehaveNet, which takes advantage of the complementarity of Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks. The design of a Human Behaviour Monitoring System (HBMS), built by utilising the ARSAT framework to recognise three attention-related activities, is also presented in this research study. Reading printed text, speaking out loud and watching a programme on TV were proposed as activities between which a person may unintentionally shift attention away from reading. Among the sensory devices recommended by the ARSAT framework, the Muse headband, an Electroencephalography (EEG) and head-motion-sensing wearable device, was selected to track forehead EEG and the person's head movements. EEG and 3-axis accelerometer data were recorded from eight participants while they read printed text and while they performed the two other activities. An imbalanced dataset consisting of over 1.2 million rows of noisy data was created and used to build a model of the activities (60% training and 20% validation data) and to evaluate the model (the remaining 20% of the data). The efficiency of the framework is demonstrated by comparing the performance of models built using BehaveNet with that of models built using a number of competing Deep Learning models for raw EEG and accelerometer data that have attained state-of-the-art performance.
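The 60/20/20 train/validation/test division amounts to a shuffled index partition; a minimal sketch (the seed is arbitrary):

```python
import numpy as np

def split_60_20_20(n_rows, seed=0):
    """Shuffled 60/20/20 train/validation/test index partition."""
    idx = np.random.default_rng(seed).permutation(n_rows)
    a, b = int(0.6 * n_rows), int(0.8 * n_rows)
    return idx[:a], idx[a:b], idx[b:]
```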
The classification results are evaluated with several metrics, including classification accuracy, F1 score, the confusion matrix, the Receiver Operating Characteristic (ROC) curve and the Area Under the Curve (AUC) score. Considering these results, BehaveNet contributes to the body of knowledge as an approach for measuring human behaviour using sensory devices. In comparison with the other models, the models built using BehaveNet attained better performance when classifying data from two EEG channels (Accuracy = 95%; AUC = 0.99; F1 = 0.95), data from a single EEG channel (Accuracy = 85%; AUC = 0.96; F1 = 0.83), accelerometer data (Accuracy = 81%; AUC = 0.9; F1 = 0.76) and all of the data in the dataset (Accuracy = 97%; AUC = 0.99; F1 = 0.96). The dataset and the source code of this project are also published on the Internet for the benefit of the scientific community. The Muse headband is also shown to be an economical, standard wearable device that can be used successfully in behavioural research.
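Accuracy and per-class F1 can both be derived from the confusion matrix alone; a NumPy sketch:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true labels, columns are predicted labels."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def accuracy(cm):
    return np.trace(cm) / cm.sum()

def f1_per_class(cm):
    """Per-class F1 from precision (column sums) and recall (row sums)."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    return 2 * precision * recall / np.maximum(precision + recall, 1e-12)
```

On an imbalanced dataset like the one described, per-class F1 is more informative than accuracy, since a model can score high accuracy by favouring the majority class.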
EVA London 2022: Electronic Visualisation and the Arts
The Electronic Visualisation and the Arts London 2022 Conference (EVA London 2022) is co-sponsored by the Computer Arts Society (CAS) and BCS, the Chartered Institute for IT, of which the CAS is a Specialist Group. Of course, this has been a difficult time for all conferences, with the Covid-19 pandemic. For the first time since 2019, the EVA London 2022 Conference is a physical conference. It is also an online conference, as it was in the previous two years. We continue with publishing the proceedings, both online, with open access via ScienceOpen, and in our traditional printed form, for the second year in full colour. Over recent decades, the EVA London Conference on Electronic Visualisation and the Arts has established itself as one of the United Kingdom's most innovative and interdisciplinary conferences. It brings together a wide range of research domains to celebrate a diverse set of interests, with a specialised focus on visualisation. The long and short papers in this volume cover varied topics concerning the arts, visualisations, and IT, including 3D graphics, animation, artificial intelligence, creativity, culture, design, digital art, ethics, heritage, literature, museums, music, philosophy, politics, publishing, social media, and virtual reality, as well as other related interdisciplinary areas.
The EVA London 2022 proceedings present a wide spectrum of papers, demonstrations, Research Workshop contributions, other workshops, and, for the seventh year, the EVA London Symposium, in the form of an opening morning session with three invited contributors. The conference includes a number of other associated evening events, including ones organised by the Computer Arts Society, Art in Flux, and EVA International. As in previous years, there are Research Workshop contributions in this volume, aimed at encouraging participation by postgraduate students and early-career artists, accepted either through the peer-review process or directly by the Research Workshop chair. The Research Workshop contributors are offered bursaries to aid participation. In particular, EVA London liaises with Art in Flux, a London-based group of digital artists. The EVA London 2022 proceedings include long papers and short "poster" papers from international researchers inside and outside academia, from graduate artists, PhD students, industry professionals, established scholars, and senior researchers, who value EVA London for its interdisciplinary community. The conference also features keynote talks. A special feature this year is support for Ukrainian culture after the invasion of Ukraine earlier in the year. This publication has resulted from a selective peer-review process, fitting as many excellent submissions as possible into the proceedings.
This year, submission numbers were lower than in previous years, most likely due to the pandemic and a new requirement to submit drafts of long papers for review as well as abstracts. It is still pleasing to have so many good proposals from which to select the papers that have been included. EVA London is part of a larger network of EVA international conferences. EVA events have been held in Athens, Beijing, Berlin, Brussels, California, Cambridge (both UK and USA), Canberra, Copenhagen, Dallas, Delhi, Edinburgh, Florence, Gifu (Japan), Glasgow, Harvard, Jerusalem, Kiev, Laval, London, Madrid, Montreal, Moscow, New York, Paris, Prague, St Petersburg, Thessaloniki, and Warsaw. Further venues for EVA conferences are very much encouraged by the EVA community. As noted earlier, this volume is a record of accepted submissions to EVA London 2022. Associated online presentations are generally recorded and made available online after the conference.
Quantifying Quality of Life
This open access book describes technological methods and tools for the objective, quantitative assessment of Quality of Life (QoL), appraises technology-enabled methods for incorporating QoL measurements in medicine, and highlights the success factors for the adoption and scaling of those methods. It presents the rise of technology-enabled methods and tools for objective, quantitative QoL assessment, following the WHOQOL model, and is an in-depth resource describing and examining state-of-the-art, minimally obtrusive, ubiquitous technologies. Highlighting the factors required for the adoption and scaling of technology-enabled methods and tools for QoL assessment, it also describes how these technologies can be leveraged for behaviour change, disease prevention, health management and long-term QoL enhancement in populations at large. Quantifying Quality of Life: Incorporating Daily Life into Medicine fills a gap in the field of QoL by providing assessment methods, techniques and tools. These assessments differ from current methods, which are mostly infrequent, subjective, qualitative, memory-based, context-poor and sparse. It is therefore an ideal resource for physicians, physicians in training, software and hardware developers, computer scientists, data scientists, behavioural scientists, entrepreneurs, and healthcare leaders and administrators seeking an up-to-date resource on this subject.