2,176 research outputs found

    Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations

    Automated methods for real-time, unobtrusive monitoring of human ambulation, activity, and wellness, together with algorithmic data analysis, have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These studies have produced a large body of literature. This paper presents a holistic articulation of the research and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device frameworks and sensor types; data collection, processing, and analysis; and applications, limitations, and challenges. The aim is to present a systematic and comprehensive survey of the literature in order to identify research gaps and prioritize future research directions.

    Brain-Computer Interface Based on Generation of Visual Images

    This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, which will allow large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy similar to that of a more sophisticated multi-class common spatial patterns (MCSP) classifier.
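The covariance-based Bayesian approach mentioned above can be sketched as follows: each class is modeled as a zero-mean Gaussian over the EEG channels, and a new epoch is assigned to the class whose covariance model gives it the highest log-likelihood. This is an illustrative reconstruction under assumed array shapes, not the paper's exact implementation.

```python
import numpy as np

def class_covariances(epochs, labels):
    """Estimate one spatial covariance matrix per class.

    epochs: array (n_trials, n_channels, n_samples); labels: (n_trials,).
    """
    covs = {}
    for c in np.unique(labels):
        trials = epochs[labels == c]
        # average the per-trial sample covariances
        covs[c] = np.mean([t @ t.T / t.shape[1] for t in trials], axis=0)
    return covs

def bayes_classify(epoch, covs):
    """Assign the class whose zero-mean Gaussian model gives the
    highest log-likelihood to the epoch's samples."""
    best, best_ll = None, -np.inf
    n = epoch.shape[1]
    for c, S in covs.items():
        _, logdet = np.linalg.slogdet(S)
        inv = np.linalg.inv(S)
        # sum of per-sample log-likelihoods (up to an additive constant)
        ll = -0.5 * (n * logdet + np.trace(epoch.T @ inv @ epoch))
        if ll > best_ll:
            best, best_ll = c, ll
    return best
```

Because only second-order statistics are used, the classifier stays cheap: one inverse and one log-determinant per class, computed once at training time if desired.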

    Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interfaces (BCIs). However, the MI states that can be classified are limited, and classification accuracy rates are low because of the signals' nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. For electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used to remove EOG artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imagined movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex-algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance.
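The final stage combines KNN and SVM. One plausible fusion rule, sketched below, trusts the SVM when its class probability is confident and otherwise falls back to a KNN majority vote; the paper's exact combination scheme is not given here, and the confidence threshold is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

class HybridKnnSvm:
    """Toy KNN+SVM hybrid: use the SVM prediction when its class
    probability clears a threshold, otherwise defer to KNN."""

    def __init__(self, threshold=0.6, k=5):
        self.svm = SVC(probability=True, random_state=0)
        self.knn = KNeighborsClassifier(n_neighbors=k)
        self.threshold = threshold

    def fit(self, X, y):
        self.svm.fit(X, y)
        self.knn.fit(X, y)
        return self

    def predict(self, X):
        proba = self.svm.predict_proba(X)
        svm_pred = self.svm.classes_[np.argmax(proba, axis=1)]
        knn_pred = self.knn.predict(X)
        # keep the SVM answer only where it is confident
        confident = proba.max(axis=1) >= self.threshold
        return np.where(confident, svm_pred, knn_pred)
```

Such a cascade lets the local KNN vote handle ambiguous samples near class boundaries, which is one way a hybrid can lift accuracy on minority classes.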

    Graphene textiles towards soft wearable interfaces for electroocular remote control of objects

    Study of eye movements (EMs) and measurement of the resulting biopotentials, referred to as electrooculography (EOG), may find increasing use in applications within the domains of activity recognition, context awareness, mobile human-computer interaction (HCI), and personalized medicine, provided that the limitations of conventional "wet" electrodes are addressed. To overcome these limitations, this work reports for the first time the use and characterization of graphene-based electroconductive textile electrodes for EOG acquisition using a custom-designed embedded eye tracker. This self-contained wearable device consists of a headband with integrated textile electrodes and a small, pocket-worn, battery-powered hardware module with real-time signal processing that can stream data to a remote device over Bluetooth. The feasibility of the developed gel-free, flexible, dry textile electrodes was experimentally validated through side-by-side comparison with pre-gelled, wet, silver/silver chloride (Ag/AgCl) electrodes, where the simultaneously and asynchronously recorded signals displayed correlations of up to ~87% and ~91%, respectively, over durations reaching one hundred seconds, repeated on several participants. Additionally, an automatic EM detection algorithm is developed, and the performance of the graphene-embedded "all-textile" EM sensor and its application as a control element toward HCI is experimentally demonstrated. The excellent success rate, ranging from 85% up to 100% for eleven different EM patterns, demonstrates the applicability of the proposed algorithm in wearable EOG-based sensing and HCI applications with graphene textiles.
The system-level integration and holistic design approach presented herein, which spans from the fundamental materials level up to the architecture and algorithm stage, will be instrumental in advancing the state of the art in wearable electronic devices based on sensing and processing of electrooculograms.
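The reported ~87%/~91% agreement between textile and Ag/AgCl recordings is the kind of figure produced by a sample correlation between two simultaneously acquired channels; a minimal sketch of that computation:

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson correlation between two simultaneously recorded signals,
    e.g. a textile-electrode EOG channel and a reference Ag/AgCl channel."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Mean removal makes the metric insensitive to the different DC offsets the two electrode types exhibit, so only the shared EOG waveform shape is compared.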

    Design of a Wearable Eye-Movement Detection System Based on Electrooculography Signals and Its Experimental Validation.

    In the assistive research area, human-computer interface (HCI) technology is used to help people with disabilities by conveying their intentions and thoughts to the outside world. Many HCI systems based on eye movement have been proposed to assist people with disabilities. However, due to the complexity of the necessary algorithms and the difficulty of hardware implementation, there are few general-purpose designs that consider practicality and stability in real life. To address these limitations, an HCI system based on electrooculography (EOG) is proposed in this study. The proposed classification algorithm provides eye-state detection, covering the fixation, saccade, and blinking states. Moreover, the algorithm can distinguish among ten kinds of saccade movements (up, down, left, right, farther left, farther right, up-left, down-left, up-right, and down-right). In addition, we developed an HCI system based on this eye-movement classification algorithm, providing an eye-dialing interface that can improve the lives of people with disabilities. The results illustrate the good performance of the proposed classification algorithm, and the EOG-based system, which can detect ten different eye-movement features, can be utilized in real-life applications.
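A rule-based classifier of the kind described can be sketched by thresholding the horizontal and vertical EOG deflection amplitudes; the thresholds, sign conventions, and near/far amplitude split below are illustrative assumptions, not the paper's calibrated values.

```python
def classify_saccade(h, v, small=1.0, large=2.0):
    """Map horizontal (h) and vertical (v) EOG deflection amplitudes to
    the ten saccade classes listed in the abstract, plus fixation.

    Positive h = rightward, positive v = upward (assumed conventions).
    `small` separates fixation from saccades; `large` separates near
    from "farther" horizontal targets.
    """
    if abs(h) < small and abs(v) < small:
        return "fixation"
    if abs(v) >= small and abs(h) >= small:
        # oblique saccade: combine the two axes
        vert = "up" if v > 0 else "down"
        horiz = "left" if h < 0 else "right"
        return f"{vert}-{horiz}"
    if abs(v) >= small:
        return "up" if v > 0 else "down"
    # purely horizontal: distinguish near from far targets by amplitude
    if h < 0:
        return "farther left" if abs(h) >= large else "left"
    return "farther right" if h >= large else "right"
```

In a real system, h and v would be the peak deflections extracted from band-passed EOG channels after blink rejection, and the thresholds would come from a per-user calibration.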

    A Python-based Brain-Computer Interface Package for Neural Data Analysis

    Anowar, Md Hasan, A Python-based Brain-Computer Interface Package for Neural Data Analysis. Master of Science (MS), December 2020, 70 pp., 4 tables, 23 figures, 74 references. Although a growing amount of research has been dedicated to neural engineering, only a handful of software packages are available for brain-signal processing. Popular brain-computer interface packages depend on commercial software products such as MATLAB. Moreover, almost every brain-computer interface package is designed for a specific neuro-biological signal; there is no single Python-based package that supports motor imagery, sleep, and stimulated brain-signal analysis. The need for a brain-computer interface package that can serve as a free alternative to commercial software motivated me to develop a toolbox on the Python platform. In this thesis, the structure of MEDUSA, a brain-computer interface toolbox, is presented. The features of the toolbox are demonstrated with publicly available data sources. The MEDUSA toolbox provides a valuable tool for biomedical engineers and computational neuroscience researchers.
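A typical primitive such a toolbox exposes is band-pass filtering of a raw signal, e.g. to isolate the 8-30 Hz mu/beta band used in motor-imagery analysis. The sketch below uses SciPy directly; it is not the MEDUSA API, just an illustration of the kind of building block involved.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low=8.0, high=30.0, order=4):
    """Zero-phase Butterworth band-pass filter.

    fs is the sampling rate in Hz; filtfilt runs the filter forward and
    backward so the output has no phase lag relative to the input.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)
```

Zero-phase filtering matters in BCI pipelines because latency-shifted features would smear event-locked analysis windows.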

    Is implicit motor imagery a reliable strategy for a brain computer interface?

    Explicit motor imagery (eMI) is a widely used brain-computer interface (BCI) paradigm, but not everybody can accomplish this task. Here we propose a BCI based on implicit motor imagery (iMI). We compared classification accuracy between eMI and iMI of the hands. Fifteen able-bodied people were asked to judge the laterality of hand images presented on a computer screen in a lateral or medial orientation. This judgement task is known to require mental rotation of a person's own hands, which in turn is thought to involve iMI. The subjects were also asked to perform eMI of the hands. Their electroencephalography (EEG) was recorded. Linear classifiers were designed based on common spatial patterns. For discrimination between the left and right hand, the classifier achieved a maximum of 81 ± 8% accuracy for eMI and 83 ± 3% for iMI. These results show that iMI can achieve classification accuracy similar to eMI. Additional classification was performed between iMI in medial and lateral orientations of a single hand; the classifier achieved 81 ± 7% for the left and 78 ± 7% for the right hand, which indicates distinctive spatial patterns of cortical activity for iMI of a single hand in different orientations. These results suggest that a brain-computer interface based on iMI may be constructed for people who cannot perform explicit imagination, for rehabilitation of movement, or for treatment of bodily spatial neglect.
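The common-spatial-patterns step behind these linear classifiers can be sketched as follows: spatial filters are chosen to maximize variance for one class while minimizing it for the other, and log-variance features of the filtered signals feed a linear classifier. Array shapes and the number of filter pairs are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b, n_pairs=2):
    """Common Spatial Patterns via a generalized eigendecomposition.

    Epoch arrays have shape (n_trials, n_channels, n_samples); returns
    a (2 * n_pairs, n_channels) matrix of spatial filters.
    """
    def mean_cov(epochs):
        covs = [t @ t.T / np.trace(t @ t.T) for t in epochs]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # eigenvectors of Ca against the composite covariance Ca + Cb
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # keep filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def csp_features(epoch, W):
    """Normalized log-variance features of the spatially filtered epoch."""
    Z = W @ epoch
    var = np.var(Z, axis=1)
    return np.log(var / var.sum())
```

Filters from opposite ends of the spectrum respond most strongly to one class each, which is why the resulting log-variance features are nearly linearly separable.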

    Robot Control Using Anticipatory Brain Potentials (Upravljanje robotom pomoću anticipacijskih potencijala mozga)

    Recently, biomedical engineering has shown advances in using brain potentials for control of physical devices, in particular robots. This paper focuses on controlling robots using anticipatory brain potentials. An oscillatory brain potential generated in the CNV (contingent negative variation) flip-flop paradigm is used to trigger a sequence of robot behaviors. An experimental illustration is given in which two robotic arms, driven by a brain expectancy-potential oscillation, cooperatively solve the well-known Towers of Hanoi problem.
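The control scheme pairs a pre-planned recursive Towers of Hanoi solution with a trigger-paced executor: each detected CNV oscillation advances the plan by one move. The class and function names below are illustrative, not the authors' implementation.

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Recursive Towers of Hanoi plan: the full sequence of disk moves,
    each a (from_peg, to_peg) pair."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

class TriggeredExecutor:
    """Step through a pre-planned move sequence one move per brain
    trigger, mimicking how a CNV-derived signal could pace robot arms."""

    def __init__(self, moves):
        self.moves = list(moves)
        self.done = []

    def on_trigger(self):
        """Execute the next move; return True when the plan is finished."""
        if self.moves:
            self.done.append(self.moves.pop(0))
        return len(self.moves) == 0
```

Decoupling planning from pacing is the key point: the brain signal need only deliver a reliable binary trigger, while the combinatorial work is done offline by the recursive plan.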