6 research outputs found

    Design and development of eye movement data acquisition kit

    Several studies have sought to improve quality of life for people with tetraplegia. Eye movement has become an attractive topic in rehabilitation engineering because it can serve as a communication channel for disabled people. However, previous research has rarely provided a design appropriate for users with tetraplegia. Motivated by this, a new eye movement data acquisition kit was developed. This paper describes the design of an electrooculography (EOG) data acquisition kit for users with tetraplegia, covering the proper electrode positions, the prototype, and the signal conditioning circuits. The kit was then used to acquire eye signals for leftward and rightward eye movements. The acquired data can serve as a significant communication tool for people with tetraplegia. The results show that the kit, equipped with proper signal conditioning, is able to acquire the eye movement signal
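The left/right discrimination described above can be sketched as a simple threshold rule on the conditioned horizontal EOG channel. This is a minimal illustration, not the paper's implementation; the polarity convention (rightward saccade drives the signal positive) and the threshold value are assumptions.

```python
# Hypothetical sketch: classifying left/right eye movement from a
# conditioned horizontal EOG window. Threshold and polarity are assumed.

def classify_eog(samples, threshold=0.2):
    """Return 'left', 'right', or 'rest' from a window of EOG samples (volts)."""
    baseline = sum(samples) / len(samples)      # remove DC offset / drift
    peak = max(samples, key=lambda v: abs(v - baseline))
    deflection = peak - baseline                # signed saccade amplitude
    if deflection > threshold:
        return "right"
    if deflection < -threshold:
        return "left"
    return "rest"

# Example: a window with a strong positive deflection.
window = [0.0, 0.05, 0.4, 0.45, 0.1, 0.0]
print(classify_eog(window))  # right
```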

    Noncontact brain-computer interface based on steady-state pupil light reflex using independent bilateral eyes stimulation

    Steady-state visual evoked potential (SSVEP), which uses blinking light stimulation to estimate the attended target, is a known communication technique for people with severe motor disabilities such as ALS and locked-in syndrome. Recently, it was reported that pupil diameter oscillation based on the pupillary light reflex can be observed when a subject attends a target blinking at a constant frequency. This suggests the possibility of a noncontact BCI that uses pupillometers as an alternative to scalp electrodes. In this study, we increase the number of communication channels by stimulating each eye alone or both eyes in combination with different frequencies; the number of selectable targets thereby becomes twice the number of frequencies. Experiments were conducted with three healthy participants. We prepared six target patterns from three frequencies and detected the target using the correlation coefficient between the power spectra of the pupil diameter and the stimulus signal. An average classification accuracy of approximately 83.4% across the three participants was achieved. These findings demonstrate the feasibility of noncontact BCI systems
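The detection step, correlating the pupil-diameter power spectrum with each stimulus's power spectrum, can be sketched as follows. This is a minimal single-signal illustration with a naive DFT; the frequencies, signal length, and single-eye simplification are assumptions (the actual study combines independent bilateral stimulation to double the target count).

```python
import math

def power_spectrum(x):
    """Naive DFT power spectrum (|X[k]|^2) of a real signal."""
    n = len(x)
    return [
        sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n)) ** 2
        + sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n)) ** 2
        for k in range(n // 2)
    ]

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def detect_target(pupil, stimuli):
    """Pick the stimulus whose power spectrum best correlates with the pupil's."""
    ps = power_spectrum(pupil)
    scores = [pearson(ps, power_spectrum(s)) for s in stimuli]
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy example: the pupil oscillates at the frequency of stimulus 1 (bin 8).
n = 64
stimuli = [[math.sin(2 * math.pi * f * t / n) for t in range(n)] for f in (4, 8)]
pupil = [0.5 * math.sin(2 * math.pi * 8 * t / n) + 0.05 for t in range(n)]
print(detect_target(pupil, stimuli))  # 1
```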

    Classification of EEG signals for facial expression and motor execution with deep learning

    Machine learning algorithms are now widely used in the field of electroencephalography (EEG) brain-computer interfaces (BCI). In the preprocessing stage, the principal component analysis (PCA) algorithm is applied to the EEG signals to extract important features and reduce data redundancy. A model was designed to classify EEG time-series signals for facial expression and several motor execution processes. A neural network with three hidden layers was used as the deep learning classifier. Data from four subjects were collected using a 14-channel Emotiv EPOC+ device, and EEG dataset samples covering ten action classes of facial expressions and motor execution movements were recorded. Classification accuracies in the range 91.25-95.75% were obtained, depending on the number of samples per class, the total number of EEG dataset samples, and the activation function used in the hidden and output layer neurons. The EEG signal was analysed and classified with deep learning directly as time-series values, not as an image or histogram, and achieved satisfactory accuracy
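The PCA preprocessing step mentioned above can be illustrated with a minimal power-iteration sketch that extracts the first principal component of a feature matrix. This is an illustrative simplification with invented toy data, not the paper's pipeline; the three-hidden-layer classifier that consumes the reduced features is omitted.

```python
# Minimal PCA sketch: first principal component via power iteration
# on the sample covariance matrix. Toy data are invented for illustration.

def first_pc(data, iters=200):
    """Return the first principal component (unit vector) of `data`,
    a list of samples, each a list of features."""
    dim = len(data[0])
    means = [sum(row[j] for row in data) / len(data) for j in range(dim)]
    centred = [[row[j] - means[j] for j in range(dim)] for row in data]
    # sample covariance matrix
    cov = [[sum(r[i] * r[j] for r in centred) / len(centred)
            for j in range(dim)] for i in range(dim)]
    v = [1.0] * dim
    for _ in range(iters):                      # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data lying along the direction (1, 1): the PC is (1/sqrt(2), 1/sqrt(2)).
data = [[1, 1], [2, 2], [3, 3], [-1, -1], [-2, -2]]
print([round(x, 4) for x in first_pc(data)])  # [0.7071, 0.7071]
```

Projecting each sample onto the leading components gives the reduced feature vector that a classifier would then consume.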

    Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks

    Using biomedical signals to estimate human affective states is an essential problem in affective computing (AC). With deepening research on affective signals, combining multi-modal cognition with physiological indicators, building dynamic and complete databases, and adding high-tech innovative products have become recent trends in AC. This research develops a deep gradient convolutional neural network (DGCNN) for classifying affect from eye-tracking signals. General signal processing tools and pre-processing methods were applied first, such as the Kalman filter, Hamming windowing, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Secondly, the eye-movement and tracking signals were converted into images. A convolutional neural network training structure was then applied; the experimental dataset was acquired with an eye-tracking device by presenting four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbours (KNN) using the true positive rate (TPR) and false negative rate (FNR). Customized mini-batch, loss, learning-rate, and gradient definitions were also deployed for the training structure of the deep neural network. The predictive classification matrix shows the effectiveness of the proposed method for eye-movement and tracking signals, achieving an accuracy above 87.2%. This research provides a feasible route to more natural human-computer interaction through eye-movement and tracking signals and has potential application in the affective product design process
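The signal-to-image conversion described above (Hamming windowing followed by an STFT) can be sketched as a plain spectrogram computation. Window length, hop size, and the test signal are illustrative assumptions; the resulting time-by-frequency magnitude array is the kind of "image" a CNN such as the DGCNN would consume.

```python
import math

def hamming(n):
    """Hamming window coefficients of length n."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def stft_magnitude(x, win=16, hop=8):
    """Magnitude spectrogram via windowed naive DFTs: the 'signal to image' step."""
    w = hamming(win)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = [x[start + i] * w[i] for i in range(win)]   # apply Hamming window
        mags = []
        for k in range(win // 2):                          # one-sided spectrum
            re = sum(seg[t] * math.cos(2 * math.pi * k * t / win) for t in range(win))
            im = sum(seg[t] * math.sin(2 * math.pi * k * t / win) for t in range(win))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # time x frequency "image"

# Toy eye-tracking trace oscillating at DFT bin 4 of a 16-sample window.
sig = [math.sin(2 * math.pi * 4 * t / 16) for t in range(64)]
img = stft_magnitude(sig)
print(len(img), len(img[0]))  # 7 8
```

Each row of `img` is one analysis frame; stacking the rows yields the 2-D input that the convolutional layers would process.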