Imagined Speech Classification Using EEG and Deep Learning
In this paper, we propose imagined speech-based brain wave pattern recognition using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting while training the deep learning model, we utilized the wavelet scattering transform. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transform was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes, carried out for each individual command in the EEG datasets. The proposed imagined speech-based brain wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy real-time imagined speech-based brain–computer interface (BCI) systems. For a fuller evaluation of classification performance, additional metrics were considered: we obtained 92.74% precision, 92.50% recall, and a 92.62% F1-score.
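The feature-extraction step described above can be sketched in miniature. The following is a hedged, first-order illustration of a wavelet scattering transform in plain numpy, not the authors' MATLAB implementation: each band-pass (Morlet-like) filter output is passed through a modulus nonlinearity and then locally averaged, which is what yields features stable to small time shifts. The filter centre frequencies, bandwidth, and pooling window are invented for illustration; the toy signal stands in for a single EEG channel.

```python
import numpy as np

def morlet_filter(n, center_freq, bandwidth):
    """Gaussian band-pass filter (Morlet-like) defined in the frequency domain."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-((freqs - center_freq) ** 2) / (2 * bandwidth ** 2))

def first_order_scattering(signal, center_freqs, bandwidth=0.02, pool=64):
    """First-order scattering coefficients: |x * psi| followed by local averaging.

    Filtering is done by pointwise multiplication in the frequency domain;
    the modulus of each band-pass output is averaged over non-overlapping
    windows of `pool` samples, producing shift-stable features.
    """
    n = len(signal)
    spectrum = np.fft.fft(signal)
    features = []
    for cf in center_freqs:
        band = np.fft.ifft(spectrum * morlet_filter(n, cf, bandwidth))
        envelope = np.abs(band)                                  # modulus nonlinearity
        trimmed = envelope[: (n // pool) * pool]
        features.append(trimmed.reshape(-1, pool).mean(axis=1))  # local averaging
    return np.stack(features, axis=1)                            # (time_frames, n_filters)

# Toy stand-in for one EEG channel: 2 s at 256 Hz, a 10 Hz rhythm plus noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * np.arange(512) / 256) + 0.5 * rng.standard_normal(512)
S = first_order_scattering(x, center_freqs=[0.02, 0.05, 0.1, 0.2])
print(S.shape)  # (8, 4): 8 time frames of 4 scattering coefficients
```

In a full pipeline along the lines of the abstract, the resulting low-dimensional feature sequence (rather than the raw EEG) would be fed to the LSTM-RNN classifier, one such feature matrix per channel and trial.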
Wheelchair Neuro Fuzzy Control and Tracking System Based on Voice Recognition
Autonomous wheelchairs are important tools for enhancing the mobility of people with disabilities. Advances in computer and wireless communication technologies have contributed to the provision of smart wheelchairs suited to the needs of the disabled person. This research paper presents the design and implementation of a voice-controlled electric wheelchair. The design is based on voice recognition algorithms that classify the commands required to drive the wheelchair. An adaptive neuro-fuzzy controller generates the real-time control signals for the wheelchair's actuating motors, drawing on real data received from obstacle avoidance sensors and the voice recognition classifier. The wheelchair is treated as a node in a wireless sensor network so that its position can be tracked and supervisory control applied. Simulated and live experiments demonstrate that, by combining the concepts of soft computing and mechatronics, the implemented wheelchair has become more sophisticated and gives users greater mobility.
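The fuzzy side of such a controller can be illustrated with a toy example. The sketch below is not the paper's adaptive neuro-fuzzy (ANFIS) controller or its rule base; it is a minimal Sugeno-style inference step with three invented rules mapping one obstacle-distance input to a motor speed fraction, defuzzified by a weighted average of the rule outputs.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_cm):
    """Map obstacle distance to a speed fraction via three illustrative rules:
    near obstacle -> stop, medium distance -> slow, clear path -> full speed."""
    near   = tri(distance_cm, -1, 0, 60)      # fires below ~60 cm
    medium = tri(distance_cm, 30, 90, 150)
    far    = tri(distance_cm, 120, 200, 1e9)  # saturates for large distances
    weights = [near, medium, far]
    speeds  = [0.0, 0.4, 1.0]                 # crisp output per rule
    total = sum(weights)
    if total == 0:
        return 0.0                            # no rule fires: stop for safety
    # Weighted-average defuzzification
    return sum(w * s for w, s in zip(weights, speeds)) / total

print(fuzzy_speed(20))   # obstacle close -> 0.0 (stop)
print(fuzzy_speed(200))  # path clear    -> 1.0 (full speed)
```

In the adaptive neuro-fuzzy setting the paper describes, the membership function parameters and rule outputs would be tuned from data rather than fixed by hand, and the voice-recognition class would select among several such controllers (forward, turn left, etc.).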