    South African sign language dataset development and translation: a glove-based approach

    There has been a definite breakdown of communication between the hearing and the Deaf communities. This communication gap drastically affects many facets of a Deaf person’s life, including education, job opportunities and quality of life. Researchers have turned to technology to remedy this issue through Automatic Sign Language Translation. While there has been successful research around the world, this is not possible in South Africa, as no South African Sign Language (SASL) database is available. This research aims to develop a SASL static-gesture database using a data glove, as the first step towards a comprehensive database that encapsulates the entire language. Commercial data gloves are expensive, so as part of this research a low-cost data glove was developed for the application of Automatic Sign Language Translation. The database and data glove were used together with neural networks to perform gesture classification, in order to evaluate the gesture data collected for the database. The project is broken down into three main sections: data glove development, database creation and gesture classification. The data glove was developed by critically reviewing the relevant literature, testing the sensors, and evaluating the overall glove for repeatability and reliability. With the final prototype, five participants were used to collect 31 different static gestures in three scenarios, ranging from isolated gesture collection to continuous data collection. This data was cleaned and used to train a neural network for classification, and several training algorithms were compared to see which attained the highest classification accuracy. The data glove performed well, achieving results superior to some published work and on par with others: a repeatable angle resolution of 3.27 degrees with a standard deviation of 1.418 degrees, comfortably finer than the 15-degree resolution the research specified. The device remained low-cost, more than $100 cheaper than other custom research data gloves and hundreds of dollars cheaper than commercial data gloves. A database was created from the five participants, comprising 1550 type 1 gestures, 465 type 2 gestures and 93 type 3 gestures. The Resilient Back-Propagation and Levenberg-Marquardt training algorithms were considered for the neural network; the Levenberg-Marquardt algorithm had the superior classification accuracy, achieving 99.61%, 77.42% and 81.72% on the type 1, type 2 and type 3 data respectively.
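
    The classification stage described above can be sketched in code. The following is a minimal illustration, not the thesis's actual pipeline: the 10-sensor layout, the random placeholder angle data, and scikit-learn's MLP (standing in for the Resilient Back-Propagation and Levenberg-Marquardt trainers, which are typical of MATLAB toolchains) are all assumptions.

        # Minimal sketch: classify static glove gestures from joint-angle vectors.
        # The 10-sensor layout and the random placeholder data are assumptions;
        # scikit-learn's MLP stands in for the RProp/Levenberg-Marquardt trainers.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n_gestures, reps, n_sensors = 31, 50, 10
        X = rng.normal(size=(n_gestures * reps, n_sensors))   # placeholder angles
        y = np.repeat(np.arange(n_gestures), reps)            # gesture labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))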

    Pattern recognition methods for EMG prosthetic control

    In this work we focus on pattern recognition methods for EMG upper-limb prosthetic control. After giving a detailed review of the most widely used classification methods, we propose a new classification approach, motivated by a comparison of the Fourier analyses of able-bodied and trans-radial amputee subjects. The method considers each surface electrode's contribution separately, together with five time-domain features, obtaining an average classification accuracy of 75% on a sample of trans-radial amputees. To improve the method and its robustness, we propose an automatic feature selection procedure formulated as a minimization problem.
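
    The abstract does not list its five time-domain features; the Hudgins-style set below (mean absolute value, waveform length, zero crossings, slope-sign changes, RMS) is a common choice in the sEMG literature and is used here purely as an assumed illustration, computed separately for each electrode as the abstract suggests.

        # Hedged sketch: five classic time-domain features per electrode window.
        # The feature set and the 0.01 threshold are assumptions, not the paper's.
        import numpy as np

        def td_features(x, thresh=0.01):
            """x: 1-D sEMG window from a single surface electrode."""
            dx = np.diff(x)
            mav = np.mean(np.abs(x))                 # mean absolute value
            wl = np.sum(np.abs(dx))                  # waveform length
            zc = np.sum((x[:-1] * x[1:] < 0) &       # zero crossings above noise
                        (np.abs(dx) > thresh))
            ssc = np.sum((dx[:-1] * dx[1:] < 0) &    # slope-sign changes
                         (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > thresh))
            rms = np.sqrt(np.mean(x ** 2))           # root mean square
            return np.array([mav, wl, zc, ssc, rms])

        window = np.random.randn(8, 200)             # 8 electrodes x 200 samples (fake)
        feats = np.stack([td_features(ch) for ch in window])  # one row per electrode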

    EEG-EMG Analysis Method in Hybrid Brain Computer Interface for Hand Rehabilitation Training

    Brain-computer interfaces (BCIs) have demonstrated immense potential in aiding stroke patients during physical rehabilitation. By reshaping the neural circuits connecting the patient’s brain and limbs, these interfaces contribute to the restoration of motor function, ultimately leading to a significant improvement in the patient’s quality of life. However, current BCIs rely primarily on Electroencephalogram (EEG) motor imagery (MI), which has relatively coarse recognition granularity and struggles to accurately recognize specific hand movements. To address this limitation, this paper proposes a hybrid BCI framework based on Electroencephalogram and Electromyography (EEG-EMG). The framework combines two techniques: decoding EEG with a Graph Convolutional LSTM Network (GCN-LSTM) to recognize the subject’s motion intention, and decoding EMG with a convolutional neural network (CNN) to accurately identify hand movements. In EEG decoding, the correlation between channels is calculated using Standardized Permutation Mutual Information (SPMI), and the decoding process is further explained by analyzing the correlation matrix. In EMG decoding, experiments are conducted on two task paradigms, both achieving promising results. The proposed framework is validated on the publicly available WAY-EEG-GAL (Wearable interfaces for hAnd function recoverY EEG Grasp-And-Lift) dataset, where the average classification accuracies of EEG and EMG decoding are 0.892 and 0.954, respectively. This research aims to establish an efficient and user-friendly EEG-EMG hybrid BCI, thereby facilitating hand rehabilitation training for stroke patients.
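
    As a rough illustration of the channel-correlation step, the sketch below computes a permutation-based mutual information between pairs of EEG channels: each channel is mapped to a stream of ordinal patterns, and the mutual information of two streams is normalized by their joint entropy. The exact standardization the paper uses for SPMI may differ; the pattern order, the normalization and the random placeholder data are all assumptions.

        # Rough sketch of a permutation-based coupling measure in the spirit of
        # SPMI. Pattern order, normalization and the random data are assumptions.
        import numpy as np
        from collections import Counter

        def ordinal_symbols(x, order=3):
            # Rank pattern of every length-`order` sliding window.
            windows = np.lib.stride_tricks.sliding_window_view(x, order)
            return [tuple(np.argsort(w)) for w in windows]

        def entropy(counts, n):
            return -sum(c / n * np.log2(c / n) for c in counts)

        def spmi(sx, sy):
            n = min(len(sx), len(sy))
            hx = entropy(Counter(sx[:n]).values(), n)
            hy = entropy(Counter(sy[:n]).values(), n)
            hxy = entropy(Counter(zip(sx[:n], sy[:n])).values(), n)
            return (hx + hy - hxy) / hxy if hxy > 0 else 0.0   # in [0, 1]

        eeg = np.random.randn(8, 1000)                  # 8 channels, placeholder
        symbols = [ordinal_symbols(ch) for ch in eeg]
        corr = np.array([[spmi(a, b) for b in symbols] for a in symbols])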

    Development and comparison of dataglove and sEMG signal-based algorithms for the improvement of a hand gestures recognition system.

    Hand gesture recognition is a topic widely discussed in the literature, where several techniques are analyzed in terms of both input signal types and algorithms. The main bottleneck of the field is the generalization ability of the classifier, which becomes harder to achieve as the number of gestures to classify increases. This project has two purposes: first, to develop a reliable, high-generalizability classifier, evaluating the difference in performance between Dataglove and sEMG signals; second, to discuss the difficulties and advantages of developing a sEMG signal-based hand gesture recognition system, with the objective of providing indications for its improvement. To design the algorithms, data from a publicly available dataset were considered; the data refer to 40 healthy (non-amputee) subjects, with 6 repetitions performed for each of the 17 gestures considered. Both conventional machine learning and deep learning approaches were used and their efficiency compared. The results showed better performance for the dataglove-based classifier, highlighting the informative power of that signal, while the sEMG could not provide high generalization. Interestingly, the latter signal performs better when analyzed with classical machine learning approaches, which, through feature selection, made it possible to identify both the most significant features and the most informative channels. This study confirmed the intrinsic difficulties in using the sEMG signal, but it provides hints for improving sEMG signal-based hand gesture recognition systems through reduction of computational cost and optimization of electrode positions.
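
    The feature-selection route that proved informative for sEMG can be sketched as follows. The feature layout (five features per channel), the classifier and the random placeholder data are assumptions standing in for the thesis's dataset and models; only the mechanics of ranking features and mapping them back to channels are illustrated.

        # Sketch: rank sEMG features, keep the best, and inspect which channels
        # they come from. Layout (5 features x 12 channels) is an assumption.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_channels, n_feats = 12, 5
        X = rng.normal(size=(40 * 17 * 6, n_channels * n_feats))  # 40 subjects x 17 gestures x 6 reps
        y = np.tile(np.repeat(np.arange(17), 6), 40)              # gesture labels

        clf = make_pipeline(SelectKBest(f_classif, k=20),
                            RandomForestClassifier(n_estimators=200, random_state=1))
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

        # Selected feature indices map back to (channel, feature) pairs:
        selector = SelectKBest(f_classif, k=20).fit(X, y)
        chans = np.unique(selector.get_support(indices=True) // n_feats)
        print("most informative channels:", chans)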

    Automatic segmentation of grammatical facial expressions in sign language: towards an inclusive communication experience

    Nowadays, natural language processing techniques enable the development of applications that promote communication between humans, and between humans and machines. Although the technology for automated oral communication is mature and affordable, there are currently no appropriate solutions for visual-spatial languages. In the scarce efforts to automatically process sign languages, studies on non-manual gestures are rare, making it difficult to properly interpret utterances in those languages. In this paper, we present a solution for the automatic segmentation of grammatical facial expressions in sign language. It is a low-cost computational solution designed to integrate into a sign language processing framework that supports the development of simple but high value-added applications for the context of universal communication. Moreover, we discuss the difficulties faced by this solution, to guide future research in this area.

    Machine Learning for Hand Gesture Classification from Surface Electromyography Signals

    Classifying hand gestures from Surface Electromyography (sEMG) has applications in human-machine interaction, rehabilitation and prosthetic control. Reduction in the cost, and increase in the availability, of the necessary hardware over recent years has made sEMG a more viable solution for hand gesture classification. The research challenge is to develop processes that robustly and accurately predict the current gesture from incoming sEMG data. This thesis presents a set of methods, techniques and designs that improve both the evaluation of, and performance on, the classification problem as a whole, brought together to set a new baseline for classification performance. Evaluation is improved by careful choice of metrics and by cross-validation designs that account for the data bias caused by common experimental techniques. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can significantly improve performance with conventional classification methods. A novel neural network architecture with supporting improvements is presented that further improves performance, and is refined so that the network achieves similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and to provide more nuanced trade-offs between aspects of performance such as incurred latency and prediction smoothness. A new study compares the performance potential of medical-grade electrodes with a low-cost commercial alternative, showing that for a modest-sized gesture set they can compete. The data is also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
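
    One of the evaluation fixes the thesis argues for, group-aware cross-validation, can be illustrated briefly: keeping all windows from one subject (or recording session) on the same side of the split prevents overlapping sEMG windows from leaking between train and test folds. The group layout, classifier and random data below are illustrative assumptions, not the thesis's setup.

        # Sketch: subject-wise (grouped) cross-validation to avoid the data bias
        # caused by overlapping windows. Group layout and data are assumptions.
        import numpy as np
        from sklearn.model_selection import GroupKFold, cross_val_score
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1200, 32))          # placeholder sEMG feature windows
        y = rng.integers(0, 6, size=1200)        # 6 gestures (assumed)
        groups = np.repeat(np.arange(20), 60)    # 20 subjects, 60 windows each

        cv = GroupKFold(n_splits=5)              # no subject spans train and test
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 cv=cv, groups=groups)
        print("subject-wise CV accuracy: %.3f ± %.3f" % (scores.mean(), scores.std()))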

    The composer as technologist: an investigation into compositional process

    This work presents an investigation into compositional process, undertaken through a study of musical gesture, certain areas of cognitive musicology, computer vision technologies and object-oriented programming, which provides the basis for the composer (author) to assume the role of a technologist and to acquire knowledge and skills to that end. In particular, it focuses on the development and application of a video gesture recognition heuristic to the compositional problems posed. The result is an interactive musical work with score, for violin and electronics, that supports the research findings. In addition, the investigative approach to developing technology that solves musical problems, exploring practical composition and aesthetic challenges, is detailed.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters organized into four sections. Since one of the most important means of human communication is facial expression, the first section (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section (Chapters 17 to 22) presents applications related to affective computing.