624 research outputs found

    MirrorGen Wearable Gesture Recognition using Synthetic Videos

    In recent years, deep learning systems have outperformed traditional machine learning systems in most domains. There has recently been considerable research on hand gesture recognition with wearable sensors, owing to the numerous advantages these systems have over vision-based ones. However, the lack of extensive datasets and the nature of Inertial Measurement Unit (IMU) data make it difficult to apply deep learning techniques to them. Although many machine learning models achieve good accuracy, most assume that training data is available for every user, while approaches that do not require user-specific data have lower accuracy. MirrorGen is a technique that uses wearable sensor data to generate synthetic videos of hand movements, mitigating the traditional challenges of vision-based recognition such as occlusion, lighting restrictions, lack of viewpoint variation, and environmental noise. In addition, MirrorGen allows for user-independent recognition with minimal human effort during data collection. It also leverages advances in vision-based recognition through techniques such as optical flow extraction and 3D convolution. Projecting the orientation (IMU) information onto a video helps recover position information for the hands. To validate these claims, we perform entropy analysis on several configurations: raw data, a stick model, a hand model, and real video. The human hand model is found to have an optimal entropy that enables user-independent recognition, and it serves as a more pervasive option than video-based recognition. An average user-independent recognition accuracy of 99.03% was achieved on a sign language dataset with 59 different users and 20 different signs, each repeated 20 times, for a total of roughly 23k training instances. Moreover, synthetic videos can be used to augment real videos to improve recognition accuracy.
    Dissertation/Thesis. Masters Thesis, Computer Science, 201
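    The core step the abstract describes is projecting IMU orientation data into a synthetic video so that standard video models (optical flow, 3D convolution) can be applied. The following is a minimal sketch of that idea, assuming per-sample wrist quaternions, a fixed "shoulder" anchor, and OpenCV for rendering; the function names, frame size, and arm geometry are illustrative assumptions, not the thesis implementation.

import numpy as np
import cv2

def quaternion_to_direction(q):
    """Rotate a unit forearm reference vector (0, 0, 1) by the IMU quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        2 * (x * z + w * y),
        2 * (y * z - w * x),
        1 - 2 * (x * x + y * y),
    ])

def render_stick_frame(q, size=224, arm_len=80):
    """Draw one synthetic 'stick model' frame: a segment from a fixed shoulder along the forearm."""
    frame = np.zeros((size, size, 3), dtype=np.uint8)
    shoulder = (size // 2, size // 2)
    d = quaternion_to_direction(q)
    wrist = (int(shoulder[0] + arm_len * d[0]), int(shoulder[1] - arm_len * d[1]))
    cv2.line(frame, shoulder, wrist, (255, 255, 255), thickness=4)
    cv2.circle(frame, wrist, 6, (0, 255, 0), -1)
    return frame

def imu_sequence_to_video(quaternions, path="synthetic.avi", fps=30):
    """Write one synthetic video per IMU gesture recording (frame size must match render size)."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"MJPG"), fps, (224, 224))
    for q in quaternions:
        writer.write(render_stick_frame(q))
    writer.release()

    The generated clips can then be fed to an off-the-shelf video recognition pipeline; the abstract's hand-model variant would replace the single line segment with a rendered hand mesh.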

    Recognizing specific errors in human physical exercise performance with Microsoft Kinect

    The automatic assessment of human physical activity performance is useful for a number of beneficial systems, including in-home rehabilitation monitoring systems and Reactive Virtual Trainers (RVTs). RVTs have the potential to replace expensive personal trainers in promoting healthy activity and teaching correct form to prevent injury. Additionally, unobtrusive sensor technologies for human tracking, especially those that incorporate depth sensing such as Microsoft Kinect, have become effective, affordable, and commonplace. This thesis contributes towards the development of RVT systems by using RGB-D and tracked skeletal data collected with Microsoft Kinect to assess human performance of physical exercises. I collected data from eight volunteers performing three exercises: jumping jacks, arm circles, and arm curls. I labeled each exercise repetition as either correct or as exhibiting one or more of a set of predefined erroneous forms. I trained a statistical model using the labeled samples and developed a system that recognizes specific structural and temporal errors in a test set of unlabeled samples. I obtained classification accuracies for multiple implementations and assessed the effectiveness of various features of the skeletal data as well as various prediction models.
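    As a rough illustration of the classification step described above, the sketch below turns a tracked skeleton sequence into a fixed-length feature vector (simple joint-angle statistics) and trains a multi-label classifier over predefined error forms. The feature set, the joint indices, and the random-forest model are assumptions for illustration, not the thesis's exact pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c (3D points)."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def repetition_features(skeleton_seq):
    """skeleton_seq: (frames, joints, 3) array; summarize one arm's elbow angle over time.

    Joint indices (shoulder=4, elbow=5, wrist=6) are illustrative placeholders.
    """
    angles = np.array([joint_angle(f[4], f[5], f[6]) for f in skeleton_seq])
    return np.array([angles.mean(), angles.std(), angles.min(), angles.max(), len(angles)])

def train_error_recognizer(X, Y):
    """X: one feature vector per labeled repetition; Y: binary indicator per error form
    (an all-zero row meaning a correct repetition)."""
    model = MultiOutputClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
    return model.fit(X, Y)

    Temporal errors would need sequence-aware features or models (e.g. per-phase statistics or a sequence classifier) rather than the whole-repetition summary shown here.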

    Gesture based persuasive interfaces for public ambient displays

    Master's Dissertation in Computer Engineering (Engenharia Informática), 2nd Semester, 2011/2012.
    This Master's thesis studies how Public Ambient Displays (PAD) can be used as a tool to achieve behaviour change through persuasive technology. To reach the goals of the thesis, an interactive public ambient display system called Motion-based Ambient Interactive Display (MAID) was developed. MAID aims to motivate behaviour change regarding domestic energy consumption through a persuasive game interface based on gesture recognition technology. The developed prototype guides players through the different rooms of a house, where they have to find out what is wrong and practice the correct actions to save energy, using gestures similar to the ones they would use in real life to achieve the same goals. The system provides feedback on the consequences of each action, in order to make users aware of them. The implementation of MAID is based on a purpose-built, highly configurable, and modular framework. It allows the administrator to fine-tune the application to the constraints of the setup location by adjusting basic display properties, changing image content, or even modifying the scripted gameplay itself. The scripted game system is flexible enough to allow the framework to be repurposed, beyond the previously defined theme, for future studies. MAID was subjected to user testing in order to show that it is possible to create a persuasive PAD interface, using seamless interaction methods with currently available technology, and to use it to spread awareness of a cause, leading to behaviour change.
    Fundação para a Ciência e Tecnologia - project DEAP (PTDC/AAC-AMB/104834/2008); CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/201
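    A data-driven "scripted gameplay" layer of the kind the abstract describes could look roughly like the sketch below: each step names a room, the gesture expected from the player, and the feedback about energy consequences. The field names and the recognize_gesture() hook are assumptions for illustration, not the MAID framework's actual API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GameStep:
    room: str              # e.g. "kitchen"
    expected_gesture: str  # e.g. "close_fridge_door"
    prompt: str            # what is wrong in the scene
    feedback: str          # consequence shown after the correct action

def run_script(steps: List[GameStep], recognize_gesture: Callable[[], str]) -> None:
    """Drive the display through the scripted rooms, waiting for the expected gesture at each step."""
    for step in steps:
        print(f"[{step.room}] {step.prompt}")
        while recognize_gesture() != step.expected_gesture:
            pass  # keep polling the gesture recognizer until the correct action is performed
        print(step.feedback)

    Keeping the script as plain data is what would let an administrator retheme or reorder the game without touching the recognition or display code, which matches the configurability the abstract emphasizes.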