3 research outputs found

    Analyzing and Learning Movement Through Human-Computer Co-Creative Improvisation and Data Visualization

    Recent years have seen a remarkable rise in the availability of household motion and video capture technologies, ranging from the humble webcam to the relatively sophisticated Kinect sensor. Naturally, this has precipitated a rise in both the quantity and quality of motion capture data available on the internet. This wealth of data has spurred new interest in motion data classification, the task of having a model classify and sort clips of human motion. However, there is comparatively little work on motion data clustering, an unsupervised approach that may prove more useful in the future because it allows agents to recognize "categories" of motion without user input or labeled data. Systems that cluster motion data ask "what type of motion is this, and what is it similar to?" rather than "which motion is this?" The LuminAI project, as described in this paper, is a practical application of motion data clustering that allows the system to respond to a user's dance moves with a similar but different gesture. To analyze the efficacy and properties of this motion data clustering pipeline, we also propose a novel data visualization tool and discuss the design considerations involved in its development.
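    The abstract does not specify which clustering algorithm or features LuminAI uses, so the following is only a minimal sketch of unsupervised motion-clip clustering, assuming fixed-size pose sequences, hand-rolled summary features, and k-means as an illustrative choice:

    # Minimal sketch of unsupervised motion-clip clustering; the feature
    # extraction and k-means choice are assumptions, not the LuminAI pipeline.
    import numpy as np
    from sklearn.cluster import KMeans

    def clip_features(clip: np.ndarray) -> np.ndarray:
        """Summarize a (frames, joints, 3) pose clip as a fixed-length vector:
        per-joint mean position plus per-joint motion energy (variance)."""
        mean_pose = clip.mean(axis=0).ravel()
        motion_energy = clip.var(axis=0).ravel()
        return np.concatenate([mean_pose, motion_energy])

    def cluster_clips(clips: list, n_clusters: int = 8) -> np.ndarray:
        """Group motion clips into gesture 'categories' without any labels."""
        features = np.stack([clip_features(c) for c in clips])
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

    # A system could then respond to a new gesture by sampling a stored clip
    # from the same cluster, giving a "similar but different" response.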

    Feasibility of Principal Component Analysis in hand gesture recognition system

    Nowadays, actions are increasingly handled electronically rather than through physical interaction. Biometrics has long been used to authenticate a person: it recognizes an individual by a human trait such as the eyes (by calculating the distance between them), hand gestures, fingerprints, or the face. The advantage of using these traits for identification is that they uniquely identify a person and cannot be forgotten or lost. Hand gesture recognition is a powerful tool that supports efficient interaction between the user and the computer. The main motivation of hand gesture recognition research is to create a system that can recognize specific hand gestures and use them to convey useful information for device control. This paper presents an experimental study of the feasibility of principal component analysis (PCA) in a hand gesture recognition system. PCA is a powerful tool for analyzing data, and its primary goal is dimensionality reduction. Frames are extracted from the Sheffield Kinect Gesture (SKIG) dataset. The implementation creates a training set and then trains the recognizer, which builds an eigenspace from the eigenvalues and eigenvectors of the training images. Euclidean distance with a threshold value is used as the similarity metric to recognize gestures. The experimental results show that PCA is feasible for use in a hand gesture recognition system.
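    As a rough illustration of the eigenspace approach described above, a minimal recognizer might look like the sketch below. It assumes flattened grayscale frames; the component count, threshold value, and class name are illustrative, and SKIG loading and preprocessing are omitted.

    # Minimal eigenspace (PCA) gesture recognizer sketch; parameters are
    # placeholders, not values reported in the paper.
    import numpy as np

    class PCAGestureRecognizer:
        def __init__(self, n_components: int = 20, threshold: float = 5000.0):
            self.n_components = n_components
            self.threshold = threshold

        def train(self, images: np.ndarray, labels: list) -> None:
            """images: (n_samples, height*width) flattened training frames."""
            self.mean = images.mean(axis=0)
            centered = images - self.mean
            # Principal components of the training images via SVD (the eigenspace).
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            self.components = vt[: self.n_components]
            self.train_proj = centered @ self.components.T
            self.labels = labels

        def recognize(self, image: np.ndarray):
            """Project a new frame into the eigenspace and match by
            Euclidean distance, rejecting matches beyond the threshold."""
            proj = (image - self.mean) @ self.components.T
            dists = np.linalg.norm(self.train_proj - proj, axis=1)
            best = int(np.argmin(dists))
            return self.labels[best] if dists[best] < self.threshold else None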