16 research outputs found

    Exploring The Effect Of Visual And Verbal Feedback On Ballet Dance Performance In Mirrored And Non-Mirrored Environments

    Indiana University-Purdue University Indianapolis (IUPUI)
    Since the 1800s, the ballet studio has remained largely unchanged, and one of its core features is the mirror. The influence of mirrors on ballet education has been documented, and prior literature has shown negative effects on dancers’ body image, satisfaction, attention, and performance quality. While the mirror provides immediate, real-time feedback, it does not inform dancers of their errors. Tools have been developed to do so, but the design of such feedback from a bottom-up perspective has not been extensively studied. This study aimed to assess the value of different types of feedback to inform the design of tech-augmented mirrors. University students’ ballet technique was scored on eight ballet combinations (tendu, adagio, pirouette, petit allegro, plié, dégagé, frappé, and battement tendu), and feedback was provided to them. We assessed learning with a remote domain expert to determine whether the system had an impact on dancers. Results revealed that the feedback treatment yielded statistically significantly higher performance than the no-feedback condition. Mirror and non-mirror conditions showed no score disparity, indicating that dancers performed similarly in both. The best outcomes were observed when visual and verbal feedback were combined. We created MuscAt, a set of interconnected feedback design principles, and conclude that remote teaching in ballet is feasible.

    Spatiotemporal analysis of human actions using RGB-D cameras

    Markerless human motion analysis has strong potential to provide a cost-efficient solution for action recognition and body pose estimation. Many applications, including human-computer interaction, video surveillance, content-based video indexing, and automatic annotation, would benefit from a robust solution to these problems. Depth sensing technologies have in recent years positively changed the climate of automated vision-based human action recognition, a problem deemed very difficult due to the various ambiguities inherent in conventional video. In this work, a large set of invariant spatiotemporal features is first extracted from skeleton joints (retrieved from a depth sensor) in motion and evaluated as a baseline. Next, we introduce a discriminative Random Decision Forest-based feature selection framework capable of reaching impressive action recognition performance when combined with a linear SVM classifier. This approach improves upon the baseline obtained with the whole feature set while using a significantly smaller number of features (one tenth of the original). The approach can also provide insights into the spatiotemporal dynamics of human actions. A novel therapeutic action recognition dataset (WorkoutSU-10) is presented; we use it as a benchmark to evaluate the reliability of the proposed methods, and it has recently been released publicly as a contribution to the action recognition community. In addition, an interactive action evaluation application is developed using the proposed methods to address real-life problems such as fall detection in elderly people and automated therapy programs for patients with motor disabilities.
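
    A minimal sketch of the pipeline this abstract describes: rank spatiotemporal features by forest importance, keep roughly a tenth of them, and feed them to a linear SVM. The feature matrix, label vector, and all sizes below are illustrative placeholders, not the paper's data.

```python
# Hypothetical sketch: forest-based feature selection + linear SVM,
# assuming X is an (n_clips, n_features) matrix of skeleton-joint
# spatiotemporal features and y holds the action labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))        # placeholder feature matrix
y = rng.integers(0, 10, size=200)      # placeholder labels (10 actions)

# Rank features by impurity-based importance from the forest.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][: X.shape[1] // 10]

# Keep roughly one tenth of the features, as the abstract reports.
scores = cross_val_score(LinearSVC(), X[:, top], y, cv=5)
print(f"accuracy with {top.size} features: {scores.mean():.3f}")
```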

    Real-Time 3-D Motion Gesture Recognition using Kinect2 as Basis for Traditional Dance Scripting

    This preliminary study presents a system capable of recognizing human gestures in real time. Gestures are acquired from a Kinect2 sensor, which provides skeleton joints represented as three-dimensional coordinate points. A model set consisting of eight motion gestures serves as the basis for gesture recognition using the Dynamic Time Warping (DTW) algorithm, which identifies gestures in real time by measuring the shortest combined distance across the x, y, and z coordinates to determine the matched gesture. The system is shown to recognize these eight motions in real time, with some limitations. The findings of this study provide a solid foundation for further research, whose ultimate goal is a system that automatically recognizes sequences of motions in Indonesian traditional dances and converts them into standardized Resource Description Framework (RDF) scripts for the purpose of preserving these dances.
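
    To make the DTW matching concrete, here is a self-contained sketch that warps a live skeleton sequence against each stored template and picks the gesture with the smallest combined x, y, z distance. The (frames, joints, 3) layout and gesture names are assumptions for illustration, not details from the paper.

```python
# Minimal DTW sketch over 3-D joint trajectories; each gesture is an
# array of shape (frames, joints, 3). Names and sizes are illustrative.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW with per-frame Euclidean cost summed over all joints."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # combined distance across x, y, z for every joint in the frame
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def recognize(sample, templates):
    """Return the name of the template gesture closest under DTW."""
    return min(templates, key=lambda name: dtw_distance(sample, templates[name]))

rng = np.random.default_rng(1)
templates = {f"gesture_{k}": rng.normal(size=(40, 25, 3)) for k in range(8)}
live = rng.normal(size=(35, 25, 3))    # stand-in for a Kinect2 skeleton stream
print(recognize(live, templates))
```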

    Rehabilitation Exergames: use of motion sensing and machine learning to quantify exercise performance in healthy volunteers

    Background: Performing physiotherapy exercises in front of a physiotherapist yields qualitative assessment notes and immediate feedback; practicing the exercises at home, however, lacks feedback on how well patients are performing the prescribed tasks. The absence of proper feedback might result in patients doing the exercises incorrectly, which could worsen their condition. Objective: We propose the use of two machine learning algorithms, Dynamic Time Warping (DTW) and the Hidden Markov Model (HMM), to quantitatively assess a patient’s performance with respect to a reference. Methods: Movement data were recorded using a Kinect depth sensor, capable of detecting 25 joints in the human skeleton model, and compared to those of a reference. Sixteen participants were recruited to perform four exercises: shoulder abduction, hip abduction, lunge, and sit-to-stand. Their performance was compared to that of a physiotherapist serving as the reference. Results: Both algorithms show a similar trend in assessing participants’ performance, but their sensitivity differs: DTW was more sensitive to small changes, whereas HMM captured a general view of the performance and was less sensitive to details. Conclusions: The chosen algorithms demonstrated their capacity to objectively assess physical therapy performance. HMM may be more suitable in the early stages of a physiotherapy program to capture and report general performance, whilst DTW could be used later on to focus on the details.
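
    The HMM side of such a comparison could look like the sketch below, which uses the hmmlearn library (an assumption; the paper does not name its implementation): fit a Gaussian HMM on the physiotherapist's reference recording, then score a participant's attempt by log-likelihood under that model. Shapes and hyperparameters are illustrative.

```python
# Hypothetical HMM-based assessment using hmmlearn (library choice is an
# assumption, not taken from the paper).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
# (frames, features): e.g. 25 Kinect joints x 3 coordinates, flattened
reference = rng.normal(size=(120, 75))
attempt = reference + rng.normal(scale=0.1, size=reference.shape)

# Model of the reference execution of the exercise.
model = GaussianHMM(n_components=5, covariance_type="diag", random_state=0)
model.fit(reference)

# Higher (less negative) per-frame log-likelihood = closer to the reference.
score = model.score(attempt) / len(attempt)
print(f"per-frame log-likelihood: {score:.2f}")
```

    A DTW distance between the same two sequences (as in the previous sketch) would react more strongly to small local deviations, matching the sensitivity difference the abstract reports.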

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing: developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute the datasets, tools, and libraries developed during our research. We begin by reviewing work on building automatic movement generation systems using machine learning techniques and motion capture data; our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on this literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability; WalkNet therefore integrates control over the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent, allowing generation to be controlled in real time by the valence and arousal levels of affect, the walking direction, and the mover’s movement signature. Following WalkNet, we look at controlling movement generation with more complex stimuli, such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (the audio signal) and movements, which makes the movement modelling problem considerably more challenging. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
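
    A minimal sketch of a controllable, autoregressive pose generator in the spirit of WalkNet, written in PyTorch (an assumption; the thesis's actual architecture may differ): the network predicts the next pose from the current pose plus a control vector for valence, arousal, and heading. All names and sizes are illustrative.

```python
# Hypothetical controllable pose generator (PyTorch); not the thesis's model.
import torch
import torch.nn as nn

POSE_DIM, CTRL_DIM, HIDDEN = 63, 3, 256   # e.g. 21 joints x 3 coordinates

class PoseGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(POSE_DIM + CTRL_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, POSE_DIM)

    def forward(self, pose, ctrl, state=None):
        # pose: (batch, 1, POSE_DIM); ctrl: (batch, 1, CTRL_DIM)
        out, state = self.rnn(torch.cat([pose, ctrl], dim=-1), state)
        return pose + self.head(out), state   # residual next-pose prediction

# Roll out a short motion under a fixed affect/direction control signal.
gen = PoseGenerator()
pose = torch.zeros(1, 1, POSE_DIM)
ctrl = torch.tensor([[[0.8, 0.2, 0.0]]])      # valence, arousal, heading
state, frames = None, []
for _ in range(30):
    pose, state = gen(pose, ctrl, state)
    frames.append(pose)
motion = torch.cat(frames, dim=1)             # (1, 30, POSE_DIM)
print(motion.shape)
```

    Conditioning on a per-frame control vector is what makes the generation steerable: changing the affect or heading values mid-rollout would change the generated movement without retraining.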