Exploring The Effect Of Visual And Verbal Feedback On Ballet Dance Performance In Mirrored And Non-Mirrored Environments
Indiana University-Purdue University Indianapolis (IUPUI)

Since the 1800s, the ballet studio has been largely unchanged, and one of its core features is the mirror. The influence of mirrors on ballet education has been documented, and prior literature has shown negative effects on dancers’ body image, satisfaction, level of attention, and performance quality. While the mirror provides immediate, real-time feedback, it does not inform dancers of their errors. Tools have been developed to do so, but the design of such feedback from a bottom-up perspective has not been extensively studied. This study aimed to assess the value of different types of feedback to inform the design of tech-augmented mirrors. University students’ ballet technique was scored on eight ballet combinations (tendu, adagio, pirouette, petit allegro, plié, dégagé, frappé, and battement tendu), and feedback was provided to them. We assessed learning with a remote domain expert to determine whether the system had an impact on dancers. Results revealed that the treatment with feedback yielded statistically significantly higher performance than the condition without feedback. Mirror and non-mirror performance showed no score disparity, indicating that users performed similarly in both conditions. The best results were observed when visual and verbal feedback were combined. We created MuscAt, a set of interconnected feedback design principles, and conclude that remote teaching in ballet is feasible.
3D (embodied) projection mapping and sensing bodies: a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework for researching and designing interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality–virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments: the live dance performances. Each performance was created in three stages of conception, design, and production. The first stage was to “digitize” the performer’s movement and brain activity into the virtual environment and our system. This was accomplished through the use of depth sensor cameras, 3D motion capture, and brain–computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer’s behavior and motion and real-time generative 3D computer graphics.
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain together with 3D virtual environments, new layers of augmentation and interaction are established, ultimately generating mixed reality environments for embodied improvisational self-expression.
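The second-stage mapping described above, from tracked body data to the spatial dynamics of the generative visuals, can be sketched minimally as follows. The joint names and visual parameters here are illustrative assumptions, not those used in the thesis's performances.

```python
# Minimal sketch of a pose-to-visuals mapping: per frame, 3-D joint
# positions drive parameters of a generative scene. Joint names and
# output parameters ("particle_scale", "brightness") are hypothetical.
import math

def map_pose_to_visuals(joints):
    """Map a dict of 3-D joint positions to scene parameters."""
    left = joints["left_hand"]
    right = joints["right_hand"]
    spread = math.dist(left, right)            # hands apart -> larger visuals
    height = (left[1] + right[1]) / 2.0        # hands raised -> brighter scene
    return {"particle_scale": spread,
            "brightness": max(0.0, min(1.0, height))}

frame = {"left_hand": (-0.4, 1.2, 0.1), "right_hand": (0.5, 1.4, 0.0)}
print(map_pose_to_visuals(frame))
```

In a live system this function would run once per captured frame, with its outputs fed to the rendering engine.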
Spatiotemporal analysis of human actions using RGB-D cameras
Markerless human motion analysis has strong potential to provide a cost-efficient solution for action recognition and body pose estimation. Many applications, including human–computer interaction, video surveillance, content-based video indexing, and automatic annotation, among others, will benefit from a robust solution to these problems. Depth sensing technologies in recent years have positively changed the climate of the automated vision-based human action recognition problem, deemed to be very difficult due to the various ambiguities inherent to conventional video. In this work, first a large set of invariant spatiotemporal features is extracted from skeleton joints (retrieved from a depth sensor) in motion and evaluated as baseline performance. Next, we introduce a discriminative Random Decision Forest-based feature selection framework capable of reaching strong action recognition performance when combined with a linear SVM classifier. This approach improves upon the baseline performance obtained using the whole feature set with significantly fewer features (one-tenth of the original). The approach can also be used to provide insights into the spatiotemporal dynamics of human actions. A novel therapeutic action recognition dataset (WorkoutSU-10) is presented. We took advantage of this dataset as a benchmark in our tests to evaluate the reliability of our proposed methods. Recently the dataset has been published publicly as a contribution to the action recognition community. In addition, an interactive action evaluation application is developed by utilizing the proposed methods to help with real-life problems such as 'fall detection' for elderly people or automated therapy programs for patients with motor disabilities.
Real-Time 3-D Motion Gesture Recognition using Kinect2 as Basis for Traditional Dance Scripting
This preliminary study presents a system capable of recognizing human gestures in real time. Gestures are acquired from a Kinect2 sensor, which provides skeleton joints represented as three-dimensional coordinate points. A model set consisting of eight motion gestures serves as the basis for gesture recognition using the Dynamic Time Warping (DTW) algorithm. DTW identifies gestures in real time by measuring the shortest combined distances in the x, y, and z coordinates to determine the matched gesture. The system is able to recognize these eight motions in real time, with some limitations. The findings of this study provide a solid foundation for further research, whose ultimate goal is a system that automatically recognizes sequences of motions in Indonesian traditional dances and converts them into standardized Resource Description Framework (RDF) scripts for the purpose of preserving these dances.
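The template-matching step described above can be sketched with a classic DTW over sequences of 3-D joint coordinates: an input gesture is labelled with whichever template yields the smallest warped distance. The two templates below are toy trajectories, not the paper's eight-gesture model set.

```python
# Minimal DTW gesture matcher over 3-D coordinate sequences.
# Template gestures and the query are synthetic illustrations.
import numpy as np

def dtw_distance(a, b):
    """Classic DTW with Euclidean local cost over 3-D points."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # combined x, y, z distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(seq, templates):
    """Label a sequence with the template of smallest DTW distance."""
    return min(templates, key=lambda name: dtw_distance(seq, templates[name]))

t = np.linspace(0, 1, 30)
templates = {
    "raise_arm": np.stack([t, t, np.zeros_like(t)], axis=1),
    "wave":      np.stack([np.sin(6 * t), t, np.zeros_like(t)], axis=1),
}
query = np.stack([t * 1.1, t, np.zeros_like(t)], axis=1)  # perturbed "raise_arm"
print(recognize(query, templates))
```

A real-time system would run this matcher over a sliding window of incoming Kinect2 skeleton frames, typically per tracked joint rather than a single point.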
Rehabilitation Exergames: use of motion sensing and machine learning to quantify exercise performance in healthy volunteers
Background: Performing physiotherapy exercises in front of a physiotherapist yields qualitative assessment notes and immediate feedback. However, practicing the exercises at home lacks feedback on how well or poorly patients are performing the prescribed tasks. The absence of proper feedback might result in patients doing the exercises incorrectly, which could worsen their condition. Objective: We propose the use of two machine learning algorithms, namely Dynamic Time Warping (DTW) and the Hidden Markov Model (HMM), to quantitatively assess the patient’s performance with respect to a reference. Methods: Movement data were recorded using a Kinect depth sensor, which detects 25 joints in the human skeleton model. Sixteen participants were recruited to perform four different exercises: shoulder abduction, hip abduction, lunge, and sit-to-stand. Their performance was compared to that of a physiotherapist, which served as the reference. Results: Both algorithms show a similar trend in assessing participants' performance. However, their sensitivity levels differed. While DTW was more sensitive to small changes, HMM captured a general view of the performance, being less sensitive to the details. Conclusions: The chosen algorithms demonstrated their capacity to objectively assess physical therapy performances. HMM may be more suitable in the early stages of a physiotherapy program to capture and report general performance, whilst DTW could be used later on to focus on the details.
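The HMM side of the study might be illustrated with a toy left-to-right model standing in for one trained on the physiotherapist's reference recordings (in practice it would be fit with Baum-Welch, e.g. via hmmlearn, on continuous joint data). Here the hidden states model exercise phases (lift / hold / lower), observations are joint angles quantized into five bins, and a repetition is scored by its log-likelihood under the reference model. All probabilities and sequences below are illustrative assumptions, not values from the paper.

```python
# Toy discrete left-to-right HMM scoring of exercise repetitions.
# Transition/emission tables are hand-built illustrations of a model
# that would normally be learned from the reference performance.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    ll = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        ll += np.log(c)
        alpha = alpha / c
    return ll

pi = np.array([1.0, 0.0, 0.0])                  # every rep starts in "lift"
A = np.array([[0.8, 0.2, 0.0],                  # lift -> hold
              [0.0, 0.8, 0.2],                  # hold -> lower
              [0.0, 0.0, 1.0]])                 # lower is absorbing
B = np.array([[0.35, 0.35, 0.20, 0.05, 0.05],   # lift:  mostly low angles
              [0.05, 0.05, 0.20, 0.35, 0.35],   # hold:  mostly high angles
              [0.35, 0.35, 0.20, 0.05, 0.05]])  # lower: back to low angles

good = [0, 1, 2, 3, 4, 4, 3, 2, 1, 0]           # smooth, full-range repetition
poor = [4, 3, 0, 1, 4, 4, 0, 0, 1, 0]           # erratic, out-of-order motion
print(forward_loglik(good, pi, A, B), forward_loglik(poor, pi, A, B))
```

The well-formed repetition receives a higher log-likelihood, matching the abstract's point that the HMM captures a general view of performance rather than frame-level detail.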
Expressive movement generation with machine learning
Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing – developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field by creating datasets, tools, and libraries that we have developed during our research. We start our research by reviewing the works on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. Therefore, WalkNet integrates controlling the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation based on the valence and arousal levels of affect, the movement’s walking direction, and the mover’s movement signature in real time.
Following WalkNet, we look at controlling movement generation using more complex stimuli, such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which renders a more challenging movement modelling problem. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
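The controllable-generation interface that WalkNet-style models expose can be sketched as an autoregressive step conditioned on control signals. This is not the actual WalkNet or GrooveNet architecture: the network below is a tiny untrained two-layer model with random weights, and only the control names (valence, arousal, direction) follow the abstract; dimensions and the residual-delta formulation are assumptions.

```python
# Interface sketch of controllable autoregressive pose generation:
# current pose + control signals -> next pose. Untrained random weights;
# a real model would be learned from motion capture data.
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, CTRL_DIM, HIDDEN = 63, 3, 128   # 21 joints x 3; (valence, arousal, direction)

W1 = rng.normal(0.0, 0.1, (POSE_DIM + CTRL_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, POSE_DIM))

def next_pose(pose, valence, arousal, direction):
    """One generation step conditioned on affect and heading."""
    x = np.concatenate([pose, [valence, arousal, direction]])
    h = np.tanh(x @ W1)                   # hidden layer
    return pose + h @ W2                  # residual: predict a pose delta

pose = np.zeros(POSE_DIM)
trajectory = []
for _ in range(60):                       # one second of motion at 60 fps
    pose = next_pose(pose, valence=0.8, arousal=0.2, direction=0.0)
    trajectory.append(pose)
print(np.stack(trajectory).shape)         # (60, 63)
```

For music-driven generation in the GrooveNet setting, the control vector would instead carry per-frame audio features, making the conditioning signal temporally dense.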