
    ARTeFACT Movement Thesaurus

    The ARTeFACT Movement Thesaurus is a continuation of the ARTeFACT project, which was developed at the University of Virginia to enable research into movement-based arts, specifically dance. The Movement Thesaurus is a major step toward providing access to movement-derived data. Using motion capture technologies, we plan to provide a sophisticated, open-source tool that can make film searchable for single movements and movement phrases. The ARTeFACT Movement Thesaurus will contain over 100 codified dance movements derived from Western concert dance genres and styles, from which we can develop algorithms for automatic search capabilities in film. By bringing together engineers, movement specialists, and mathematicians, we will break new ground in movement research and move one step closer to an automated means of mining danced texts and filmed movement.

    Dance-the-Music: an educational platform for the modeling, recognition, and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform called “Dance-the-Music” is presented that can be used in a dance-educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
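The template idea described in this abstract can be sketched in a few lines: a teacher's repeated step is averaged into a template trajectory, and a student's attempt is scored by its frame-wise distance to that template. The function names and the toy data below are illustrative assumptions, not the Dance-the-Music implementation.

```python
import math

def build_template(repetitions):
    """Average several equal-length repetitions (lists of (x, y) foot
    positions) into one template trajectory."""
    n = len(repetitions)
    length = len(repetitions[0])
    return [
        (sum(r[t][0] for r in repetitions) / n,
         sum(r[t][1] for r in repetitions) / n)
        for t in range(length)
    ]

def match_score(template, attempt):
    """Mean frame-by-frame Euclidean distance between template and
    attempt; lower means a closer match."""
    return sum(math.dist(p, q) for p, q in zip(template, attempt)) / len(template)

# Toy data: three teacher repetitions of a three-frame side step.
teacher = [
    [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)],
    [(0.0, 0.1), (0.5, 0.1), (1.0, 0.1)],
    [(0.0, -0.1), (0.5, -0.1), (1.0, -0.1)],
]
template = build_template(teacher)
good = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]   # faithful attempt
poor = [(0.0, 0.5), (0.2, 0.5), (0.4, 0.5)]   # drifting attempt
assert match_score(template, good) < match_score(template, poor)
```

A real system would additionally time-align the attempt to the template (e.g. by dynamic time warping) before scoring, since students rarely match the teacher's tempo exactly.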

    The fourth dimension: A motoric perspective on the anxiety–performance relationship

    This article raises the concern that anxiety–performance relationship theory has insufficiently catered for motoric issues during, primarily, closed and self-paced skill execution (e.g., the long jump and javelin throw). Following a review of current theory, we address this under-consideration of motoric issues by extending the three-dimensional model put forward by Cheng, Hardy, and Markland (2009) (‘Toward a three-dimensional conceptualization of performance anxiety: Rationale and initial measurement development’, Psychology of Sport and Exercise, 10, 271–278). This fourth dimension, termed skill establishment, comprises the level and consistency of movement automaticity together with a performer's confidence in this specific process, which together provide a degree of robustness against negative anxiety effects. To exemplify this motoric influence, we then offer insight into current theories' misrepresentation that a self-focus of attention toward an already well-learned skill always leads to a negative performance effect. In doing so, we draw upon applied literature to distinguish between positive and negative self-foci and suggest that what a performer directs attention to, and how, is crucial to the interaction with skill establishment and, therefore, performance. Finally, implications for skill acquisition research are provided. Accordingly, we suggest a positive potential flow from applied/translational to fundamental/theory-generating research in sport, which can serve to freshen and usefully redirect investigation into this long-considered but still insufficiently understood concept.

    Similarity, Retrieval, and Classification of Motion Capture Data

    Three-dimensional motion capture data is a digital representation of the complex spatio-temporal structure of human motion. Mocap data is widely used for the synthesis of realistic computer-generated characters in data-driven computer animation and also plays an important role in motion analysis tasks such as activity recognition. Both for efficiency and cost reasons, methods for the reuse of large collections of motion clips are gaining in importance in the field of computer animation. Here, an active field of research is the application of morphing and blending techniques for the creation of new, realistic motions from prerecorded motion clips. This requires the identification and extraction of logically related motions scattered within some data set. Such content-based retrieval of motion capture data, which is a central topic of this thesis, constitutes a difficult problem due to possible spatio-temporal deformations between logically related motions. Recent approaches to motion retrieval apply techniques such as dynamic time warping, which, however, are not applicable to large data sets due to their quadratic space and time complexity. In our approach, we introduce various kinds of relational features describing boolean geometric relations between specified body points and show how these features induce a temporal segmentation of motion capture data streams. By incorporating spatio-temporal invariance into the relational features and induced segments, we are able to adopt indexing methods allowing for flexible and efficient content-based retrieval in large motion capture databases. As a further application of relational motion features, a new method for fully automatic motion classification and retrieval is presented. We introduce the concept of motion templates (MTs), by which the spatio-temporal characteristics of an entire motion class can be learned from training data, yielding an explicit, compact matrix representation. 
The resulting class MT has a direct, semantic interpretation, and it can be manually edited, mixed, combined with other MTs, extended, and restricted. Furthermore, a class MT exhibits both the characteristic and the variational aspects of the underlying motion class at a semantically high level. Classification is then performed by comparing a set of precomputed class MTs with unknown motion data and labeling matching portions with the respective motion class label. Here, the crucial point is that the variational (hence uncharacteristic) motion aspects encoded in the class MT are automatically masked out in the comparison, which can be thought of as locally adaptive feature selection.
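The relational-feature idea in this abstract can be illustrated with a minimal sketch: each frame is mapped to a tuple of boolean geometric relations (here a single toy feature, "right hand above the head"), and consecutive frames with identical feature values merge into one segment. The feature choice and data are assumptions for illustration, not the thesis's actual feature set.

```python
def relational_features(frame):
    """frame: dict of joint name -> (x, y, z). Returns a tuple of
    boolean relations; here, whether the right hand is above the head."""
    return (frame["rhand"][2] > frame["head"][2],)

def segment(frames):
    """Merge consecutive frames with equal feature tuples into
    (features, start_index, end_index) segments."""
    segments = []
    for i, frame in enumerate(frames):
        f = relational_features(frame)
        if segments and segments[-1][0] == f:
            segments[-1] = (f, segments[-1][1], i)
        else:
            segments.append((f, i, i))
    return segments

frames = [
    {"rhand": (0, 0, 1.0), "head": (0, 0, 1.6)},  # hand below head
    {"rhand": (0, 0, 1.2), "head": (0, 0, 1.6)},  # still below
    {"rhand": (0, 0, 1.8), "head": (0, 0, 1.6)},  # raised above
    {"rhand": (0, 0, 1.9), "head": (0, 0, 1.6)},  # still above
]
print(segment(frames))  # two segments: frames 0-1 (below), frames 2-3 (above)
```

Because the features are invariant to absolute position and speed, logically related motions produce the same segment sequence even under spatio-temporal deformations, which is what makes index-based retrieval feasible.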

    Basic gestures as spatiotemporal reference frames for repetitive dance/music patterns in samba and charleston

    The goal of the present study is to gain better insight into how dancers establish, through dancing, a spatiotemporal reference frame in synchrony with musical cues. To achieve this, repetitive dance patterns of samba and Charleston were recorded using a three-dimensional motion capture system. Geometric patterns were then extracted from each joint of the dancer's body. The method uses a body-centered reference frame and decomposes the movement into non-orthogonal periodicities that match periods of the musical meter. Musical cues (such as meter and loudness) as well as action-based cues (such as velocity) can be projected onto the patterns, thus providing spatiotemporal reference frames, or 'basic gestures,' for action-perception couplings. Conceptually speaking, the spatiotemporal reference frames control minimum-effort points in action-perception couplings. They reside as memory patterns in the mental and/or motor domains, ready to be dynamically transformed into dance movements. The present study raises a number of hypotheses related to spatial cognition that may serve as guiding principles for future dance/music studies.
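One simple way to picture the extraction of a periodicity matching a metric period is period-folding: samples of a joint trajectory are grouped by their phase within an assumed metric cycle and averaged, leaving the repetitive component. The period and the toy signal below are assumptions for illustration, not the study's method.

```python
def fold_at_period(signal, period):
    """Average signal values at each phase of the given period,
    returning one averaged cycle (the repetitive component)."""
    return [
        sum(signal[i] for i in range(phase, len(signal), period))
        / len(range(phase, len(signal), period))
        for phase in range(period)
    ]

# Toy right-hand height over 8 samples, repeating every 4 samples
# (one bar of a two-beat pattern sampled twice per beat).
hand_z = [1.0, 1.4, 1.0, 0.6, 1.0, 1.4, 1.0, 0.6]
print(fold_at_period(hand_z, 4))
```

In practice the metric period would come from the music (beat tracking or score), and noise across repetitions averages out, which is what lets the folded cycle serve as a spatiotemporal reference frame.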

    DanceMoves: A Visual Analytics Tool for Dance Movement Analysis

    Analyzing body movement as a means of expression is of interest in diverse areas such as dance, sports, and film, as well as anthropology and archaeology. In choreography in particular, body movements are at the core of artistic expression. Dance moves are composed of spatial and temporal structures that are difficult to address without interactive visual data analysis tools. We present a visual analytics solution that allows the user to get an overview of, compare, and visually search dance move features in video archives. With the help of similarity measures, a user can compare dance moves and assess dance poses. We illustrate our approach through three use cases and an analysis of the performance of our similarity measures. The expert feedback and the experimental results show that 75% to 80% of dance moves can be correctly categorized. Domain experts recognize great potential in this standardized analysis. Comparative and motion analysis allows them to gain detailed insights into the temporal and spatial development of motion patterns and poses.
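A hedged sketch of one plausible pose-similarity measure of the kind such a tool could use: a pose is reduced to a vector of joint angles and two poses are compared by Euclidean distance. The joints and angle values are illustrative assumptions; the paper's actual measures may differ.

```python
import math

def pose_distance(a, b):
    """Euclidean distance between two joint-angle vectors (degrees);
    smaller means more similar poses."""
    return math.dist(a, b)

# Hypothetical knee/hip/elbow angles for a reference pose and two queries.
arabesque = [170.0, 95.0, 180.0]
attempt   = [165.0, 100.0, 175.0]   # close imitation
unrelated = [30.0, 20.0, 45.0]      # crouched, unrelated pose

assert pose_distance(arabesque, attempt) < pose_distance(arabesque, unrelated)
```

Categorizing a move could then amount to assigning it to the nearest reference pose whose distance falls below a chosen threshold.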

    The spatiotemporal representation of dance and music gestures using topological gesture analysis (TGA)

    Spatiotemporal gestures in music and dance have been approached using both qualitative and quantitative research methods. Applying quantitative methods has offered new perspectives but imposed several constraints, such as artificial metric systems, weak links with qualitative information, and incomplete accounts of variability. In this study, we tackle these problems using concepts from topology to analyze gestural relationships in space. Topological Gesture Analysis (TGA) relies on the projection of musical cues onto gesture trajectories, which generates point clouds in a three-dimensional space. These point clouds can be interpreted as topologies equipped with musical qualities, which gives us an idea of the relationships between gesture, space, and music. Using this method, we investigate the relationships between musical meter, dance style, and expertise in two popular dances (samba and Charleston). The results show how musical meter is encoded in the dancer's space and how relevant information about styles and expertise can be revealed by means of simple topological relationships.
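In the spirit of TGA, a minimal sketch: positions of one joint sampled at metric cues (here, downbeats versus upbeats) form two labeled point clouds, and a simple relation between them, centroid separation, hints at how meter is encoded spatially. The cue labels and coordinates below are invented for illustration.

```python
def centroid(points):
    """Component-wise mean of a list of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Hypothetical right-hand positions projected onto metric cues.
downbeats = [(0.1, 0.0, 0.9), (0.2, 0.0, 1.0), (0.1, 0.1, 0.9)]
upbeats   = [(0.5, 0.0, 1.5), (0.6, 0.1, 1.6), (0.5, 0.1, 1.5)]

c_down, c_up = centroid(downbeats), centroid(upbeats)
# The upbeat cloud sits higher than the downbeat cloud: the meter shows
# up as a spatial separation between the two point clouds.
print(c_down[2], c_up[2])
```

Richer topological relations (overlap, containment, connectedness of the clouds) could then distinguish styles and expertise levels, as the abstract suggests.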