
    Dance-the-music : an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    In this article, a computational platform called “Dance-the-Music” is presented that can be used in a dance-educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher’s models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps correctly. Moreover, recognition algorithms based on a template-matching method can determine the quality of a student’s performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.
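    The template-matching idea behind the recognition component can be sketched in a few lines: a student's captured step is compared frame by frame against the teacher's model, and a lower mean distance means a closer match. The 2-D pose representation, the equal-length sequences, and the scoring function below are illustrative assumptions, not the platform's actual implementation.

```python
import math

def template_distance(student, teacher):
    """Mean Euclidean distance between two equal-length pose sequences.

    Each sequence is a list of (x, y) positions sampled at the same rate;
    a lower score means the student's step is closer to the teacher's model.
    """
    assert len(student) == len(teacher)
    total = sum(math.dist(s, t) for s, t in zip(student, teacher))
    return total / len(student)

# Toy example: a student roughly tracking a teacher's square step pattern.
teacher = [(0, 0), (1, 0), (1, 1), (0, 1)]
student = [(0.1, 0.0), (0.9, 0.1), (1.0, 1.1), (0.0, 0.9)]
print(template_distance(student, teacher))
```

    In the platform itself the comparison runs over full-body motion-capture data in real time; a sketch like this only conveys the scoring principle.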

    INTERACTIVE SONIFICATION STRATEGIES FOR THE MOTION AND EMOTION OF DANCE PERFORMANCES

    The Immersive Interactive SOnification Platform (iISoP) is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians contributed in a participatory-design fashion as domain experts in the field of non-verbal affective communication; knowledge extraction took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
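    As a rough illustration of the kind of motion-to-music mapping such a framework involves, consider a single frame of tracked data mapped to tempo, pitch, and loudness. All the ranges and specific mappings here are invented for the example; the iISoP's own mappings were derived from the participatory design process described above.

```python
def sonify_frame(speed, height, arousal):
    """Map one frame of motion/emotion data to musical parameters.

    speed   : movement speed in m/s        -> tempo (BPM)
    height  : hand height, 0..1 normalized -> MIDI pitch
    arousal : estimated arousal, 0..1      -> MIDI velocity (loudness)
    All ranges and mappings here are illustrative, not the iISoP's own.
    """
    tempo = 60 + min(speed, 3.0) * 40                        # 60..180 BPM
    pitch = 48 + round(max(0.0, min(height, 1.0)) * 24)      # C3..C5
    velocity = 40 + round(max(0.0, min(arousal, 1.0)) * 87)  # 40..127
    return tempo, pitch, velocity

print(sonify_frame(1.5, 0.5, 0.8))  # (120.0, 60, 110)
```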

    Guidance, control, and navigation of a quadrotor choreography for a real-time musician-in-the-loop performance

    This thesis proposes a systematic methodology for the guidance, control, and navigation of a quadrotor performing a choreographed dance in real time as a function of the music played by a musician. The four main components of a human choreography (the notions of space, shape, time, and structure) are analyzed and mathematically formulated for a robotic performance. This allows real-time interaction with a musician, without prior knowledge of the music, based on the pitch of the acoustic signal. A novel approach for mapping music features to trajectory parameters is proposed, along with the design of a trajectory-shaping filter based on two coefficients that are set in real time by an artist through a MIDI foot-pedal board. The two coefficients are inspired by a mathematical description of acoustic signals. The proposed approach maps motion parameters and the music to trajectory motifs that are then switched in harmony with the chord structure of the music. The mathematical formulation of a quadrotor choreography is simulated. The simulation relies on the linearized dynamics and the physical properties of a quadrotor and produces a graphical representation of the quadrotor choreography. To validate the control system, the position of the quadrotor is compared with the desired position. To measure the effectiveness of the link between the music and the position of the quadrotor, the trajectory-generator system is inverted to generate a sequence of music pitches; the melodic phrase generated by the position of the quadrotor is played back to the musician, and a real-time musical interaction occurs between the musician and the quadrotor. Simulation results show that the proposed methodology yields an effective real-time performance for a quadrotor choreography.
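    A minimal sketch of a trajectory-shaping filter with two artist-set coefficients might look as follows. Here the two coefficients play the roles of damping and bandwidth in a generic second-order filter, which is an assumption for illustration rather than the thesis's exact design; the filter rounds a stepwise, pitch-derived setpoint into a smooth reference the quadrotor can follow.

```python
def shape_trajectory(targets, zeta, omega, dt=0.01):
    """Second-order shaping filter: smooths a stepwise pitch-derived
    setpoint into a flyable reference trajectory.

    zeta and omega stand in for the two artist-set coefficients
    (foot-pedal values in the thesis); the filter itself is a generic
    second-order system, not the author's exact design.
    """
    y, v = targets[0], 0.0
    out = []
    for r in targets:
        a = omega * omega * (r - y) - 2.0 * zeta * omega * v  # acceleration
        v += a * dt
        y += v * dt
        out.append(y)
    return out

# A step in the setpoint is rounded into a smooth approach toward 1.0.
ref = shape_trajectory([0.0] * 50 + [1.0] * 450, zeta=0.9, omega=8.0)
print(round(ref[-1], 3))
```

    Lowering zeta would let the reference overshoot and oscillate; lowering omega would slow its response, which is the kind of expressive control a pedal-set coefficient gives the artist.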

    Observations on Experience and Flow in Movement-Based Interaction

    Movement-based interfaces assume that their users move. Users have to perform exercises, they have to dance, they have to play golf or football, or they want to train particular bodily skills. Many examples of such interfaces exist, sometimes asking for subtle interaction between user and interface and sometimes asking for ‘brute force’ interaction. Often these interfaces mediate between players of a game; obviously, one of the players may be a virtual human. We embed this interface research in ambient intelligence and entertainment computing research, and the interfaces we consider are not only mediating: they also ‘add’ intelligence to the interaction. Intelligent movement-based interfaces, being able to know and learn about their users, should also be able to keep their users engaged in the interaction. Issues discussed in this chapter are ‘flow’ and ‘immersion’ for movement-based interfaces, and we look at the possible role of interaction synchrony in measuring and supporting engagement.
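    One simple way to quantify interaction synchrony between two movement signals (user and interface, or two players) is a lagged correlation: the lag at which the signals correlate most strongly indicates who is leading and by how much. The sketch below, with invented data and a plain Pearson correlation, only illustrates the idea and is not a measure proposed in the chapter.

```python
def best_lag(a, b, max_lag=10):
    """Shift (in samples) by which signal `a` trails signal `b`,
    found as the lag of maximum Pearson correlation."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        den = (sum((xi - mx) ** 2 for xi in x) *
               sum((yi - my) ** 2 for yi in y)) ** 0.5
        return num / den if den else 0.0

    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        scores[lag] = corr(x, y)
    return max(scores, key=scores.get)

# b copies a's movement three samples late, so b trails a by 3.
a = [0, 1, 3, 2, 5, 4, 8, 6, 9, 7, 2, 5, 1, 8, 4, 3, 9, 0, 6, 7]
b = [0, 0, 0] + a[:-3]
print(best_lag(b, a))  # 3
```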

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies research group (ATIC) of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest included a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc.

    Training Physics-based Controllers for Articulated Characters with Deep Reinforcement Learning

    In this thesis, two different applications of machine learning techniques are discussed for training coordinated motion controllers for arbitrary characters in the absence of motion capture data. The methods highlight the resourcefulness of physical simulations for generating synthetic, generic motion data that can be used to learn various targeted skills. First, we present an unsupervised method for learning locomotion skills in virtual characters from a low-dimensional latent space that captures the coordination between multiple joints. We use a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitation, resulting in a corpus of motion data from which a latent manifold can be extracted. Using reinforcement learning, we then train the character to learn locomotion (such as walking or running) in the low-dimensional latent space instead of the full-dimensional joint action space. The thesis also presents an end-to-end automated framework for training physics-based characters to dance rhythmically to user-input songs. A generative adversarial network (GAN) architecture is proposed that learns to generate physically stable dance moves through repeated interactions with the environment. These moves are then used to construct a dance network that can be used for choreography. Using deep reinforcement learning (DRL), the character is then trained to perform these moves, without losing balance or rhythm, in the presence of physical forces such as gravity and friction.
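    The motor-babble idea can be sketched with a toy model: random joint excitations are pushed through a crude coupled "body", and the dominant direction of joint covariation is extracted as a one-dimensional latent space. The coupling dynamics and the power-iteration PCA below are illustrative stand-ins for the physical simulation and the manifold extraction used in the thesis.

```python
import random

def motor_babble(num_joints=4, steps=200, seed=1):
    """Generate a babble corpus: random low-level joint excitations pushed
    through a toy 'body' that couples neighbouring joints. The coupling is
    a stand-in for physics; the thesis uses a full physical simulation."""
    rng = random.Random(seed)
    pose = [0.0] * num_joints
    corpus = []
    for _ in range(steps):
        for j in range(num_joints):
            drive = rng.uniform(-1, 1)          # uncoordinated excitation
            neighbour = pose[(j + 1) % num_joints]
            pose[j] = 0.8 * pose[j] + 0.1 * neighbour + 0.1 * drive
        corpus.append(list(pose))
    return corpus

def first_principal_axis(corpus, iters=100):
    """Dominant direction of joint covariation (a 1-D 'latent space'),
    via power iteration on the corpus covariance matrix."""
    n, d = len(corpus), len(corpus[0])
    mean = [sum(row[j] for row in corpus) / n for j in range(d)]
    centred = [[row[j] - mean[j] for j in range(d)] for row in corpus]
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

axis = first_principal_axis(motor_babble())
print([round(x, 2) for x in axis])
```

    Controlling the character along `axis` instead of all joint angles is the dimensionality reduction the abstract describes, though the real latent space is learned from far richer simulated motion.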

    Haptic communication between partner dancers and swing as a finite state machine

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Vita. Includes bibliographical references (p. 129-138).

    To see two expert partners, one leading and the other following, swing dance together is to watch a remarkable two-agent communication and control system in action. Even blindfolded, the follower can decode the leader's moves from haptic cues, and the leader composes the dance from a vocabulary of known moves so as to complement the music he is dancing to. Systematically addressing questions about partner-dance communication is of scientific interest and could improve human-robot interaction, and imitating the leader's choreographic skill is an engineering problem with applications beyond the dance domain. Swing dance choreography is a finite state machine, with moves that transition between a small number of poses. Two automated choreographers are presented. One uses an optimization and randomization scheme to compose dances as a sequence of shortest-path problems, with edge lengths measuring the dissimilarity of dance moves to each bar of music. The other solves a two-player zero-sum game between the choreographer and a judge; choosing moves at random from among moves that are good enough is rational under the game model. Further, experiments presenting conflicting musical environments to two partners demonstrate that although musical expression clearly guides the leader's choice of moves, the follower need not hear the same music to properly decode the leader's signals. Dancers embody gentle interaction, in which each participant extends the capabilities of the other, and their cooperation is facilitated by a shared understanding of the motions to be performed.

    To demonstrate that followers use their understanding of the move vocabulary to interact better with their leaders, an experiment paired a haptic robot leader with human followers in a haptically cued dance to a swing music soundtrack. The subjects' performance differed significantly between instances when the subjects could determine which move was being led and instances when they could not determine what the next move would be. Also, two-person teams that cooperated haptically to perform cyclical aiming tasks showed improvements in the Fitts' law or Schmidt's law speed-accuracy tradeoff consistent with a novel endpoint-compromise hypothesis about haptic collaboration.

    by Sommer Elizabeth Gentry, Ph.D.
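    The finite-state-machine view of swing choreography, and the per-bar shortest-path composition, can be made concrete with a small sketch: moves are edges between poses, each bar assigns every move a dissimilarity cost, and dynamic programming over poses finds the cheapest legal move sequence. The move vocabulary and the costs below are toy values invented for illustration, not the thesis's data.

```python
# Moves as FSM transitions between poses; pick, bar by bar, a legal move
# minimizing its dissimilarity to that bar of music (dynamic programming
# over the pose graph, equivalent to a shortest-path computation).

MOVES = {  # move: (from_pose, to_pose) -- a toy vocabulary
    "basic":     ("closed", "closed"),
    "send_out":  ("closed", "open"),
    "whip":      ("open",   "closed"),
    "free_spin": ("open",   "open"),
}

def choreograph(bar_costs, start="closed"):
    """bar_costs[i][move] = dissimilarity of `move` to bar i.
    Returns the minimum-cost legal move sequence starting from `start`."""
    best = {start: (0.0, [])}            # pose -> (cost, moves so far)
    for costs in bar_costs:
        nxt = {}
        for pose, (c, seq) in best.items():
            for move, (src, dst) in MOVES.items():
                if src == pose:
                    cand = (c + costs[move], seq + [move])
                    if dst not in nxt or cand[0] < nxt[dst][0]:
                        nxt[dst] = cand
        best = nxt
    return min(best.values())[1]

bars = [
    {"basic": 1.0, "send_out": 0.2, "whip": 9.9, "free_spin": 9.9},
    {"basic": 9.9, "send_out": 9.9, "whip": 0.3, "free_spin": 0.6},
    {"basic": 0.1, "send_out": 0.5, "whip": 9.9, "free_spin": 9.9},
]
print(choreograph(bars))  # ['send_out', 'whip', 'basic']
```

    The thesis's randomized variant would instead sample from the moves whose costs are close to this minimum, which is rational under its game model against a judge.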

    Towards an interactive framework for robot dancing applications

    Internship carried out at INESC-Porto and supervised by Prof. Fabien Gouyon. Integrated master's thesis, Electrical and Computer Engineering - Major in Telecommunications. Faculdade de Engenharia, Universidade do Porto. 200
