
    Implementation of a real-time dance ability for Mini Maggie

    The rise of robotics and the growing interest in fields such as human-robot interaction have triggered the birth of a new generation of social robots that develop and expand their abilities. Much recent research has focused on the dance ability, which has consequently evolved very quickly. Nonetheless, real-time dancing remains immature in areas such as online beat tracking and dynamic creation of choreographies. The purpose of this thesis is to teach the robot Mini Maggie to dance in real time, synchronously with the rhythm of music captured from a microphone. Mini Maggie has few joints, so our main objective is not to execute very complex dances, since the range of action is small. However, Mini Maggie should react with a low enough delay, since we want a real-time system, and it should resynchronise if the song changes or there is a sudden tempo change within the same song. To achieve this, Mini Maggie has two subsystems: a beat tracking subsystem, which reports the time instants of detected beats, and a dance subsystem, which makes Mini dance at those instants. In the beat tracking subsystem, the input microphone signal is first processed to extract the onset strength at each time instant, which is directly related to the beat probability at that instant. The onset strength signal is then delivered to two blocks. The music period estimator block extracts the periodicities of the onset strength signal by computing the 4-cycled autocorrelation, a variant of autocorrelation in which the similarity of the signal is computed not only at a displacement of a single candidate period but also at its first 4 multiples. Finally, the beat tracker takes the onset strength signal and the estimated periods in real time and decides at which time instants there should be a beat. 
The dance subsystem then executes different dance steps according to several prestored choreographies, thanks to Mini Maggie's Dynamixel module, which is in charge of the lower-level management of each joint. With this system we have taught Mini Maggie to dance to a general set of music genres with sufficient reliability. Reliability generally remains stable across music styles, but when the rhythm lacks even minimal stability, as happens in very expressive and subjectively interpreted classical music, the system cannot track the beats. Mini Maggie's dancing was adjusted so that it was appealing despite the very limited range of possible movements caused by the lack of degrees of freedom.
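The period-estimation idea above, scoring each candidate lag by the autocorrelation at that lag and at its first 4 multiples, can be sketched roughly as follows (function name, lag range, and normalisation are illustrative, not the thesis's implementation):

```python
import numpy as np

def estimate_period(onset_strength, min_lag=20, max_lag=200):
    """Estimate the beat period (in frames) of an onset-strength signal.

    Each candidate lag is scored by the autocorrelation at the lag and
    at its first 4 multiples, in the spirit of the thesis's 4-cycled
    autocorrelation (a sketch, not the exact algorithm)."""
    x = onset_strength - onset_strength.mean()
    # one-sided autocorrelation, normalised by the zero-lag value
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)
    best_lag, best_score = min_lag, -np.inf
    for lag in range(min_lag, max_lag):
        # average the autocorrelation over the lag and its multiples 2..4
        multiples = [m * lag for m in range(1, 5) if m * lag < len(ac)]
        score = sum(ac[m] for m in multiples) / len(multiples)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Scoring multiples as well as the base lag penalises sub-multiples of the true period, which a plain single-lag autocorrelation peak-pick would confuse with the beat period.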

    Towards a framework to make robots learn to dance

    A key motive of human-robot interaction is to make robots and humans interact through different aspects of the real world. As robots have become more realistic in appearance, so has the desire for them to exhibit complex behaviours. A growing area of interest in complex behaviour is robot dancing. Dance is an entertaining activity that can be enjoyed either as the performer or as the spectator, and each dance contains fundamental features that make it up. Some researchers are therefore curious to model this activity so that robots can perform it in human social environments. In current research, most dancing robots are pre-programmed with dance motions, and few can generate their own dance or alter their movements according to human responses while dancing. This thesis explores the question "Can a robot learn to dance?". A dancing framework is proposed to address it. The Sarsa algorithm and the Softmax action-selection rule from traditional reinforcement learning form part of the framework, enabling a virtual robot to learn and adapt appropriate dance behaviours. The robot follows a progressive approach, using the knowledge obtained at each stage of its development to improve the dances it generates. The proposed framework addresses three stages of development of a robot's dance: learning ability, creative ability for dance motions, and adaptive ability to human preferences. Learning ability is the ability to make a robot gradually perform the desired dance behaviours. Creative ability is the robot generating its own dance motions and structuring them into a dance. Adaptive ability is the robot changing its dance in response to human feedback. A number of experiments were conducted to explore these challenges and verified that the quality of the robot's dance can be improved through each stage of its development.
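The learning stage combines Sarsa with softmax (Boltzmann) action selection. A minimal sketch on an invented toy chain task (the environment, rewards, and parameters below are placeholders, not the thesis's dance task):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, tau=1.0):
    # Boltzmann action selection: higher-valued actions are picked more
    # often, but every action keeps a nonzero probability (exploration)
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

def sarsa_train(n_states=5, n_actions=3, episodes=500, alpha=0.1, gamma=0.9):
    """Tiny Sarsa loop on a toy task: the 'dance' ends at the last state;
    action 0 advances one state, the others stay put. Reward 1 is given
    on reaching the final state. Purely illustrative."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        a = rng.choice(n_actions, p=softmax(Q[s]))
        while s < n_states - 1:
            s2 = s + 1 if a == 0 else s
            r = 1.0 if s2 == n_states - 1 else 0.0
            a2 = rng.choice(n_actions, p=softmax(Q[s2]))
            # Sarsa is on-policy: it bootstraps from the action actually
            # taken next, not from the greedy action
            Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
            s, a = s2, a2
    return Q
```

After training, the advancing action dominates the learned values, and the softmax policy therefore selects it most of the time while still occasionally exploring alternatives.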

    Expressive Motion Synthesis for Robot Actors in Robot Theatre

    Lately, personal and entertainment robotics have become more and more common. This thesis studies the application of entertainment robots in the context of a Robot Theatre. Specifically, it focuses on the synthesis of expressive movements, or animations, for the robot performers (Robot Actors). A novel paradigm that emerged from computer animation is to represent motion data as a set of signals. Pre-programmed motion data can thus be quickly modified using common signal-processing techniques such as multiresolution filtering and spectral analysis. However, manual adjustment of the filtering and spectral parameters, and good artistic skills, are still required to obtain the desired expressions in the resulting animation. Music contains timing, timbre, and rhythm information which humans can translate into affect and express through movement dynamics, such as in dancing. Music data is therefore assumed to contain affective information which can be expressed in the movements of a robot. In this thesis, music data is used as an input signal to generate motion data (Dance) and to modify a sequence of pre-programmed motion data (Scenario) for a custom-made Lynxmotion robot and a KHR-1 robot, respectively. The music data, in MIDI format, is parsed for timing and melodic information, which are then mapped to joint-angle values. Surveys were conducted to validate the usefulness and contribution of music signals in adding expressiveness to a robot's movements for the Robot Theatre application.
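The mapping from parsed melodic information to joint-angle values could be as simple as a linear pitch-to-angle scaling per note onset. The note representation, pitch range, and angle limits below are assumptions for illustration, not the thesis's actual mapping:

```python
def notes_to_keyframes(notes, joint_min=-90.0, joint_max=90.0,
                       pitch_min=48, pitch_max=84):
    """Map a melody to servo keyframes (a hypothetical scheme).

    `notes` is a list of (onset_time_sec, midi_pitch) pairs. Each onset
    becomes a keyframe time, and the pitch is scaled linearly into the
    joint's angle range, so higher notes raise the joint further."""
    keyframes = []
    for onset, pitch in notes:
        p = min(max(pitch, pitch_min), pitch_max)          # clamp pitch
        frac = (p - pitch_min) / (pitch_max - pitch_min)   # 0..1
        angle = joint_min + frac * (joint_max - joint_min)
        keyframes.append((onset, angle))
    return keyframes
```

A real pipeline would parse the MIDI file into such (onset, pitch) pairs first and interpolate between keyframes at the servo update rate; both steps are omitted here.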

    Haptic communication between partner dancers and swing as a finite state machine

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 129-138). To see two expert partners, one leading and the other following, swing dance together is to watch a remarkable two-agent communication and control system in action. Even blindfolded, the follower can decode the leader's moves from haptic cues alone. The leader composes the dance from a vocabulary of known moves so as to complement the music he is dancing to. Systematically addressing questions about partner-dance communication is of scientific interest and could improve human-robot interaction, and imitating the leader's choreographic skill is an engineering problem with applications beyond the dance domain. Swing dance choreography is a finite state machine, with moves that transition between a small number of poses. Two automated choreographers are presented. One uses an optimization-and-randomization scheme to compose dances via a sequence of shortest-path problems, with edge lengths measuring the dissimilarity of dance moves to each bar of music. The other solves a two-player zero-sum game between the choreographer and a judge; under this game model, choosing moves at random from among the moves that are good enough is rational. Further, experiments presenting conflicting musical environments to the two partners demonstrate that although musical expression clearly guides the leader's choice of moves, the follower need not hear the same music to decode the leader's signals properly. Dancers embody gentle interaction, in which each participant extends the capabilities of the other, and their cooperation is facilitated by a shared understanding of the motions to be performed. 
To demonstrate that followers use their understanding of the move vocabulary to interact better with their leaders, an experiment paired a haptic robot leader with human followers in a haptically cued dance to a swing music soundtrack. The subjects' performance differed significantly between instances when they could determine which move was being led and instances when they could not determine what the next move would be. In addition, two-person teams that cooperated haptically to perform cyclical aiming tasks showed improvements in the Fitts' law or Schmidt's law speed-accuracy trade-off consistent with a novel endpoint-compromise hypothesis about haptic collaboration. By Sommer Elizabeth Gentry, Ph.D.
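The first automated choreographer composes a dance as a sequence of shortest-path problems over the finite-state move graph, with edge lengths given by the dissimilarity of each move to each bar of music. A minimal dynamic-programming sketch (the move names, poses, and costs are invented for illustration):

```python
def compose_dance(moves, cost, bars, start_pose):
    """Pick one move per bar by dynamic programming over the pose graph.

    `moves` maps a pose to a list of (move_name, next_pose) choices;
    `cost[move_name][bar]` scores how poorly a move fits that bar,
    standing in for the thesis's move/music dissimilarity edge lengths."""
    INF = float("inf")
    # best[pose] = (total_cost, move_sequence) for the bars seen so far
    best = {start_pose: (0.0, [])}
    for bar in range(bars):
        nxt = {}
        for pose, (c, seq) in best.items():
            for move, pose2 in moves.get(pose, []):
                c2 = c + cost[move][bar]
                if c2 < nxt.get(pose2, (INF, None))[0]:
                    nxt[pose2] = (c2, seq + [move])
        best = nxt
    # cheapest complete sequence over all reachable final poses
    return min(best.values(), key=lambda t: t[0])[1]
```

Because the state space (poses) is small, this shortest-path view makes each bar's choice a simple relaxation step, which is what lets randomization over "good enough" moves be layered on top without losing musical fit.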

    The experience of creating dance to NAO robot

    Developing dances and teaching them to robots opens new possibilities in social robotics, and dancing robots are today at the centre of research interest. This article describes the experience of creating a dance animation for the anthropomorphic robot Nao v 1.14.4. The authors discuss the conceptual and technical aspects of implementing the dance animation for the robot.

    Fall Prediction for New Sequences of Motions

    Motions reinforce meanings in human-robot communication when they are relevant and initiated at the right times. Given the task of using motions for an autonomous humanoid robot to communicate, different sequences of relevant motions are generated from a motion library. Each motion in the library is stable on its own, but a sequence may cause the robot to become unstable and fall. We are interested in predicting whether a sequence of motions will result in a fall, without executing the sequence on the robot. We contribute a novel algorithm, ProFeaSM, that uses only body angles collected during the execution of single motions, and of interpolations between pairs of motions, to predict whether a sequence will cause the robot to fall. We demonstrate the efficacy of ProFeaSM on the NAO humanoid robot in a real-time simulator, Webots, and on a real NAO, and explore the trade-off between precision and recall.
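The abstract does not specify ProFeaSM itself. As a purely illustrative stand-in, sequence-level prediction from previously recorded body angles, without executing the sequence, could be as simple as thresholding frame-to-frame angle changes over the concatenated traces:

```python
import numpy as np

def predict_fall(single_motion_angles, threshold=25.0):
    """Toy stand-in for sequence-level fall prediction.

    Concatenates the body-angle traces recorded for each single motion
    (degrees per frame) and flags the sequence as a likely fall if any
    frame-to-frame angle change exceeds the threshold. ProFeaSM is more
    involved; this only illustrates the idea of predicting from recorded
    angles rather than from execution."""
    trace = np.concatenate(single_motion_angles)
    jumps = np.abs(np.diff(trace))
    return bool(jumps.max() > threshold)
```

The precision/recall trade-off the authors explore would correspond here to moving the threshold: a lower value flags more sequences (higher recall, lower precision) and vice versa.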

    Multiple Visual Feature Integration Based Automatic Aesthetics Evaluation of Robotic Dance Motions

    Imitating human behaviors is one effective way to develop artificial intelligence. Human dancers standing in front of a mirror routinely carry out autonomous aesthetic evaluation of their own dance motions, which they observe in the mirror. Meanwhile, in the visual-aesthetics cognition of the human brain, space and shape are two important visual elements perceived from motion. Inspired by these facts, this paper proposes a novel mechanism for automatic aesthetic evaluation of robotic dance motions based on multiple-visual-feature integration. In this mechanism, a video of a robotic dance motion is first converted into several kinds of motion history images; a spatial feature (ripple space coding) and shape features (Zernike moments and curvature-based Fourier descriptors) are then extracted from the optimized motion history images. Based on feature integration, a homogeneous ensemble classifier using three different random forests is deployed to build a machine aesthetics model, aiming to give the machine a human-like aesthetic ability. The feasibility of the proposed mechanism has been verified by simulation experiments, in which the ensemble classifier achieves a correct aesthetic-evaluation ratio of 75%, outperforming existing approaches.
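The homogeneous ensemble of three different random forests can be sketched with scikit-learn, assuming the feature-extraction stage has already produced fixed-length vectors (the data below is synthetic, and the hyperparameters are placeholders, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

def build_aesthetics_ensemble():
    """Homogeneous ensemble: three random forests that differ only in
    their random seeds, combined by majority (hard) voting on a binary
    'aesthetic / not aesthetic' label."""
    forests = [(f"rf{i}", RandomForestClassifier(n_estimators=50,
                                                 random_state=i))
               for i in range(3)]
    return VotingClassifier(estimators=forests, voting="hard")

# toy demonstration on synthetic feature vectors standing in for the
# ripple-space / Zernike / Fourier descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in aesthetics label
clf = build_aesthetics_ensemble().fit(X, y)
```

Seeding each forest differently decorrelates their errors, which is what makes the majority vote more stable than any single forest.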