
    Non-choreographed Robot Dance

    This research investigates the difficulties of enabling the humanoid robot Nao to dance to music. The focus is on creating a dance that is not predefined by the researcher, but which emerges from the music played to the robot. Such an undertaking cannot be fully tackled in a small-scale project. Nevertheless, rather than focusing on a subtask of the topic, this research maintains a holistic view of the subject and aims to provide a framework on which future work in this area can build. The need for this research comes from the fact that current approaches to robot dance in general, and Nao dance in particular, rely on predefined dances built by the researcher. The main goal of this project is to move away from the current choreographed approaches to Nao dance and to investigate how to make the robot dance in a non-predefined fashion. Moreover, given that previous research has focused mainly on the analysis of musical beat, a secondary goal of this project is to draw not only on the beat but on other elements of the music as well in order to create the dance.

    Robot Learning Dual-Arm Manipulation Tasks by Trial-and-Error and Multiple Human Demonstrations

    In robotics, there is a need for interactive and expedient learning methods, as experience is expensive. In this research, we propose two different methods to make a humanoid robot learn manipulation tasks: learning by trial-and-error, and learning from demonstrations. Just as a child learns a new task by trying all possible alternatives and then learning from his mistakes, the robot learns in the same manner under trial-and-error learning. We used the Q-learning algorithm, in which the robot tries all possible ways to do a task and builds a matrix of Q-values based on the rewards it received for the actions performed. Using this method, the robot was made to learn dance moves based on a music track. Robot Learning from Demonstrations (RLfD) enables a human user to add new capabilities to a robot in an intuitive manner without explicitly reprogramming it. In this method, the robot learns a skill from demonstrations performed by a human teacher. The robot extracts features, called key-points, from each demonstration and learns a model of the demonstrated task or trajectory using a Hidden Markov Model (HMM). The learned model is then used to produce a generalized trajectory. Finally, we discuss the differences between the two developed systems and draw conclusions from the experiments performed.
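
    A minimal tabular Q-learning sketch of the trial-and-error half of this approach (not the thesis's code): the state space, action set, and reward function below are illustrative placeholders, with the reward standing in for how well an executed dance move fits the music.

```python
import numpy as np

# Hypothetical setup: beat-phase bins as states, candidate moves as actions.
N_STATES, N_ACTIONS = 8, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))     # the Q-value matrix the abstract mentions

def reward(state, action):
    # Placeholder reward: in the thesis it reflects how well the executed
    # move matched the music track.
    return 1.0 if action == state % N_ACTIONS else -0.1

state = 0
for _ in range(5000):
    if rng.random() < EPSILON:                      # explore
        action = int(rng.integers(N_ACTIONS))
    else:                                           # exploit
        action = int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = (state + 1) % N_STATES             # advance to the next beat
    # Standard Q-learning update
    Q[state, action] += ALPHA * (r + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))  # learned move per beat-phase state
```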

    A robot uses its own microphone to synchronize its steps to musical beats while scatting and singing

    Musical beat tracking is one of the effective technologies for human-robot interaction such as musical sessions. Since such interaction should be performed in various environments in a natural way, musical beat tracking for a robot should cope with noise sources such as environmental noise, its own motor noises, and its own voice, using its own microphone. This paper addresses a musical beat tracking robot which can step, scat, and sing according to musical beats by using its own microphone. To realize such a robot, we propose a robust beat tracking method built on two key techniques: spectro-temporal pattern matching and echo cancellation. The former realizes robust tempo estimation with a shorter window length, so it can quickly adapt to tempo changes. The latter is effective for cancelling self-generated noises such as stepping, scatting, and singing. We implemented the proposed beat tracking method on Honda ASIMO. Experimental results showed ten times faster adaptation to tempo changes and high robustness in beat tracking against stepping, scatting, and singing noises. We also demonstrated that the robot times its steps to musical beats while scatting or singing.
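
    A hedged sketch of the tempo-estimation idea named above: match a short window of the onset spectrogram against lag-shifted copies of itself and take the best-matching lag as the beat period. The paper's actual spectro-temporal pattern matching is not reproduced here; the frame rate, window size, and lag range below are assumptions.

```python
import numpy as np

FRAME_RATE = 100.0  # onset frames per second (assumed)

def estimate_beat_interval(onset_spec, min_lag=30, max_lag=60, window=120):
    """onset_spec: (n_frames, n_bins) onset-strength spectrogram.
    Returns the beat period in seconds for the best-matching lag."""
    recent = onset_spec[-window:]            # short analysis window
    best_lag, best_score = min_lag, -1.0
    for lag in range(min_lag, max_lag + 1):
        a, b = recent[lag:], recent[:-lag]   # window vs. lag-shifted copy
        # Normalized spectro-temporal similarity at this lag
        score = np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / FRAME_RATE

# Synthetic check: impulses every 50 frames (0.5 s, i.e. 120 BPM)
spec = np.zeros((400, 16))
spec[::50] = 1.0
print(estimate_beat_interval(spec))  # ~0.5
```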

    Towards an interactive framework for robot dancing applications

    Internship carried out at INESC-Porto and supervised by Prof. Dr. Fabien Gouyon. Integrated master's thesis, Electrical and Computer Engineering - Major in Telecommunications, Faculty of Engineering, University of Porto. 200

    Implementation of a real-time dance ability for mini maggie

    The rise of robotics and the growing interest in fields such as human-robot interaction have triggered the birth of a new generation of social robots that develop and expand their abilities. Much recent research has focused on the dance ability, which has consequently evolved very quickly. Nonetheless, real-time dance ability still remains immature in many areas, such as online beat tracking and dynamic creation of choreographies. The purpose of this thesis is to teach the robot Mini Maggie to dance in real time, synchronously with the rhythm of music captured by its microphone. The number of joints of Mini Maggie is low, so our main objective is not to execute very complex dances, since the range of action is small. However, Mini Maggie should react with a low enough delay, since we want a real-time system, and it should resynchronise if the song changes or there is a sudden tempo change within the same song. To achieve this, Mini Maggie has two subsystems: a beat tracking subsystem, which reports the time instants of detected beats, and a dance subsystem, which makes Mini dance at those time instants. In the beat tracking subsystem, the input microphone signal is first processed to extract the onset strength at each time instant, which is directly related to the beat probability at that instant. The onset strength signal is then delivered to two blocks. The music period estimator block extracts the periodicities of the onset strength signal by computing the 4-cycled autocorrelation, a variant of autocorrelation in which we compute the similarity of the signal not only at a displacement of a single candidate period but also at its first 4 multiples. Finally, the beat tracker takes the onset strength signal and the estimated periods in real time and decides at which time instants there should be a beat. The dance subsystem then executes different dance steps according to several prestored choreographies through Mini Maggie's Dynamixel module, which is in charge of the lower-level management of each joint. With this system we have taught Mini Maggie to dance over a general set of music genres with sufficient reliability. Reliability generally remains stable across different music styles, but when the rhythm lacks minimal stability, as happens in very expressive and subjectively interpreted classical music, our system is not able to track the beats. Mini Maggie's dancing was adjusted so that it was appealing even though the range of possible movements was very limited, due to the lack of degrees of freedom. Audiovisual Systems Engineering
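
    A rough sketch of the period estimator described above, assuming a spectral-flux onset strength and reading the thesis's "4-cycled autocorrelation" as crediting each candidate lag with the autocorrelation at its first 4 multiples; all parameter values are illustrative.

```python
import numpy as np

def onset_strength(spectrogram):
    """Half-wave-rectified spectral flux; spectrogram is (frames, bins)."""
    flux = np.diff(spectrogram, axis=0)
    return np.maximum(flux, 0.0).sum(axis=1)

def four_cycled_autocorrelation(onsets, min_lag=20, max_lag=200):
    """Score each candidate period by the mean autocorrelation at the
    lag and its first 4 multiples; return the best period in frames."""
    x = onsets - onsets.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # acf[L] = lag L
    scores = np.zeros(max_lag + 1)
    for lag in range(min_lag, max_lag + 1):
        multiples = [m * lag for m in range(1, 5) if m * lag < len(acf)]
        scores[lag] = np.mean(acf[multiples])            # credit multiples
    return int(np.argmax(scores))
```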

    Automated Motion Synthesis for Virtual Choreography

    In this paper, we present a technique to automatically synthesize dance moves for arbitrary songs. Our current implementation is for virtual characters, but the same algorithms can easily be used for entertainer robots, such as robotic dancers, which fits this year's conference theme very well. Our technique is based on analyzing a musical tune (a song or a melody) and synthesizing a motion for the virtual character in which the character's movement synchronizes with the musical beats. To analyze the beats of the tune, we developed a fast and novel algorithm. Our motion synthesis algorithm analyzes a library of stock motions and generates new sequences of movements that were not described in the library. We present two algorithms to synchronize dance moves and musical beats: a fast greedy algorithm and a genetic algorithm. Our experimental results show that we can generate new sequences of dance figures in which the dancer reacts to the music and dances in synchronization with it.
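
    A small sketch of how a fast greedy pass could pair stock motions with beats (the paper's genetic algorithm searches the same space globally); the motion names and durations below are invented for illustration.

```python
# Hypothetical stock-motion library: name -> duration in seconds.
MOTIONS = {"step": 0.5, "spin": 1.0, "wave": 0.25, "pose": 2.0}

def greedy_choreography(beat_times):
    """For each inter-beat gap, greedily pick the stock motion whose
    duration is closest to the gap, so movements land on beats."""
    sequence = []
    for start, end in zip(beat_times, beat_times[1:]):
        gap = end - start
        name = min(MOTIONS, key=lambda m: abs(MOTIONS[m] - gap))
        sequence.append((start, name))
    return sequence

print(greedy_choreography([0.0, 0.5, 1.0, 2.0, 2.5]))
# [(0.0, 'step'), (0.5, 'step'), (1.0, 'spin'), (2.0, 'step')]
```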

    Developing a Noise-Robust Beat Learning Algorithm for Music-Information Retrieval

    The field of Music-Information Retrieval (Music-IR) involves the development of algorithms that can analyze musical audio and extract various high-level musical features. Many such algorithms have been developed, and systems now exist that can reliably identify features such as beat locations, tempo, and rhythm from musical sources. These features in turn are used to assist in a variety of music-related tasks, ranging from automatically creating playlists that match specified criteria to synchronizing various elements, such as computer graphics, with a performance. These Music-IR systems thus help humans to enjoy and interact with music. While current systems for identifying beats in music have found widespread utility, most of them have been developed on music that is relatively free of acoustic noise. Much of the music that humans listen to, though, is performed in noisy environments. People often enjoy music in crowded clubs and noisy rooms, but this music is much more challenging for Music-IR systems to analyze, and current beat trackers generally perform poorly on musical audio heard in such conditions. If our algorithms could accurately process this music, it would enable such music to be used in applications like automatic song selection, which are currently limited to music taken directly from professionally produced digital files with little acoustic noise. Noise-robust beat learning algorithms would also allow additional types of performance augmentation which themselves create noise and thus cannot be used with current algorithms. Such a system, for instance, could aid robots in performing synchronously with music, whereas current systems are generally unable to accurately process audio heard alongside noisy robot motors. This work presents a new approach for learning beats and identifying both their temporal locations and their spectral characteristics for music recorded in the presence of noise. First, datasets of musical audio recorded in environments with multiple types of noise were collected and annotated. Noise sources used for these datasets included HVAC sounds from a room, chatter from a crowded bar, and fans and motor noises from a moving robot. Second, an algorithm for learning and locating musical beats was developed which incorporates signal processing and machine learning techniques such as Harmonic-Percussive Source Separation and Probabilistic Latent Component Analysis. A representation of the musical signal called the stacked spectrogram was also utilized in order to better represent the time-varying nature of the beats. Unlike many current systems, which assume that the beat locations will be correlated with some hand-crafted features, this system learns the beats directly from the acoustic signal. Finally, the algorithm was tested against several state-of-the-art beat trackers on the audio datasets. The resultant system was found to significantly outperform the state-of-the-art when evaluated on audio played in realistically noisy conditions.
    Ph.D., Electrical Engineering -- Drexel University, 201
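
    One preprocessing step the thesis builds on, Harmonic-Percussive Source Separation, can be sketched with librosa in place of the author's own pipeline; the input filename below is a placeholder.

```python
import librosa

# Separate the signal into harmonic and percussive components (median-
# filtering HPSS); beat analysis on the percussive part tends to be more
# robust to broadband noise than analysis of the raw mixture.
y, sr = librosa.load("noisy_club_recording.wav")   # placeholder filename
y_harm, y_perc = librosa.effects.hpss(y)

# Onset strength and beat tracking on the percussive component only.
onset_env = librosa.onset.onset_strength(y=y_perc, sr=sr)
tempo, beat_frames = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print("Estimated tempo (BPM):", tempo, "- beats found:", len(beat_times))
```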

    Study on Perception-Action Scheme for Human-Robot Musical Interaction in Wind Instrumental Play

    System: new; Report number: Kou 3337; Degree type: Doctor of Engineering; Date conferred: 2011/2/25; Waseda University degree number: Shin 564

    Beating-time gestures imitation learning for humanoid robots

    Beating-time gestures are movement patterns of the hand swaying along with music, thereby indicating accented musical pulses. The spatiotemporal configuration of these patterns makes them difficult to analyse and model. In this paper we present an innovative modelling approach based on imitation learning, or Programming by Demonstration (PbD). Our approach - based on Dirichlet Process Mixture Models, Hidden Markov Models, Dynamic Time Warping, and non-uniform cubic spline regression - is particularly innovative in that it handles spatial and temporal variability by generating a generalised trajectory from a set of periodically repeated movements. Although not within the scope of our study, our procedures may be implemented to control the movement behaviour of robots and avatar animations in response to music.
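
    A plain-NumPy sketch of one ingredient named above: Dynamic Time Warping used to align two repetitions of a gesture before averaging them toward a generalised trajectory. This is a generic DTW, not the authors' implementation, and the example trajectories are synthetic.

```python
import numpy as np

def dtw_path(a, b):
    """Optimal alignment between two trajectories of shape (T, D)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Two repetitions of the same 2-D hand path, slightly out of phase.
t = np.linspace(0, 2 * np.pi, 60)
rep1 = np.stack([np.cos(t), np.sin(t)], axis=1)
rep2 = np.stack([np.cos(t + 0.3), np.sin(t + 0.3)], axis=1)
aligned = np.array([(rep1[i] + rep2[j]) / 2 for i, j in dtw_path(rep1, rep2)])
print(aligned.shape)  # averaged, time-aligned trajectory
```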