3 research outputs found

    A robot uses its own microphone to synchronize its steps to musical beats while scatting and singing

    Abstract—Musical beat tracking is one of the effective technologies for human-robot interaction such as musical sessions. Since such interaction should be performed naturally in various environments, musical beat tracking for a robot should cope with noise sources such as environmental noise, its own motor noise, and its own voice, using only its own microphone. This paper addresses a musical beat tracking robot that can step, scat, and sing in time with musical beats using its own microphone. To realize such a robot, we propose a robust beat tracking method based on two key techniques: spectro-temporal pattern matching and echo cancellation. The former realizes robust tempo estimation with a shorter window length and can therefore adapt quickly to tempo changes. The latter cancels self-generated noise from stepping, scatting, and singing. We implemented the proposed beat tracking method on Honda ASIMO. Experimental results showed ten times faster adaptation to tempo changes and high robustness of beat tracking against stepping, scatting, and singing noises. We also demonstrated that the robot times its steps to musical beats while scatting or singing.
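    The abstract's core idea of tempo estimation from a short analysis window can be illustrated with a simplified stand-in: autocorrelating an onset-strength envelope over a brief excerpt and picking the strongest inter-beat lag. This is only a minimal sketch of the general approach, not the paper's spectro-temporal pattern matching; the function name, parameters, and BPM range below are assumptions.

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, min_bpm=60, max_bpm=180):
    """Estimate tempo (BPM) from a short onset-strength envelope.

    Simplified illustration: autocorrelate the envelope and take the
    lag with the strongest periodicity inside the BPM search range.
    """
    env = onset_env - onset_env.mean()
    # full autocorrelation; keep non-negative lags only
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # convert the BPM search range to lag indices (frames per beat)
    min_lag = int(frame_rate * 60.0 / max_bpm)
    max_lag = int(frame_rate * 60.0 / min_bpm)
    lag = min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))
    return 60.0 * frame_rate / lag
```

    A short window (here, a few seconds of envelope) is what allows quick adaptation to tempo changes, at the cost of a noisier estimate.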

    A two-layer model for behavior and dialogue planning in conversational service robots

    Abstract — This paper presents a model for the behavior and dialogue planning module of conversational service robots. Most previously built conversational robots cannot perform the dialogue management necessary for accurately recognizing human intentions and providing information to humans. This model integrates robot behavior planning with spoken dialogue management robust enough to engage in mixed-initiative dialogues in specific domains. It has two layers: the upper layer is responsible for global task planning using hierarchical planning, while the lower layer performs local planning through modules called experts, each specialized for a certain kind of task that it carries out by performing physical actions and engaging in dialogues. This model enables switching and canceling tasks based on recognized human intentions. A preliminary implementation of the model, integrated with Honda ASIMO, has shown its effectiveness. Index Terms — conversational robot, service robot, behavior and dialogue planning, dialogue management
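    The two-layer structure described above can be sketched as an upper layer keeping a global task stack while the lower layer routes each recognized intention to a matching expert. This is a hypothetical minimal interface for illustration; the class names, intention labels, and "cancel" convention are assumptions, not the paper's actual API.

```python
class Expert:
    """Lower-layer module specialized for one kind of task
    (hypothetical interface; illustrative only)."""
    def __init__(self, name, handles):
        self.name = name
        self.handles = handles  # intention labels this expert serves
    def can_handle(self, intention):
        return intention in self.handles
    def act(self, intention):
        # stand-in for performing physical actions and dialogue
        return f"{self.name} handles '{intention}'"

class TwoLayerPlanner:
    """Upper layer: global task stack and task switching/cancellation.
    Lower layer: dispatch each recognized intention to an expert."""
    def __init__(self, experts):
        self.experts = experts
        self.task_stack = []
    def on_intention(self, intention):
        if intention == "cancel":
            # cancel the most recent task, if any
            return self.task_stack.pop() if self.task_stack else None
        for expert in self.experts:
            if expert.can_handle(intention):
                self.task_stack.append(intention)
                return expert.act(intention)
        return None  # no expert matches; upper layer could replan here
```

    Keeping the task stack in the upper layer is what makes mid-dialogue task switching and cancellation possible without the experts needing global knowledge.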

    A Biped Robot that Keeps Steps in Time with Musical Beats while Listening to Music with Its Own Ears

    Abstract — We aim to enable a biped robot to interact with humans through real-world music in daily-life environments, e.g., to autonomously keep its steps (stamps) in time with musical beats. To achieve this, the robot should robustly predict the beat times in real time while listening to a musical performance with its own ears (head-embedded microphones). This had not previously been addressed in most studies on music-synchronized robots because of the difficulty of predicting beat times in real-world music. To solve this problem, we implemented a beat-tracking method developed in the field of music information processing. The predicted beat times are then used by a feedback-control method that adjusts the robot's step intervals to synchronize its steps with the beats. The experimental results show that the robot can adjust its steps to the beat times as the tempo changes. The robot needed about 25 s after a tempo change to recognize it and resynchronize its steps.
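    The feedback-control idea in this abstract can be sketched as a proportional update that nudges the current step interval toward the predicted beat period while also correcting phase error. This is an illustrative controller under assumed names and gains, not the paper's actual control law.

```python
def next_step_interval(current_interval, predicted_beat_period,
                       phase_error, gain=0.3):
    """One feedback-control update of the robot's step interval.

    Proportional sketch: shrink the gap between the step interval and
    the predicted beat period, plus a phase correction term.
    (Function name and gain are illustrative assumptions.)
    """
    period_error = predicted_beat_period - current_interval
    return current_interval + gain * (period_error + phase_error)
```

    Iterating this update makes the step interval converge geometrically to the beat period; a small gain trades convergence speed (the ~25 s adaptation reported above) for stability against noisy beat predictions.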