
Robot Gesture Generation from Environmental Sounds Using Inter-modality Mapping

By Yuya Hattori, Hideki Kozima, Kazunori Komatani, Tetsuya Ogata and Hiroshi G. Okuno

Abstract

We propose a motion generation model in which a robot infers the source of an environmental sound and imitates that source's motion. Sharing environmental sounds enables humans and robots to share environmental information, but such sounds are difficult to convey in human-robot communication. We address this problem by focusing on iconic gestures: the robot infers the motion of the sound-source object and maps it onto its own motion. This method enables robots to imitate the motion of a sound source with their bodies.
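The inter-modality mapping described above can be pictured as a two-stage lookup: from a recognized sound category to a presumed source motion, and from that motion to a body gesture. The sketch below is purely illustrative; the sound labels, motion labels, gesture names, and the function `gesture_for_sound` are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of inter-modality mapping (not the paper's code).

# Stage 1: presumed motion of the sound-source object for each sound category.
SOUND_TO_MOTION = {
    "rolling": "circular",   # e.g. a can rolling on the floor
    "tapping": "up_down",    # e.g. knocking on a surface
    "sliding": "sideways",   # e.g. an object dragged across a table
}

# Stage 2: robot gesture that imitates each presumed motion.
MOTION_TO_GESTURE = {
    "circular": "rotate_arm_in_circle",
    "up_down": "raise_and_lower_arm",
    "sideways": "sweep_arm_horizontally",
}

def gesture_for_sound(sound_label: str) -> str:
    """Map a recognized environmental-sound category to a robot gesture
    via the presumed motion of its source (inter-modality mapping)."""
    motion = SOUND_TO_MOTION[sound_label]   # presume the source's motion
    return MOTION_TO_GESTURE[motion]        # imitate it with the body

print(gesture_for_sound("rolling"))  # rotate_arm_in_circle
```

In practice the first stage would be a learned sound classifier rather than a table, but the table makes the two-hop structure of the mapping explicit.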

Topics: Machine Learning, Robotics
Publisher: Lund University Cognitive Studies
Year: 2005
OAI identifier: oai:cogprints.org:4990

