
Online Multimodal Speaker Detection for Humanoid Robots

Author manuscript, published in IEEE International Conference on Humanoid Robotics (Humanoids), 2012.

By Jordi Sanchez-Riera, Xavier Alameda-Pineda, Johannes Wienke, Antoine Deleforge and Soraya Arias

Abstract

In this paper we address the problem of audio-visual speaker detection. We introduce an online system working on the humanoid robot NAO. The scene is perceived with two cameras and two microphones. A multimodal Gaussian mixture model (mGMM) fuses the information extracted from the auditory and visual sensors and detects the most probable audio-visual object, e.g., a person emitting a sound, in 3D space. The system is implemented on top of a platform-independent middleware and is able to process the information online (17 Hz). A detailed description of the system and its implementation is provided, with special emphasis on the online processing issues and the proposed solutions. Experimental validation, performed with five different scenarios, shows that the proposed method opens the door to robust human-robot interaction scenarios.
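The abstract describes fusing auditory and visual observations with a multimodal Gaussian mixture model and selecting the most probable audio-visual object in 3D space. The paper's exact formulation is not reproduced here; the following is a minimal sketch of that general idea, assuming visual 3D detections and audio localizations have already been mapped into a common 3D frame, and using scikit-learn's GaussianMixture in place of the authors' own mGMM. All function and variable names (detect_av_speaker, n_candidates, etc.) are hypothetical.

```python
# Hedged sketch, not the authors' implementation: pool 3D observations from the
# visual pipeline (e.g., stereo face/body detections) and from auditory
# localization, fit a Gaussian mixture over them, and pick the component that
# gathers evidence from BOTH modalities as the active audio-visual speaker.
import numpy as np
from sklearn.mixture import GaussianMixture


def detect_av_speaker(visual_points, audio_points, n_candidates=3, seed=0):
    """Return the 3D location of the mixture component best supported by both modalities.

    visual_points : (Nv, 3) array of 3D points from the visual sensors
    audio_points  : (Na, 3) array of 3D points derived from auditory localization
    n_candidates  : assumed maximum number of audio-visual objects in the scene
    """
    fused = np.vstack([visual_points, audio_points])  # pool both modalities
    gmm = GaussianMixture(n_components=n_candidates,
                          covariance_type="full",
                          random_state=seed).fit(fused)

    # Soft-assign each modality's observations to the mixture components.
    resp_v = gmm.predict_proba(visual_points)  # (Nv, K)
    resp_a = gmm.predict_proba(audio_points)   # (Na, K)

    # Score each component by how much visual AND auditory evidence it gathers;
    # the product rewards components supported by both modalities at once.
    support = resp_v.mean(axis=0) * resp_a.mean(axis=0)
    best = int(np.argmax(support))
    return gmm.means_[best]  # 3D location of the detected speaker


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two people in view, at (1, 0, 2) and (-1, 0, 2); only the first emits sound.
    visual = np.vstack([rng.normal([1, 0, 2], 0.05, (50, 3)),
                        rng.normal([-1, 0, 2], 0.05, (50, 3))])
    audio = rng.normal([1, 0, 2], 0.15, (20, 3))
    print(detect_av_speaker(visual, audio))  # approximately [1, 0, 2]
```

In an online setting such as the one described in the abstract, a routine of this kind would run on each incoming batch of synchronized audio-visual observations; batching and the fixed number of candidate objects are assumptions of this sketch, not details taken from the paper.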

Year: 2012
OAI identifier: oai:CiteSeerX.psu:10.1.1.370.8504
Provided by: CiteSeerX
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://hal.inria.fr/docs/00/76...

