Distant Speech Recognition for Home Automation: Preliminary Experimental Results in a Smart Home

By Benjamin Lecouteux, Michel Vacher and François Portet

Abstract

This paper presents a study carried out within the Sweet-Home project, which aims at developing a new home automation system based on voice commands. The study focused on two tasks: distant speech recognition and sentence spotting (i.e., recognition of home automation commands). For the first task, different combinations of ASR systems, language models and acoustic models were tested. Fusion of ASR outputs by consensus and decoding with a triggered language model (exploiting a priori knowledge) were investigated. For the sentence spotting task, an algorithm based on distance evaluation between the current ASR hypotheses and a predefined set of keyword patterns was introduced to retrieve the correct sentences in spite of ASR errors. The techniques were assessed on real daily-living data collected in a 4-room smart home fully equipped with standard tactile commands and with 7 wireless microphones set in the ceiling. Thanks to Driven Decoding Algorithm techniques, a classical ASR system reached 7.9% WER, against 35% WER in the standard configuration and 15% with MLLR adaptation only. The best keyword pattern classification result obtained in distant speech conditions was 7.5% CER.
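
The abstract does not spell out how the distance-based sentence spotting works; a minimal sketch of that idea in Python is given below, comparing an ASR hypothesis to each predefined command pattern with a word-level edit distance and accepting the closest pattern when it is close enough. The example command list, the length normalization and the threshold are illustrative assumptions, not the configuration used in the paper.

    # Minimal sketch of distance-based sentence spotting.
    # Patterns, normalization and threshold are illustrative assumptions.

    def word_edit_distance(hyp_words, pat_words):
        """Levenshtein distance between two word sequences."""
        m, n = len(hyp_words), len(pat_words)
        dist = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dist[i][0] = i
        for j in range(n + 1):
            dist[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if hyp_words[i - 1] == pat_words[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + cost) # substitution
        return dist[m][n]

    def spot_command(asr_hypothesis, patterns, threshold=0.4):
        """Return the closest command pattern, or None if none is close enough.

        The distance is normalized by the pattern length so that short and
        long commands are comparable; the threshold is a hypothetical value.
        """
        hyp = asr_hypothesis.lower().split()
        best_pattern, best_score = None, float("inf")
        for pattern in patterns:
            pat = pattern.lower().split()
            score = word_edit_distance(hyp, pat) / max(len(pat), 1)
            if score < best_score:
                best_pattern, best_score = pattern, score
        return best_pattern if best_score <= threshold else None

    if __name__ == "__main__":
        # Hypothetical French voice commands of the kind targeted by Sweet-Home.
        commands = ["nestor allume la lumière",
                    "nestor ferme la porte",
                    "nestor appelle du secours"]
        # A noisy ASR hypothesis still maps to the intended command.
        print(spot_command("nestor allume la lumières", commands))

Normalizing the edit distance by the pattern length keeps short and long commands on the same scale; the actual distance measure and decision rule used in the paper may differ.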

Topics: keyword
Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.5681
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...

