29 research outputs found

    On Distant Speech Recognition for Home Automation

    The official version of this draft is available from Springer at http://dx.doi.org/10.1007/978-3-319-16226-3_7. In the framework of Ambient Assisted Living, home automation may be a solution for helping elderly people who live alone at home. This study is part of the Sweet-Home project, which aims to develop a new home automation system based on voice commands to improve the support and well-being of people losing autonomy. The goal of the study is voice command recognition, with a focus on two aspects: distant speech recognition and sentence spotting. Several ASR techniques were evaluated on a realistic corpus acquired in a four-room flat equipped with microphones set in the ceiling. This distant-speech French corpus was recorded with 21 speakers who acted out scenarios of daily living activities. Techniques operating at the decoding stage, such as our novel approach called the Driven Decoding Algorithm (DDA), gave better speech recognition results than the baseline and the other approaches. This solution, which uses the two best-SNR channels together with a priori knowledge (voice commands and distress sentences), increased the recognition rate without introducing false alarms.
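    The abstract does not say how the two best-SNR channels are picked from the ceiling microphones, so the following is only a minimal sketch of one plausible energy-based selection; the function names, frame length, and quantile heuristic are assumptions, not the paper's method.

```python
import numpy as np

def estimate_snr_db(signal, frame_len=512, quantile=0.2):
    """Crude SNR estimate: take the quietest frames as noise and the
    loudest frames as speech, then compare their mean energies."""
    n_frames = len(signal) // frame_len
    frames = np.asarray(signal[: n_frames * frame_len], dtype=np.float64)
    energies = np.sort((frames.reshape(n_frames, frame_len) ** 2).mean(axis=1))
    k = max(1, int(quantile * n_frames))
    noise, speech = energies[:k].mean(), energies[-k:].mean()
    return 10.0 * np.log10((speech + 1e-12) / (noise + 1e-12))

def two_best_snr_channels(channels):
    """Return the indices of the two microphone channels with the
    highest estimated SNR (hypothetical helper, not the paper's code)."""
    snrs = [estimate_snr_db(ch) for ch in channels]
    return sorted(np.argsort(snrs)[-2:].tolist())
```

    The two selected channels would then feed the DDA-driven decoding step; the paper's actual SNR estimation may well differ from this heuristic.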

    Making emergency calls more accessible to older adults through a hands-free speech interface in the house

    Wearable personal emergency response (PER) systems are the mainstream solution for allowing frail and isolated individuals to call for help in an emergency. However, these devices are not well adapted to all users and are often not worn at all times, meaning they are unavailable when needed. This paper presents a Voice User Interface system for emergency call recognition. The interface is designed to permit hands-free interaction using natural language. Crucially, this allows a call for help to be registered without requiring physical proximity to the system. The system is based on an ASR engine and is tested on a corpus collected to simulate realistic situations. The corpus contains French speech from 4 older adults and 13 younger people wearing an old-age simulator that hampers their mobility, vision, and hearing. On-line evaluation of the preliminary system showed an emergency call error rate of 27%. Subsequent off-line experiments improved this result (a call error rate of 24%), demonstrating that emergency call recognition in the home is achievable. Another contribution of this work is the corpus itself, which is made available for research in the hope that it will facilitate related work and speed up the development of robust methods for automatic emergency call recognition in the home.
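    The abstract gives no implementation detail for how a call for help is spotted in an ASR hypothesis. One minimal sketch of such a detector matches hypotheses against a small list of distress sentences by string similarity; the sentence list, threshold, and function name below are illustrative assumptions, not the system described in the paper.

```python
from difflib import SequenceMatcher

# Illustrative distress sentences; the real corpus phrases are in French
# and are not listed in the abstract.
DISTRESS_SENTENCES = [
    "aidez moi",
    "appelez les secours",
    "je suis tombe",
]

def is_emergency_call(hypothesis: str, threshold: float = 0.6) -> bool:
    """Flag an ASR hypothesis as an emergency call when it is close
    enough to any known distress sentence."""
    hyp = hypothesis.lower().strip()
    return any(
        SequenceMatcher(None, hyp, ref).ratio() >= threshold
        for ref in DISTRESS_SENTENCES
    )

print(is_emergency_call("appelez les secours s'il vous plait"))  # True
```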

    Distant speech processing for smart home: comparison of ASR approaches in scattered microphone network for voice command

    Voice command in multi-room smart homes for assisting people losing autonomy in their daily activities faces several challenges, one of them being the distant-speech condition, which degrades ASR performance. This paper presents an overview of multiple techniques for fusing multi-source audio (pre-, middle-, and post-fusion) for automatic speech recognition of in-home voice commands. Robust speech models are obtained by adapting them to the environment and to the task. Experiments are based on several publicly available realistic datasets in which participants enacted activities of daily living. The corpora were recorded in natural conditions, meaning the background noise is sporadic rather than continuous. The smart home is equipped with one or two microphones per room, placed more than 1 meter apart. An evaluation of the best-suited techniques shows that voice command recognition improves at the decoding level when multiple sources and model adaptation are used. Although the Word Error Rate (WER) is between 26% and 40%, the Domotic Error Rate (computed like the WER, but at the level of whole voice commands) is below 5.8% for deep neural network models; a method using Feature-space Maximum Likelihood Linear Regression (fMLLR) with speaker-adaptive training and a Subspace Gaussian Mixture Model (SGMM) achieves comparable results.
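    The Domotic Error Rate is described as a WER computed over whole voice commands. Under the simplifying assumption that recognised commands are aligned one-to-one with reference commands (ignoring the insertions and deletions a full WER-style alignment would count), it reduces to the sketch below; the function name and example commands are illustrative.

```python
def domotic_error_rate(references, hypotheses):
    """Command-level error rate: each voice command counts as a single
    unit, and a command is an error if the recognised string differs
    from the reference. Simplified: assumes one hypothesis per
    reference, so insertions and deletions are not modelled."""
    assert len(references) == len(hypotheses)
    errors = sum(ref != hyp for ref, hyp in zip(references, hypotheses))
    return errors / len(references)

refs = ["allume la lumiere", "ferme les volets", "appelle les secours"]
hyps = ["allume la lumiere", "ferme la porte", "appelle les secours"]
print(f"DER = {domotic_error_rate(refs, hyps):.1%}")  # DER = 33.3%
```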