Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data

2011 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops

By Pedro Chahuara, François Portet and Michel Vacher

Abstract

In this paper, a new method to locate a person using multimodal non-visual sensors and microphones in a pervasive environment is presented. The information extracted from the sensors is combined using a two-level dynamic network to obtain location hypotheses. The method was tested in two smart homes using data from experiments involving about 25 participants. Preliminary results show that an accuracy of 90% can be reached using several uncertain sources. The use of implicit localisation sources, such as speech recognition, used in this project mainly for voice commands, can improve performance in many cases.
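
The abstract describes fusing several uncertain, non-visual evidence sources (home-automation sensors and speech picked up by microphones) into room-level location hypotheses. The sketch below is only an illustration of that general idea under simplifying assumptions; it does not reproduce the paper's two-level dynamic network, and all sensor names, rooms, and likelihood values are hypothetical.

```python
# Toy illustration of combining uncertain, room-level evidence from several
# non-visual sources into a single location hypothesis. This is a naive
# independent-fusion sketch, not the two-level dynamic network of the paper.

ROOMS = ["kitchen", "bedroom", "bathroom", "living_room"]

def fuse_location_evidence(observations, prior=None):
    """Combine per-room likelihoods from several uncertain sources.

    observations: list of dicts mapping room -> likelihood in [0, 1],
                  one dict per sensor reading (e.g. a PIR motion sensor,
                  a door contact, or speech recognised on a microphone).
    Returns a normalised distribution over rooms.
    """
    scores = dict(prior) if prior else {room: 1.0 for room in ROOMS}
    for obs in observations:
        for room in ROOMS:
            # Smooth missing or zero likelihoods so that a single noisy
            # source cannot completely rule out a room on its own.
            scores[room] *= max(obs.get(room, 0.0), 1e-3)
    total = sum(scores.values())
    return {room: s / total for room, s in scores.items()}

if __name__ == "__main__":
    readings = [
        {"kitchen": 0.7, "living_room": 0.3},  # hypothetical PIR motion sensor
        {"kitchen": 0.6, "bedroom": 0.2},      # hypothetical door contact
        {"kitchen": 0.8},                      # voice command heard on kitchen mic
    ]
    hypotheses = fuse_location_evidence(readings)
    best = max(hypotheses, key=hypotheses.get)
    print(best, round(hypotheses[best], 3))
```

In this toy setting, the speech-recognition reading acts as an implicit localisation source: it was produced for voice-command purposes, yet it still shifts probability mass towards the room whose microphone captured the utterance, mirroring the improvement the abstract reports.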

Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.5524
Provided by: CiteSeerX