
    Multimodal Assistive Systems for a Smart Living Space (Многомодальные ассистивные системы для интеллектуального жилого пространства)

    The paper proposes a survey of assistive smart spaces and ambient assisted living environments. The design of a multimodal assistive system for a smart living environment is also presented. The system consists of two software complexes. The first provides video signal processing and surveillance for detecting and tracking a user, as well as analysing his or her activity. The second provides audio signal processing for automatic recognition of speech messages and non-speech acoustic events. The developed automatic speech recognition system is multilingual and is able to recognize words in both English and Russian. In the experiments, 2,811 wave files with speech commands and simulated acoustic events were recorded in total. The recognition rates for speech commands and non-speech acoustic events were 96.5% and 93.8%, respectively.
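
    As a rough illustration of bilingual command handling of the kind described above, the sketch below maps recognized English and Russian command words to shared action identifiers; the vocabulary, action names, and the handle_transcript helper are hypothetical and not taken from the paper.

# Hypothetical sketch: mapping bilingual (English/Russian) command words to
# shared action identifiers. The vocabulary and action names are illustrative,
# not the paper's actual lexicon.

COMMANDS = {
    # English
    "light": "TOGGLE_LIGHT",
    "help": "CALL_ASSISTANCE",
    # Russian
    "свет": "TOGGLE_LIGHT",
    "помощь": "CALL_ASSISTANCE",
}

def handle_transcript(transcript: str) -> list[str]:
    """Return the actions triggered by words in an ASR transcript."""
    actions = []
    for word in transcript.lower().split():
        action = COMMANDS.get(word)
        if action is not None:
            actions.append(action)
    return actions

if __name__ == "__main__":
    print(handle_transcript("включи свет"))   # -> ['TOGGLE_LIGHT']
    print(handle_transcript("I need help"))   # -> ['CALL_ASSISTANCE']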

    Comparing CNN and Human Crafted Features for Human Activity Recognition

    Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of using these methods resides in their ability to generate features automatically. This ability greatly simplifies the task of feature extraction, which usually requires domain-specific knowledge, especially when using big data where data-driven approaches can lead to anti-patterns. Despite the advantage of this approach, very little work has been undertaken on analyzing the quality of extracted features, and more specifically on how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for recognition of simple activities, applying this approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard Human Crafted Features (HCF), and (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-Score on the UCI-HAR dataset, using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-Score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
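
    A minimal sketch of a 1D CNN along the lines described above (3 convolutional layers, kernel size 32) is given below, assuming Keras/TensorFlow and the usual UCI-HAR raw-signal input of 128 time steps by 9 inertial channels; the filter counts, pooling, and training configuration are assumptions, not the paper's reported setup.

# Illustrative 1D CNN for HAR, loosely following the architecture described
# above (3 conv layers, kernel size 32). Filter counts, pooling and dense
# sizes are assumptions; UCI-HAR raw windows are 128 time steps x 9 channels.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_har_cnn(n_timesteps: int = 128, n_channels: int = 9,
                  n_classes: int = 6) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_channels)),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.Conv1D(64, kernel_size=32, padding="same", activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(100, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_har_cnn().summary()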

    Speech analysis for Ambient Assisted Living: technical and user design of a vocal order system

    The evolution of ICT led to the emergence of the smart home. A smart home is a home equipped with data-processing technology that anticipates the needs of its inhabitants while trying to maintain their comfort and safety by acting on the house and by implementing connections with the outside world. Smart homes equipped with ambient intelligence technology therefore constitute a promising direction to enable the growing number of elderly people to continue to live in their own homes as long as possible. However, the technological solutions requested by this part of the population have to suit their specific needs and capabilities. These smart homes tend to be equipped with devices whose interfaces are increasingly complex and difficult for the user to control. The people most likely to benefit from these new technologies are people in loss of autonomy, such as disabled people or elderly people with cognitive deficiencies (e.g., Alzheimer's disease). Yet these people are the least capable of using complex interfaces, due to their handicap or their limited understanding of ICT. It thus becomes essential to facilitate daily life and access to the whole home automation system in the smart home. The usual tactile interfaces should be supplemented by accessible interfaces, in particular a system that reacts to the voice; such interfaces are also useful when the person cannot move easily. Vocal orders will allow the following functionality:
    - to provide assistance through a traditional or vocal order;
    - to set up an indirect order regulation for better energy management;
    - to reinforce the link with relatives through interfaces dedicated and adapted to the person in loss of autonomy;
    - to ensure more safety by detecting distress situations and break-ins.
    This chapter describes the different steps needed for the conception of an ambient audio system. The first step concerns acceptability and objections from the end users; we report a user evaluation assessing the acceptance of, and fears about, this new technology. The experiment aimed at testing three important aspects of speech interaction: voice command, communication with the outside world, and the home automation system interrupting a person's activity. It was conducted in a smart home with a voice command using a Wizard of Oz technique and gave information of great interest. The second step is a general presentation of audio sensing technology for ambient assisted living; different aspects of sound and speech processing are developed, and the applications and challenges are presented. The third step concerns speech recognition in the home environment. Automatic Speech Recognition (ASR) systems have reached good performances with close-talking microphones (e.g., a head-set), but performance decreases significantly as soon as the microphone is moved away from the mouth of the speaker (e.g., when the microphone is set in the ceiling). This deterioration is due to a broad variety of effects, including reverberation and the presence of undetermined background noise such as TV, radio, and other devices. This part presents a system for vocal order recognition in a distant-speech context, evaluated in a dedicated flat through several experiments.
    The chapter then concludes with a discussion of the interest of the speech modality for Ambient Assisted Living.
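
    Because both the capture of vocal orders and distant-speech recognition depend on first isolating utterances from a continuous audio stream, the following is a minimal energy-based voice activity detection sketch (Python, numpy only); the frame sizes, the percentile noise-floor rule, and the threshold are illustrative assumptions, not the system described in the chapter.

# Minimal energy-based voice activity detection (VAD) sketch.
# Frame length, hop and threshold heuristic are illustrative assumptions.
import numpy as np

def energy_vad(signal: np.ndarray, sample_rate: int,
               frame_ms: float = 25.0, hop_ms: float = 10.0,
               threshold_db: float = 15.0) -> np.ndarray:
    """Return a boolean array marking frames whose energy exceeds the
    noise floor (estimated as the 10th percentile) by `threshold_db` dB."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop_len)
    energies = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop_len: i * hop_len + frame_len]
        energies[i] = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
    noise_floor = np.percentile(energies, 10)
    return energies > noise_floor + threshold_db

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    sig = np.concatenate([0.01 * np.random.randn(sr),          # background noise
                          0.5 * np.sin(2 * np.pi * 440 * t)])  # "speech-like" burst
    print(energy_vad(sig, sr).sum(), "active frames detected")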

    Development of Automatic Speech Recognition Techniques for Elderly Home Support: Applications and Challenges

    Vocal command may have considerable advantages in terms of usability in the AAL domain. However, efficient audio analysis in a smart home environment is a challenging task, in large part because of poor speech recognition results for elderly people. Dedicated speech corpora were recorded and used to adapt generic speech recognizers to this type of population. Evaluation results of a first experiment allowed conclusions to be drawn about distress call detection. A second experiment involved participants who played fall scenarios in a realistic smart home; 67% of the distress calls were detected online. These results show the difficulty of the task and serve as a basis to discuss the stakes and challenges of this promising technology for AAL.
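
    The 67% figure above is a per-call online detection rate; a trivial bookkeeping sketch for computing such a rate from scenario annotations is given below, with a hypothetical annotation format and example data rather than the study's actual evaluation code.

# Illustrative computation of an online distress-call detection rate.
# The annotation format and example data are hypothetical.

def detection_rate(annotated_calls: list[dict]) -> float:
    """annotated_calls: one dict per uttered distress call, with a boolean
    'detected_online' flag set by the live system."""
    if not annotated_calls:
        return 0.0
    detected = sum(1 for call in annotated_calls if call["detected_online"])
    return detected / len(annotated_calls)

if __name__ == "__main__":
    calls = [
        {"speaker": "S01", "sentence": "help me", "detected_online": True},
        {"speaker": "S01", "sentence": "call someone", "detected_online": False},
        {"speaker": "S02", "sentence": "help me", "detected_online": True},
    ]
    print(f"online detection rate: {detection_rate(calls):.0%}")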

    CirdoX: an On/Off-line Multisource Speech and Sound Analysis Software

    Vocal user interfaces in domestic environments have recently gained interest in the speech processing community. This interest is due to the opportunity of using them in the framework of Ambient Assisted Living, both for home automation (vocal command) and for calls for help in distress situations, e.g. after a fall. CIRDOX is a modular software package able to analyse the audio environment of a home online, extract the uttered sentences and then process them with an ASR module. Moreover, the system performs non-speech audio event classification; in this case, specific models must be trained. The software is designed to be modular and to process the multichannel audio stream online. Some examples of studies in which CIRDOX was involved are described; they were operated in a real environment, namely a living lab. Keywords: audio and speech processing, natural language and multimodal interactions, Ambient Assisted Living (AAL)
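
    To make the modular routing concrete, here is a minimal sketch of a dispatcher that sends each detected audio segment either to an ASR callable or to a sound-event classifier, depending on a speech/non-speech decision; the Segment structure and function names are assumptions for illustration, not CIRDOX's actual interfaces.

# Hypothetical sketch of CIRDOX-style modular routing: each detected audio
# segment is sent to ASR if it is speech, otherwise to a sound-event
# classifier. Names and data structures are illustrative, not CIRDOX's API.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Segment:
    samples: np.ndarray
    sample_rate: int
    is_speech: bool  # output of an upstream speech/non-speech discriminator

def process_segments(segments: list[Segment],
                     asr: Callable[[np.ndarray, int], str],
                     sound_classifier: Callable[[np.ndarray, int], str]) -> list[tuple[str, str]]:
    results = []
    for seg in segments:
        if seg.is_speech:
            results.append(("speech", asr(seg.samples, seg.sample_rate)))
        else:
            results.append(("sound", sound_classifier(seg.samples, seg.sample_rate)))
    return results

if __name__ == "__main__":
    # Stub recognizers standing in for real ASR / sound classification models.
    dummy_asr = lambda x, sr: "turn on the light"
    dummy_cls = lambda x, sr: "door_slam"
    segs = [Segment(np.zeros(16000), 16000, True),
            Segment(np.zeros(16000), 16000, False)]
    print(process_segments(segs, dummy_asr, dummy_cls))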

    Speech and Speaker Recognition for Home Automation: Preliminary Results

    In voice-controlled multi-room smart homes, ASR and speaker identification systems face distant-speech conditions, which have a significant impact on performance. Regarding voice command recognition, this paper presents an approach which dynamically selects the best channel and adapts models to the environmental conditions. The method has been tested on data recorded with 11 elderly and visually impaired participants in a real smart home. The voice command recognition error rate was 3.2% in the off-line condition and 13.2% in the online condition. For speaker identification, the performances were lower and very speaker dependent. However, we show a high correlation between performance and training size. The main difficulty was the too short utterance duration in comparison to state-of-the-art studies. Moreover, speaker identification performance depends on the size of the adaptation corpus, and thus users must record enough data before using the system.
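
    One possible reading of the dynamic channel selection mentioned above is to pick, per utterance, the microphone with the highest estimated signal-to-noise ratio; the numpy sketch below illustrates that idea under a crude SNR estimator (10th-percentile frame power as the noise floor) and is not the selection criterion used in the paper.

# Illustrative per-utterance channel selection: pick the microphone whose
# signal has the highest crude SNR estimate. The SNR estimator is an
# assumption (mean frame power vs. 10th-percentile frame power).
import numpy as np

def estimate_snr_db(x: np.ndarray, frame_len: int = 400) -> float:
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    powers = np.mean(frames ** 2, axis=1) + 1e-12
    noise_power = np.percentile(powers, 10)
    return 10 * np.log10(np.mean(powers) / noise_power)

def select_best_channel(channels: np.ndarray) -> int:
    """channels: array of shape (n_channels, n_samples). Returns the index
    of the channel with the highest estimated SNR."""
    return int(np.argmax([estimate_snr_db(ch) for ch in channels]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tone = np.concatenate([np.zeros(8000),
                           np.sin(2 * np.pi * 220 * np.arange(8000) / 16000)])
    near = tone + 0.01 * rng.standard_normal(16000)       # close microphone
    far = 0.2 * tone + 0.1 * rng.standard_normal(16000)   # distant, noisier
    print("best channel:", select_best_channel(np.stack([far, near])))  # -> 1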

    Recognition of Distress Calls in Distant Speech Setting: a Preliminary Experiment in a Smart Home

    This paper presents a system to recognize distress speech in the homes of seniors in order to provide reassurance and assistance. The system is intended to be integrated into a larger Ambient Assisted Living (AAL) system using only one microphone, with a fixed position, in a non-intimate room. The paper presents the details of the automatic speech recognition system, which must work under distant-speech conditions and with expressive speech. Moreover, privacy is ensured by running the decoding on-site rather than on a remote server. Furthermore, the system was biased to recognize only a set of sentences defined after a user study. The system has been evaluated in a smart space reproducing a typical living room, where 17 participants played scenarios including falls during which they uttered distress calls. The results showed a promising error rate of 29% while emphasizing the challenges of the task.
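
    Biasing recognition toward a fixed set of sentences can be approximated downstream by accepting an ASR hypothesis only when it lies close enough, in normalized word-level edit distance, to one of the allowed distress sentences; the sketch below uses a hypothetical sentence list and threshold and is not the decoding-level biasing applied in the paper.

# Illustrative post-hoc matching of an ASR hypothesis against a small set of
# allowed distress sentences using normalized word-level Levenshtein distance.
# The sentence list and threshold are hypothetical.

def levenshtein(a: list[str], b: list[str]) -> int:
    """Word-level edit distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution
        prev = cur
    return prev[-1]

DISTRESS_SENTENCES = ["help me", "call for help", "I have fallen"]

def match_distress(hypothesis: str, threshold: float = 0.5) -> str | None:
    """Return the closest allowed sentence if its normalized distance is
    at or below the threshold, otherwise None."""
    hyp = hypothesis.lower().split()
    best, best_dist = None, float("inf")
    for sent in DISTRESS_SENTENCES:
        ref = sent.lower().split()
        d = levenshtein(hyp, ref) / max(len(ref), 1)
        if d < best_dist:
            best, best_dist = sent, d
    return best if best_dist <= threshold else None

if __name__ == "__main__":
    print(match_distress("please help me"))      # -> 'help me'
    print(match_distress("what time is it"))     # -> None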