9 research outputs found

    Speech analysis for Ambient Assisted Living: technical and user design of a vocal order system

    No full text
    The evolution of ICT has led to the emergence of the smart home: a home equipped with data-processing technology that anticipates the needs of its inhabitants while maintaining their comfort and safety, both by acting on the house itself and by providing connections with the outside world. Smart homes equipped with ambient intelligence technology therefore constitute a promising way to enable the growing number of elderly people to continue living in their own homes as long as possible. However, the technological solutions required by this segment of the population have to suit their specific needs and capabilities. These smart homes tend to be equipped with devices whose interfaces are increasingly complex and difficult for the user to control. The people most likely to benefit from these new technologies are those losing autonomy, such as people with disabilities or elderly people with cognitive impairments (e.g., Alzheimer's disease); yet they are also the least able to use complex interfaces, owing to their disabilities or their limited familiarity with ICT. It therefore becomes essential to ease daily life and access to the whole home automation system. The usual tactile interfaces should be supplemented by accessible ones, in particular a system that reacts to the voice; such interfaces are also useful when the person cannot move easily. Vocal orders enable the following functionality:
    - assistance through traditional or vocal commands;
    - indirect command regulation for better energy management;
    - a reinforced link with relatives through interfaces dedicated and adapted to people losing autonomy;
    - improved safety through the detection of distress situations and break-ins.
    This chapter describes the steps needed to design an ambient audio system. The first step concerns acceptability and the end users' objections; we report a user evaluation assessing acceptance of, and apprehension about, this new technology. The experiment tested three important aspects of speech interaction: voice commands, communication with the outside world, and the home automation system interrupting a person's activity. It was conducted in a smart home with a voice command simulated through a Wizard of Oz technique, and yielded information of great interest. The second step is a general presentation of audio sensing technology for Ambient Assisted Living, covering different aspects of sound and speech processing together with their applications and challenges. The third step addresses speech recognition in the home environment. Automatic Speech Recognition (ASR) systems achieve good performance with close-talking microphones (e.g., headsets), but performance decreases significantly as soon as the microphone is moved away from the speaker's mouth (e.g., when it is set in the ceiling). This deterioration is due to a broad variety of effects, including reverberation and undetermined background noise from sources such as TV, radio, and household devices. This part presents a vocal order recognition system for the distant-speech context, evaluated through experiments in a dedicated flat.
    The chapter then concludes with a discussion of the value of the speech modality for Ambient Assisted Living.

    On Distant Speech Recognition for Home Automation

    No full text
    The official version of this draft is available from Springer via http://dx.doi.org/10.1007/978-3-319-16226-3_7. In the framework of Ambient Assisted Living, home automation may be a solution for helping elderly people living alone at home. This study is part of the Sweet-Home project, which aims at developing a new home automation system based on voice commands to improve the support and well-being of people losing autonomy. The goal of the study is vocal order recognition, with a focus on two aspects: distant speech recognition and sentence spotting. Several ASR techniques were evaluated on a realistic corpus acquired in a 4-room flat equipped with microphones set in the ceiling. This distant-speech French corpus was recorded with 21 speakers who acted out scenarios of daily living activities. Techniques acting at the decoding stage, such as our novel approach called the Driven Decoding Algorithm (DDA), gave better speech recognition results than the baseline and other approaches. This solution, which uses the two best-SNR channels together with a priori knowledge (voice commands and distress sentences), demonstrated an increase in recognition rate without introducing false alarms.
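    The paper itself gives no code, but the two-best-channel selection can be illustrated with a short Python sketch. The percentile-based SNR estimate and all function names below are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def estimate_snr_db(channel, frame_len=512):
            # Rough frame-energy SNR estimate for one channel: the noise floor
            # is approximated by the 10th percentile of frame energies, the
            # signal level by the 90th percentile.
            n_frames = len(channel) // frame_len
            frames = channel[:n_frames * frame_len].reshape(n_frames, frame_len)
            energies = np.mean(frames ** 2, axis=1) + 1e-12
            return 10.0 * np.log10(np.percentile(energies, 90)
                                   / np.percentile(energies, 10))

        def two_best_snr_channels(channels):
            # Indices of the two channels with the highest estimated SNR.
            snrs = [estimate_snr_db(c) for c in channels]
            return sorted(np.argsort(snrs)[-2:].tolist())

        # Example: 7 ceiling microphones; channel 3 carries the cleanest speech.
        rng = np.random.default_rng(0)
        channels = [rng.normal(0, 0.01, 16000) for _ in range(7)]
        channels[3][4000:8000] += 0.5 * np.sin(np.linspace(0, 300 * np.pi, 4000))
        print(two_best_snr_channels(channels))  # one of the two indices is 3

    In the actual system, decoding of the selected channels is then driven by the a priori voice commands and distress sentences.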

    Distant speech recognition for home automation: Preliminary experimental results in a smart home

    Full text link
    This paper presents a study that is part of the Sweet-Home project, which aims at developing a new home automation system based on voice commands. The study focused on two tasks: distant speech recognition and sentence spotting (i.e., recognition of domotic orders). For the first task, different combinations of ASR systems, language models, and acoustic models were tested. Fusion of ASR outputs by consensus and with a triggered language model (using a priori knowledge) was investigated. For the sentence spotting task, an algorithm based on evaluating the distance between the current ASR hypotheses and a predefined set of keyword patterns was introduced in order to retrieve the correct sentences in spite of ASR errors. The techniques were assessed on real daily living data collected in a 4-room smart home fully equipped with standard tactile commands and with 7 wireless microphones set in the ceiling. Thanks to Driven Decoding Algorithm techniques, a classical ASR system reached 7.9% WER, against 35% WER in the standard configuration and 15% with MLLR adaptation only. The best keyword pattern classification result obtained in distant speech conditions was 7.5% CER.
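    As a hedged illustration of the sentence-spotting step (the paper's actual distance metric and decision threshold are not reproduced here; the word-level Levenshtein distance and the 0.4 normalized threshold below are assumptions), the idea can be sketched as follows:

        def edit_distance(a, b):
            # Word-level Levenshtein distance between two token sequences,
            # computed with a single rolling row.
            dp = list(range(len(b) + 1))
            for i, wa in enumerate(a, 1):
                prev, dp[0] = dp[0], i
                for j, wb in enumerate(b, 1):
                    prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                             dp[j - 1] + 1,      # insertion
                                             prev + (wa != wb))  # substitution
            return dp[-1]

        def spot_pattern(hypothesis, patterns, max_norm_dist=0.4):
            # Return the closest keyword pattern, or None if every pattern
            # is too far from the ASR hypothesis.
            words = hypothesis.lower().split()
            best = min(patterns,
                       key=lambda p: edit_distance(words, p.lower().split()))
            dist = edit_distance(words, best.lower().split())
            return best if dist / max(len(best.split()), 1) <= max_norm_dist else None

        # Hypothetical domotic orders; the ASR hypothesis contains one error.
        patterns = ["turn on the light", "close the blinds", "call for help"]
        print(spot_pattern("turn on the night", patterns))  # -> turn on the light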

    Adaptive model-driven user interface development systems

    Get PDF
    Adaptive user interfaces (UIs) were introduced to address some of the usability problems that plague many software applications. Model-driven engineering forms the basis for most of the systems targeting the development of such UIs. An overview of these systems is presented, and a set of criteria is established to evaluate the strengths and shortcomings of the state of the art, which is categorized under architectures, techniques, and tools. A summary of the evaluation is presented in tables that visually illustrate how each system fulfills each criterion. The evaluation identified several gaps in the existing art and highlighted promising areas for improvement.

    JEP-TALN-RECITAL 2012, Atelier ILADI 2012: Interactions Langagières pour personnes Âgées Dans les habitats Intelligents

    No full text
    To address the challenge of supporting an ageing population at home, the solutions adopted by industrialized countries rely on the massive development of Information and Communication Technologies (ICT) through Ambient Assisted Living (AAL). One of the greatest challenges is to design smart homes for health that anticipate the needs of their inhabitants while maintaining their safety and comfort. Natural Language Processing (NLP) and speech technologies have a significant role to play in assisting elderly people in their daily lives and enabling their participation in the "information society", since they lie at the heart of human communication. Indeed, language technologies can enable natural interaction (automatic speech recognition, speech synthesis, dialogue) with communicating objects and smart homes. This interaction opens up many perspectives, notably in social and empathic communication (perception and generation of emotions, conversational agents), the analysis of language abilities (lexical access, pathological speech), the modelling and analysis of elderly people's language production (acoustic models, language models), cognitive stimulation, the detection of distress situations, access to digital documents, etc. In recent years, a growing number of scientific events have brought the international community together around these issues, notably the ACL workshop "Speech and Language Processing for Assistive Technologies (SLPAT 2011)" and the PERVASIVE 2012 workshop "Language Technology in Pervasive Computing (LTPC 2012)", which testify to the vitality of this field for language technologies. The workshop "Interactions Langagières pour personnes Âgées Dans les habitats Intelligents (ILADI 2012)" was created to bring together French-speaking researchers interested in applying language technologies to Ambient Assisted Living and eager to promote them, and to present and discuss ideas, projects, and work in progress. The workshop lies at the intersection of the topics of conferences specializing in gerontechnology, artificial intelligence, and speech and natural language processing. It is open to work by researchers and doctoral students on one or more of the following themes: distant speech recognition (speech enhancement in noise, source separation, multi-sensor environments); understanding, modelling, or recognition of ageing voices; speech applications for ageing in place (speaker identification, keyword and home automation command recognition, synthesis, dialogue); recognition of early signs of loss of language ability, etc. The first edition of this workshop was held in June 2012 in Grenoble during the JEP-TALN-RECITAL 2012 conference, with the support of the ANR projects Sweet-Home (ANR-2009-VERS-011) and Cirdo (ANR-2010-TECS-012), and of the international competitiveness cluster MINALOGIC. Five submissions presenting work in the fields listed above were accepted.
The presentations were preceded by a keynote by Alain Franco, University Professor and Hospital Practitioner at the CHU de Nice and President of the CNR-Santé, on new paradigms and technologies for health and autonomy. The workshop closed with an open discussion on the role of language technologies in supporting elderly people at home, with the participation of several local stakeholders. We warmly thank the workshop participants and the members of the program committee, as well as the whole organizing committee of the JEP-TALN-RECITAL 2012 conference, without whom this event could not have taken place. Michel Vacher & François Portet, GETALP team, LI

    Engineering Adaptive Model-Driven User Interfaces

    No full text
    Very large-scale software applications can encompass hundreds of complex user interfaces (UIs). Such applications are commonly sold as feature-bloated off-the-shelf products to be used by people with varying needs for the required features and layout preferences. Although many UI adaptation approaches have been proposed, several gaps and limitations, including extensibility and integration into legacy systems, still need to be addressed in state-of-the-art adaptive UI development systems. This paper presents Role-Based UI Simplification (RBUIS) as a mechanism for increasing usability through adaptive behaviour, by providing end-users with a minimal feature-set and an optimal layout based on the context-of-use. RBUIS uses an interpreted runtime model-driven approach based on the Cedar Architecture, and is supported by the integrated development environment (IDE) Cedar Studio. RBUIS was evaluated by integrating it into OFBiz, an open-source ERP system. The integration method was assessed and measured by establishing and applying technical metrics. Afterwards, a usability study was carried out to evaluate whether UIs simplified with RBUIS improve upon their initial counterparts. This study used questionnaires, measured task completion times and output quality, and employed eye-tracking. The results showed that UIs simplified with RBUIS significantly improve end-user efficiency, effectiveness, and perceived usability.
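    RBUIS itself is model-driven and built on the Cedar Architecture, but the core idea of reducing a UI to a minimal feature-set per role can be sketched in a few lines of Python. The role names, feature sets, and filtering function below are illustrative assumptions, not the RBUIS API:

        # Hypothetical role-to-feature mapping; in RBUIS this would be derived
        # from runtime models describing the context-of-use.
        ALL_FEATURES = {
            "create_order", "edit_order", "approve_order",
            "export_csv", "audit_log", "bulk_import",
        }
        ROLE_FEATURES = {
            "clerk": {"create_order", "edit_order", "export_csv"},
            "manager": {"create_order", "edit_order", "approve_order", "audit_log"},
        }

        def minimal_feature_set(role):
            # Show only the features this role needs; hide the rest to cut bloat.
            return ALL_FEATURES & ROLE_FEATURES.get(role, set())

        print(sorted(minimal_feature_set("clerk")))
        # -> ['create_order', 'edit_order', 'export_csv']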

    Being Old Doesn't Mean Acting Old: How Older Users Interact with Spoken Dialogue Systems.

    Get PDF
    Most studies on adapting voice interfaces to older users work top-down, comparing the interaction behavior of older and younger users. In contrast, we present a bottom-up approach. A statistical cluster analysis of 447 appointment scheduling dialogs between 50 older and younger users and 9 simulated spoken dialog systems revealed two main user groups, a “social” group and a “factual” group. “Factual” users adapted quickly to the systems and interacted efficiently with them. “Social” users, on the other hand, were more likely to treat the system like a human and did not adapt their interaction style. While almost all “social” users were older, over a third of all older users belonged to the “factual” group. Cognitive abilities and gender did not predict group membership. We conclude that spoken dialog systems should adapt to users based on observed behavior, not on age.
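    The bottom-up grouping can be illustrated with a small clustering sketch. The per-user features and values below are hypothetical (the study's actual cluster analysis used richer dialog statistics), and scikit-learn is assumed to be available:

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical per-user features:
        # [words per turn, social-speech ratio, task completion time in seconds]
        X = np.array([
            [4.0, 0.05, 60.0],    # terse and efficient -> "factual"
            [5.2, 0.08, 72.0],
            [12.5, 0.40, 180.0],  # treats the system like a human -> "social"
            [11.0, 0.35, 150.0],
        ])

        # Standardize so no single feature dominates the distance metric.
        Xz = (X - X.mean(axis=0)) / X.std(axis=0)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xz)
        print(labels)  # two groups, e.g., [0 0 1 1]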