
    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals consisted of developing attention-based analysis of human expressive behavior and implementing real-time algorithms in EyesWeb XMI in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and visitor flow regulation
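
    A rough sketch of the entropy modeling mentioned above, assuming (hypothetically) that expressivity is summarized by the Shannon entropy of a windowed stream of movement-feature samples; the feature choice, window size, and bin count are illustrative assumptions, not the EyesWeb XMI implementation.

    import numpy as np

    def feature_entropy(samples, bins=16):
        # Shannon entropy (bits) of a 1-D window of movement-feature samples.
        # A flat histogram (varied motion) gives high entropy; a peaked one
        # (still or repetitive behavior) gives low entropy.
        hist, _ = np.histogram(samples, bins=bins)
        total = hist.sum()
        if total == 0:
            return 0.0
        p = hist / total
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Hypothetical example: energy of a visitor's hand movement over a sliding window.
    window = np.random.default_rng(0).normal(size=250)
    print(f"entropy of movement-energy window: {feature_entropy(window):.2f} bits")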

    Multimodal agent interfaces and system architectures for health and fitness companions

    Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces

    Towards the Use of Dialog Systems to Facilitate Inclusive Education

    Continuous advances in the development of information technologies have led to the possibility of accessing learning contents from anywhere, at any time, and almost instantaneously. However, accessibility is not always a main objective in the design of educative applications, in particular with regard to facilitating their adoption by people with disabilities. Different technologies have recently emerged to foster the accessibility of computers and new mobile devices, favoring more natural communication between the student and the educative system. This chapter describes innovative uses of multimodal dialog systems in education, with special emphasis on the advantages they provide for creating inclusive applications and learning activities

    Emerging spaces for language learning: AI bots, ambient intelligence, and the metaverse

    Looking at human communication from the perspective of semiotics extends our view beyond verbal language to consider other sign systems and meaning-making resources. Those include gestures, body language, images, and sounds. From this perspective, the communicative process expands from individual mental processes of verbalizing to include features of the environment, the place and space in which the communication occurs. It may be—and it is increasingly the case today—that language is mediated through digital networks. Online communication has become multimodal in virtually all platforms. At the same time, mobile devices have become indispensable digital companions, extending our perceptive and cognitive abilities. Advances in artificial intelligence are enabling tools that have considerable potential for language learning, as well as creating more complexity in the relationship between humans and the material world. In this column, we will be looking at changing perspectives on the role of place and space in language learning, as mobile, embedded, virtual, and reality-augmenting technologies play an ever-increasing role in our lives. Understanding that dynamic is aided by theories and frameworks such as 4E cognition and sociomaterialism, which posit closer connections between human cognition/language and the world around us

    Toward inclusive and personalized education through the use of multimodal dialog systems

    Continuous advances in the development of information technologies have led to the possibility of accessing learning contents from anywhere, at any time, and almost instantaneously. However, accessibility is not always a main objective in the design of educative applications, in particular with regard to facilitating their adoption by people with disabilities. Different technologies have recently emerged to foster the accessibility of computers and new mobile devices, favoring more natural communication between the student and the educative system. This paper describes innovative uses of multimodal dialog systems in education, with special emphasis on the advantages they provide for creating inclusive applications adapted to each student's progress. Work partially funded by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and TRA2010-20225-C03-01

    On Distant Speech Recognition for Home Automation

    The official version of this draft is available from Springer via http://dx.doi.org/10.1007/978-3-319-16226-3_7. In the framework of Ambient Assisted Living, home automation may be a solution for helping elderly people living alone at home. This study is part of the Sweet-Home project, which aims at developing a new home automation system based on voice commands to improve the support and well-being of people in loss of autonomy. The goal of the study is voice command recognition, with a focus on two aspects: distant speech recognition and sentence spotting. Several ASR techniques were evaluated on a realistic corpus acquired in a 4-room flat equipped with microphones set in the ceiling. This distant-speech French corpus was recorded with 21 speakers who acted out scenarios of activities of daily living. Techniques acting at the decoding stage, such as our novel approach called the Driven Decoding Algorithm (DDA), gave better speech recognition results than the baseline and other approaches. This solution, which uses the two best-SNR channels and a priori knowledge (voice commands and distress sentences), demonstrated an increase in recognition rate without introducing false alarms
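
    A minimal sketch of the best-SNR channel selection idea described above (pick the two microphone channels with the highest estimated signal-to-noise ratio before decoding); the SNR estimate, signal shapes, and function names are assumptions, not the Sweet-Home or DDA implementation.

    import numpy as np

    def snr_db(speech, noise):
        # Rough SNR estimate (dB) from a speech segment and a noise-only segment.
        p_speech = float(np.mean(np.square(speech)))
        p_noise = float(np.mean(np.square(noise))) + 1e-12
        return 10.0 * np.log10(p_speech / p_noise + 1e-12)

    def best_channels(channels, noise_refs, k=2):
        # Indices of the k channels with the highest estimated SNR.
        scores = [snr_db(c, n) for c, n in zip(channels, noise_refs)]
        return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

    # Hypothetical 7-microphone recording in which channel 3 is the cleanest.
    rng = np.random.default_rng(1)
    noise = [rng.normal(scale=0.05, size=16000) for _ in range(7)]
    tone = np.sin(np.linspace(0.0, 200.0, 16000))
    speech = [n + (0.5 if i == 3 else 0.1) * tone for i, n in enumerate(noise)]
    print("selected channels:", best_channels(speech, noise, k=2))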

    Distant speech recognition for home automation: Preliminary experimental results in a smart home

    This paper presents a study that is part of the Sweet-Home project, which aims at developing a new home automation system based on voice command. The study focused on two tasks: distant speech recognition and sentence spotting (e.g., recognition of domotic orders). Regarding the first task, different combinations of ASR systems, language models, and acoustic models were tested. Fusion of ASR outputs by consensus and with a triggered language model (using a priori knowledge) was investigated. For the sentence spotting task, an algorithm based on distance evaluation between the current ASR hypotheses and the predefined set of keyword patterns was introduced in order to retrieve the correct sentences in spite of ASR errors. The techniques were assessed on real daily-living data collected in a 4-room smart home fully equipped with standard tactile commands and with 7 wireless microphones set in the ceiling. Thanks to Driven Decoding Algorithm techniques, a classical ASR system reached 7.9% WER, against 35% WER in the standard configuration and 15% with MLLR adaptation only. The best keyword pattern classification result obtained in distant speech conditions was 7.5% CER
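
    A minimal sketch of the sentence-spotting idea above (matching an ASR hypothesis against a predefined set of command patterns by word-level edit distance); the normalized-distance threshold and the example commands are hypothetical and do not reproduce the paper's exact algorithm or corpus.

    def levenshtein(a, b):
        # Word-level edit distance between two token sequences (one-row DP).
        dp = list(range(len(b) + 1))
        for i, wa in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, wb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (wa != wb))
        return dp[len(b)]

    def spot_command(hypothesis, patterns, max_norm_dist=0.34):
        # Return the closest command pattern if its normalized distance is low enough.
        words = hypothesis.lower().split()
        best, best_dist = None, float("inf")
        for pattern in patterns:
            p_words = pattern.lower().split()
            d = levenshtein(words, p_words) / max(len(p_words), 1)
            if d < best_dist:
                best, best_dist = pattern, d
        return best if best_dist <= max_norm_dist else None

    # Hypothetical domotic command set; the hypothesis has one substitution error.
    commands = ["allume la lumiere", "ferme les volets", "appelle du secours"]
    print(spot_command("allume la cuisine", commands))  # -> "allume la lumiere"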

    Socially Assistive Robots for Older Adults and People with Autism: An Overview

    Over one billion people in the world suffer from some form of disability. Nevertheless, according to the World Health Organization, people with disabilities are particularly vulnerable to deficiencies in services such as health care, rehabilitation, support, and assistance. In this sense, recent technological developments can mitigate these deficiencies, offering less-expensive assistive systems to meet users’ needs. This paper reviews and summarizes the research efforts toward the development of these kinds of systems, focusing on two social groups: older adults and children with autism. This research was funded by the Spanish Government TIN2016-76515-R grant for the COMBAHO project, supported with FEDER funds. It has also been supported by Spanish grants for PhD studies ACIF/2017/243 and FPU16/00887