
    Multimodal agent interfaces and system architectures for health and fitness companions

    Multimodal conversational spoken dialogue systems using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces.

    Using affective avatars and rich multimedia content for education of children with autism

    Autism is a communication disorder that mandates early and continuous educational interventions on various levels, such as everyday social, communication and reasoning skills. Computer-aided education has recently been considered a likely intervention method for such cases, and different systems have been proposed and developed worldwide. More recently, affective computing applications for the aforementioned interventions have also been proposed. In this paper, we examine the technological and educational needs of affective interventions for autistic persons. Enabling affective technologies are reviewed and a number of possible exploitation scenarios are illustrated. Emphasis is placed on covering the continuous and long-term needs of autistic persons through unobtrusive and ubiquitous technologies with the engagement of an affective speaking avatar. A personalised prototype system facilitating these scenarios is described. In addition, feedback from educators of autistic persons, collected by means of an anonymous questionnaire, is provided on the system's usefulness, efficiency and the envisaged reaction of autistic persons. Results illustrate the clear potential of this effort in facilitating a very promising autism intervention.

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, be able to attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.
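    The core idea of interrupting multimodal behavior on the fly can be illustrated with a minimal sketch, shown below: the agent produces behavior in small increments and consults a listener-response classifier between increments. All names and the toy classifier here are assumptions for illustration only; the project's actual scheduling and classification components are not reproduced.

```python
from dataclasses import dataclass


@dataclass
class Behaviour:
    name: str
    chunks: list  # small increments between which the agent can be interrupted


def classify_listener_response(signal: str) -> str:
    """Toy stand-in for an automatic listener-response classifier."""
    if signal in ("hmm", "uh-huh", "yeah"):
        return "backchannel"   # acknowledgement: keep the turn
    if signal:
        return "interruption"  # substantive speech: yield the turn
    return "none"


def run_plan(plan, listener_signals):
    """Produce behaviours chunk by chunk, re-deciding after every chunk."""
    signals = iter(listener_signals)
    for behaviour in plan:
        for chunk in behaviour.chunks:
            print(f"produce: {chunk}")
            response = classify_listener_response(next(signals, ""))
            if response == "interruption":
                print(f"interrupt '{behaviour.name}' and react to the partner")
                return
            if response == "backchannel":
                print("nod and continue speaking")


run_plan(
    [Behaviour("story-intro", ["Hello,", "let me tell you", "about my day"])],
    ["", "hmm", "wait, before you start"],
)
```

    The essential design point is that behavior is planned at a granularity fine enough for the perception loop to preempt it between increments, rather than only at utterance boundaries.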

    Generating expressive speech for storytelling applications

    Work on expressive speech synthesis has long focused on the expression of basic emotions. In recent years, however, interest in other expressive styles has been increasing. The research presented in this paper aims at the generation of a storytelling speaking style, which is suitable for storytelling applications and, more generally, for applications aimed at children. Based on an analysis of human storytellers' speech, we designed and implemented a set of prosodic rules for converting "neutral" speech, as produced by a text-to-speech system, into storytelling speech. An evaluation of our storytelling speech generation system showed encouraging results.
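    The rule-based conversion the abstract describes can be sketched as a transformation over per-phrase prosodic parameters. The sketch below is a hypothetical illustration: the specific rule values (rate scaling, pitch offsets, pause lengths) are assumptions, not the paper's measured settings derived from storytellers' speech.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Prosody:
    pitch_st: float  # pitch offset from neutral, in semitones
    rate: float      # speaking-rate multiplier (1.0 = neutral)
    pause_ms: int    # pause after the phrase, in milliseconds


def storytelling_style(phrase: str, prosody: Prosody, is_final: bool) -> Prosody:
    """Rewrite one phrase's neutral prosody into a storytelling rendition."""
    p = replace(prosody, rate=prosody.rate * 0.9)       # slower, deliberate pace
    if phrase.rstrip().endswith("?"):
        p = replace(p, pitch_st=p.pitch_st + 2.0)       # wider rise on questions
    if is_final:
        p = replace(p, rate=p.rate * 0.85,
                    pause_ms=p.pause_ms + 400)          # slow, lingering ending
    return p


neutral = Prosody(pitch_st=0.0, rate=1.0, pause_ms=150)
phrases = ["Once upon a time", "there lived a dragon",
           "and what do you think it ate?"]
for i, phrase in enumerate(phrases):
    print(phrase, "->", storytelling_style(phrase, neutral, i == len(phrases) - 1))
```

    In a full pipeline, the resulting parameters would be handed to the text-to-speech system, for example as markup on each phrase, in place of its neutral defaults.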

    Incorporating android conversational agents in m-learning apps

    Smart mobile devices have fostered new learning scenarios that demand sophisticated interfaces. Multimodal conversational agents have become a strong alternative for developing human-machine interfaces that provide a more engaging and human-like relationship between students and the system. The main developers of operating systems for such devices have provided application programming interfaces for developers to implement their own applications, including different solutions for developing graphical interfaces, sensor control and voice interaction. Despite the usefulness of such resources, there are no defined strategies for coupling the multimodal interface with the possibilities that these devices offer to enhance mobile educational apps with intelligent communicative capabilities and adaptation to user needs. In this paper, we present a practical m-learning application that integrates features of Android application programming interfaces on a modular architecture that emphasizes interaction management and context-awareness to foster user adaptivity, robustness and maintainability. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, and CAM CONTEXTS (S2009/TIC-1485).
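    The modular decoupling the abstract emphasizes, an interaction manager mediating between the multimodal interface and a context model, can be sketched in a few lines. This is a language-neutral Python sketch with hypothetical module and method names; the paper itself binds these roles to Android application programming interfaces rather than to the classes shown here.

```python
class ContextModel:
    """Keeps user and environment state that drives adaptation."""
    def __init__(self):
        self.state = {"skill": "beginner", "noisy_environment": False}

    def update(self, key, value):
        self.state[key] = value


class InteractionManager:
    """Single place where user input, context and output modality meet."""
    def __init__(self, context):
        self.context = context

    def next_action(self, user_utterance):
        if self.context.state["noisy_environment"]:
            # Speech output/recognition is unreliable: fall back to the GUI.
            return {"modality": "gui", "content": f"Menu for: {user_utterance}"}
        if self.context.state["skill"] == "beginner":
            return {"modality": "speech", "content": "Let's try an easy exercise."}
        return {"modality": "speech", "content": "Here is an advanced exercise."}


context = ContextModel()
manager = InteractionManager(context)
print(manager.next_action("start lesson"))   # adapts to skill: speech output
context.update("noisy_environment", True)    # e.g. a sensor reports noise
print(manager.next_action("start lesson"))   # adapts to context: GUI output
```

    Keeping adaptation decisions inside one interaction manager, rather than scattered across the GUI, speech and sensor code, is what makes the architecture maintainable as new modalities or context sources are added.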