
    The senior companion multiagent dialogue system

    This article presents a multi-agent dialogue system. We show how a collection of relatively simple agents is able to treat complex dialogue phenomena and deal successfully with different deployment configurations. We analyze our system with regard to robustness and scalability. We show that it degrades gracefully under different failures and that the architecture makes it easy to add new modalities as well as to port the system to different applications and platforms.
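    The paper's concrete architecture is not reproduced here; the following is only a minimal sketch, assuming a blackboard-style pipeline in which each simple agent annotates a shared message and a failing agent degrades the output rather than crashing the dialogue (all agent names and behaviors are illustrative, not the paper's):

```python
# Minimal sketch of a multi-agent dialogue pipeline with graceful degradation.
# Each agent reads and annotates a shared message dict; if one agent fails,
# the remaining agents still run, so the system degrades rather than dies.

def asr_agent(msg):
    msg["text"] = msg.get("audio", "").lower()      # stand-in for speech recognition

def nlu_agent(msg):
    msg["intent"] = "greet" if "hello" in msg.get("text", "") else "unknown"

def dm_agent(msg):
    msg["reply"] = {"greet": "Hello there!"}.get(msg.get("intent"), "Sorry?")

def run_dialogue(msg, agents):
    for agent in agents:
        try:
            agent(msg)
        except Exception as err:                    # graceful degradation
            print(f"{agent.__name__} failed ({err}); continuing with the rest")
    return msg

print(run_dialogue({"audio": "Hello system"}, [asr_agent, nlu_agent, dm_agent]))
# -> {'audio': 'Hello system', 'text': 'hello system',
#     'intent': 'greet', 'reply': 'Hello there!'}
```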

    The senior companion : a semantic web dialogue system

    This work was funded by Companions, European Commission Sixth Framework Programme Information Society Technologies Integrated Project IST-034434. The Senior Companion (SC) is a fully implemented Windows application intended for intermittent use by one user only (a senior citizen) over potentially many years. The thinking behind the SC is to make a device that will give its owner comfort, company, entertainment, and some practical functions. The SC will typically be installed at home, either as an application on a personal computer, or on a dedicated device (like a Chumby) or an intelligent coffee table (like Microsoft's Surface). By means of multimodal input and output, and a graphical interface, the SC provides its 'owner' with several functionalities, which currently include:
    • conversing with the user about his personal photos
    • learning about the user, the user's family, and life history
    • telling the user jokes
    • reading the news (via RSS feed from the internet)
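    As an illustration of the last of these functions, here is a minimal sketch of reading headlines from an RSS feed; it assumes the third-party feedparser package and uses a placeholder feed URL, neither of which comes from the paper:

```python
# Minimal sketch of the news-reading functionality (illustrative only).
import feedparser  # third-party: pip install feedparser

def latest_headlines(feed_url: str, limit: int = 3) -> list[str]:
    """Fetch a feed and return the most recent headline titles."""
    feed = feedparser.parse(feed_url)
    return [entry.title for entry in feed.entries[:limit]]

# Hypothetical feed URL; a Companion could pass these titles to text-to-speech.
for title in latest_headlines("https://example.com/news.rss"):
    print(title)
```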

    Information extraction tools and methods for understanding dialogue in a companion

    The authors' research was sponsored by the European Commission under EC grant IST-FP6-034434 (Companions). This paper discusses how Information Extraction is used to understand and manage dialogue in the EU-funded Companions project, with respect to the Senior Companion, one of the two applications under development in the project. Over the last few years, research in human-computer dialogue systems has increased, and much attention has focused on applying learning methods to improving a key part of any dialogue system, namely the dialogue manager. Since the dialogue manager in all dialogue systems relies heavily on the quality of the semantic interpretation of the user's utterance, our research in the Companions project focuses on how to improve the semantic interpretation and combine it with knowledge from the Knowledge Base to increase the performance of the Dialogue Manager. Traditionally, the semantic interpretation of a user utterance is handled by a natural language understanding module which embodies a variety of natural language processing techniques, from sentence splitting to full parsing. In this paper we discuss the use of a variety of NLU processes, and in particular Information Extraction, as a key part of the NLU module in order to improve the performance of the dialogue manager and hence the overall dialogue system.
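    The paper's own extraction pipeline is not shown here. As a minimal sketch of the general idea, a shallow pattern-based extractor can turn a user utterance into slot-value pairs that a dialogue manager could merge with its knowledge base; the photo-talk domain patterns below are invented for illustration:

```python
# Sketch of shallow Information Extraction for a dialogue NLU module.
import re

# Hypothetical pattern set for a photo-talk domain (not the project's grammar).
PATTERNS = {
    "person":   re.compile(r"\bthis is (?:my )?(\w+)", re.I),
    "location": re.compile(r"\b(?:in|at) ([A-Z]\w+)"),
    "year":     re.compile(r"\b(19|20)\d{2}\b"),
}

def extract_slots(utterance: str) -> dict:
    """Return the first match for each slot, if any."""
    slots = {}
    for slot, pattern in PATTERNS.items():
        m = pattern.search(utterance)
        if m:
            slots[slot] = m.group(0) if slot == "year" else m.group(1)
    return slots

print(extract_slots("This is my sister Anna in Venice in 1998"))
# -> {'person': 'sister', 'location': 'Venice', 'year': '1998'}
```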

    Creating Interaction Scenarios With a New Graphical User Interface

    The field of human-centered computing has made major progress in the past few years. It is generally accepted that this field is multidisciplinary and that the human is at the core of the system. This highlights two matters of concern: multidisciplinarity and the human. The first reveals that each discipline plays an important role in the overall research and that collaboration between all of them is needed. The second reflects that a growing body of research aims to increase the degree of human commitment by giving the human a decisive role in human-machine interaction. This paper focuses on both of these concerns and presents MICE (Machines Interaction Control in their Environment), a system in which the human is the one who makes the decisions to manage the interaction with the machines. In an ambient context, the human can decide on the actions of objects by creating interaction scenarios with a new visual programming language: scenL.
    Comment: 5th International Workshop on Intelligent Interfaces for Human-Computer Interaction, Palermo, Italy (2012)
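    scenL itself is not specified in this abstract. Purely as a hypothetical illustration of the scenario idea, an ambient interaction scenario can be modeled as event-condition-action rules; every name and rule below is invented, not taken from MICE:

```python
# Hypothetical event-condition-action model of an ambient interaction scenario.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                          # e.g. "user_enters_room"
    condition: Callable[[dict], bool]   # predicate over the environment state
    action: Callable[[dict], None]      # effect on some ambient object

rules = [
    Rule(
        event="user_enters_room",
        condition=lambda env: env["hour"] >= 20,   # only after 8 pm
        action=lambda env: print("lamp: switch on"),
    ),
]

def handle(event: str, env: dict) -> None:
    for rule in rules:
        if rule.event == event and rule.condition(env):
            rule.action(env)

handle("user_enters_room", {"hour": 21})   # -> lamp: switch on
```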

    Artificial Companions with Personality and Social Role

    Subtitle: "Expectations from Users on the Design of Groups of Companions". Robots and virtual characters are becoming increasingly used in our everyday life. Yet, they are still far from being able to maintain long-term social relationships with users. It also remains unclear what future users will expect from these so-called "artificial companions" in terms of social roles and personality. These questions are of importance because users will be surrounded by multiple artificial companions. These issues of social roles and personality among a group of companions are seldom tackled in user studies. In this paper, we describe a study in which 94 participants reported the social roles and personalities they would expect from groups of companions. We explain how the results give insights for the design of future groups of companions endowed with social intelligence.

    An architecture for emotional facial expressions as social signals


    Cognitive assisted living ambient system: a survey

    The demographic change towards an aging population is creating a significant impact and introducing drastic challenges to our society. We therefore need to find ways to assist older people to live independently and to prevent the social isolation of this population. Information and Communication Technologies (ICT) provide various solutions to help older adults improve their quality of life, stay healthier, and live independently for longer. Ambient Assisted Living (AAL) is a field that investigates innovative technologies to provide assistance as well as healthcare and rehabilitation to impaired seniors. This paper provides a review of the research background and technologies of AAL.

    A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

    Gestures that accompany speech are an essential part of natural and efficient embodied human communication. The automatic generation of such co-speech gestures is a long-standing problem in computer animation and is considered an enabling technology in film, games, virtual social spaces, and for interaction with social robots. The problem is made challenging by the idiosyncratic and non-periodic nature of human co-speech gesture motion, and by the great diversity of communicative functions that gestures encompass. Gesture generation has seen surging interest recently, owing to the emergence of more and larger datasets of human gesture motion, combined with strides in deep-learning-based generative models that benefit from the growing availability of data. This review article summarizes co-speech gesture generation research, with a particular focus on deep generative models. First, we articulate the theory describing human gesticulation and how it complements speech. Next, we briefly discuss rule-based and classical statistical gesture synthesis, before delving into deep learning approaches. We employ the choice of input modalities as an organizing principle, examining systems that generate gestures from audio, text, and non-linguistic input. We also chronicle the evolution of the related training datasets in terms of size, diversity, motion quality, and collection method. Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications. We highlight recent approaches to tackling the various key challenges, as well as the limitations of these approaches, and point toward areas of future development.
    Comment: Accepted for EUROGRAPHICS 2023
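    None of the surveyed models is reproduced here. As a minimal sketch of the data-driven audio-to-gesture setting the review describes, a toy recurrent model can map per-frame audio features to per-frame pose vectors; all dimensions and names below are illustrative assumptions, not from any surveyed system:

```python
# Toy audio-to-gesture model: per-frame audio features -> per-frame poses.
import torch
import torch.nn as nn

class AudioToGesture(nn.Module):
    def __init__(self, audio_dim=26, hidden_dim=128, pose_dim=45):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio_feats):   # (batch, frames, audio_dim)
        h, _ = self.rnn(audio_feats)
        return self.out(h)            # (batch, frames, pose_dim)

model = AudioToGesture()
dummy_audio = torch.randn(1, 120, 26)   # 120 frames of e.g. MFCC features
poses = model(dummy_audio)              # (1, 120, 45) joint-rotation trajectory
print(poses.shape)
```

    In practice, such models are trained on paired speech-and-motion datasets; the review's organizing principle of input modality would correspond here to swapping the audio features for text embeddings or other conditioning signals.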