
    How to address smart homes with a social robot? A multi-modal corpus of user interactions with an intelligent environment

    Holthaus P, Leichsenring C, Bernotat J, et al. How to address smart homes with a social robot? A multi-modal corpus of user interactions with an intelligent environment. In: Calzolari N, ed. LREC 2016, Tenth International Conference on Language Resources and Evaluation. [Proceedings]. Paris: European Language Resources Association (ELRA); 2016: 3440-3446.

    In order to explore intuitive verbal and non-verbal interfaces in smart environments, we recorded user interactions with an intelligent apartment. Besides offering various interactive capabilities itself, the apartment is also inhabited by a social robot that is available as a humanoid interface. This paper presents a multi-modal corpus that contains goal-directed actions of naive users attempting to solve a number of predefined tasks. Alongside audio and video recordings, our data set consists of a large amount of temporally aligned sensory data and system behavior provided by the environment and its interactive components. Non-verbal system responses such as changes in light or display contents, as well as robot and apartment utterances and gestures, serve as a rich basis for later in-depth analysis. Manual annotations provide further metadata such as the current course of the study and user behavior, including the modality employed, all literal utterances, language features, emotional expressions, foci of attention, and addressees.
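    The corpus as described combines time-aligned tiers for sensory data, system behavior, and manual annotations. As a minimal sketch of what one time-aligned annotation record in such a corpus might look like (the schema and field names below are our own illustrative assumptions, not the authors' actual format):

        from dataclasses import dataclass

        @dataclass
        class AnnotationSegment:
            """One time-aligned annotation on a single tier (hypothetical schema)."""
            tier: str      # e.g. "utterance", "gesture", "attention", "addressee"
            start_ms: int  # segment onset on the shared recording clock
            end_ms: int    # segment offset
            value: str     # the label, e.g. the literal utterance or focus target

        # Illustrative example: a user verbally addresses the robot while pointing.
        segments = [
            AnnotationSegment("utterance", 12300, 13900, "turn on the light over there"),
            AnnotationSegment("gesture", 12500, 13200, "pointing: floor lamp"),
            AnnotationSegment("addressee", 12300, 13900, "robot"),
        ]

        # Tiers relate through the shared timeline, e.g. find all gestures
        # overlapping a given utterance:
        utt = segments[0]
        overlapping = [s for s in segments[1:]
                       if s.start_ms < utt.end_ms and s.end_ms > utt.start_ms]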

    Towards Multi-Modal Interactions in Virtual Environments: A Case Study

    We present research on visualization and interaction in a realistic model of an existing theatre. This existing ‘Muziekcentrum’ offers its visitors information about performances by means of a yearly brochure. In addition, it is possible to get information at an information desk in the theatre (during office hours) and to get information by phone (by talking to a human or by using IVR). The database of the theatre holds the information that is available at the beginning of the ‘theatre season’. Our aim is to make this information more accessible by using multi-modal, accessible multimedia web pages. A more general aim is to do research in the area of web-based services, in particular interactions in virtual environments.

    Student Teaching and Research Laboratory Focusing on Brain-computer Interface Paradigms - A Creative Environment for Computer Science Students -

    This paper presents an applied concept of a brain-computer interface (BCI) student research laboratory (BCI-LAB) at the Life Science Center of TARA, University of Tsukuba, Japan. Several successful case studies of the student projects are reviewed, together with the BCI Research Award 2014 winner case. The BCI-LAB design and project-based teaching philosophy are also explained. The review concludes with future teaching and research directions.

    Comment: 4 pages, 4 figures, accepted for EMBC 2015, IEEE copyright.

    Towards Simulating Humans in Augmented Multi-party Interaction

    Human-computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in multimodal interaction with a smart environment, the user also, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture), and human participants. It is therefore useful for the profile to contain a physical representation of the user obtained by multi-modal capturing techniques. We discuss the modeling and simulation of interacting participants in the European AMI research project.
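    As a concrete illustration of such a profile, here is a minimal Python sketch (all names and fields are our own assumptions; the AMI project's actual profile format is not given in the abstract):

        from dataclasses import dataclass, field

        @dataclass
        class UserProfile:
            """Hypothetical profile combining declared preferences with a
            captured physical representation of the user (illustrative only)."""
            user_id: str
            preferences: dict = field(default_factory=dict)
            interests: list = field(default_factory=list)
            # Physical representation from multi-modal capture, per modality,
            # e.g. a gaze trace or a skeletal pose track.
            physical: dict = field(default_factory=dict)

        profile = UserProfile("participant-01", preferences={"language": "en"})
        profile.physical["gaze"] = [(1200, "whiteboard"), (3700, "mobile robot")]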

    Caring, sharing widgets: a toolkit of sensitive widgets

    Although most of us communicate using multiple sensory modalities in our lives, and many of our computers are similarly capable of multi-modal interaction, most human-computer interaction is predominantly in the visual mode. This paper describes a toolkit of widgets that are capable of presenting themselves in multiple modalities and, furthermore, of adapting their presentation to suit the contexts and environments in which they are used. This is of increasing importance as the use of mobile devices becomes ubiquitous.
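    To make the idea concrete, here is a minimal sketch of a context-sensitive widget (the interface is our own illustration, not the toolkit's actual API):

        class SensitiveButton:
            """Hypothetical widget that chooses an output modality from context."""

            def __init__(self, label: str):
                self.label = label

            def present(self, context: dict) -> str:
                # Fall back from visual to audio to haptic output as the
                # context rules out each modality in turn.
                if not context.get("screen_busy") and not context.get("in_pocket"):
                    return f"draw button: {self.label}"
                if not context.get("noisy"):
                    return f"speak: {self.label}"
                return f"vibrate pattern for: {self.label}"

        button = SensitiveButton("New message")
        print(button.present({"in_pocket": True}))  # -> speak: New message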

    Reference Resolution in Multi-modal Interaction: Position paper

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the need to devote more research to reference resolution in multimodal contexts. In multi-modal interaction, the human conversational partner can apply more than one modality to convey his or her message to an environment in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.
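    A toy illustration of the problem (our own sketch; the paper itself proposes no general algorithm): resolving a spoken deictic expression against the object selected by a pointing gesture that overlaps it in time.

        # Hypothetical multimodal reference resolution: "that lamp" is resolved
        # to the target of a co-occurring pointing gesture. Timestamps are in
        # milliseconds on a shared clock.
        utterance = {"text": "switch that lamp on", "start": 5000, "end": 6800}
        gestures = [
            {"kind": "point", "target": "floor_lamp", "start": 5200, "end": 5900},
            {"kind": "point", "target": "tv", "start": 9000, "end": 9500},
        ]

        def resolve_deictic(utterance, gestures):
            """Return the target of the first gesture overlapping the utterance."""
            for g in gestures:
                if g["start"] < utterance["end"] and g["end"] > utterance["start"]:
                    return g["target"]
            return None  # no overlapping gesture: the reference stays ambiguous

        print(resolve_deictic(utterance, gestures))  # -> floor_lamp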

    Specification Techniques for Multi-Modal Dialogues in the U-Wish Project

    In this paper we describe the development of a specification technique for specifying interactive web-based services. We wanted to design a language that can be a means of communication between designers and developers of interactive services, that makes it easier to develop web-based services fitted to the users, and that shortens the pathway from design to implementation. The language, still under development, is based on process algebra and can be connected to the results of task analysis. We have been working on the automatic generation of executable prototypes out of the specifications. In this way the specification language can establish a connection between users, design, and implementation. A first version of this language is available, as well as prototype tools for executing the specifications. Ideas will be given as to how to make the connection between specifications and task analysis.
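    To give a flavour of a process-algebra-based dialogue specification, here is our own minimal sketch (the U-Wish language itself is not shown in the abstract): dialogues are built from atomic actions combined with sequence and choice operators, and a term can be interpreted directly as an executable prototype.

        # Illustrative process-algebra-style dialogue terms: Act performs an
        # atomic action, Seq runs two terms in order, Alt offers a choice
        # (resolved here by user input).

        class Act:
            def __init__(self, action): self.action = action
            def run(self): print(f"system: {self.action}")

        class Seq:
            def __init__(self, first, second): self.first, self.second = first, second
            def run(self):
                self.first.run()
                self.second.run()

        class Alt:
            def __init__(self, left, right): self.left, self.right = left, right
            def run(self):
                choice = input("choose (l/r): ")
                (self.left if choice == "l" else self.right).run()

        # A tiny dialogue: greet, then show the programme or start a booking.
        dialogue = Seq(Act("greet visitor"),
                       Alt(Act("show performance programme"),
                           Act("start booking dialogue")))
        dialogue.run()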