    Computers that smile: Humor in the interface

    When we consider research on the role of human characteristics in the user interface of computers, it is certainly not the case that no attention has been paid to the role of humor. However, when we compare efforts in this area with the efforts and experiments that attempt to demonstrate the positive role of general emotion modelling in the user interface, we must conclude that this attention is still low. As we all know, the computer is sometimes a source of frustration rather than a source of enjoyment. And indeed we see research projects that aim at recognizing a user's frustration rather than his enjoyment. However, rather than detecting frustration, and perhaps reacting to it in a humorous way, we would like to prevent frustration by making interaction with a computer more natural and more enjoyable. For that reason we are working on multimodal interaction and embodied conversational agents. In the interaction with embodied conversational agents, verbal and nonverbal communication are equally important. Multimodal emotion display and detection are among our advanced research issues, and investigation into the role of humor in human-computer interaction is one of them.

    Expectation-based user interaction

    Multimedia and multimodal interfaces reflect the growing technological possibilities of computer-based systems for interaction with the user. The ongoing increase in communication bandwidth and the growing variety of communication channels enable further improvement of the user interface. However, how this increased communication capacity can optimally be exploited is as yet unknown. Since the functionality of these computer-based systems also continues to grow, the increased complexity of interaction procedures and the difficulty of mastering them are prime issues in the design of "easy to use" multimodal user interfaces. In order to appreciate more fully what is involved in interaction between user and system that is both self-evident and efficient, we will first briefly describe the layered-protocol model of computer-human dialogue as proposed by Taylor (1988a). This conceptual framework emphasizes the relevance of layered feedback for the efficiency of communication. As indicated by Engel & Haakma (1993), early feedback about the system's interpretation of the message part already received (I-feedback), as well as about machine expectations concerning message elements still to be received (E-feedback), is particularly relevant to the system's ease of use. Thereafter, as an interesting example of improved human-computer interaction through layered multimodal I- and E-feedback, an experimental trackball device will be described. It provides the user, in addition to the standard visual I-feedback about the current cursor position, with tactile E-feedback about the expected cursor target position. Lastly, our ongoing experimental exploration of the possibilities for automatic cursor-endpoint prediction will be described; this research is relevant to the further improvement of interaction with the aforementioned trackball device with expectation-based force-feedback.
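    The abstract does not specify how cursor-endpoint prediction is computed; a minimal sketch of one plausible approach is linear extrapolation of the current cursor velocity over a short look-ahead horizon. The function name, sample format, and `horizon` parameter below are illustrative assumptions, not the authors' method.

    ```python
    def predict_endpoint(samples, horizon=0.2):
        """Estimate where the cursor will land by linear extrapolation.

        samples: list of (t, x, y) tuples, most recent last
                 (hypothetical input format for illustration).
        horizon: look-ahead time in seconds (assumed tuning parameter).
        """
        # Use the two most recent samples to estimate instantaneous velocity.
        (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
        dt = t1 - t0
        if dt <= 0:
            # Degenerate timing: no movement information, predict current position.
            return (x1, y1)
        vx = (x1 - x0) / dt
        vy = (y1 - y0) / dt
        # Project the current position forward along the velocity vector.
        return (x1 + vx * horizon, y1 + vy * horizon)
    ```

    A real system would likely smooth over more samples and model deceleration near the target, but even this crude predictor illustrates how a target estimate could drive expectation-based (E-feedback) force cues on the trackball.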

    Handbook of Technical Communication

    The handbook "Technical Communication" brings together a variety of topics which range from the role of technical media in human communication to the linguistic and multimodal enhancement of present-day technologies. It covers the area of computer-mediated text, voice and multimedia communication as well as technical documentation. In doing so, the handbook takes both professional and private communication into account. Special emphasis is put on technical communication based on digital technologies and its standardization in system development. In summary, the handbook deals with theoretical issues of technical communication and its practical impact on the development and usage of text and speech technologies.

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and for virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects multi-modal meeting manager (M4) and augmented multi-party interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting, and at tools that allow those not able to be physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.