
    Do Embodied Conversational Agents Know When to Smile?

    We survey the role of humor in particular domains of human-to-human interaction with the aim of seeing whether it is useful for embodied conversational agents to integrate humor capabilities in their models of intelligence, emotions and interaction (verbal and nonverbal). To that end, we first look at the current state of the art of research in embodied conversational agents, affective computing, and verbal and nonverbal interaction. We adhere to the 'Computers Are Social Actors' paradigm and assume that human conversational partners of embodied conversational agents assign human properties to these agents, including humor appreciation.

    Computers that smile: Humor in the interface

    It is certainly not the case that, when we consider research on the role of human characteristics in the user interface of computers, no attention has been paid to the role of humor. However, when we compare efforts in this area with the efforts and experiments that attempt to demonstrate the positive role of general emotion modelling in the user interface, we must conclude that this attention is still low. As we all know, the computer is sometimes a source of frustration rather than a source of enjoyment, and indeed we see research projects that aim at recognizing a user’s frustration rather than his enjoyment. However, rather than detecting frustration, and perhaps reacting to it in a humorous way, we would like to prevent frustration by making interaction with a computer more natural and more enjoyable. For that reason we are working on multimodal interaction and embodied conversational agents. In the interaction with embodied conversational agents, verbal and nonverbal communication are equally important. Multimodal emotion display and detection are among our advanced research issues, and investigation of the role of humor in human-computer interaction is one of them.

    Measuring glucose content in the aqueous humor

    Many diabetics must measure their blood glucose levels regularly to maintain good health. In principle, one way of measuring the glucose concentration in the human body would be to measure optically the glucose content of the aqueous humor in the eye. Lein Applied Diagnostics wishes to assess whether this is feasible with a linear confocal scan using an LED source, or by supplementing such a system with other measurements.

    Punny Captions: Witty Wordplay in Image Descriptions

    Wit is a form of rich interaction that is often grounded in a specific situation (e.g., a comment in response to an event). In this work, we attempt to build computational models that can produce witty descriptions for a given image. Inspired by a cognitive account of humor appreciation, we employ linguistic wordplay, specifically puns, in image descriptions. We develop two approaches which involve retrieving witty descriptions for a given image from a large corpus of sentences, or generating them via an encoder-decoder neural network architecture. We compare our approach against meaningful baseline approaches via human studies and show substantial improvements. We find that when a human is subject to similar constraints as the model regarding word usage and style, people vote the image descriptions generated by our model to be slightly wittier than human-written witty descriptions. Unsurprisingly, humans are almost always wittier than the model when they are free to choose the vocabulary, style, etc. Comment: NAACL 2018 (11 pages).
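    A minimal sketch of the retrieval flavor of this idea (not the paper's implementation): given tags predicted for an image, look up pun words loosely associated with those tags and return corpus sentences that contain them. The tag-to-pun lexicon, the toy corpus, and the function name below are illustrative assumptions, written in Python.

        # Illustrative sketch only: retrieve candidate witty captions by matching
        # pun words tied to an image's predicted tags against a sentence corpus.
        # The pun lexicon and corpus are toy placeholders, not the paper's data.
        from typing import Dict, List

        PUN_LEXICON: Dict[str, List[str]] = {   # tag -> phonetically related pun words (assumed)
            "sun": ["son"],
            "bored": ["board"],
        }

        CORPUS: List[str] = [                   # stand-in for a large sentence corpus
            "A proud father watches his son rise early.",
            "The board looked anything but excited.",
        ]

        def retrieve_witty_captions(image_tags: List[str]) -> List[str]:
            """Return corpus sentences that contain a pun word tied to any image tag."""
            pun_words = {w for tag in image_tags for w in PUN_LEXICON.get(tag, [])}
            return [s for s in CORPUS if any(w in s.lower().split() for w in pun_words)]

        print(retrieve_witty_captions(["sun", "beach"]))
        # ['A proud father watches his son rise early.']

    The generative variant mentioned in the abstract would instead produce descriptions with an encoder-decoder network conditioned on the image; that part is not sketched here.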

    No Grice: Computers that Lie, Deceive and Conceal

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or by embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusion of such information and reasoning about it make it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example if they are part of a mobile social robot, or of devices we carry around or that are embedded in our clothes or body.

    Our daily life behavior and daily life interactions are recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments; do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, want to keep information about their intentions and their emotions hidden from these smart environments. On the other hand, their artificial interaction partner may have similar reasons not to give away all the information it has, or to treat its human partner as an opponent rather than someone who has to be supported by smart technology.

    This will be elaborated in this paper. We will survey examples of human-computer interactions where there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we will look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or a trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious game or education environments), and games and entertainment applications.

    A Link Loss Model for the On-body Propagation Channel for Binaural Hearing Aids

    Binaural hearing aids communicate with each other through a wireless link for synchronization. A propagation model is needed to estimate the ear-to-ear link loss for such binaural hearing aids. The link loss is a critical parameter in the link budget that decides the required sensitivity of the transceiver. In this paper, we present a model for the deterministic component of the ear-to-ear link loss. The model takes into account the dominant paths carrying most of the power of the creeping wave from the transceiver in one ear to the transceiver in the other ear, as well as the effect of the protruding part of the outer ear, the pinna. Simulations are done to validate the model using in-the-ear (ITE) placement of antennas at 2.45 GHz on two heterogeneous phantoms of different age group and body size. The model agrees with the simulations. The ear-to-ear link loss between the antennas for the binaural hearing aids in the homogeneous SAM phantom is compared with that in a heterogeneous phantom. It is found that the absence of the pinna and the lossless shell in the SAM phantom underestimate the link loss. This is verified by measurements on a phantom where we have included pinnas fabricated by 3D printing.
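    The abstract treats the ear-to-ear link loss as an input to a link budget that decides the required transceiver sensitivity. A minimal sketch of that kind of check is given below in Python; the transmit power, sensitivity, and margin figures are illustrative assumptions, not values from the paper.

        # Illustrative link-budget check: does an estimated ear-to-ear link loss
        # still leave enough received power at the far-ear transceiver?
        # All numeric values are assumptions for the sake of the example.
        TX_POWER_DBM = -10.0            # assumed transmit power at 2.45 GHz
        RX_SENSITIVITY_DBM = -90.0      # assumed receiver sensitivity
        MARGIN_DB = 10.0                # assumed fading/implementation margin

        def link_closes(ear_to_ear_loss_db: float) -> bool:
            """True if the received power clears the sensitivity plus margin."""
            rx_power_dbm = TX_POWER_DBM - ear_to_ear_loss_db
            return rx_power_dbm >= RX_SENSITIVITY_DBM + MARGIN_DB

        print(link_closes(65.0))   # True: -75 dBm received vs. -80 dBm required
        print(link_closes(85.0))   # False: -95 dBm received falls below the threshold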