3,367 research outputs found

    Advanced Content and Interface Personalization through Conversational Behavior and Affective Embodied Conversational Agents

    Conversation is becoming one of the key interaction modes in human-machine interaction (HMI). As a result, conversational agents (CAs) have become an important tool in various everyday scenarios. From Apple and Microsoft to Amazon, Google, and Facebook, all have adopted their own variations of CAs. The CAs range from chatbots and 2D, cartoon-like implementations of talking heads to fully articulated embodied conversational agents that perform interaction in various contexts. Recent studies in the field of face-to-face conversation show that the most natural way to implement interaction is through synchronized verbal and co-verbal signals (gestures and expressions). Namely, co-verbal behavior represents a major source of discourse cohesion. It regulates communicative relationships and may support or even replace verbal counterparts. It effectively retains the semantics of the information and gives a certain degree of clarity to the discourse. In this chapter, we present a model for the generation and realization of more natural machine-generated output

    Narrative Practice and the Transformation of Interview Subjectivity

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made to tackle this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking conclusion

    Facework and multiple selves in apologetic metapragmatic comments in Japanese

    Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars

    Immersive virtual environments (IVEs) have received increased popularity with applications in many fields. IVEs aim to approximate real environments and to make users react similarly to how they would in everyday life. An important use case is the interaction between users and virtual characters (VCs). We interact with other people every day, hence we expect others to act and behave appropriately, verbally and non-verbally (e.g., pitch, proximity, gaze, turn-taking). These expectations also apply to interactions with VCs in IVEs, and this thesis tackles some of these aspects. We present three projects that inform the area of social interactions with VCs in IVEs, focusing on non-verbal behaviours. In our first study on interactions between people, we collaborated with the Social Neuroscience group at the Institute of Cognitive Neuroscience, UCL, on a dyadic multi-modal interaction. This study aims to understand conversation dynamics, focusing on gaze and turn-taking. The results show that people change gaze (from averted to direct and vice versa) more frequently when they are being looked at than when they are not. When they are not being looked at, they also direct their gaze to their partners more than when they are being looked at. Another contribution of this work is an automated method of annotating speech and gaze data. Next, we consider agents’ higher-level non-verbal behaviours, covering social attitudes. We present a pipeline to collect data and train a machine learning (ML) model that detects social attitudes in a user-VC interaction. Here we collaborated with two game studios: Dream Reality Interaction and Maze Theory. We present a case study of the ML pipeline on social engagement recognition for the Peaky Blinders narrative VR game from the Maze Theory studio. We use a reinforcement learning algorithm with imitation learning rewards and a temporal memory element. The results show that the model trained with raw data does not generalise and performs worse (60% accuracy) than the one trained with socially meaningful data (83% accuracy). In IVEs, people embody avatars, and avatar appearance can impact social interactions. In collaboration with Microsoft Research, we report a longitudinal mixed-reality study on avatar appearance in real-world meetings between co-workers, comparing personalised full-body realistic and cartoon avatars. The results imply that when participants use realistic avatars first, they may have higher expectations and perceive their colleagues’ emotional states less accurately. Participants may also become more accustomed to cartoon avatars as time passes, and the overall use of avatars may lead to less accurate perception of negative emotions. The work presented here contributes to the field of detecting and generating non-verbal cues for VCs in IVEs. These are also important building blocks for creating autonomous agents for IVEs. Additionally, this work contributes to the games and workplace industries through an immersive ML pipeline for detecting social attitudes and through insights into using different avatar styles over time in real-world meetings
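
    The abstract above only names the learning setup (a reinforcement learning algorithm with imitation learning rewards and a temporal memory element); it does not describe the implementation. Purely as an illustrative sketch of that general idea, and not the thesis pipeline, the toy example below trains a small recurrent policy whose per-step reward is agreement with a human engagement annotation. The feature set, network sizes, learning rate, and the run_episode helper are all hypothetical.

        # Minimal, illustrative sketch (not the thesis implementation): a recurrent
        # policy labels each time step as engaged / not engaged and is trained with a
        # policy-gradient update whose reward is agreement with the human annotation
        # (an imitation-style reward). Feature set and sizes are assumed.
        import numpy as np

        rng = np.random.default_rng(0)

        N_FEATURES = 8   # e.g. gaze, proximity, head-pose descriptors per frame (assumed)
        N_HIDDEN = 16    # size of the temporal memory element (assumed)
        N_ACTIONS = 2    # 0 = not engaged, 1 = engaged

        params = {
            "W_in": rng.normal(0.0, 0.1, (N_HIDDEN, N_FEATURES)),
            "W_h": rng.normal(0.0, 0.1, (N_HIDDEN, N_HIDDEN)),
            "W_out": rng.normal(0.0, 0.1, (N_ACTIONS, N_HIDDEN)),
        }

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        def run_episode(params, features, labels, lr=0.05):
            # One pass over a recorded interaction: sample an engagement label per
            # step, reward agreement with the annotation, then apply a
            # REINFORCE-style update (output layer only, for brevity).
            W_in, W_h, W_out = params["W_in"], params["W_h"], params["W_out"]
            h = np.zeros(N_HIDDEN)
            grads, rewards = [], []
            for x, y in zip(features, labels):
                h = np.tanh(W_in @ x + W_h @ h)          # simple recurrent memory
                probs = softmax(W_out @ h)
                a = rng.choice(N_ACTIONS, p=probs)
                rewards.append(1.0 if a == y else 0.0)   # imitation-style reward
                dlogits = -probs
                dlogits[a] += 1.0                        # grad of log pi(a|h) w.r.t. logits
                grads.append(np.outer(dlogits, h))
            advantages = np.array(rewards) - np.mean(rewards)   # mean-reward baseline
            for g, adv in zip(grads, advantages):
                W_out += lr * adv * g                    # in-place policy-gradient step
            return float(np.mean(rewards))               # per-step agreement rate

        # Toy usage: random features stand in for per-frame social signals.
        feats = rng.normal(size=(200, N_FEATURES))
        labels = (feats[:, 0] > 0).astype(int)           # synthetic annotations
        for _ in range(30):
            agreement = run_episode(params, feats, labels)
        print("agreement with annotations after training:", agreement)

    In the thesis pipeline, the same kind of policy would consume socially meaningful features rather than raw signals, which the abstract reports as the difference between 60% and 83% accuracy.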

    Designing talk in social networks: What Facebook teaches about conversation

    The easy accessibility, ubiquity, and plurilingualism of popular SNSs such as Facebook have inspired many scholars and practitioners of second language teaching and learning to integrate networked forms of communication into educational contexts such as language classrooms and study abroad programs (e.g., Blattner & Fiori, 2011; Lamy & Zourou, 2013; Mills, 2011; Reinhardt & Ryu, 2013; Reinhardt & Zander, 2011). At the same time, the complex and dynamic patterns of interaction that emerge in these spaces quickly push back upon standard ways of describing conversational genres and communicative competence (Kern, 2014; Lotherington & Ronda, 2014). Drawing from an ecological interactional analysis (Goffman, 1964, 1981a, 1981b, 1986; Kramsch & Whiteside, 2008) of the Facebook communications of three German-speaking academics whose social and professional lives are largely led in English, the authors consider the kinds of symbolic maneuvers required to participate in the translingual conversational flows of SNS-mediated communication. Based on this analysis, this article argues that texts generated through SNS-mediated communication can provide classroom opportunities for critical, stylistically sensitive reflection on the nature of talk in line with multiliteracies approaches

    New Alterities and Emerging Cultures of Social Interaction

    Globalization has generated increased societal heterogeneity and awakened a new kind of interest in social cohesion and integration. But globalization is not the only contemporary process to give rise to societal hybridization. Two other such processes - much less attended to in the theoretical debate but no less problematic as regards social integration - are societal ageing and robotization. Drawing on statistical estimates, this paper begins by assessing the relevance of these new processes of hybridization. The predictions in question indicate that in the near future, everyday interaction, not just with cultural strangers and 'intelligent' machines, but also with people suffering from dementia, will be an omnipresent phenomenon, confronting our societies with types and degrees of alterity never before encountered. Whereas contact with cultural strangers is to some extent familiar (though not yet taken as standard), interaction with intelligent technological devices and with dementia sufferers represents new forms of alterity for which most societies have not yet established routines of conduct. This paper gives a detailed account of a number of empirical studies showing how new forms of hybrid interaction and cooperation evolve out of repeated contact with each of the three alterities. With this groundwork in place, the paper then attempts to identify not only the ways in which routines may develop out of interaction with the three alterities but also the trends towards, and prerequisites for, the emergence of a new culture of cooperation and interaction

    Image-Enabled Discourse: Investigating the Creation of Visual Information as Communicative Practice

    Anyone who has clarified a thought or prompted a response during a conversation by drawing a picture has exploited the potential of image making as an interactive tool for conveying information. Images are increasingly ubiquitous in daily communication, in large part due to advances in visually enabled information and communication technologies (ICT), such as information visualization applications, image retrieval systems, and visually enabled collaborative work tools. Human abilities to use images to communicate are, however, far more sophisticated and nuanced than these technologies currently support. In order to learn more about the practice of image making as a specialized form of information and communication behavior, this study examined face-to-face conversations involving the creation of ad hoc visualizations (i.e., "napkin drawings"). A model of image-enabled discourse is introduced, which positions image making as a specialized form of communicative practice. Multimodal analysis of video-recorded conversations focused on identifying image-enabled communicative activities in terms of the interactional sociolinguistic concepts of conversational involvement and coordination, specifically framing, footing, and stance. The study shows that when drawing occurs in the context of an ongoing dialogue, the activity of visual representation performs key communicative tasks. Visualization is a form of social interaction that contributes to the maintenance of conversational involvement in ways that are not often evident in the image artifact. For example, drawing enables us to coordinate with each other, to introduce alternative perspectives into a conversation, and even to temporarily suspend the primary thread of a discussion in order to explore a tangential thought. The study compares attributes of the image artifact with those of the activity of image making, described as a series of contrasting affordances. Visual information in complex systems is generally represented and managed based on the affordances of the artifact, neglecting to account for all that is communicated through the situated action of creating it. These findings have heuristic and best-practice implications for a range of areas related to the design and evaluation of virtual collaboration environments, visual information extraction and retrieval systems, and data visualization tools