218 research outputs found

    The Look of Fear from the Eyes Varies with the Dynamic Sequence of Facial Actions

    Most research on the ability to interpret expressions from the eyes has utilized static information. This research investigates whether the dynamic sequence of facial actions in the eye region influences the judgments of perceivers. Dynamic fear expressions involving the eye region and eyebrows were created which systematically differed in the sequential occurrence of facial actions. Participants rated the intensity of sequential fear expressions, either in addition to a simultaneous, full-blown expression (Experiment 1) or in combination with different levels of eye gaze (Experiment 2). The results showed that the degree of attributed emotion and the appraisal ratings differed as a function of the sequence of facial expressions of fear, with direct gaze resulting in stronger subjective responses. The findings challenge current notions surrounding the study of static facial displays from the eyes and suggest that emotion perception is a dynamic process shaped by the time course of the facial actions of an expression. Possible implications for the field of affective computing and clinical research are discussed.

    Towards Living Machines: current and future trends of tactile sensing, grasping, and social robotics

    The development of future technologies can be highly influenced by our deeper understanding of the principles that underlie living organisms. The Living Machines conference aims to present (among other topics) interdisciplinary work on behaving systems based on such principles. Celebrating 10 years of the conference, we present the progress and future challenges of some of the key themes presented in the robotics workshop of the Living Machines conference. More specifically, in this perspective paper, we focus on the advances in the field of biomimetics and robotics for the creation of artificial systems that can robustly interact with their environment, ranging from tactile sensing, grasping, and manipulation to the creation of psychologically plausible agents.

    The effects of user assistance systems on user perception and behavior

    The rapid development of information technology (IT) is changing how people approach and interact with IT systems (Maedche et al. 2016). IT systems can increasingly support people in performing ever more complex tasks (Vtyurina and Fourney 2018). However, people's cognitive abilities have not evolved as quickly as technology (Maedche et al. 2016). Thus, different external factors (e.g., complexity or uncertainty) and internal conditions (e.g., cognitive load or stress) reduce decision quality (Acciarini et al. 2021; Caputo 2013; Hilbert 2012). User-assistance systems (UASs) can help to compensate for human weaknesses and cope with new challenges. UASs aim to improve the user's cognition and capabilities, benefiting individuals, organizations, and society. To achieve this goal, UASs collect, prepare, aggregate, and analyze information, and communicate the results according to user preferences (Maedche et al. 2019). This support can relieve users and improve the quality of decision-making. Using UASs offers many benefits but requires successful interaction between the user and the UAS. However, this interaction introduces social and technical challenges, such as loss of control or reduced explainability, which can affect user trust and willingness to use the UAS (Maedche et al. 2019). To realize the benefits, UASs must be developed based on an understanding and incorporation of users' needs. Users and UASs are part of a socio-technical system to complete a specific task (Maedche et al. 2019). To create a benefit from the interaction, it is necessary to understand the interaction within the socio-technical system, i.e., the interaction between the user, UAS, and task, and to align the different components. For this reason, this dissertation aims to extend the existing knowledge on UAS design by better understanding the effects and mechanisms during the interaction between UASs and users in different application contexts.
Therefore, theory and findings from different disciplines are combined and new theoretical knowledge is derived. In addition, data is collected and analyzed to validate the new theoretical knowledge empirically. The findings can be used to reduce adaptation barriers and realize a positive outcome. Overall, this dissertation addresses the four classes of UASs presented by Maedche et al. (2016): basic UASs, interactive UASs, intelligent UASs, and anticipating UASs. First, this dissertation contributes to understanding how users interact with basic UASs. Basic UASs do not process contextual information and interact little with the user (Maedche et al. 2016). This behavior makes basic UASs suitable for application contexts, such as social media, where little interaction is desired. Social media is primarily used for entertainment and focuses on content consumption (Moravec et al. 2018). As a result, social media has become an essential source of news but also a target for fake news, with negative consequences for individuals and society (Clarke et al. 2021; Laato et al. 2020). Thus, this thesis presents two approaches to how basic UASs can be used to reduce the negative influence of fake news. Firstly, basic UASs can provide interventions by warning users of questionable content and providing verified information, but the order in which the intervention elements are displayed influences how fake news is perceived. The intervention elements should be displayed after the fake news story to achieve an effective intervention. Secondly, basic UASs can provide social norms to motivate users to report fake news and thereby stop the spread of fake news. However, social norms should be used carefully, as they can backfire and reduce the willingness to report fake news. Second, this dissertation contributes to understanding how users interact with interactive UASs.
Interactive UASs incorporate limited information from the application context but focus on close interaction with the user to achieve a specific goal or behavior (Maedche et al. 2016). Typical goals include more physical activity, a healthier diet, and less tobacco and alcohol consumption to prevent disease and premature death (World Health Organization 2020). To increase goal achievement, previous researchers often utilize digital human representations (DHRs) such as avatars and embodied agents to form a socio-technical relationship between the user and the interactive UAS (Kim and Sundar 2012a; Pfeuffer et al. 2019). However, understanding how the design features of an interactive UAS affect the interaction with the user is crucial, as each design feature has a distinct impact on the user's perception. Based on existing knowledge, this thesis highlights the most widely used design features and analyzes their effects on behavior. The findings reveal important implications for future interactive UAS design. Third, this dissertation contributes to understanding how users interact with intelligent UASs. Intelligent UASs prioritize processing user and contextual information to adapt to the user's needs rather than focusing on an intensive interaction with the user (Maedche et al. 2016). Thus, intelligent UASs with emotional intelligence can provide people with task-oriented and emotional support, making them ideal for situations where interpersonal relationships are neglected, such as crowd working. Crowd workers frequently work independently without any significant interactions with other people (Jäger et al. 2019). In crowd work environments, traditional leader-employee relationships are usually not established, which can have a negative impact on employee motivation and performance (Cavazotte et al. 2012). Thus, this thesis examines the impact of an intelligent UAS with leadership and emotional capabilities on employee performance and enjoyment.
The leadership capabilities of the intelligent UAS lead to an increase in enjoyment but a decrease in performance. The emotional capabilities of the intelligent UAS reduce the stimulating effect of leadership characteristics. Fourth, this dissertation contributes to understanding how users interact with anticipating UASs. Anticipating UASs are intelligent and interactive, providing users with task-related and emotional stimuli (Maedche et al. 2016). They also have advanced communication interfaces and can adapt to current situations and predict future events (Knote et al. 2018). Because of these advanced capabilities, anticipating UASs enable collaborative work settings and often use anthropomorphic design cues to make the interaction more intuitive and comfortable (André et al. 2019). However, these anthropomorphic design cues can also raise expectations too high, leading to disappointment and rejection if they are not met (Bartneck et al. 2009; Mori 1970). To create a successful collaborative relationship between anticipating UASs and users, it is important to understand the impact of anthropomorphic design cues on the interaction and decision-making processes. This dissertation presents a theoretical model that explains the interaction between anthropomorphic anticipating UASs and users, and an experimental procedure for empirical evaluation. The experimental design lays the groundwork for empirically testing the theoretical model in future research. To sum up, this dissertation contributes to information systems knowledge by improving understanding of the interaction between UASs and users in different application contexts. It develops new theoretical knowledge based on previous research and empirically evaluates user behavior to explain and predict it. In addition, this dissertation generates new knowledge by prototypically developing UASs and provides new insights for different classes of UASs.
These insights can be used by researchers and practitioners to design more user-centric UASs and realize their potential benefits.

    Facial behavior

    We provide an overview of the current state-of-the-art regarding research on facial behavior from what we hope is a well-balanced historical perspective. Based on a critical discussion of the main theoretical views of nonverbal facial activity (i.e., affect program theory, appraisal theory, dimensional theory, behavioral ecology), we focus on some key issues regarding the cohesion of emotion and expression, including the issue of “genuine smiles.” We argue that some of the challenges faced by the field are a consequence of these theoretical positions and their assumptions, and we discuss how they have generated and shaped research. A clear distinction between encoding and decoding processes may prove beneficial for identifying specific problems – for example, the use of posed expressions in facial expression research, or the impact of the psychological situation on the perceiver. We argue that knowledge of the functions of facial activity may be central to understanding what facial activity is truly about; this includes a serious consideration of social context at all stages of encoding and decoding. The chapter concludes with a brief overview of recent technical advances and challenges highlighted by the new field of “affective computing” concerned with facial activity.

    Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles

    Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task to promote human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners. This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research intends to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas. 
For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker).

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic, or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions, and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users’ ethnic preferences towards a robot face.

    PROTOTYPING RELATIONAL THINGS THAT TALK: A DISCURSIVE DESIGN STRATEGY FOR CONVERSATIONAL AI SYSTEMS

    This practice-based research inquiry explores the implications of conversational Artificial Intelligence (AI) systems, ‘relational things that talk’, on the way people experience the world. It responds directly to the pervasive lack of ethical design frameworks for commercial AI systems, compounded by limited transparency, ubiquitous authority, embedded bias and the absence of diversity in the development process. The effect produced by relational things that talk upon the feelings, thoughts or intentions of the user is here defined as the ‘perlocutionary effect’ of conversational AI systems. This effect is constituted by these systems’ ‘relationality’ and ‘persuasiveness’, propagated by the system’s embedded bias and ‘hybrid intentions’, relative to a user’s susceptibility. The proposition of the perlocutionary effect frames the central practice of this thesis and the contribution to new knowledge, which manifests as four discursive prototypes developed through a participatory method. Each prototype demonstrates the factors that constitute and propagate the perlocutionary effect. These prototypes also function as instruments which actively engage participants in a counter-narrative as a form of activism. ‘This Is Where We Are’ (TIWWA) explores the persuasiveness and relationality of relational things powered through AI behavioural algorithms and directed by pools of user data. ‘Emoti-OS’ iterates on the findings from TIWWA and analyses the construction of relationality through simulated affect, personality and collective (artificial) emotional intelligence. ‘Women Reclaiming AI’ (WRAI) demonstrates stereotyping and bias in commercial conversational AI developments. The last prototype, ‘The Infinite Guide’, synthesises and tests the findings from the three previous prototypes to substantiate the overall perlocutionary effect of conversational AI systems.
In so doing, this inquiry proposes the appropriation of relational things that talk as a discursive design strategy, extended with a participatory method, for new forms of cultural expression and social action, which activate people to demand more ethical AI systems.

    An Examination of a Theory of Embodied Social Presence in Virtual Worlds

    In this article, we discuss and empirically examine the importance of embodiment, context, and spatial proximity as they pertain to collaborative interaction and task completion in virtual environments. Specifically, we introduce the embodied social presence (ESP) theory as a framework to account for a higher level of perceptual engagement that users experience as they engage in activity-based social interaction in virtual environments. The ESP theory builds on the analysis of reflection data from Second Life users to explain the process by which perceptions of ESP are realized. We proceed to describe implications of ESP for collaboration and other organizational functions.