Mitigating User Frustration through Adaptive Feedback based on Human-Automation Etiquette Strategies
The objective of this study is to investigate the effects of feedback on user frustration in human-computer interaction (HCI) and to examine how user frustration can be mitigated through feedback based on human-automation etiquette strategies. User frustration in HCI is a negative feeling that arises when efforts to achieve a goal are impeded. It affects not only communication with the computer itself, but also productivity, learning, and cognitive workload. Affect-aware systems have been studied as a way to recognize user emotions and respond appropriately; such systems need to be adaptive, changing their behavior depending on users' emotions. Adaptive systems have four categories of adaptation. Previous research has focused primarily on function allocation and, to a lesser extent, on information content and task scheduling. The fourth approach, changing the interaction style, is the least explored because of the interplay of human factors considerations. Three interlinked studies were conducted to investigate the consequences of user frustration and to explore mitigation techniques. Study 1 showed that delayed feedback from the system led to higher user frustration, anger, cognitive workload, and physiological arousal; it also decreased task performance and system usability in a human-robot interaction (HRI) context. Study 2 evaluated a possible approach to mitigating user frustration by applying human-human etiquette strategies in a tutoring context. The results showed that changing etiquette strategies led to changes in performance, motivation, confidence, and satisfaction, and that the most effective etiquette strategies changed when users were frustrated. Based on these results, an adaptive tutoring system prototype was developed and evaluated in Study 3.
By utilizing a rule set derived from Study 2, the tutor was able to apply different automation etiquette strategies to target and improve motivation, confidence, satisfaction, and performance under different levels of user frustration. This work establishes that changing the interaction style of a computer tutor alone can affect a user's motivation, confidence, satisfaction, and performance, and that the beneficial effect of changing etiquette strategies is greater when users are frustrated. It provides a basis for future work on affect-aware adaptive systems that mitigate user frustration.
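The rule-set-driven adaptation described above could be sketched as a simple selector that maps a detected frustration level to an etiquette strategy. The strategy names and threshold below are illustrative assumptions, not values taken from the study:

```python
# Hypothetical rule-based selector mirroring the kind of rule set derived
# in Study 2: the most effective etiquette strategy changes when the user
# is frustrated. Strategy labels and the 0.5 threshold are assumptions.

def select_etiquette_strategy(frustration: float) -> str:
    """Return an etiquette strategy for a frustration score in [0, 1]."""
    if not 0.0 <= frustration <= 1.0:
        raise ValueError("frustration must be in [0, 1]")
    if frustration >= 0.5:
        # Frustrated users: soften criticism, encourage.
        return "positive-politeness"
    # Calm users: direct, efficiency-oriented feedback works best.
    return "bald"

print(select_etiquette_strategy(0.8))  # -> positive-politeness
print(select_etiquette_strategy(0.2))  # -> bald
```

In a full system, the frustration score would come from an affect-recognition component rather than being supplied directly.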
Learning Data-Driven Models of Non-Verbal Behaviors for Building Rapport Using an Intelligent Virtual Agent
There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight with a diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward finding ways of preventively promoting wellness, rather than solely treating already established illness.
Evidence-based patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and to maintain healthy lifestyles over the long term. Lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge.
Success of CBIs, however, critically relies on ensuring engagement and retention of users so that they remain motivated to use these systems and return to them over the long term as necessary.
Because of their text-only interfaces, however, current CBIs can express only limited empathy and rapport, which are among the most important factors in health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities. Virtual characters interact using humans' innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems.
To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent's social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].
Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems
As robotic systems are moved out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted, within its context, style of movement, and form factor, as an animate element of the environment. This interpretation by the human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article provides approaches employed by one research lab, specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community-building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of a smattering of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited.
Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)"
http://www.mdpi.com/journal/arts/special_issues/Machine_Artis
Real-time generation and adaptation of social companion robot behaviors
Social robots will be part of our future homes.
They will assist us in everyday tasks, entertain us, and provide helpful advice.
However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate.
An essential skill of every social robot is verbal and non-verbal communication.
In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine.
Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors.
In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot.
However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems.
This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences.
Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence.
The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning.
Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning.
It provides a higher-level view from the system designer's perspective and guidance from the start to the end.
It illustrates the process of modeling, simulating, and evaluating such adaptation processes.
Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness.
The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes.
They are evaluated in the lab and in in-situ studies.
These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor.
Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
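The adaptation loop described in this thesis — choose a behavior variant, observe human feedback, update — has the shape of a reinforcement learning bandit problem. A minimal epsilon-greedy sketch under that framing (variant names, the scalar-reward interface, and the algorithm choice are illustrative assumptions, not the thesis's actual method):

```python
import random

# Illustrative sketch: an epsilon-greedy bandit that adapts which behavior
# variant (e.g., politeness level, humor) a robot uses, driven by scalar
# human feedback (an explicit rating or an implicit social-signal score).

class BehaviorAdapter:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {v: 0.0 for v in variants}   # running mean reward
        self.counts = {v: 0 for v in variants}

    def choose(self):
        # Explore occasionally; otherwise exploit the best-rated variant.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, variant, reward):
        # Incremental mean update from the latest feedback signal.
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n

adapter = BehaviorAdapter(["polite", "humorous", "neutral"])
variant = adapter.choose()
adapter.update(variant, reward=1.0)  # e.g., user smiled or rated positively
```

In practice, the reward would be derived from the social signals and explicit feedback channels the thesis discusses, and the action space would cover utterances, prosody, and animations jointly.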
Artificial Emotional Intelligence in Socially Assistive Robots
Artificial Emotional Intelligence (AEI) bridges the gap between humans and machines by enabling machines to demonstrate empathy and affection towards their users. This is achieved by evaluating the emotional state of human users, adapting the machine's behavior accordingly, and thus giving an appropriate response to those emotions. AEI is part of a larger field of study called Affective Computing, which integrates artificial intelligence, psychology, robotics, biometrics, and many other fields. The central component of AEI and affective computing is emotion, and how we can utilize emotion to create a more natural and productive relationship between humans and machines.
An area in which AEI can be particularly beneficial is building machines and robots for healthcare applications. Socially Assistive Robotics (SAR) is a subfield of robotics that aims to develop robots that provide companionship and assist people with social interaction. For example, residents living in housing designed for older adults often feel lonely, isolated, and depressed; social interaction and mental stimulation are therefore critical to improving their well-being. Socially Assistive Robots are designed to address these needs by monitoring and improving the quality of life of patients with depression and dementia. Nevertheless, developing robots with AEI that understand users' emotions and can reply to them naturally and effectively is in its infancy, and much more research needs to be carried out in this field.
This dissertation presents the results of my work in developing a social robot, called Ryan, equipped with AEI for effective and engaging dialogue with older adults with depression and dementia. Over the course of this research there have been three versions of Ryan, each created using the lessons learned from the studies presented in this dissertation. First, two human-robot interaction studies were conducted showing the validity of using a rear-projected robot to convey emotion and intent. Then, the feasibility of using Ryan to interact with older adults was studied, investigating possible improvements in their quality of life. The Ryan Companionbot used in this project is a rear-projected, lifelike conversational robot equipped with features such as games, music, video, reminders, and general conversation; it engages users in cognitive games and reminiscence activities. A pilot study was conducted with six older adults with early-stage dementia and/or depression living in a senior living facility. Each individual had 24/7 access to Ryan in his/her room for a period of 4-6 weeks. Observations of these individuals, interviews with them and their caregivers, and analysis of their interactions during this period revealed that they established rapport with the robot and greatly valued and enjoyed having a companionbot in their room.
A multi-modal emotion recognition algorithm and a multi-modal emotion expression system were then developed and integrated into Ryan. To engage the subjects in more empathic interaction with Ryan, a corpus of dialogues on different topics was created by English major students. The emotion recognition algorithm was integrated into the dialogue management system so that Ryan could empathize with users based on their perceived emotion. This study investigated the effects of this emotionally intelligent robot on older adults in the early stages of depression and dementia. The results suggest that Ryan equipped with AEI is more engaging, likable, and attractive to users than Ryan without AEI. The long-term effect of the last version of Ryan (Ryan V3.0) was studied with 17 subjects from 5 different senior care facilities. The participants in this study experienced a general improvement in their cognitive and depression scores.
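One common way to build a multi-modal emotion recognizer of the kind described above is decision-level fusion: each modality produces per-emotion probabilities, which are combined by a weighted average. The weights, labels, and structure below are illustrative assumptions, not the dissertation's actual model:

```python
# Hypothetical decision-level fusion for multi-modal emotion recognition:
# combine per-modality class probabilities (face, voice, text) with a
# weighted average and pick the highest-scoring label.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse(face, voice, text, weights=(0.5, 0.3, 0.2)):
    """Each input maps emotion -> probability; returns the fused label."""
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = (weights[0] * face.get(emotion, 0.0)
                          + weights[1] * voice.get(emotion, 0.0)
                          + weights[2] * text.get(emotion, 0.0))
    return max(fused, key=fused.get)

label = fuse(
    face={"happy": 0.7, "neutral": 0.3},
    voice={"happy": 0.4, "sad": 0.6},
    text={"neutral": 1.0},
)
print(label)  # the face channel dominates here, so "happy"
```

The fused label would then feed the dialogue manager, which selects an empathic response matching the user's perceived emotion.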
Blurring the Line Between Human and Machine: Marketing Artificial Intelligence
One of the most prominent and potentially transformative trends in society today is machines becoming more human-like, driven by progress in artificial intelligence. How this trend will impact individuals, private and public organizations, and society as a whole is still unknown, and depends largely on how individual consumers choose to adopt and use these technologies. This dissertation focuses on understanding how consumers perceive, adopt, and use technologies that blur the line between human and machine, with two primary goals. First, I build on psychological and philosophical theories of mind perception, anthropomorphism, and dehumanization, and on management research into technology adoption, in order to develop a theoretical understanding of the forces that shape consumer adoption of these technologies. Second, I develop practical marketing interventions that can be used to influence patterns of adoption according to the desired outcome.
This dissertation is organized as follows. Essay 1 develops a conceptual framework for understanding what AI is, what it can do, and some of the key antecedents and consequences of its adoption. The subsequent two Essays test various parts of this framework. Essay 2 explores consumers' willingness to use algorithms to perform tasks normally done by humans, focusing specifically on how the nature of the task for which algorithms are used and the human-likeness of the algorithm itself impact consumers' use of the algorithm. Essay 3 focuses on the use of social robots in consumption contexts, specifically addressing the role of robots' physical and mental human-likeness in shaping consumers' comfort with and perceived usefulness of such robots.
Together, these three Essays offer an empirically supported conceptual structure for marketing researchers and practitioners to understand artificial intelligence and influence the processes through which consumers perceive and adopt it. Artificial intelligence has the potential to create enormous value for consumers, firms, and society, but also poses many profound challenges and risks. A better understanding of how this transformative technology is perceived and used can potentially help to maximize its potential value and minimize its risks.
What do Collaborations with the Arts Have to Say About Human-Robot Interaction?
This is a collection of papers presented at the workshop What Do Collaborations with the Arts Have to Say About HRI?, held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.