3,305 research outputs found
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction and a motivation for fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis for both recent and future research on human-robot
communication. The ten desiderata are then examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
Challenges in Collaborative HRI for Remote Robot Teams
Collaboration between human supervisors and remote teams of robots is highly
challenging, particularly in high-stakes, distant, hazardous locations, such as
off-shore energy platforms. In order for these teams of robots to truly be
beneficial, they need to be trusted to operate autonomously, performing tasks
such as inspection and emergency response, thus reducing the number of
personnel placed in harm's way. As remote robots are generally trusted less
than robots in close proximity, we present a solution that instils trust in the
operator through a `mediator robot' that can exhibit social skills, alongside
sophisticated visualisation techniques. In this position paper, we present
general challenges and then take a closer look at one challenge in particular,
discussing an initial study, which investigates the relationship between the
level of control the supervisor hands over to the mediator robot and how this
affects their trust. We show that the supervisor is more likely to have higher
trust overall if their initial experience involves handing over control of the
emergency situation to the robotic assistant. We discuss this result here, as
well as other challenges and interaction techniques for human-robot
collaboration.

Comment: 9 pages. Peer-reviewed position paper accepted at the CHI 2019
Workshop: The Challenges of Working on Social Robots that Collaborate with
People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing
Systems, May 2019, Glasgow, UK
Dobby: A Conversational Service Robot Driven by GPT-4
This work introduces a robotics platform that embeds a conversational AI
agent in an embodied system for natural language understanding and intelligent
decision-making for service tasks, integrating task planning with human-like
conversation. The agent is derived from a large language model that has
learned from a vast corpus of general knowledge. In addition to generating
dialogue, the agent can interface with the physical world by invoking commands
on the robot, seamlessly merging communication and behavior. The system is
demonstrated in a free-form tour-guide scenario, in an HRI study comparing
robots with and without conversational AI capabilities. Performance is measured
along five dimensions: overall effectiveness, exploration abilities,
scrutinization abilities, receptiveness to personification, and adaptability.
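The abstract describes an LLM agent that both generates dialogue and invokes commands on the robot. A minimal sketch of that pattern, assuming a hypothetical command syntax (`[GOTO: place]` embedded in the model's reply) and a hypothetical `robot_goto` motion primitive, neither of which is taken from the paper:

```python
# Minimal sketch (not the authors' implementation) of merging LLM dialogue
# with robot behavior: the model's reply is scanned for command tokens,
# which are dispatched to (hypothetical) robot primitives.

def robot_goto(location: str) -> str:
    """Hypothetical motion primitive; a real system would call the nav stack."""
    return f"navigating to {location}"

# Registry mapping command names (assumed format) to robot primitives.
COMMANDS = {"GOTO": robot_goto}

def handle_reply(reply: str) -> tuple[str, list[str]]:
    """Split an LLM reply into spoken text and executed robot actions.
    Commands are assumed to appear on their own lines as '[NAME: arg]'."""
    spoken, actions = [], []
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]") and ":" in line:
            name, arg = line[1:-1].split(":", 1)
            if name.strip() in COMMANDS:
                actions.append(COMMANDS[name.strip()](arg.strip()))
                continue
        spoken.append(line)
    return " ".join(spoken), actions

text, acts = handle_reply("Welcome to the lab!\n[GOTO: lobby]\nFollow me.")
# text -> "Welcome to the lab! Follow me."; acts -> ["navigating to lobby"]
```

The point of the separation is that speech synthesis and actuation stay decoupled: the same reply string drives both, which is one plausible reading of "seamlessly merging communication and behavior".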
Cognitive Processes, Emotions and Intelligent Interfaces
The study presents research from several scientific disciplines, such as
artificial intelligence, neuroscience, psychology, linguistics and philosophy,
that has the potential for creating intelligent anthropomorphic agents and
interactive technologies. Systems from symbolic and connectionist artificial
intelligence for modelling human cognitive processes, thinking,
decision-making, memory and learning are examined. Models in artificial
intelligence and robotics are analysed that use emotions as a mechanism for
controlling the achievement of the robot's goals, as a reaction to particular
situations, for sustaining the process of social interaction, and for creating
believable anthropomorphic agents.
The presented interdisciplinary methodologies and concepts are a motivation for
creating animated agents that use speech, gestures, intonation and other
non-verbal modalities when conversing with users in intelligent interfaces.
Improving Search through A3C Reinforcement Learning based Conversational Agent
We develop a reinforcement-learning-based search assistant that can assist
users, through a set of actions and a sequence of interactions, to realize
their intent. Our approach caters to subjective search, where the user is
seeking digital assets such as images, which is fundamentally different from
tasks that have objective and limited search modalities. Labeled
conversational data is generally not available in such search tasks, and
training the agent through human interactions can be time-consuming. We propose
a stochastic virtual user which impersonates a real user and can be used to
sample user behavior efficiently to train the agent, which accelerates its
bootstrapping. We develop an A3C-based context-preserving architecture that
enables the agent to provide contextual assistance to the user. We compare the
A3C agent with Q-learning and evaluate its performance on the average rewards
and state values it obtains with the virtual user in validation episodes. Our
experiments show that the agent learns to achieve higher rewards and better
states.

Comment: 17 pages, 7 figures
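The stochastic virtual user described above can be sketched as sampling user responses from an assumed behavior model, so training episodes are generated without real users. The action names, response distributions, and `rollout` helper below are all illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code) of a stochastic virtual user:
# for each agent action, the user's reply is sampled from an assumed
# probability distribution, letting the RL agent bootstrap on simulated
# episodes instead of costly human interactions.
import random

# Assumed behavior model: agent action -> (user response, probability) pairs.
USER_MODEL = {
    "show_results": [("refine_query", 0.5), ("click_asset", 0.3), ("quit", 0.2)],
    "ask_clarify":  [("give_detail", 0.7), ("quit", 0.3)],
}

def sample_response(action: str, rng: random.Random) -> str:
    """Sample one simulated user response to the agent's action."""
    responses, weights = zip(*USER_MODEL[action])
    return rng.choices(responses, weights=weights, k=1)[0]

def rollout(policy, steps: int, seed: int = 0) -> list[tuple[str, str]]:
    """Generate one simulated episode of (agent action, user response) pairs,
    ending early if the virtual user quits."""
    rng = random.Random(seed)
    history = []
    for _ in range(steps):
        action = policy(history)
        response = sample_response(action, rng)
        history.append((action, response))
        if response == "quit":
            break
    return history

# A trivial fixed policy stands in for the A3C agent in this sketch.
episode = rollout(lambda history: "show_results", steps=5, seed=42)
```

In the paper's setting these sampled episodes would feed the A3C learner; here the policy is a placeholder, since the point is only how cheap simulated interaction data can be drawn.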
Virtual coaches for healthy lifestyle
Since the introduction of the idea of the software interface agent, the question recurs whether these agents should be personified and graphically visualized in the interface. In this chapter we look at the use of virtual humans in the interface of healthy-lifestyle coaching systems. Based on the theory of persuasive communication, we analyse the impact that the use of graphical interface agents may have on user experience and on the efficacy of this type of persuasive system. We argue that research on the impact of a virtual human interface on the efficacy of these systems requires longitudinal field studies in addition to the controlled short-term user evaluations common in the field of human-computer interaction (HCI). We introduce Kristina, a mobile personal coaching system that monitors its user's physical activity and presents feedback messages to the user. We present results of field trials (N = 60, 7 weeks) in which we compare two interface conditions on a smartphone: in one condition, feedback messages are presented by a virtual animated human; in the other, they are displayed on the screen as text. Results of the field trials show that user motivation, use context and the type of device on which a feedback message is received influence the perception of the presentation format of feedback messages and the effect on compliance with the coaching regime.