Forms and Frames: Mind, Morality, and Trust in Robots across Prototypical Interactions
People often engage human-interaction schemas in human-robot interactions, so notions of prototypicality are useful for examining how the formal features of interactions shape perceptions of social robots. We argue for a typology of three higher-order interaction forms (social, task, and play) comprising identifiable-but-variable patterns in agents, content, structures, outcomes, context, and norms. On that ground, we examined whether participants' judgments about a social robot (perceptions of mind, morality, and trust) differed across prototypical interactions. Findings indicate that interaction forms somewhat influence trust but not evaluations of mind or morality. However, how participants perceived the interactions (independent of form) was more impactful. In particular, perceived task interactions fostered functional trust, while perceived play interactions fostered moral trust and attitude shifts over time. Hence, prototypicality in interactions should not be considered in terms of formal properties alone; it must also account for how people perceive interactions according to prototypical frames.
Learning Legible Motion from Human–Robot Interactions
In collaborative tasks, displaying legible behavior enables other members of the team to anticipate intentions and thus to coordinate their actions accordingly. Behavior is therefore considered legible when an observer is able to quickly and correctly infer the intention of the agent generating it. In previous work, legible robot behavior has been generated by using model-based methods to optimize task-specific models of legibility. In our work, we instead use model-free reinforcement learning with a generic, task-independent cost function. In experiments involving a joint task between thirty human subjects and a humanoid robot, we show that: 1) legible behavior arises when the efficiency of joint task completion is rewarded during human-robot interactions; 2) behavior that has been optimized for one subject is also more legible for other subjects; and 3) the universal legibility of behavior is influenced by the choice of the policy representation.
Fig. 1: Illustration of the button-pressing experiment, in which the robot reaches for and presses a button. The human subject predicts which button the robot will push and is instructed to quickly press a button of the same color once sufficiently confident in this prediction. By rewarding the robot for fast and successful joint completion of the task, which indirectly rewards how quickly the human recognizes the robot's intention and thus how quickly the human can start the complementary action, the robot learns to perform more legible motion. The three example trajectories illustrate the concept of legible behavior: it enables correct prediction of the intention early in the trajectory.
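The abstract's central claim, that legibility can emerge from rewarding joint task efficiency alone, without an explicit legibility model, can be illustrated with a toy sketch. Everything below (the single "exaggeration" policy parameter, the simulated observer, and the time model) is a hypothetical stand-in invented for illustration, not the paper's actual setup:

```python
import random

def joint_task_time(exaggeration, rng):
    # Hypothetical time model: more exaggerated (more legible) reaching motion
    # costs the robot a little execution time, but lets the simulated observer
    # recognize the target button sooner.
    robot_time = 1.0 + 0.2 * exaggeration
    recognition_time = 2.0 / (1.0 + exaggeration) + rng.gauss(0.0, 0.01)
    return robot_time + max(0.0, recognition_time)

def optimize_policy(episodes=300, step=0.1, seed=1):
    # Model-free hill climbing on the single policy parameter; the only
    # feedback is the (negative) duration of the joint task. No explicit
    # legibility term appears anywhere in the objective.
    rng = random.Random(seed)
    theta, best = 0.0, joint_task_time(0.0, rng)
    for _ in range(episodes):
        candidate = max(0.0, theta + rng.gauss(0.0, step))
        t = joint_task_time(candidate, rng)
        if t < best:
            theta, best = candidate, t
    return theta

theta = optimize_policy()
```

Starting from theta = 0 (no exaggeration), optimizing only the joint completion time drives theta upward, i.e. toward more legible motion, mirroring finding 1) in the abstract.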
Object Handovers: a Review for Robotics
This article surveys the literature on human-robot object handovers. A handover is a collaborative joint action in which one agent, the giver, gives an object to another agent, the receiver. The physical exchange starts when the receiver first contacts the object held by the giver and ends when the giver fully releases the object to the receiver. However, important cognitive and physical processes begin before the physical exchange, including establishing implicit agreement on the location and timing of the exchange. From this perspective, we structure our review into the two main phases delimited by these events: 1) the pre-handover phase, and 2) the physical exchange. We focus our analysis on the two actors (giver and receiver) and report the state of the art of robotic givers (robot-to-human handovers) and robotic receivers (human-to-robot handovers). We report a comprehensive list of qualitative and quantitative metrics commonly used to assess the interaction. While focusing our review on the cognitive level (e.g., prediction, perception, motion planning, learning) and the physical level (e.g., motion, grasping, grip release) of the handover, we also briefly discuss the concepts of safety, social context, and ergonomics. We compare the behaviours displayed during human-to-human handovers with the state of the art of robotic assistants, and identify the major areas in which robotic assistants must improve to reach performance comparable to human interactions. Finally, we propose a minimal set of metrics that should be used to enable a fair comparison among approaches.
Comment: Review paper, 19 pages.
Curiosity-Based Learning Algorithm for Interactive Art Sculptures
This thesis is part of the research activities of the Living Architecture System Group (LASG). Combining techniques from architecture, the arts, electronics, and software, LASG develops interactive art sculptures that engage occupants in an immersive environment. The overarching goal of this research is to develop architectural systems that possess life-like qualities. Recent advances in the miniaturization of computing and sensing units enable system-wide responsive behaviours. Though complexity may emerge in current LASG systems through the superposition of simple, prescripted behaviours, the systems' responses to occupants remain rather robotic and are ultimately dictated by the will of the designers. In this thesis, a new series of sculptural systems was initiated, implementing an additional layer of behavioural autonomy.
In this thesis, the Curiosity-Based Learning Algorithm (CBLA), a reinforcement learning algorithm which selects actions that lead to maximum potential knowledge gains, is introduced to enable the sculpture to automatically generate interactive behaviours and adapt to changes. The CBLA allows the sculptural system to construct models of its own mechanisms and its surroundings through self-experimentation and interaction with human occupants. A novel formulation using multiple learning agents, each comprising a subset of the system, was developed in order to integrate a large number of sensors and actuators. These agents form a network of independent, asynchronous CBLA Nodes that share information about localized events through shared sensors and virtual inputs. Given different network configurations of the CBLA system, the emergence of system behaviours with varying activation patterns was observed.
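The mechanism described above, an agent that models the consequences of its own actions and prefers actions whose prediction error is still shrinking, can be sketched in miniature. The node structure, the learning-progress heuristic, the update rule, and the toy "sculpture" environment below are all hypothetical simplifications for illustration, not the thesis implementation:

```python
import random

class CBLANode:
    # Minimal curiosity-based learner: for each action it keeps an online
    # prediction of the resulting sensor value and prefers actions whose
    # prediction error has recently been decreasing, i.e. actions it is
    # still learning something about.
    def __init__(self, n_actions, seed=0):
        self.rng = random.Random(seed)
        self.pred = [0.0] * n_actions            # one forward model per action
        self.errors = [[] for _ in range(n_actions)]
        self.counts = [0] * n_actions

    def learning_progress(self, a):
        e = self.errors[a]
        if len(e) < 4:
            return float("inf")                  # unexplored actions look interesting
        half = len(e) // 2
        # Drop in mean error from the older half to the recent half.
        return sum(e[:half]) / half - sum(e[half:]) / (len(e) - half)

    def step(self, environment):
        a = max(range(len(self.pred)), key=self.learning_progress)
        obs = environment(a, self.rng)
        self.errors[a] = (self.errors[a] + [abs(obs - self.pred[a])])[-10:]
        self.pred[a] += 0.3 * (obs - self.pred[a])   # simple online model update
        self.counts[a] += 1

def toy_sculpture(action, rng):
    # Hypothetical sensor responses: action 0 is trivially predictable,
    # action 1 is pure noise (unlearnable), action 2 is noisy but learnable.
    if action == 0:
        return 1.0
    if action == 1:
        return rng.uniform(-1.0, 1.0)
    return 0.5 + 0.3 * rng.random()

node = CBLANode(n_actions=3)
for _ in range(100):
    node.step(toy_sculpture)
```

Because unexplored actions score as maximally interesting, the node samples every action at least a few times before settling into curiosity-driven selection, and its model of the predictable action converges toward the true sensor value.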
To realize the CBLA system on a physical interactive art sculpture, an overhaul of the previous series' interactive control hardware was necessary. CBLA requires the system to be able to sense the consequences of its own actions and its surroundings at a much higher resolution and frequency than previously implemented behaviour algorithms. This translates to the need to interface with and collect samples from a substantially larger number of sensors. A new series of hardware, as well as control system software, was developed that enables the control and sampling of hundreds of devices on a centralized computer through USB connections. Moving the computation off an embedded platform simplifies the implementation of the CBLA system, which is a computationally intensive and complex program. In addition, the large amount of data generated by the system can now be recorded without sacrificing response time or resolution.
An experimental test bed was built to validate the behaviours of the CBLA system. This small-scale interactive art sculpture resembles previous sculptures displayed publicly by the LASG and Philip Beesley Architect Inc (PBAI). Experiments were conducted on the test bed at PBAI's Toronto studios to demonstrate the exploratory patterns of CBLA as well as the collective learning behaviours produced by the CBLA system. Furthermore, a user study was conducted to better understand users' responses to this new form of interactive behaviour. Compared with explicitly programmed, prescripted behaviours, the study participants did not find this implementation of the CBLA system more interesting. However, the positive correlations between activation level, responsiveness, and users' interest levels revealed insights about users' preferences and perceptions of the system. In addition, observations during the trials and the responses from the questionnaires showed a wide variety of user behaviours and expectations. This suggests that, in future work, results should be categorized to analyze how different types of users respond to the sculpture. Moreover, the experiments should also be designed to better reflect the actual use cases of the sculpture.
Human-Machine Communication: Complete Volume. Volume 2
This is the complete volume of HMC Volume 2.