Preferred Interaction Styles for Human-Robot Collaboration Vary Over Tasks With Different Action Types
How do humans want to interact with collaborative robots? As robots become more common and useful not only in industry but also in the home, they will need to interact with humans to complete many varied tasks. Previous studies have demonstrated that autonomous robots are often more efficient than, and preferred over, those that need to be commanded or those that give instructions to humans. We believe that the types of actions that make up a task affect participants' preferences for different interaction styles. In this work, our goal is to explore tasks with different action types together with different interaction styles, to find the specific situations in which each interaction style is preferred. We have identified several classifications for table-top tasks and have developed a set of tasks that vary along two of these dimensions, together with a set of interaction styles the robot can use to choose actions. We report results from a series of human-robot interaction studies involving a PR2 completing table-top tasks with a human. The results suggest that people prefer robot-led interactions for tasks with a higher cognitive load and human-led interactions for joint actions.
Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot
© The Author(s) 2014. This article is published with open access at Springerlink.com. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under contract number FP7 215554, and partly funded by the ACCOMPANY project, part of the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement n°287624. The goal of our research is to develop socially acceptable behavior for domestic robots in a setting where a user and the robot share the same physical space and interact with each other in close proximity. Specifically, our research focuses on approach distances and directions in the context of a robot handing over an object to a user.
Hands-Off Therapist Robot Behavior Adaptation to User Personality for Post-Stroke Rehabilitation Therapy
This paper describes a hands-off therapist robot that monitors, assists, encourages, and socially interacts with post-stroke users during rehabilitation exercises. We developed a behavior adaptation system that takes advantage of the user's introversion-extroversion personality trait and the number of exercises performed in order to adjust the robot's social interaction parameters (e.g., interaction distances/proxemics, speed, and vocal content) toward a customized post-stroke rehabilitation therapy. The experimental results demonstrate the robot's autonomous behavior adaptation to the user's personality and the resulting improvements in users' performance of the exercise task.
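The adaptation described above maps a personality trait and an exercise count to social interaction parameters. As a hypothetical illustration only (the function name, value ranges, and linear mappings below are assumptions, not the authors' implementation), such a rule might be sketched as:

```python
# Hypothetical sketch of personality-based parameter adaptation as described
# in the abstract above. All names, ranges, and mappings are assumptions.

def adapt_interaction(extroversion: float, exercises_done: int) -> dict:
    """Map a 0-1 introversion-extroversion score and an exercise count to
    social interaction parameters (proxemic distance, speed, vocal style)."""
    if not 0.0 <= extroversion <= 1.0:
        raise ValueError("extroversion score must be in [0, 1]")
    # Extroverted users tolerate closer approach distances (proxemics).
    distance_m = 2.0 - 1.2 * extroversion          # 2.0 m .. 0.8 m
    # More extroverted users get faster, more energetic robot motion.
    speed = 0.3 + 0.5 * extroversion               # normalized 0.3 .. 0.8
    # Vocal content shifts from nurturing to challenging encouragement.
    vocal = "challenging" if extroversion > 0.5 else "nurturing"
    # Gradually raise the challenge as the user completes more exercises.
    target_reps = 10 + exercises_done // 5
    return {"distance_m": distance_m, "speed": speed,
            "vocal_style": vocal, "target_reps": target_reps}

print(adapt_interaction(0.8, 25))
```

The key design idea the abstract implies is that a single user model (here, one scalar trait plus a progress counter) drives all interaction channels consistently.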
Putting a Face on Algorithms: Personas for Modeling Artificial Intelligence
We propose a new type of personas, artificial intelligence (AI) personas, as a tool for designing systems consisting of both human and AI agents. Personas are commonly used in design practice for modelling users. We argue that the personification of AI agents can help multidisciplinary teams understand and design systems that include AI agents. We propose a process for creating AI personas and the properties they should include, and report on our first experience using them. The case we selected for our exploration of AI personas was the design of a highly automated decision support tool for air traffic control. Our first results indicate that AI personas helped designers empathise with algorithms and enabled better communication within a team of designers, AI experts and domain experts. We call for a research agenda on AI personas and discussion of the potential benefits and pitfalls of this approach.
Gender differences in navigation dialogues with computer systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Gender is among the most influential of the factors underlying differences in spatial abilities, human communication, and interactions with and through computers. Past research has offered important insights into gender differences in navigation and language use. Yet, given the multidimensionality of these domains, many issues remain contentious while others remain unexplored. Moreover, having been derived from non-interactive, and often artificial, studies, the generalisability of this research to interactive contexts of use, particularly in the practical domain of Human-Computer Interaction (HCI), may be problematic. At the same time, little is known about how gender strategies, behaviours and preferences interact with the features of technology in various domains of HCI, including collaborative systems and systems with natural language interfaces. Targeting these knowledge gaps, the thesis addresses the central question of how gender differences emerge and operate in spatial navigation dialogues with computer systems.
To this end, an empirical study was undertaken in which mixed-gender and same-gender pairs communicated to complete an urban navigation task, with one of the participants being under the impression that he/she was interacting with a robot. Performance and dialogue data were collected using a custom system that supported synchronous navigation and communication between the user and the robot.
Based on this empirical data, the thesis describes the key role of the gender composition of the pair in navigation performance and communication processes, which outweighed the effect of individual gender, moderating gender differences and reversing predicted patterns of performance and language use. The thesis makes several contributions: theoretical, methodological and practical. From a theoretical perspective, it offers novel findings on gender differences in navigation and communication. The methodological contribution concerns the successful application of dialogue as a naturalistic, yet experimentally sound, research paradigm for studying gender and spatial language. The practical contributions include concrete design guidelines for natural language systems and implications for the development of gender-neutral interfaces in specific domains of HCI.
Soft Morphological Computation
Soft Robotics is a relatively new area of research in which progress in materials science has powered a new generation of robots exhibiting biological-like properties such as soft/elastic tissues, compliance, and resilience. One of the issues when employing soft robotics technologies is the soft nature of the interactions arising between the robot and its environment. These interactions are complex, and their dynamics are non-linear and hard to capture with known models. In this thesis we argue that complex soft interactions can actually be beneficial to the robot, giving rise to rich stimuli that can be used for the resolution of robot tasks. We further argue that the usefulness of these interactions depends on statistical regularities, or structure, that appear in the stimuli. To this end, robots should appropriately employ their morphology and their actions to influence the system-environment interactions such that structure can arise in the stimuli. In this thesis we show that learning processes can be used to perform such a task. Following this rationale, the thesis proposes and supports the theory of Soft Morphological Computation (SoMComp), by which a soft robot should appropriately condition, or ‘affect’, the soft interactions to improve the quality of the physical stimuli arising from them. SoMComp is composed of four main principles: Soft Proprioception, Soft Sensing, Soft Morphology and Soft Actuation. Each of these principles is explored in the context of haptic object recognition or object handling in soft robots. Finally, the thesis provides an overview of this research and its future directions.
Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace
A cyber-physical system is developed to enable a human-robot team to perform a shared task in a shared workspace. The system setup is suitable for implementing a tabletop manipulation task, a common human-robot collaboration scenario. The system integrates elements that exist in the physical (real) and the virtual world. In this work, we report the insights we gathered while understanding and implementing task planning and execution for a human-robot team.
Demonstration of Object Recognition Using DOPE Deep Learning Algorithm for Collaborative Robotics
When collaborating on a common task, passing or receiving objects such as tools is one of the most common interaction methods among humans. Similarly, it is expected to be a common and important interaction method in fluent and natural human-robot collaboration.
This thesis studied human-robot interaction in the context of a unilateral robot-to-human handover task. More specifically, it focused on grasping an object using a state-of-the-art machine learning algorithm called Guided Uncertainty-Aware Policy Optimization (GUAPO). Within the broader scope of the full GUAPO algorithm, the work was limited to demonstrating the object detection and pose estimation part of the task. This was implemented using an object pose estimation algorithm called Deep Object Pose Estimation (DOPE), a deep learning approach that predicts image keypoints from a sufficiently large set of training data for an object of interest. The challenge of obtaining enough training data for a supervised machine-learning-based machine vision algorithm was tackled by creating a synthetic (computer-generated) dataset. The dataset needed to represent the real-life scenario closely in order to bridge the so-called reality gap. It was created with Unreal Engine 4 (UE4) and the NVIDIA Deep learning Dataset Synthesizer (NDDS).
During the experimental part, a 3D model of the object of interest was created with Blender and imported into the UE4 environment. NDDS was used to create and extract the training dataset for DOPE. DOPE's functionality was successfully tested with a pre-trained network, and it was then shown manually that training the DOPE algorithm on the created dataset is possible. However, a lack of computing power became the limitation of this work, and it was not possible to train the DOPE algorithm enough to recognise the object of interest. The results suggest this to be an effective way to approach training object recognition algorithms, albeit technologically challenging to do from scratch, as knowledge of a broad set of software tools and programming skills is needed.
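DOPE recovers a 6-DoF object pose by predicting the 2D image locations of the object's 3D bounding-cuboid corners and centroid, then solving a perspective-n-point problem against the known cuboid. A minimal NumPy sketch of the projection geometry this relies on follows; the object dimensions and camera intrinsics are made-up example values, not parameters from the thesis:

```python
import numpy as np

# Sketch of the geometry behind DOPE-style pose estimation: the 9 keypoints
# (8 cuboid corners + centroid) of a known object are projected into the
# image with a pinhole camera model. Dimensions and intrinsics are made up.

def cuboid_keypoints(w, h, d):
    """3D keypoints of an object's bounding cuboid: 8 corners + centroid."""
    corners = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return np.vstack([corners, [0.0, 0.0, 0.0]])  # centroid last

def project(points_3d, K, t):
    """Project object-frame points translated by t (rotation omitted for
    brevity) through intrinsic matrix K; returns Nx2 pixel coordinates."""
    cam = points_3d + t                 # object frame -> camera frame
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

K = np.array([[600.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 600.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
kps = cuboid_keypoints(0.1, 0.05, 0.2)           # a 10x5x20 cm object
px = project(kps, K, np.array([0.0, 0.0, 1.0]))  # 1 m in front of camera
print(px[-1])                                    # centroid projects to (cx, cy)
```

Training data generation inverts this picture: the synthetic renderer (UE4/NDDS here) knows the ground-truth pose, so it can emit these 2D keypoint locations as labels, and at inference time a PnP solver recovers the pose from the network's predicted keypoints.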