
    Preferred Interaction Styles for Human-Robot Collaboration Vary Over Tasks With Different Action Types

    How do humans want to interact with collaborative robots? As robots become more common and useful not only in industry but also in the home, they will need to interact with humans to complete many varied tasks. Previous studies have demonstrated that autonomous robots are often more efficient and preferred over those that need to be commanded, or those that give instructions to humans. We believe that the types of actions that make up a task affect participants' preferences for different interaction styles. In this work, our goal is to explore tasks with different action types together with different interaction styles, to find the specific situations in which different interaction styles are preferred. We have identified several classifications for table-top tasks and have developed a set of tasks that vary along two of these dimensions, together with a set of different interaction styles that the robot can use to choose actions. We report on results from a series of human-robot interaction studies involving a PR2 completing table-top tasks with a human. The results suggest that people prefer robot-led interactions for tasks with a higher cognitive load and human-led interactions for joint actions.

    Social Roles and Baseline Proxemic Preferences for a Domestic Service Robot

    © The Author(s) 2014. This article is published with open access at Springerlink.com and is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. The work described in this paper was conducted within the EU Integrated Project LIREC (LIving with Robots and intEractive Companions), funded by the European Commission under contract number FP7 215554, and was partly funded by the ACCOMPANY project, part of the European Union's Seventh Framework Programme (FP7/2007–2013), under grant agreement n° 287624. The goal of our research is to develop socially acceptable behavior for domestic robots in a setting where a user and the robot share the same physical space and interact with each other in close proximity. Specifically, our research focuses on approach distances and directions in the context of a robot handing over an object to a user.

    Hands-Off Therapist Robot Behavior Adaptation to User Personality for Post-Stroke Rehabilitation Therapy

    This paper describes a hands-off therapist robot that monitors, assists, encourages, and socially interacts with post-stroke users during rehabilitation exercises. We developed a behavior adaptation system that takes advantage of the user's introversion-extroversion personality trait and the number of exercises performed in order to adjust the robot's social interaction parameters (e.g., interaction distances/proxemics, speed, and vocal content) toward a customized post-stroke rehabilitation therapy. The experimental results demonstrate the robot's autonomous behavior adaptation to the user's personality and the resulting user improvements in exercise task performance.
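
    A minimal, hypothetical sketch of the kind of personality-driven parameter mapping described above (interaction distance, speed, and vocal style adjusted from an introversion-extroversion score and the exercise count). The names, thresholds, and coefficients are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of personality-driven behavior adaptation.
# All values and thresholds are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class InteractionParams:
    distance_m: float   # proxemic distance between robot and user
    speed_mps: float    # robot movement speed
    vocal_style: str    # "nurturing" vs. "challenging" encouragement


def adapt_params(extroversion: float, exercises_done: int) -> InteractionParams:
    """Map a 0..1 introversion-extroversion score and an exercise count to
    interaction parameters: a more extroverted user gets a closer, faster,
    more challenging robot (assumed relationship, for illustration only)."""
    distance = 1.5 - 0.7 * extroversion   # extroverts tolerate closer approach
    speed = 0.2 + 0.3 * extroversion      # and faster movement
    style = "challenging" if extroversion > 0.5 else "nurturing"
    if exercises_done > 20:               # back off once many exercises are done
        speed *= 0.8
    return InteractionParams(distance, speed, style)


if __name__ == "__main__":
    print(adapt_params(extroversion=0.8, exercises_done=25))
```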

    Putting a Face on Algorithms: Personas for Modeling Artificial Intelligence

    We propose a new type of persona, the artificial intelligence (AI) persona, as a tool for designing systems consisting of both human and AI agents. Personas are commonly used in design practice for modelling users. We argue that the personification of AI agents can help multidisciplinary teams understand and design systems that include AI agents. We propose a process for creating AI personas and the properties they should include, and report on our first experience using them. The case we selected for our exploration of AI personas was the design of a highly automated decision support tool for air traffic control. Our first results indicate that AI personas helped designers empathise with algorithms and enabled better communication within a team of designers, AI experts, and domain experts. We call for a research agenda on AI personas and for discussion of the potential benefits and pitfalls of this approach.

    Task Planning and Execution for Human Robot Team Performing a Shared Task in a Shared Workspace

    A cyber-physical system is developed to enable a human-robot team to perform a shared task in a shared workspace. The system setup is suitable for implementing a tabletop manipulation task, a common human-robot collaboration scenario. The system integrates elements that exist in both the physical (real) and the virtual world. In this work, we report the insights we gathered while exploring, understanding, and implementing task planning and execution for human-robot teams.

    Demonstration of Object Recognition Using DOPE Deep Learning Algorithm for Collaborative Robotics

    When collaborating on a common task, passing or receiving objects such as tools is one of the most common forms of interaction among humans. Similarly, it is expected to be a common and important interaction method in fluent and natural human-robot collaboration. This thesis studied human-robot interaction in the context of a unilateral robot-to-human handover task. More specifically, it focused on grasping an object using a state-of-the-art machine learning algorithm called Guided Uncertainty-Aware Policy Optimization (GUAPO). Within the broader scope of the full GUAPO algorithm, the work was limited to demonstrating the object detection and pose estimation part of the task. This was implemented using an object pose estimation algorithm called Deep Object Pose Estimation (DOPE). DOPE is a deep learning approach that predicts image keypoints after training on a sufficiently large dataset of an object-of-interest. The challenge of obtaining enough training data for a supervised machine learning-based machine vision algorithm was tackled by creating a synthetic (computer-generated) dataset. The dataset needed to represent the real-life scenario closely enough to bridge the so-called reality gap. This dataset was created with Unreal Engine 4 (UE4) and the NVIDIA Deep learning Dataset Synthesizer (NDDS). During the experimental part, a 3D model of the object-of-interest was created using Blender and imported into the UE4 environment. NDDS was then used to create and extract the training dataset for DOPE. DOPE's functionality was successfully tested with a pre-trained network, and it was shown manually that training the DOPE algorithm with the created dataset is possible. However, the lack of computing power became the limitation of this work, and it was not possible to train the DOPE algorithm enough to recognize the object-of-interest. The results show this to be an effective way to approach training object recognition algorithms, albeit technologically challenging to do from scratch, as knowledge of a broad set of software tools and programming skills is needed.
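
    The abstract describes a DOPE-style pipeline in which a network predicts the 2D image keypoints of an object's 3D bounding cuboid, and the 6-DoF pose is then recovered geometrically. The sketch below illustrates only that final pose-recovery step using OpenCV's PnP solver; the object dimensions, camera intrinsics, and keypoint coordinates are placeholder values, and this is not NVIDIA's DOPE implementation.

```python
# Simplified sketch of the pose-recovery stage of a DOPE-style pipeline:
# the network's predicted 2D keypoints for the 8 corners of the object's
# 3D bounding cuboid are fed to a PnP solver. All numbers are placeholders.
import numpy as np
import cv2

# 3D cuboid corners of the object-of-interest in its own frame (metres).
dims = np.array([0.10, 0.06, 0.04])          # assumed width, height, depth
half = dims / 2.0
cuboid_3d = np.array([[sx * half[0], sy * half[1], sz * half[2]]
                      for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                     dtype=np.float64)

# 2D keypoints (pixels) that a trained network might output for one detection.
keypoints_2d = np.array([[312, 240], [355, 238], [310, 290], [354, 288],
                         [300, 250], [345, 247], [298, 298], [343, 296]],
                        dtype=np.float64)

# Pinhole camera intrinsics of the RGB camera (fx, fy, cx, cy), assumed known.
K = np.array([[615.0, 0.0, 320.0],
              [0.0, 615.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)                    # assume an undistorted image

# Recover the object's 6-DoF pose from 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(cuboid_3d, keypoints_2d, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)               # rotation matrix of the object
    print("object position (m):", tvec.ravel())
    print("object rotation:\n", R)
```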