1,580 research outputs found
Facilitating HRI by Mixed Reality Techniques
Renner P, Lier F, Friese F, Pfeiffer T, Wachsmuth S. Facilitating HRI by Mixed Reality Techniques. In: HRI '18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion. ACM/IEEE; 2018.
Mobile robots are starting to appear in our everyday life, e.g., in shopping malls, airports, nursing homes, or warehouses. Often, these robots are operated by non-technical staff with no prior experience or education in robotics. Additionally, as with all new technology, there is a certain reservedness when it comes to accepting robots in our personal space.
In this work, we propose making use of state-of-the-art Mixed Reality (MR) technology to facilitate acceptance of and interaction with mobile robots. By integrating a Microsoft HoloLens into the robot's operating space, the MR device can be used to a) visualize the robot's behavior state and sensor data, b) visually notify the user about planned or future behavior and possible problems or obstacles of the robot, and c) actively serve as an additional external sensor source. Moreover, by using the HoloLens, users can operate and interact with the robot without being close to it, as the robot is able to "sense with the users' eyes".
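To make the visualization pipeline concrete, here is a minimal sketch of how a robot could expose its planned path to an MR headset. It assumes ROS 1 with rospy, a planner publishing nav_msgs/Path, and a HoloLens client subscribed through rosbridge; the topic names are illustrative assumptions, not taken from the paper.

```python
# Relay the robot's planned path to a topic an MR headset can render,
# so the user sees intended motion before the robot executes it.
# Assumptions: ROS 1 (rospy), rosbridge on the HoloLens side, and
# illustrative topic names -- this is not the paper's actual code.
import rospy
from nav_msgs.msg import Path

def relay_plan(msg, pub):
    pub.publish(msg)  # forward unchanged; the headset does the drawing

if __name__ == "__main__":
    rospy.init_node("mr_plan_relay")
    pub = rospy.Publisher("/mr/planned_path", Path, queue_size=1)
    rospy.Subscriber("/move_base/NavfnROS/plan", Path,
                     relay_plan, callback_args=pub)
    rospy.spin()
```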
An Expressive Robotic Table to Enhance Social Interactions
We take initial steps toward prototyping an expressive robotic table that can serve as a social mediator. The work is constructed through a rapid prototyping process consisting of five workshop-based phases with five interaction design participants. We report on the various prototyping techniques that led to the generated concept of an expressive robotic table. Our design process explores how expressive motion cues such as respiratory movements can be leveraged to mediate social interactions between people in cold outdoor environments. We conclude by discussing the implications of the different prototyping methods applied and the envisioned future directions of the work within the scope of expressive robotics.
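As a concrete illustration of such a motion cue, the sketch below generates a slow "breathing" setpoint for a vertical actuator. The amplitude, breathing rate, and millimetre units are illustrative assumptions, not parameters from the prototype.

```python
# Toy "respiratory" motion cue: a smooth sinusoidal rise and fall
# resembling calm human breathing (~12 breaths per minute).
# All parameters are illustrative assumptions, not from the prototype.
import math
import time

def breathing_setpoint(t, amplitude_mm=5.0, breaths_per_min=12.0):
    omega = 2.0 * math.pi * breaths_per_min / 60.0
    return amplitude_mm * 0.5 * (1.0 - math.cos(omega * t))

start = time.time()
while time.time() - start < 10.0:            # run the cue for 10 seconds
    z = breathing_setpoint(time.time() - start)
    print(f"actuator setpoint: {z:.2f} mm")  # stand-in for an actuator command
    time.sleep(0.05)
```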
AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies such as collaborative robotic arms are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. In addition, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
Comment: Accepted submission at The 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS'24).
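The shared-control idea at the heart of such frameworks can be sketched as a simple blend of user input and an assistance policy along the autonomy continuum. This is a generic illustration, not AdaptiX's actual API; the velocities and the blending parameter are invented for the example.

```python
# Generic shared-control blend: alpha = 0 is pure teleoperation,
# alpha = 1 is full autonomy. Not AdaptiX's actual API; all values
# are invented for illustration.
import numpy as np

def blend_command(user_vel, assist_vel, alpha):
    user_vel = np.asarray(user_vel, dtype=float)
    assist_vel = np.asarray(assist_vel, dtype=float)
    return (1.0 - alpha) * user_vel + alpha * assist_vel

user = [0.10, 0.00, 0.00]     # m/s, e.g. from a joystick
assist = [0.07, 0.04, -0.01]  # m/s, e.g. from a hypothetical grasp planner
print(blend_command(user, assist, alpha=0.5))  # -> [ 0.085  0.02  -0.005]
```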
Proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013)
"This book contains the proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET) 2013 which was held on 16.-17.September 2013 in Paphos (Cyprus) in conjunction with the EC-TEL conference. The workshop and hence the proceedings are divided in two parts: on Day 1 the EuroPLOT project and its results are introduced, with papers about the specific case studies and their evaluation. On Day 2, peer-reviewed papers are presented which address specific topics and issues going beyond the EuroPLOT scope. This workshop is one of the deliverables (D 2.6) of the EuroPLOT project, which has been funded from November 2010 – October 2013 by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission through the Lifelong Learning Programme (LLL) by grant #511633. The purpose of this project was to develop and evaluate Persuasive Learning Objects and Technologies (PLOTS), based on ideas of BJ Fogg. The purpose of this workshop is to summarize the findings obtained during this project and disseminate them to an interested audience. Furthermore, it shall foster discussions about the future of persuasive technology and design in the context of learning, education and teaching. The international community working in this area of research is relatively small. Nevertheless, we have received a number of high-quality submissions which went through a peer-review process before being selected for presentation and publication. We hope that the information found in this book is useful to the reader and that more interest in this novel approach of persuasive design for teaching/education/learning is stimulated. We are very grateful to the organisers of EC-TEL 2013 for allowing to host IWEPLET 2013 within their organisational facilities which helped us a lot in preparing this event. I am also very grateful to everyone in the EuroPLOT team for collaborating so effectively in these three years towards creating excellent outputs, and for being such a nice group with a very positive spirit also beyond work. And finally I would like to thank the EACEA for providing the financial resources for the EuroPLOT project and for being very helpful when needed. This funding made it possible to organise the IWEPLET workshop without charging a fee from the participants.
Exploring of Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms
Robotic arms, integral to domestic care for individuals with motor impairments, enable them to perform Activities of Daily Living (ADLs) independently, reducing dependence on human caregivers. These collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects. Conventional input devices, typically limited to two DoFs, necessitate frequent and complex mode switches to control individual DoFs. Modern adaptive controls with feed-forward multi-modal feedback reduce the overall task completion time, the number of mode switches, and cognitive load. Despite the variety of input devices available, their effectiveness in adaptive settings with assistive robotics has yet to be thoroughly assessed. This study explores three different input devices by integrating them into an established XR framework for assistive robotics, evaluating them in a preliminary study and providing empirical insights for future developments.
Comment: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction.
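To illustrate why mode switching matters, the sketch below shows the conventional discrete scheme that adaptive controls aim to improve on: a 2-DoF input device drives a 7-DoF arm two axes at a time, and the user must cycle modes to reach the remaining DoFs. The mode table is an illustrative assumption, not the study's actual mapping.

```python
# Conventional discrete mode switching: a 2-DoF input drives a 7-DoF
# arm two axes at a time. The mode table is an illustrative assumption,
# not the study's actual mapping.
MODES = [
    ("translate x/y", (0, 1)),
    ("translate z / yaw", (2, 3)),
    ("pitch / roll", (4, 5)),
    ("gripper", (6, None)),
]

def apply_input(mode_idx, axis_a, axis_b, dof_vels):
    """Write the 2-DoF input into the DoFs selected by the current mode."""
    name, (i, j) = MODES[mode_idx]
    dof_vels[i] = axis_a
    if j is not None:
        dof_vels[j] = axis_b
    return name, dof_vels

vels = [0.0] * 7
print(apply_input(0, 0.3, -0.1, vels))
# Reaching any other DoF requires a mode switch -- the overhead that
# adaptive, AI-enhanced controls aim to reduce.
```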
Workspace Optimization Techniques to Improve Prediction of Human Motion During Human-Robot Collaboration
Understanding human intentions is critical for safe and effective human-robot collaboration. While state-of-the-art methods for human goal prediction utilize learned models to account for the uncertainty of human motion data, that data is inherently stochastic and high-variance, hindering those models' utility for interactions requiring coordination, including safety-critical or close-proximity tasks. Our key insight is that robot teammates can deliberately configure shared workspaces prior to interaction in order to reduce the variance in human motion, realizing classifier-agnostic improvements in goal prediction. In this work, we present an algorithmic approach for a robot to arrange physical objects and project "virtual obstacles" using augmented reality in shared human-robot workspaces, optimizing for human legibility over a given set of tasks. We compare our approach against other workspace arrangement strategies in two human-subjects studies, one in a virtual 2D navigation domain and the other in a live tabletop manipulation domain involving a robotic manipulator arm. We evaluate the accuracy of human motion prediction models learned from each condition, demonstrating that our workspace optimization technique with virtual obstacles leads to higher robot prediction accuracy using less training data.
Comment: International Conference on Human-Robot Interaction.
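The core idea, shaping the workspace so that reaches toward different goals become easier to tell apart, can be illustrated with a toy score: place the candidate virtual obstacle that most separates the predicted reach paths to two goals. The geometry and scoring below are illustrative assumptions, not the authors' algorithm.

```python
# Toy version of classifier-agnostic workspace shaping: choose the
# "virtual obstacle" placement that most separates the predicted reach
# paths to two goals. Geometry and scoring are illustrative assumptions,
# not the authors' algorithm.
import numpy as np

def reach_path(start, goal, obstacle, n=50, push=0.15, radius=0.3):
    """Straight-line reach, locally deflected away from the obstacle."""
    pts = np.linspace(start, goal, n)
    away = pts - obstacle
    d = np.linalg.norm(away, axis=1, keepdims=True)
    return pts + push * away / np.maximum(d, 1e-6) * (d < radius)

def separation(start, goals, obstacle):
    """Mean distance between the two predicted paths (larger = more legible)."""
    p0 = reach_path(start, goals[0], obstacle)
    p1 = reach_path(start, goals[1], obstacle)
    return float(np.linalg.norm(p0 - p1, axis=1).mean())

start = np.array([0.0, 0.0])
goals = [np.array([1.0, 0.4]), np.array([1.0, -0.4])]
candidates = [np.array([0.5, y]) for y in np.linspace(-0.2, 0.2, 9)]
best = max(candidates, key=lambda c: separation(start, goals, c))
print("place virtual obstacle at", best)
```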
Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations and key design strategies, and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
Viewfinder: final activity report
The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources.
The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation.
The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they will autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any nodes in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a condensed yet relevant overview of the ground and of the robots and human rescue workers operating there.
Improving Human-Robot Handover Research by Mixed Reality Techniques
Meyer zu Borgsen S, Renner P, Lier F, Pfeiffer T, Wachsmuth S. Improving Human-Robot Handover Research by Mixed Reality Techniques. In: VAM-HRI 2018. The Inaugural International Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction. Proceedings; 2018.