15 research outputs found
Exploring AI-enhanced Shared Control for an Assistive Robotic Arm
Assistive technologies and in particular assistive robotic arms have the
potential to enable people with motor impairments to live a self-determined
life. More and more of these systems have become available for end users in
recent years, such as the Kinova Jaco robotic arm. However, they mostly require
complex manual control, which can overwhelm users. As a result, researchers
have explored ways to let such robots act autonomously. However, at least for
this specific group of users, such an approach has proven futile. Here,
users want to stay in control to achieve a higher level of personal autonomy,
to which an autonomous robot runs counter. In our research, we explore how
Artificial Intelligence (AI) can be integrated into a shared control paradigm.
In particular, we focus on the consequential requirements for the interface
between human and robot and how we can keep humans in the loop while still
significantly reducing the mental load and required motor skills.
Comment: Workshop on Engineering Interactive Systems Embedding AI Technologies
(EIS-embedding-AI) at EICS'2
Extending Cobot's Motion Intention Visualization by Haptic Feedback
Nowadays, robots are found in a growing number of areas where they
collaborate closely with humans. Enabled by lightweight materials and safety
sensors, these cobots are gaining increasing popularity in domestic care,
supporting people with physical impairments in their everyday lives. However,
when cobots perform actions autonomously, it remains challenging for human
collaborators to understand and predict their behavior, which is crucial for
achieving trust and user acceptance. One significant aspect of predicting cobot
behavior is understanding their motion intention and comprehending how they
"think" about their actions. Moreover, other information sources often occupy
human visual and audio modalities, rendering them frequently unsuitable for
transmitting such information. We work on a solution that communicates cobot
intention via haptic feedback to tackle this challenge. In our concept, we map
planned motions of the cobot to different haptic patterns to extend the visual
intention feedback.
Comment: Final CHI LBW 2023 submission:
https://dx.doi.org/10.1145/3544549.358560
How to Communicate Robot Motion Intent: A Scoping Review
Robots are becoming increasingly omnipresent in our daily lives, supporting
us and carrying out autonomous tasks. In Human-Robot Interaction, human actors
benefit from understanding the robot's motion intent to avoid task failures and
foster collaboration. Finding effective ways to communicate this intent to
users has recently received increased research interest. However, no common
language has been established to systematize robot motion intent. This work
presents a scoping review aimed at unifying existing knowledge. Based on our
analysis, we present an intent communication model that depicts the
relationship between robot and human through different intent dimensions
(intent type, intent information, intent location). We discuss these different
intent dimensions and their interrelationships with different kinds of robots
and human roles. Throughout our analysis, we classify the existing research
literature along our intent communication model, allowing us to identify key
patterns and possible directions for future research.
Comment: Interactive Data Visualization of the Paper Corpus:
https://rmi.robot-research.d
AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics
With the ongoing efforts to empower people with mobility impairments and the
increase in technological acceptance by the general public, assistive
technologies, such as collaborative robotic arms, are gaining popularity. Yet,
their widespread success is limited by usability issues, specifically the
disparity between user input and software control along the autonomy continuum.
To address this, shared control concepts provide opportunities to combine the
targeted increase of user autonomy with a certain level of computer assistance.
This paper presents the free and open-source AdaptiX XR framework for
developing and evaluating shared control applications in a high-resolution
simulation environment. The initial framework consists of a simulated robotic
arm with an example scenario in Virtual Reality (VR), multiple standard control
interfaces, and a specialized recording/replay system. AdaptiX can easily be
extended for specific research needs, allowing Human-Robot Interaction (HRI)
researchers to rapidly design and test novel interaction methods, intervention
strategies, and multi-modal feedback techniques, without requiring an actual
physical robotic arm during the early phases of ideation, prototyping, and
evaluation. Also, a Robot Operating System (ROS) integration enables
control of a real robotic arm in a PhysicalTwin approach without any
simulation-reality gap. Here, we review the capabilities and limitations of
AdaptiX in detail and present three bodies of research based on the framework.
AdaptiX can be accessed at https://adaptix.robot-research.de.
Comment: Accepted submission at The 16th ACM SIGCHI Symposium on Engineering
Interactive Computing Systems (EICS'24
In Time and Space: Towards Usable Adaptive Control for Assistive Robotic Arms
Robotic solutions, in particular robotic arms, are becoming more frequently
deployed for close collaboration with humans, for example in manufacturing or
domestic care environments. These robotic arms require the user to control
several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving
grasping and manipulating objects. Standard input devices predominantly have
two DoFs, requiring time-consuming and cognitively demanding mode switches to
select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs)
have been shown to decrease the necessary number of mode switches but have so
far not been able to significantly reduce the perceived workload. Users still bear the
mental workload of incorporating abstract mode switching into their workflow.
We address this by providing feed-forward multimodal feedback using updated
recommendations of ADMC, allowing users to visually compare the current and the
suggested mapping in real-time. We contrast the effectiveness of two new
approaches that a) continuously recommend updated DoF combinations or b) use
discrete thresholds between current robot movements and new recommendations.
Both are compared in a Virtual Reality (VR) in-person study against a classic
control method. Significant results for lowered task completion time, fewer
mode switches, and reduced perceived workload conclusively establish that in
combination with feed-forward feedback, ADMC methods can indeed outperform classic mode
switching. A lack of apparent quantitative differences between Continuous and
Threshold reveals the importance of user-centered customization options.
Including these implications in the development process will improve usability,
which is essential for successfully implementing robotic technologies with high
user acceptance.
HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional Cues
In Human-Computer-Interaction, vibrotactile haptic feedback offers the
advantage of being independent of any visual perception of the environment.
Most importantly, the user's field of view is not obscured by user interface
elements, and the visual sense is not unnecessarily strained. This is
especially advantageous when the visual channel is already busy, or the visual
sense is limited. We developed three design variants based on different
vibrotactile illusions to communicate 3D directional cues. In particular, we
explored two variants based on the vibrotactile illusion of the cutaneous
rabbit and one based on apparent vibrotactile motion. To communicate gradient
information, we combined these with pulse-based and intensity-based mapping. A
subsequent study showed that the pulse-based variants based on the vibrotactile
illusion of the cutaneous rabbit are suitable for communicating both
directional and gradient characteristics. The results further show that a
representation of 3D directions via vibrations can be effective and beneficial.
Comment: CHI EA '23, April 23-28, 2023, Hamburg, German
My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they “see” the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot’s surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.
Adapt or Perish? Exploring the Effectiveness of Adaptive DoF Control Interaction Methods for Assistive Robot Arms
Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device’s input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback from our participants suggests that a longer period of training could further improve the performance of adaptive control methods.
The importance of participatory design for the development of assistive robotic arms: initial approaches and experiences in the research projects MobILe and DoF-Adaptiv
This article introduces two research projects on assistive robotic arms for people with severe body impairments. Both projects aim to develop new control and interaction designs to promote accessibility and better performance for people with functional losses in all four extremities, e.g. due to quadriplegia or multiple sclerosis. The project MobILe concentrates on using a robotic arm as a drinking aid and controlling it with smart glasses, eye-tracking and augmented reality. A user-oriented development process with participatory methods was pursued, which brought new knowledge about the life and care situation of the future target group and the requirements a robotic drinking aid needs to meet. As a consequence, the new project DoF-Adaptiv follows an even more participatory approach, including the future target group, their families and professional caregivers in decision-making and development processes from the beginning of the project. DoF-Adaptiv aims to simplify the control modalities of assistive robotic arms to enhance their usability for activities of daily living. To decide on exemplary activities, like eating or opening a door, the future target group, their families and professional caregivers are included in the decision-making process. Furthermore, all relevant stakeholders will be included in the investigation of ethical, legal and social implications as well as the identification of potential risks. This article shows the importance of participatory design for the development and research process in MobILe and DoF-Adaptiv.