Cooperative Navigation for Mixed Human–Robot Teams Using Haptic Feedback
In this paper, we present a novel cooperative navigation control for human–robot teams. Assuming that a human wants to reach a final location in a large environment with the help of a mobile robot, the robot must steer the human from the initial to the target position. The challenges posed by cooperative human–robot navigation are typically addressed by using haptic feedback via physical interaction. In contrast, in this paper we describe a different approach, in which the human–robot interaction is achieved via wearable vibrotactile armbands. In the proposed work, the subject is free to decide her/his own pace. A warning vibrational signal is generated by the haptic armbands when a large deviation with respect to the desired pose is detected by the robot. The proposed method has been evaluated in a large indoor environment, where 15 blindfolded human subjects were asked to follow the haptic cues provided by the robot. The participants had to reach a target area while avoiding static and dynamic obstacles. Experimental results revealed that the blindfolded subjects were able to avoid the obstacles and safely reach the target in all of the performed trials. A comparison is provided between the results obtained with blindfolded users and those of experiments performed with sighted people.
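A deviation-triggered vibrotactile cue of the kind described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's controller: the deadband threshold, the left/right cue convention, and the function name are all assumptions.

```python
import math

def armband_cue(pos, heading, target, deadband=0.35):
    """Hypothetical sketch: vibrate only when the bearing error toward the
    target exceeds a deadband (radians), leaving the walking pace entirely
    to the user. Returns which armband side should vibrate."""
    bearing = math.atan2(target[1] - pos[1], target[0] - pos[0])
    # Wrap the heading error into (-pi, pi].
    error = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    if abs(error) <= deadband:
        return "none"                         # on course: no vibration
    return "left" if error > 0 else "right"   # warn toward the correction side
```

For example, a user at the origin heading along the x-axis with the target at (5, 5) would receive a "left" cue, while a target straight ahead produces no vibration.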
Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks
Social intelligence is an important requirement for enabling robots to
collaborate with people. In particular, human path prediction is an essential
capability for robots in that it prevents potential collision with a human and
allows the robot to safely make larger movements. In this paper, we present a
method for predicting the trajectory of a human who follows a haptic robotic
guide without using sight, which is valuable for assistive robots that aid the
visually impaired. We apply a deep learning method based on recurrent neural
networks using multimodal data: (1) human trajectory, (2) movement of the
robotic guide, (3) haptic input data measured from the physical interaction
between the human and the robot, (4) human depth data. We collected actual
human trajectory and multimodal response data through indoor experiments. Our
model outperformed the baseline while using only the robot data with the
observed human trajectory, and it showed even better results when using
additional haptic and depth data.
Comment: 6 pages, submitted to IEEE World Haptics Conference 201
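The recurrent predictor described above can be caricatured with a tiny vanilla RNN over concatenated multimodal frames. This is a hedged sketch only: the class name, feature layout, and sizes are illustrative assumptions, not the paper's architecture (which is a deep learning model, likely LSTM/GRU-based).

```python
import numpy as np

class TrajectoryRNN:
    """Minimal vanilla-RNN sketch of a multimodal human-path predictor.
    Each input frame concatenates illustrative features, e.g.
    [human xy, robot-guide xy, haptic force, depth features]."""

    def __init__(self, in_dim=8, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.1, (hidden, in_dim))   # input weights
        self.Wh = rng.normal(0.0, 0.1, (hidden, hidden))   # recurrent weights
        self.Wo = rng.normal(0.0, 0.1, (2, hidden))        # readout to next (x, y)
        self.b = np.zeros(hidden)

    def predict(self, sequence):
        """Roll the hidden state over the observed frames, then read out
        a prediction of the human's next 2-D position."""
        h = np.zeros(self.Wh.shape[0])
        for x in sequence:                   # one multimodal frame per time step
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
        return self.Wo @ h
```

In practice such a model would be trained on the collected trajectory data; the point here is only the data flow from multimodal frames to a predicted next position.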
Haptic Interaction with a Guide Robot in Zero Visibility
Search and rescue operations are often undertaken in dark and noisy environments in which the rescue team must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts, or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. The potential of robot swarms for search and rescue was shown by the Guardians project (EU, 2006-2010); however, the project also revealed the problem of human-robot interaction in smoky (zero-visibility) and noisy conditions. The REINS project (UK, 2011-2015) focused on human-robot interaction in such conditions. This research is a body of work (done as part of the REINS project) which investigates the haptic interaction of a person with a guide robot in zero visibility. The thesis firstly reflects upon real-world scenarios where people make use of the haptic sense to interact in zero visibility (such as interaction among firefighters and the symbiotic relationship between visually impaired people and guide dogs). In addition, it reflects on the sensitivity and trainability of the haptic sense to be used for the interaction. The thesis presents an analysis and evaluation of the design of a physical interface (designed by the consortium of the REINS project) connecting the human and the robotic guide in poor visibility conditions. Finally, it lays a foundation for the design of test cases to evaluate human-robot haptic interaction, taking into consideration the two aspects of the interaction, namely locomotion guidance and environmental exploration.
Trust-Based Control of (Semi)Autonomous Mobile Robotic Systems
Despite great achievements made in (semi)autonomous robotic systems, human participation is still an essential part, especially for decision-making about the autonomy allocation of robots in complex and uncertain environments. However, human decisions may not be optimal due to limited cognitive capacities and subjective human factors. In human-robot interaction (HRI), trust is a major factor that determines humans' use of autonomy. Over- or under-trust may lead to disproportionate autonomy allocation, resulting in decreased task performance and/or increased human workload. In this work, we develop automated decision-making aids utilizing computational trust models to help human operators achieve a more effective and unbiased allocation. Our proposed decision aids resemble the way that humans make an autonomy allocation decision; however, they are unbiased and aim to reduce human workload, improve the overall performance, and achieve higher acceptance by the human. We consider two types of autonomy control schemes for (semi)autonomous mobile robotic systems. The first type is a two-level control scheme which switches between either manual or autonomous control modes. For this type, we propose automated decision aids via a computational trust and self-confidence model. We provide analytical tools to investigate the steady-state effects of the proposed autonomy allocation scheme on robot performance and human workload. We also develop an autonomous decision pattern correction algorithm using nonlinear model predictive control to help the human gradually adapt to a better allocation pattern. The second type is a mixed-initiative bilateral teleoperation control scheme which requires mixing of autonomous and manual control. For this type, we utilize computational two-way trust models. Here, mixed initiative is enabled by scaling the manual and autonomous control inputs with a function of the computational human-to-robot trust.
The haptic force feedback cue sent by the robot is dynamically scaled with a function of the computational robot-to-human trust to reduce the human's physical workload. Using the proposed control schemes, our human-in-the-loop tests show that the trust-based automated decision aids generally improve the overall robot performance and reduce the operator workload compared to a manual allocation scheme. The proposed decision aids are also generally preferred and trusted by the participants. Finally, the trust-based control schemes are extended to single-operator-multi-robot applications. A theoretical control framework is developed for these applications, and the stability and convergence issues under the switching scheme between different robots are addressed via passivity-based measures.
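The trust-scaled mixing described above can be sketched in a few lines. The linear blend and linear feedback scaling below are illustrative assumptions, not the thesis's exact control laws; the function and argument names are hypothetical.

```python
def blend_control(u_manual, u_auto, trust_h2r, trust_r2h, f_haptic):
    """Hedged sketch of trust-based mixed-initiative control:
    - the manual and autonomous inputs are mixed with a weight driven by
      the computational human-to-robot trust (trust_h2r), and
    - the haptic force cue is scaled by robot-to-human trust (trust_r2h)."""
    alpha = max(0.0, min(1.0, trust_h2r))          # clamp trust weight to [0, 1]
    u = alpha * u_auto + (1.0 - alpha) * u_manual  # mixed control input
    feedback = trust_r2h * f_haptic                # scaled haptic cue to the human
    return u, feedback
```

With `trust_h2r = 0.25`, for instance, the manual input dominates the blend (weight 0.75), matching the intuition that low trust in the autonomy shifts initiative to the operator.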
Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network
Training a robot that engages with people is challenging, because it is
expensive to involve people in a robot training process requiring numerous data
samples. This paper proposes a human path prediction network (HPPN) and an
evolution strategy-based robot training method using virtual human movements
generated by the HPPN, which compensates for this sample inefficiency problem.
We applied the proposed method to the training of a robotic guide for visually
impaired people, which was designed to collect multimodal human response data
and reflect such data when selecting the robot's actions. We collected 1,507
real-world episodes for training the HPPN and then generated over 100,000
virtual episodes for training the robot policy. User test results indicate that
our trained robot accurately guides blindfolded participants along a goal path.
In addition, with the reward designed to pursue both guidance accuracy and human
comfort during robot policy training, our robot improves the smoothness of human
motion while maintaining guidance accuracy. This sample-efficient training method
is expected to be widely applicable to all robots and computing machinery that
physically interact with humans.
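The evolution-strategy training on virtual episodes described above can be sketched as follows. This is a generic ES loop under stated assumptions: the `rollout` callable stands in for episodes generated by a learned human model such as the HPPN, and all hyperparameters and the toy reward are illustrative, not the paper's.

```python
import numpy as np

def train_es(rollout, dim, iters=200, pop=32, sigma=0.1, lr=0.05, seed=0):
    """Minimal evolution-strategy training loop: perturb the policy
    parameters, score each perturbation on virtual episodes, and move the
    parameters along the reward-weighted average of the noise. No real
    human is needed per update."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(iters):
        noise = rng.normal(size=(pop, dim))                # parameter perturbations
        rewards = np.array([rollout(theta + sigma * n) for n in noise])
        rewards -= rewards.mean()                          # baseline for variance reduction
        theta += lr / (pop * sigma) * noise.T @ rewards    # reward-weighted noise average
    return theta

# Toy virtual episode: reward peaks when the policy parameters hit a hidden
# "accurate and comfortable guidance" optimum (purely illustrative).
target = np.array([0.5, -0.3, 0.8])
theta = train_es(lambda p: -np.sum((p - target) ** 2), dim=3)
```

The appeal for this setting is that each of the tens of thousands of rollouts is cheap and simulated, so the expensive human-in-the-loop data is only needed once, to train the human model.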
Trajectory Deformations from Physical Human-Robot Interaction
Robots are finding new applications where physical interaction with a human
is necessary: manufacturing, healthcare, and social tasks. Accordingly, the
field of physical human-robot interaction (pHRI) has leveraged impedance
control approaches, which support compliant interactions between human and
robot. However, a limitation of traditional impedance control is that, despite
provisions for the human to modify the robot's current trajectory, the human
cannot affect the robot's future desired trajectory through pHRI. In this
paper, we present an algorithm for physically interactive trajectory
deformations which, when combined with impedance control, allows the human to
modulate both the actual and desired trajectories of the robot. Unlike related
works, our method explicitly deforms the future desired trajectory based on
forces applied during pHRI, but does not require constant human guidance. We
present our approach and verify that this method is compatible with traditional
impedance control. Next, we use constrained optimization to derive the
deformation shape. Finally, we describe an algorithm for real time
implementation, and perform simulations to test the arbitration parameters.
Experimental results demonstrate reduction in the human's effort and
improvement in the movement quality when compared to pHRI with impedance
control alone.
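The core idea above, sensed interaction forces reshaping the future desired waypoints through a smooth deformation shape, can be sketched as follows. This is a hedged illustration: it uses a second-difference smoothness operator and a unit-peak normalized shape as assumptions, rather than the paper's exact constrained-optimization deformation, and `mu` is an arbitrary admittance-like gain.

```python
import numpy as np

def deform_trajectory(waypoints, force, mu=0.1):
    """Sketch of a physically interactive trajectory deformation: a force
    applied at the current waypoint smoothly reshapes the remaining
    desired waypoints, instead of only perturbing the instantaneous pose.
    `waypoints` is (n, 2); `force` is the sensed 2-D interaction force."""
    n = len(waypoints)
    # Discrete second-difference (smoothness) operator with implicit zero
    # deformation just outside both ends of the segment.
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    H = np.linalg.inv(A.T @ A)        # smooth response to a point force
    shape = H @ np.eye(n)[0]          # deformation shape for force at waypoint 0
    shape /= np.abs(shape).max()      # unit-peak shape; mu sets the magnitude
    return waypoints + mu * np.outer(shape, force)
```

Combined with impedance control, a scheme like this lets the same human push both displace the robot now and bend where it intends to go next.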
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase independence of
elderly and disabled people. It gives a brief overview of the existing uses of service robots by disabled and elderly
people, surveys advances in technology which will make new uses possible, and provides suggestions for some of these new
applications. The paper also considers the design and other conditions to be met for user acceptance, discusses
the complementarity of assistive service robots and personal assistance, and considers the types of applications and
users for which service robots are and are not suitable.
A Hierarchical Variable Autonomy Mixed-Initiative Framework for Human-Robot Teaming in Mobile Robotics
This paper presents a Mixed-Initiative (MI) framework for addressing the
problem of control authority transfer between a remote human operator and an AI
agent when cooperatively controlling a mobile robot. Our Hierarchical
Expert-guided Mixed-Initiative Control Switcher (HierEMICS) leverages
information on the human operator's state and intent. The control switching
policies are based on a criticality hierarchy. An experimental evaluation was
conducted in a high-fidelity simulated disaster response and remote inspection
scenario, comparing HierEMICS with a state-of-the-art Expert-guided
Mixed-Initiative Control Switcher (EMICS) in the context of mobile robot
navigation. Results suggest that HierEMICS reduces conflicts for control
between the human and the AI agent, which is a fundamental challenge in both
the MI control paradigm and the related shared control paradigm.
Additionally, we provide statistically significant evidence of improved
navigational safety (i.e., fewer collisions), LOA switching efficiency, and
reduced conflict for control.
Comment: 6 pages, 4 figures, ICHMS 2022, first two authors contributed equally
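A criticality-ordered control switcher of the kind sketched above can be illustrated with a toy rule set. Everything here is an assumption for illustration: the thresholds, signal names, and two-layer ordering are not HierEMICS's actual policies, only an example of how a hierarchy avoids conflicting switch decisions.

```python
def switch_loa(current_loa, operator_stress, task_criticality,
               stress_limit=0.7, criticality_limit=0.5):
    """Illustrative hierarchical LOA (Level of Autonomy) switcher:
    safety-critical conditions are evaluated before operator-state ones,
    so the two layers cannot issue conflicting switches in one step."""
    if task_criticality > criticality_limit:
        return "autonomy"        # higher layer: safety-critical, AI takes over
    if operator_stress > stress_limit:
        return "autonomy"        # lower layer: operator appears overloaded
    return current_loa           # no condition fires: keep the current LOA
```

The point of the hierarchy is precedence: when both a safety condition and an operator-state condition fire, only the higher-criticality layer decides, which is one way to reduce conflicts for control.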