Light-based nonverbal signaling with passive demonstrations for mobile service robots
With emerging applications poised to bring robots into our daily lives, robots are expected not only to operate in close proximity to humans but also to interact with them. When operating in crowded, human-populated environments, robots face many communication challenges arising from variable levels of interaction (e.g. asking for help, giving information, or navigating near humans). A crucial factor for success in these interactions is a robot's ability to express information about its intent, actions, and knowledge to co-located humans. Many robot platforms developed for service roles have non-anthropomorphic form factors that simplify and tailor them to their jobs. Lacking anthropomorphic features, these robots primarily communicate using an on-screen display and/or spoken language. To overcome the limitation of not communicating as people do, we explore the viability of nonverbal light-based signals as a communication modality for mobile service robots. Such signals offer benefits over existing modalities, which they can complement or replace when appropriate, including long-range visibility and persistence over time. We present a novel light-based signal control architecture implemented as a custom Robot Operating System (ROS) software package, generalized to allow for various signal implementations. We implement our framework on a BWIBot, an autonomous mobile service robot created as part of the Building-Wide Intelligence Project, and evaluate its validity through a real-world user study of a scenario in which a robot and a human traverse a shared corridor from opposite ends, with the potential conflict created when their paths meet.
Our results demonstrate that exposing users to the robot's animated light signal just once, before the moment when the information becomes critical for the user, is sufficient to disambiguate its meaning, and thus greatly enhances its utility in situ, with no direct instruction or training. These findings suggest a paradigm of passive demonstration of light-based signals in future applications.
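The abstract describes a generalized signal control architecture implemented as a ROS package. A rough, standalone sketch of the kind of interface such an architecture might expose is shown below; all class and method names are hypothetical, and the ROS publishing layer is replaced by an in-memory log for illustration:

```python
from abc import ABC, abstractmethod


class LightSignal(ABC):
    """Base class for interchangeable light-based signals (hypothetical API)."""

    @abstractmethod
    def frames(self):
        """Yield per-tick LED states as lists of (r, g, b) tuples."""


class DirectionalSweep(LightSignal):
    """An animated sweep that could indicate intended travel direction."""

    def __init__(self, num_leds=8, color=(0, 255, 0)):
        self.num_leds = num_leds
        self.color = color

    def frames(self):
        off = (0, 0, 0)
        for lit in range(self.num_leds):
            # Light exactly one LED per frame, moving along the strip.
            yield [self.color if i == lit else off for i in range(self.num_leds)]


class SignalController:
    """Plays whichever signal the navigation stack requests."""

    def __init__(self):
        self.log = []  # stands in for publishing frames to LED hardware

    def play(self, signal: LightSignal):
        for frame in signal.frames():
            self.log.append(frame)


controller = SignalController()
controller.play(DirectionalSweep(num_leds=4))
print(len(controller.log))  # 4 frames, one per LED position
```

Keeping animation logic behind an abstract `frames()` interface is one plausible way a package could stay "generalized to allow for various signal implementations", since new signals only need to subclass `LightSignal`.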
Extending Cobot's Motion Intention Visualization by Haptic Feedback
Nowadays, robots are found in a growing number of areas where they
collaborate closely with humans. Enabled by lightweight materials and safety
sensors, these cobots are gaining increasing popularity in domestic care,
supporting people with physical impairments in their everyday lives. However,
when cobots perform actions autonomously, it remains challenging for human
collaborators to understand and predict their behavior, which is crucial for
achieving trust and user acceptance. One significant aspect of predicting cobot
behavior is understanding their motion intention and comprehending how they
"think" about their actions. Moreover, other information sources often occupy
human visual and audio modalities, rendering them frequently unsuitable for
transmitting such information. We work on a solution that communicates cobot
intention via haptic feedback to tackle this challenge. In our concept, we map
planned motions of the cobot to different haptic patterns to extend the visual
intention feedback.
Comment: Final CHI LBW 2023 submission: https://dx.doi.org/10.1145/3544549.358560
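The concept maps planned cobot motions to haptic patterns. A minimal sketch of such a mapping, assuming a wristband with four vibrotactile actuators (the actuator layout, speed scale, and function names are invented for illustration):

```python
# Hypothetical mapping from a planned end-effector motion to a vibrotactile
# pattern: the motion direction selects an actuator, the speed its intensity.

ACTUATORS = ["left", "right", "up", "down"]  # assumed wristband motor layout


def motion_to_haptic(dx, dy, speed, max_speed=0.5):
    """Return (actuator, intensity in 0..1) for a planned motion (dx, dy)."""
    if abs(dx) >= abs(dy):
        actuator = "right" if dx >= 0 else "left"
    else:
        actuator = "up" if dy >= 0 else "down"
    intensity = min(1.0, speed / max_speed)
    return actuator, round(intensity, 2)


print(motion_to_haptic(0.1, 0.4, 0.25))  # ('up', 0.5)
```

Because the haptic channel is separate from vision and audio, a mapping like this could convey motion intention even when, as the abstract notes, those modalities are occupied by other information sources.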
Explainable shared control in assistive robotics
Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency.
There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built on a deep generative model is developed for human intention inference.
Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent.
This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
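The thesis infers intent with a clustering framework built on a deep generative model. As a much-simplified, standard-library-only illustration of the underlying idea of grouping drive commands into interpretable intent clusters, here is a toy 1-D k-means over joystick heading angles; the data, cluster labels, and function names are all invented:

```python
def kmeans_1d(values, iters=20):
    """Toy 1-D k-means with two clusters; the thesis instead learns clusters
    with a deep generative model over richer trajectory features."""
    centroids = [min(values), max(values)]  # crude two-cluster initialisation
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            # Assign each value to the nearest centroid.
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # Recompute centroids as cluster means (keep old value if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters


# Joystick heading angles (radians) from two hypothetical intents:
# "head to the door" (around 0.0) and "head to the desk" (around 1.5).
headings = [0.02, -0.05, 0.1, 1.45, 1.6, 1.55]
centroids, clusters = kmeans_1d(headings)
print(sorted(round(c, 2) for c in centroids))  # [0.02, 1.53]
```

The point of the illustration is interpretability: each learnt centroid corresponds to a nameable destination, which mirrors the thesis's finding that the learnt clusters are meaningful representations of human intent.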
Mediating Human-Robot Collaboration through Mixed Reality Cues
This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas to communicate task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. Additionally, the system can also inform and warn humans about the intentions of the robot and the safety of the workspace. It was hypothesized that using this system for executing a human-robot collaborative task would improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, an experiment involving human subjects was conducted and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task.
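The core of the described pipeline is anchoring a projected cue to a tracked object's pose. A toy sketch of that step is below, reducing it to mapping workspace coordinates to projector pixels; the real system uses calibrated vision-based tracking, and the resolution, workspace size, and linear mapping here are assumptions for illustration:

```python
# Toy version of the projection-mapping step: place a visual cue at a tracked
# object's pose by mapping workspace coordinates (metres) to projector pixels.

PROJ_W, PROJ_H = 1280, 720   # projector resolution (assumed)
WORK_W, WORK_H = 0.8, 0.45   # projected workspace size in metres (assumed)


def workspace_to_pixels(x, y):
    """Map a workspace point (origin at bottom-left) to projector pixels."""
    px = int(x / WORK_W * PROJ_W)
    py = int((1 - y / WORK_H) * PROJ_H)  # image y axis points down
    return px, py


def cue_for_object(obj):
    """Build a just-in-time instruction cue anchored to a tracked object."""
    px, py = workspace_to_pixels(obj["x"], obj["y"])
    return {"anchor": (px, py), "text": f"Pick up {obj['name']}"}


print(cue_for_object({"name": "bolt", "x": 0.4, "y": 0.225}))
```

Re-running `cue_for_object` whenever the tracker reports a new pose is what keeps the projected instruction attached to a moving object, which is the "simultaneous tracking and projection" behaviour the abstract describes.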
A Multi-Agent Control Architecture for a Robotic Wheelchair
Assistant robots such as robotic wheelchairs can perform effective and valuable work in our daily lives. However, they may eventually need external help from humans in the robot's environment (particularly the driver, in the case of a wheelchair) to accomplish safely and efficiently tasks that remain tricky for current technology, e.g. opening a locked door or traversing a crowded area. This article proposes a control architecture for assistant robots, designed from a multi-agent perspective, that facilitates the participation of humans in the robotic system and improves the overall performance of the robot as well as its dependability. Within our design, agents have their own intentions and beliefs, have different abilities (including algorithmic behaviours and human skills), and autonomously learn the most convenient method to carry out their actions through reinforcement learning. The proposed architecture is illustrated with a real assistant robot: a robotic wheelchair that provides mobility to impaired or elderly people.
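The abstract says agents learn through reinforcement learning which of their abilities (algorithmic behaviours or human skills) best carries out an action. A minimal sketch of that idea, assuming a single "tricky" task with two abilities and made-up success rates; the action names and environment are hypothetical:

```python
import random

# One-state tabular Q-learning: an agent learns whether to attempt a tricky
# task (e.g. a locked door) autonomously or to delegate it to the driver.

random.seed(0)
ACTIONS = ["autonomous", "ask_driver"]
q = {a: 0.0 for a in ACTIONS}
ALPHA, EPSILON = 0.1, 0.1


def reward(action):
    # Assumed environment: the driver succeeds far more often than the
    # autonomous behaviour on this particular task.
    p_success = 0.9 if action == "ask_driver" else 0.3
    return 1.0 if random.random() < p_success else 0.0


for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-known ability, sometimes explore.
    a = random.choice(ACTIONS) if random.random() < EPSILON else max(q, key=q.get)
    q[a] += ALPHA * (reward(a) - q[a])

print(max(q, key=q.get))  # with these assumed success rates, delegation wins
```

This captures, in miniature, the architecture's claim that human skills sit alongside algorithmic behaviours as selectable abilities, with reinforcement learning deciding which one an agent should invoke.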