48 research outputs found

    Trust-Based Control of (Semi)Autonomous Mobile Robotic Systems

    Get PDF
    Despite great achievements in (semi)autonomous robotic systems, human participation remains essential, especially for decisions about the autonomy allocation of robots in complex and uncertain environments. However, human decisions may not be optimal due to limited cognitive capacity and subjective human factors. In human-robot interaction (HRI), trust is a major factor determining how humans use autonomy. Over- or under-trust may lead to disproportionate autonomy allocation, resulting in decreased task performance and/or increased human workload. In this work, we develop automated decision-making aids that use computational trust models to help human operators achieve a more effective and unbiased allocation. The proposed decision aids resemble the way humans make autonomy allocation decisions but are unbiased; they aim to reduce human workload, improve overall performance, and achieve higher acceptance by the human. We consider two types of autonomy control schemes for (semi)autonomous mobile robotic systems. The first is a two-level control scheme that switches between manual and autonomous control modes. For this type, we propose automated decision aids based on a computational trust and self-confidence model. We provide analytical tools to investigate the steady-state effects of the proposed autonomy allocation scheme on robot performance and human workload, and we develop an autonomous decision-pattern correction algorithm using nonlinear model predictive control to help the human gradually adapt to a better allocation pattern. The second is a mixed-initiative bilateral teleoperation control scheme that requires mixing autonomous and manual control. For this type, we utilize computational two-way trust models: mixed initiative is enabled by scaling the manual and autonomous control inputs with a function of the computational human-to-robot trust, and the haptic force feedback cue sent by the robot is dynamically scaled with a function of the computational robot-to-human trust to reduce the human's physical workload. Using the proposed control schemes, our human-in-the-loop tests show that the trust-based automated decision aids generally improve overall robot performance and reduce operator workload compared to a manual allocation scheme, and the decision aids are generally preferred and trusted by the participants. Finally, the trust-based control schemes are extended to single-operator multi-robot applications; a theoretical control framework is developed for these applications, and the stability and convergence issues arising under the switching scheme between different robots are addressed via passivity-based measures.
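
    As a rough illustration of the mixed-initiative idea described above, the following sketch blends manual and autonomous commands with a scalar human-to-robot trust estimate and attenuates the haptic cue with a robot-to-human trust estimate. The linear blending and scaling laws, the variable names, and the bounds are illustrative assumptions, not the thesis's exact formulation.

    ```python
    import numpy as np

    def blend_inputs(u_manual, u_auto, trust_hr):
        """Mix manual and autonomous commands; trust_hr in [0, 1].
        Higher human-to-robot trust gives the autonomous controller
        more authority (assumed linear blend)."""
        alpha = float(np.clip(trust_hr, 0.0, 1.0))
        return (1.0 - alpha) * np.asarray(u_manual) + alpha * np.asarray(u_auto)

    def scale_haptic_feedback(f_raw, trust_rh, f_max=5.0):
        """Attenuate the haptic force cue as robot-to-human trust grows,
        reducing the operator's physical workload (assumed linear gain)."""
        gain = 1.0 - float(np.clip(trust_rh, 0.0, 1.0))
        return np.clip(gain * np.asarray(f_raw), -f_max, f_max)

    # Example: a 2-DOF velocity command and force cue under moderate trust.
    u_cmd = blend_inputs([0.2, 0.0], [0.5, 0.1], trust_hr=0.7)
    f_cue = scale_haptic_feedback([3.0, -1.0], trust_rh=0.4)
    ```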

    Exploring Robot Teleoperation in Virtual Reality

    Get PDF
    This thesis presents research on VR-based robot teleoperation, focusing on remote-environment visualisation in virtual reality, the effects of remote-environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed that is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often contain distortions and occlusions, making it difficult to accurately represent objects' textures; this can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual-world scaling on teleoperation. The first investigated rate-mode control with constant and variable mappings of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual-world scale; the results showed that the variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second compared how operators used the virtual-world scale in supervised control at the beginning and end of a three-day experiment; the results showed that as operators became better at the task they, as a group, used a different virtual-world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changed as they became better at teleoperating a robot using the framework. The results revealed the objects in the VR-reconstructed remote environment that were most important to operators, as indicated by their visual attention patterns, and showed how their visual priorities shifted as they improved. The study also demonstrated that operators' prior video gaming experience affects both their ability to teleoperate the robot and their visual attention behaviours.
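
    The rate-mode mappings compared in the first scaling study can be pictured with a short sketch: a constant joystick-to-velocity gain versus a gain that follows the current virtual-world scale. The proportional law, parameter values, and function names are assumptions for illustration, not the thesis's implementation.

    ```python
    def rate_command(joystick, gain=0.1):
        """Constant mapping: end-effector velocity proportional to deflection."""
        return [gain * axis for axis in joystick]

    def rate_command_scaled(joystick, world_scale, base_gain=0.1):
        """Variable mapping: the velocity gain follows the virtual-world
        reconstruction scale, so a zoomed-out view commands faster motion
        (assumed proportional relationship)."""
        return [base_gain * world_scale * axis for axis in joystick]

    # Joystick deflection per axis in [-1, 1]; world_scale=2.0 means the
    # scene is shown at half size, so commanded speed doubles.
    v_const = rate_command([0.5, -0.2, 0.0])
    v_scaled = rate_command_scaled([0.5, -0.2, 0.0], world_scale=2.0)
    ```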

    Improving Operator Recognition and Prediction of Emergent Swarm Behaviors

    Get PDF
    Robot swarms are typically defined as large teams of coordinating robots that interact with each other on a local scale. The control laws that dictate these interactions are often designed to produce emergent global behaviors useful for robot teams, such as aggregating at a single location or moving between locations as a group. These behaviors are called emergent because they arise from the local rules governing each robot as it interacts with neighbors and the environment. No single robot is aware of the global behavior, yet all take part in it, which allows for a robustness that is difficult to achieve with explicitly defined global plans. Now that hardware and algorithms for swarms have progressed enough to allow their use outside the laboratory, new research is focused on how operators can control them. Recent work has introduced new paradigms for imparting an operator's intent to the swarm, yet little work has focused on how to better visualize the swarm to improve operator prediction and control of swarm states. The goal of this dissertation is to investigate how to present the limited data from a swarm to an operator so as to maximize their understanding of the current behavior and the swarm state in general. This dissertation develops, through user studies, new methods of displaying the state of a swarm that improve a user's ability to recognize, predict, and control emergent behaviors. The general conclusion is that how summary information about the swarm is displayed has a significant impact on the ability of users to interact with the swarm, and that future work should focus on the properties unique to swarms when developing visualizations for human-swarm interaction tasks.
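
    To make the notion of emergence from local rules concrete, the sketch below shows one classic aggregation rule: each robot moves toward the centroid of the neighbors it can sense, and the swarm contracts into a single cluster without any robot knowing the global goal. The sensing radius, step size, and synchronous update are illustrative assumptions, not an algorithm from the dissertation.

    ```python
    import numpy as np

    def aggregation_step(positions, sensing_radius=2.0, step=0.05):
        """One synchronous update: each robot steps toward the centroid of
        neighbors within sensing_radius. The global clustering behavior
        emerges purely from this local rule."""
        positions = np.asarray(positions, dtype=float)
        updated = positions.copy()
        for i, p in enumerate(positions):
            dists = np.linalg.norm(positions - p, axis=1)
            neighbors = positions[(dists > 0.0) & (dists < sensing_radius)]
            if len(neighbors) > 0:
                updated[i] = p + step * (neighbors.mean(axis=0) - p)
        return updated

    # A random 30-robot swarm contracts over repeated local updates.
    swarm = np.random.uniform(-5.0, 5.0, size=(30, 2))
    for _ in range(200):
        swarm = aggregation_step(swarm)
    ```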

    Human-Machine Cooperative Decision Making

    Get PDF
    The research reported in this thesis focuses on the decision-making aspect of human-machine cooperation and reveals new insights ranging from theoretical modeling to experimental evaluation. Two mathematical behavior models of two emancipated cooperation partners in a cooperative decision-making process are introduced, and model-based automation designs are experimentally evaluated, demonstrating their benefits compared to state-of-the-art approaches.

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    Get PDF
    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    Human-Machine Cooperative Decision Making

    Get PDF
    This dissertation addresses joint decision making in human-machine cooperation and provides new insights ranging from theoretical modeling to experimental evaluation. First, existing research on human-machine cooperation is classified methodically, and the research focus of this dissertation is delineated using a proposed taxonomy of human-machine cooperation, the Butterfly Model. The dissertation then introduces two mathematical behavior models of joint human-machine decision making: the Adaptive Negotiation Model and the n-stage War of Attrition. Both model the agreement process of two emancipated cooperation partners and differ in their origins, which lie in negotiation theory and game theory, respectively. In addition, a study is presented that demonstrates the suitability of the proposed mathematical models for describing human yielding behavior in cooperative decision-making processes. Building on this, two model-based automation designs are provided that enable the development of machines capable of taking part in an agreement process with a human. Finally, two experimental evaluations of the proposed automation designs are presented, one in the context of teleoperated mobile robots in search-and-rescue scenarios and one in an application in a highly automated vehicle. The experimental results provide empirical evidence for the superiority of the proposed model-based automation designs over previous approaches in terms of objective cooperative performance, human trust in the interaction with the machine, and user satisfaction. This dissertation thus shows that humans prefer an emancipated interaction with respect to decision making and makes a valuable contribution to the comprehensive consideration and realization of human-machine cooperation.
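
    As a loose illustration of the staged-yielding idea behind the n-stage War of Attrition mentioned above, the sketch below lets two agents accumulate waiting costs each stage and yield with a probability that grows with their accumulated cost. The yield heuristic, cost values, and payoff structure are assumptions for illustration, not the dissertation's calibrated model.

    ```python
    import random

    def n_stage_war_of_attrition(n_stages, cost_human, cost_machine, prize=1.0):
        """Both agents pay a per-stage waiting cost; the first to yield
        adopts the other's preferred option. Yield probability rises with
        accumulated cost relative to the prize (an assumed heuristic)."""
        paid_h = paid_m = 0.0
        for stage in range(1, n_stages + 1):
            paid_h += cost_human
            paid_m += cost_machine
            if random.random() < min(1.0, paid_h / prize):
                return "human_yields", stage
            if random.random() < min(1.0, paid_m / prize):
                return "machine_yields", stage
        return "no_agreement", n_stages

    outcome, stage = n_stage_war_of_attrition(5, cost_human=0.15, cost_machine=0.10)
    ```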

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    Full text link
    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing

    Get PDF
    Human-robot interaction (HRI) is widely addressed in the field of automation and manufacturing. Most of the HRI literature in manufacturing has explored physical human-robot interaction (pHRI) and invested in finding means of ensuring safety and optimized effort sharing within a team of humans and robots. The recent emergence of safe, lightweight, and human-friendly robots has opened a new realm for human-robot collaboration (HRC) in collaborative manufacturing. For such robots with new HRI functionalities to interact closely and effectively with a human coworker, new human-centered controllers that integrate both physical and social interaction are needed. Social human-robot interaction (sHRI) has been demonstrated in robots with affective abilities in education, social services, health care, and entertainment; nonetheless, sHRI should not be limited to those areas. In particular, we focus on human trust in the robot as a basis of social interaction. Human trust in the robot and the robot's anthropomorphic features have a strong impact on sHRI. Trust is one of the key factors in sHRI and a prerequisite for effective HRC: it characterizes the human's reliance on and tendency to use robots. Factors within the robotic system (e.g., performance, reliability, or attributes), the task, and the surrounding environment can all affect trust dynamically. Over-reliance or under-reliance may occur due to improper trust, which results in poor team collaboration and hence higher task load and lower overall task performance. The goal of this dissertation is to develop intelligent control algorithms for manipulator robots that integrate both physical and social HRI factors in collaborative manufacturing. First, a model of the evolution of human trust in a collaborative robot is identified and verified through a series of human-in-the-loop experiments. This model serves as a computational trust model that estimates an objective criterion for the evolution of human trust in the robot rather than an individual's actual level of trust. Second, an HRI-based framework is developed for controlling the speed of a robot performing pick-and-place tasks, and the impact of accounting for different levels of interaction in the robot controller on overall efficiency and on HRI criteria such as perceived human workload, trust, and robot usability is studied using a series of human-in-the-loop experiments. Third, an HRI-based framework is developed for planning and controlling the robot's motion when performing hand-over tasks to the human; again, a series of human-in-the-loop experimental studies evaluates the impact of the frameworks on overall efficiency and on HRI criteria such as human workload, trust, and robot usability. Finally, another framework is proposed for the cooperative manipulation of a common object by a human-robot team; it proposes a trust-based role allocation strategy for adjusting the proactive behavior of the robot performing a cooperative manipulation task in HRC scenarios. For all of these frameworks, the experimental results show that integrating HRI in the robot controller leads to a lower human workload while maintaining a threshold level of human trust in the robot and without degrading robot usability or efficiency.
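
    As a simple picture of how a computational trust estimate could modulate a collaborative manipulator's speed in a pick-and-place task, the sketch below combines a first-order trust update driven by task outcomes with a linear speed-scaling law. The update rule, gains, and speed bounds are illustrative assumptions, not the dissertation's identified model.

    ```python
    import numpy as np

    class TrustBasedSpeedController:
        """Illustrative trust dynamics and speed scaling (assumed forms)."""

        def __init__(self, trust=0.5, gain_perf=0.2, gain_fault=0.4,
                     v_min=0.05, v_max=0.5):
            self.trust = trust              # estimated human trust in the robot, [0, 1]
            self.gain_perf = gain_perf      # trust gained after a successful cycle
            self.gain_fault = gain_fault    # trust lost after a fault
            self.v_min, self.v_max = v_min, v_max  # end-effector speed bounds (m/s)

        def update_trust(self, success):
            """First-order update: trust rises with good performance, drops on faults."""
            delta = (self.gain_perf * (1.0 - self.trust) if success
                     else -self.gain_fault * self.trust)
            self.trust = float(np.clip(self.trust + delta, 0.0, 1.0))
            return self.trust

        def commanded_speed(self):
            """Scale the commanded speed linearly with the current trust level."""
            return self.v_min + self.trust * (self.v_max - self.v_min)

    ctrl = TrustBasedSpeedController()
    for outcome in (True, True, False, True):
        ctrl.update_trust(outcome)
    speed = ctrl.commanded_speed()
    ```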