    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often contain distortions and occlusions, making it difficult to represent objects' textures accurately. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' virtual world scale at the beginning and end of a 3-day experiment. The results showed that as operators became more proficient at the task, they as a group adopted a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as shifts in their visual priorities as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
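
    As a concrete illustration of the rate mode control comparison above, the sketch below shows one plausible form of constant versus variable joystick-to-velocity mapping. The function name, the gain value, and the linear dependence on virtual world scale are illustrative assumptions for exposition, not details taken from the thesis.

```python
import numpy as np

def rate_mode_velocity(joystick, world_scale, v_max=0.2, variable=True):
    """Map a normalised joystick deflection in [-1, 1]^3 to an
    end-effector velocity command in m/s.

    Constant mapping uses a fixed gain; variable mapping shrinks the
    gain as the operator scales the virtual world down, trading speed
    for precision during fine manipulation.
    """
    joystick = np.clip(np.asarray(joystick, dtype=float), -1.0, 1.0)
    gain = v_max * world_scale if variable else v_max
    return gain * joystick

# The same stick deflection commands a slower end-effector when the
# VR reconstruction is scaled down for close-up work.
print(rate_mode_velocity([0.5, 0.0, 0.0], world_scale=0.25))  # [0.025 0. 0.]
print(rate_mode_velocity([0.5, 0.0, 0.0], world_scale=1.0))   # [0.1 0. 0.]
```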

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process so that the embedded system can intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and to enable spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of the end-effector's position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is also presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance path to align the welding torch for welding and constrain the welding operation within a collision-free area.
Overall, this thesis presents a complete telerobotic technology that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
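
As a rough illustration of the velocity-based imitative motion mapping described above, the sketch below retargets a hand velocity estimated between two MR frames to a linear-velocity command for the robot TCP. The rotation, scaling, and saturation scheme are plausible assumptions for exposition, not the thesis's actual formulation.

```python
import numpy as np

def tcp_velocity_command(hand_pos, hand_pos_prev, dt,
                         subspace_rotation, scale=1.0, v_limit=0.25):
    """Estimate the operator's hand velocity between two MR frames and
    retarget it to a linear-velocity command for the robot TCP.

    subspace_rotation: 3x3 matrix aligning MR subspace axes with the
    robot base frame; scale maps the human workspace onto the robot's.
    """
    v_hand = (np.asarray(hand_pos, dtype=float)
              - np.asarray(hand_pos_prev, dtype=float)) / dt
    v_tcp = scale * subspace_rotation @ v_hand
    speed = np.linalg.norm(v_tcp)
    if speed > v_limit:          # saturate for safe maneuvering
        v_tcp *= v_limit / speed
    return v_tcp

# Example: MR view yawed 90 degrees relative to the robot base frame.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
print(tcp_velocity_command([0.02, 0.0, 0.0], [0.0, 0.0, 0.0],
                           dt=0.1, subspace_rotation=R))  # [0. 0.2 0.]
```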

    Enhancing tele-operation - Investigating the effect of sensory feedback on performance

    The decline in the number of healthcare service providers in comparison to the growing number of service users prompts the development of technologies to improve the efficiency of healthcare services. One such technology is assistive robots, remotely tele-operated to provide care and support for older adults with assistive care needs and for people living with disabilities. Tele-operation makes it possible to provide human-in-the-loop robotic assistance while also addressing safety concerns about the use of autonomous robots around humans. Safety is particularly significant here because, unlike in many other applications of robot tele-operation, tele-operated assistive robots will be used in close proximity to vulnerable human users. It is therefore important to provide as much information as possible about the robot (and the robot workspace) to the tele-operator, to ensure safety as well as efficiency. Since robot tele-operation is relatively unexplored in the context of assisted living, this thesis explores different feedback modalities that may be employed to communicate sensor information to tele-operators. The thesis presents the research as it transitioned from identifying and evaluating additional feedback modalities that may supplement video feedback to exploring different strategies for communicating those feedback modalities. Because some of the required sensors and feedback devices were not readily available, several design iterations were carried out to develop the necessary hardware and software for the studies. The first human study investigated the effect of feedback on tele-operator performance. Performance was measured in terms of task completion time, ease of use of the system, number of robot joint movements, and success or failure of the task. The effect of verbal feedback between the tele-operator and service users was also investigated. Feedback modalities have differing effects on performance metrics, and as a result the choice of optimal feedback may vary from task to task. Results show that participants preferred scenarios with verbal feedback over scenarios without it, which was also reflected in their performance. Gaze metrics from the study also showed that it may be possible to understand how tele-operators interact with the system based on their areas of interest as they carry out tasks. These findings suggest that such studies can be used to improve the design of tele-operation systems. The need for social interaction between the tele-operator and service user suggests that the visual and auditory modalities will already be engaged as tasks are carried out, which further reduces the number of sensory modalities available for communicating information to tele-operators. A wrist-worn, Wi-Fi-enabled haptic feedback device was therefore developed, and a study was carried out to investigate haptic sensitivities across the wrist. Results suggest that different locations on the wrist have varying sensitivities to haptic stimulation, with and without video distraction, across different stimulation durations and amplitudes. This suggests that dynamic control of haptic feedback can be used to improve haptic perception across the wrist, and that it may be possible to display more than one type of sensor data to tele-operators during a task.
The final study investigated whether participants could differentiate between different types of sensor data conveyed through haptic feedback at different locations on the wrist. The effect of repeated attempts on performance was also investigated. Total task completion time decreased with task repetition, and participants with prior gaming and robot experience showed a greater reduction in total task completion time than participants without such experience. Task completion time decreased at all stages of the task, although the reduction varied across stages, and participants with supplementary feedback had longer task completion times than participants without it. Similarly, although gripper trajectory length decreased with task repetition, participants with supplementary feedback had longer gripper trajectories than participants without it, while participants with prior gaming experience had shorter gripper trajectories than participants without such experience. Perceived workload also decreased with task repetition, but participants with feedback reported higher perceived workload than participants without feedback; conversely, participants without feedback reported higher frustration than participants with feedback. Results show that the effect of supplementary feedback may not be significant where participants can obtain the necessary information from video feedback; however, participants depended fully on supplementary feedback when video feedback could not provide the required information. The findings presented in this thesis have potential applications in healthcare and in other applications of robot tele-operation and feedback. They can be used to improve feedback designs for tele-operation systems to ensure safe and efficient tele-operation. The thesis also shows ways in which visual feedback can be combined with other feedback modalities. The haptic feedback device developed in this research may also be used to provide situational awareness for the visually impaired.
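
As one hedged illustration of how sensor data might be encoded as wrist-worn haptic feedback with per-location sensitivity compensation, consider the sketch below. The encoding scheme, gain values, and sensor-to-location assignment are hypothetical, not taken from the thesis.

```python
def haptic_amplitude(sensor_value, sensor_range, location_gain, amp_min=0.2):
    """Encode one sensor reading as a vibration amplitude in [0, 1]
    for a given wrist location.

    location_gain compensates for the varying haptic sensitivity found
    across the wrist; amp_min is a perceptual floor so that weak but
    non-zero signals remain detectable.
    """
    low, high = sensor_range
    x = min(max((sensor_value - low) / (high - low), 0.0), 1.0)
    return min(1.0, amp_min + (1.0 - amp_min) * x * location_gain)

# Hypothetical routing: gripper force to the dorsal wrist, obstacle
# proximity to the ventral wrist; gains are illustrative only.
GAINS = {"dorsal": 1.0, "ventral": 1.3}
print(haptic_amplitude(6.0, (0.0, 10.0), GAINS["dorsal"]))   # 0.68
print(haptic_amplitude(0.05, (0.0, 0.3), GAINS["ventral"]))  # ~0.37
```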

    An optimization-based formalism for shared autonomy in dynamic environments

    Teleoperation is an integral component of various industrial processes, such as concrete spraying, assisted welding, plastering, inspection, and maintenance. These systems often implement direct control that maps interface signals onto robot motions. Successful completion of tasks typically demands high levels of manual dexterity and imposes a high cognitive load, and the operator is often present near dangerous machinery. Consequently, safety is of critical importance, and training is expensive and prolonged -- in some cases taking several months or even years. An autonomous robot replacement would be an ideal solution, since the human could be removed from danger and training costs significantly reduced. However, this is currently not possible due to the complexity and unpredictability of the environments and the levels of situational and contextual awareness required to successfully complete these tasks. In this thesis, the limitations of direct control are addressed by developing methods for shared autonomy. A shared autonomous approach combines human input with autonomy to generate optimal robot motions. The approach taken in this thesis is to formulate shared autonomy within an optimization framework that finds optimized states and controls by minimizing a cost function, modeling task objectives, given a set of (changing) physical and operational constraints. Online shared autonomy requires the human to interact continuously with the system via an interface (akin to direct control). The key challenges addressed in this thesis are: 1) ensuring computational feasibility (the method should find solutions fast enough to achieve a sampling frequency bounded below by 40 Hz), 2) reacting to changes in the environment and operator intention, 3) knowing how to appropriately blend operator input and autonomy, and 4) allowing the operator to supply input in an intuitive manner that is conducive to high task performance. Various operator interfaces are investigated with regard to the control space, called a mode of teleoperation. Extensive evaluations were carried out to determine which modes are most intuitive and lead to the highest performance in target acquisition tasks (e.g. spraying and welding). Our performance metrics quantified task difficulty based on Fitts' law, together with a measure of how well the constraints affecting task performance were met. The experimental evaluations indicate that higher performance is achieved when humans submit commands in low-dimensional task spaces rather than through joint-space manipulations. In addition, our multivariate analysis indicated that those with regular exposure to computer games achieved higher performance. Shared autonomy aims to relieve human operators of the burden of precise motor control, tracking, and localization. An optimization-based representation for shared autonomy in dynamic environments was developed. Real-time tractability is ensured by modulating the human input with information about the changing environment within the same task space, instead of adding it to the optimization cost or constraints. The method was illustrated with two real-world applications: grasping objects in cluttered environments and spraying tasks requiring sprayed linings with greater homogeneity. Maintaining motion patterns -- referred to as skills -- is often an integral part of teleoperation for various industrial processes (e.g. spraying, welding, plastering).
We develop a novel model-based shared-autonomy framework that incorporates the notion of skill assistance to help operators sustain these motion patterns while adhering to environment constraints. To achieve computational feasibility, we introduce a novel parameterization for state and control that combines skill and underlying trajectory models, leveraging a special type of curve known as a clothoid. This new parameterization allows for efficient computation of skill-based short-term horizon plans, enabling the use of a model predictive control loop. Our hardware realization validates the method's ability to recognize a change of intended skill and shows improved quality of output motion, even under dynamically changing obstacles. In addition, extensions of the work to supervisory control are described. An exploratory study presents an approach that improves computational feasibility for complex tasks with minimal interactive effort on the part of the human, and adaptations are theorized that might make such a method applicable and beneficial to high-degree-of-freedom systems. Finally, a system developed in our lab is described that implements sliding autonomy and is shown to complete multi-objective tasks in complex environments with minimal interaction from the human.
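
The clothoid parameterization can be made concrete with a small sketch: a clothoid's curvature grows linearly with arc length, so its points have a closed form in Fresnel integrals, which is what keeps candidate skill paths cheap to evaluate inside a receding-horizon loop. The sampling function below is a minimal illustration of that curve family, not the thesis's combined skill-and-trajectory parameterization.

```python
import numpy as np
from scipy.special import fresnel

def clothoid_xy(s, kappa_rate):
    """Sample a unit-speed clothoid whose curvature grows linearly with
    arc length, kappa(s) = kappa_rate * s.

    The closed form in Fresnel integrals makes evaluating candidate
    skill paths cheap inside a model predictive control loop.
    """
    a = np.sqrt(np.pi / abs(kappa_rate))           # Fresnel scaling
    S, C = fresnel(np.asarray(s, dtype=float) / a)
    return a * C, np.sign(kappa_rate) * a * S      # x(s), y(s)

# A short horizon of arc-length samples along one candidate stroke.
s = np.linspace(0.0, 1.0, 6)
x, y = clothoid_xy(s, kappa_rate=2.0)
for si, xi, yi in zip(s, x, y):
    print(f"s={si:.2f}  x={xi:.3f}  y={yi:.3f}")
```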

    A Haptic Shared-Control Architecture for Guided Multi-Target Robotic Grasping

    Although robotic telemanipulation has always been a key technology for the nuclear industry, little advancement has been seen over the last decades. Despite complex remote handling requirements, simple mechanically linked master-slave manipulators still dominate the field. Nonetheless, there is a pressing need for more effective robotic solutions able to significantly speed up the decommissioning of legacy radioactive waste. This paper describes a novel haptic shared-control approach for assisting a human operator in the sorting and segregation of different objects in a cluttered and unknown environment. A three-dimensional scan of the scene is used to generate a set of potential grasp candidates on the objects at hand. These grasp candidates are then used to generate guiding haptic cues, which assist the operator in approaching and grasping the objects. The haptic feedback is designed to be smooth and continuous as the user switches from one grasp candidate to the next, or from one object to another, avoiding discontinuities or abrupt changes. To validate our approach, we carried out two human-subject studies enrolling 15 participants. We registered average improvements of 20.8%, 20.1%, and 32.5% in completion time, linear trajectory, and perceived effectiveness, respectively, for the proposed approach over standard teleoperation.
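
    The abstract does not spell out how the guiding cues are computed, but smooth, continuous switching of the kind described can be illustrated by blending per-candidate attraction forces with distance-based weights, as in the hypothetical sketch below; the softmax blending and all gain values are assumptions for exposition, not the authors' method.

```python
import numpy as np

def guidance_force(p_ee, grasp_candidates, k=50.0, beta=30.0, f_max=3.0):
    """Attractive haptic cue pulling the end-effector toward grasp
    candidates.

    Rather than snapping to the nearest candidate, per-candidate spring
    pulls are blended with distance-based softmax weights, so the cue
    stays smooth and continuous as the user moves between candidates
    or objects.
    """
    p_ee = np.asarray(p_ee, dtype=float)
    G = np.asarray(grasp_candidates, dtype=float)    # (N, 3) positions
    d = np.linalg.norm(G - p_ee, axis=1)
    w = np.exp(-beta * d)
    w /= w.sum()                                     # softmax over -beta*d
    f = k * (w[:, None] * (G - p_ee)).sum(axis=0)    # blended spring pull
    norm = np.linalg.norm(f)
    if norm > f_max:                                 # saturate device force
        f *= f_max / norm
    return f

# Midway between two candidates the cue balances out instead of jumping.
cands = [[0.10, 0.0, 0.0], [-0.10, 0.0, 0.0]]
print(guidance_force([0.0, 0.0, 0.0], cands))   # ~[0. 0. 0.]
print(guidance_force([0.05, 0.0, 0.0], cands))  # pulled toward +x candidate
```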