
    A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour

    This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment for the user interface, the implemented system automatically moves a mobile robot (viewpoint) while displaying 360-degree live video streamed from the robot to a VR device (Oculus Rift). When the user chooses a destination from a given set of options, the robot generates a route based on a shortest-path graph and travels along that route using a wireless signal tracking method that measures the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and its architecture, and discusses its implementation aspects. Experimental results show that the proposed system can move to the destination stably using the signal tracking method and that, at the same time, the user can remotely control the robot through the VR interface.
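As a rough illustration of the route-generation step, the sketch below runs Dijkstra's algorithm over a small waypoint graph; the site layout, node names, and edge costs are invented for the example, and the paper's DOA-based signal tracking controller is not reproduced.

```python
# Minimal sketch of shortest-path route generation over a waypoint graph.
# The graph below is a hypothetical tour site, not the one from the paper.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over an adjacency dict {node: [(neighbor, cost), ...]}."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

site = {
    "lobby":   [("hall", 5.0), ("exhibit", 12.0)],
    "hall":    [("exhibit", 4.0)],
    "exhibit": [],
}
print(shortest_path(site, "lobby", "exhibit"))  # (9.0, ['lobby', 'hall', 'exhibit'])
```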

    Exploiting the robot kinematic redundancy for emotion conveyance to humans as a lower priority task

    Current approaches do not allow robots to execute a task and simultaneously convey emotions to users through their body motions. This paper explores the capabilities of the Jacobian null space of a humanoid robot to convey emotions. A task-priority formulation has been implemented on a Pepper robot which allows the specification of a primary task (waving gesture, transportation of an object, etc.) and exploits the kinematic redundancy of the robot to convey emotions to humans as a lower-priority task. The emotions, defined by Mehrabian as points in the pleasure–arousal–dominance space, generate intermediate motion features (jerkiness, activity and gaze) that carry the emotional information. A map from these features to the joints of the robot is presented. A user study was conducted in which emotional motions were shown to 30 participants. The results show that happiness and sadness are conveyed very well, calm is conveyed moderately well, and fear is not conveyed well. An analysis of the dependencies between the motion features and the emotions perceived by the participants shows that activity correlates positively with arousal, jerkiness is not perceived by the user, and gaze conveys dominance when activity is low. The results indicate a strong influence of the most energetic motions of the emotional task and point out new directions for further research. Overall, the results show that the null-space approach can be regarded as a promising means to convey emotions as a lower-priority task.
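The task-priority idea above can be sketched with the standard null-space projection: the secondary, emotion-carrying joint velocity is filtered through the projector I - J⁺J so it cannot disturb the primary end-effector task. The Jacobian and velocity values below are toy placeholders, not Pepper's actual kinematics.

```python
# Sketch of redundancy exploitation: primary task velocity plus a secondary
# joint motion projected into the Jacobian null space (toy values throughout).
import numpy as np

def prioritized_velocities(J, x_dot_task, q_dot_secondary):
    J_pinv = np.linalg.pinv(J)            # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ x_dot_task + N @ q_dot_secondary

J = np.random.randn(3, 7)                   # 3-DOF task on a 7-DOF redundant arm
x_dot = np.array([0.1, 0.0, 0.05])          # primary task: end-effector velocity
q_dot_emotion = 0.2 * np.sin(np.arange(7))  # expressive joint motion (secondary)

q_dot = prioritized_velocities(J, x_dot, q_dot_emotion)
print(np.allclose(J @ q_dot, x_dot))  # True: the emotional motion leaves the task intact
```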

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of remote environment reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to represent objects' textures accurately. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' chosen scale at the beginning and end of a 3-day experiment. The results showed that, as operators became more proficient at the task, they as a group used a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changed as they became better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed how their visual priorities shifted as they became more skilled at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
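A minimal sketch of the two rate-mode mappings compared in the first scaling study is given below; the gain, and the specific assumption that the commanded rate is divided by the virtual world scale, are illustrative rather than taken from the thesis.

```python
# Constant vs. variable joystick-to-rate mapping (illustrative gains).
def constant_rate(joystick, max_speed=0.10):
    """Joystick deflection in [-1, 1] -> end-effector speed in m/s, scale-independent."""
    return joystick * max_speed

def variable_rate(joystick, world_scale, max_speed=0.10):
    """Assumed variant: slower commanded rates when the scene is magnified,
    so fine manipulation stays controllable at large reconstruction scales."""
    return joystick * max_speed / world_scale

for scale in (0.5, 1.0, 2.0):
    print(scale, constant_rate(0.8), variable_rate(0.8, scale))
```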

    Combining motion planning with social reward sources for collaborative human-robot navigation task design

    Throughout human history, teamwork has been one of the main pillars sustaining civilizations and technological development. Consequently, as the world embraces automation, human-robot collaboration arises naturally as a cornerstone. This applies to a huge spectrum of tasks, most of them involving navigation. As a result, tackling purely collaborative navigation tasks can be a good first foothold for roboticists in this enterprise. In this thesis, we define a useful framework for knowledge representation in human-robot collaborative navigation tasks and propose a first solution to the human-robot collaborative search task. After validating the model, two derived projects tackling its main weakness are introduced: the compilation of a human search dataset and the implementation of a multi-agent planner for human-robot navigation.
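As a toy illustration of combining a motion-planning cost with social reward sources, the sketch below scores candidate paths by collected social reward minus path length; the reward-source model, positions, and values are all hypothetical.

```python
# Hypothetical scoring of candidate paths against "social reward sources".
import math

def path_cost(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def social_reward(path, sources):
    # Each source: (position, value, radius); a waypoint inside the radius collects it.
    return sum(value for pos, value, radius in sources
               if any(math.dist(p, pos) <= radius for p in path))

def score(path, sources, trade_off=1.0):
    return trade_off * social_reward(path, sources) - path_cost(path)

sources = [((2.0, 2.0), 5.0, 1.0)]  # e.g. the human teammate's predicted search area
direct = [(0, 0), (4, 0)]           # short path that ignores the human
social = [(0, 0), (2, 2), (4, 0)]   # detours to collect the social reward
print(score(direct, sources), score(social, sources))  # the detour scores higher
```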

    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. The Robotic US System (RUSS) aims at overcoming this shortcoming by offering reproducibility, while also aiming at improving dexterity and enabling intelligent anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize the technical developments and clinical evaluations. This survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence represent the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. (Accepted by Medical Image Analysis.)

    Human factors issues in telerobotic decommissioning of legacy nuclear facilities

    This thesis investigates the problems of enabling human workers to control remote robots to achieve decommissioning of contaminated nuclear facilities, which are hazardous for human workers to enter. The mainstream robotics literature predominantly reports novel mechanisms and novel control algorithms. In contrast, this thesis proposes experimental methodologies for objectively evaluating the performance of both a robot and its remote human operator when challenged with carrying out industrially relevant remote manipulation tasks. Initial experiments use a variety of metrics to evaluate the performance of human test subjects. Results show that: conventional telemanipulation is extremely slow and difficult; metrics for usability of such technology can be conflicting and hard to interpret; aptitude for telemanipulation varies significantly between individuals; however, such aptitude may be rendered predictable by using simple spatial awareness tests. Additional experiments suggest that autonomous robotics methods (e.g. vision-guided grasping) can significantly assist the operator. A novel approach to telemanipulation is proposed, in which an "orbital camera" enables the human operator to select arbitrary views of the scene, with the robot's motions transformed into the orbital view coordinate frame. This approach is useful for overcoming the severe depth perception problems of conventional fixed camera views. Finally, a novel computer vision algorithm is proposed for target tracking. Such an algorithm could be used to enable an unmanned aerial vehicle (UAV) to fixate on part of the workspace, e.g. a manipulated object, to provide the proposed orbital camera view.
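The orbital-camera idea of transforming the robot's motions into the view coordinate frame can be sketched as a frame rotation applied to the operator's command; the yaw-only camera model below is an assumption made for illustration, not the thesis's actual camera geometry.

```python
# Rotate an operator command from the orbital-camera frame into the robot base frame.
import numpy as np

def yaw_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def command_in_base_frame(cmd_camera, camera_yaw):
    """Map a velocity command expressed in the orbital-camera frame to the base frame."""
    return yaw_rotation(camera_yaw) @ cmd_camera

cmd = np.array([0.1, 0.0, 0.0])               # operator pushes "away from the camera"
print(command_in_base_frame(cmd, np.pi / 2))  # becomes +y in the robot base frame
```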

    Development of an Augmented Reality Interface for Intuitive Robot Programming

    As the demand for advanced robotic systems continues to grow, the need for new technologies and techniques that can improve the efficiency and effectiveness of robot programming becomes imperative. Robot programming relies heavily on the effective communication of tasks between the user and the robot. To address this issue, we developed an Augmented Reality (AR) interface that incorporates Head Mounted Display (HMD) capabilities and integrated it with an active learning framework for intuitive robot programming. This integration enables the execution of conditional tasks, bridging the gap between user and robot knowledge. Guided by the user, the active learning model incrementally programs a complex task and, after encoding the skills, generates a high-level task graph. The holographic robot then visualises the individual skills of the task, augmented with sensory information retrieved from the physical robot in real time, to increase the user's intuition of the whole procedure. The interactive aspect of the interface can be utilised in this phase by giving the user the option of actively validating the learnt skills or changing them, thus generating a new skill sequence. The user can also teach the real robot through HMD-based teleoperation, which increases the directness and immersion of the teaching procedure while the physical robot is safely manipulated from a distance. The proposed framework is evaluated through a series of experiments employing the developed interface on the real system. These experiments assess the degree of intuitiveness the interface features provide to the user and determine the extent of similarity between the virtual system's behaviour during the robot programming procedure and that of its physical counterpart.
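As a loose sketch of the high-level task graph the framework is described as generating, the snippet below encodes skills as nodes with condition-keyed edges so execution can branch on sensed outcomes; the skill names and the sensing callback are hypothetical.

```python
# Toy task graph with conditional branching between encoded skills.
from dataclasses import dataclass, field

@dataclass
class SkillNode:
    name: str
    next_skills: dict = field(default_factory=dict)  # outcome -> next SkillNode

def run(node, sense):
    """Walk the graph, choosing branches from sensed condition outcomes."""
    while node is not None:
        print(f"executing skill: {node.name}")
        if not node.next_skills:
            return
        key = sense(node) if len(node.next_skills) > 1 else next(iter(node.next_skills))
        node = node.next_skills.get(key)

grasp = SkillNode("grasp_part")
grasp.next_skills = {"part_ok": SkillNode("place_in_bin_A"),
                     "part_defective": SkillNode("place_in_bin_B")}

run(grasp, sense=lambda node: "part_ok")  # hypothetical sensor callback
```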

    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that give a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.