Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study
© 2019, The Author(s). This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential of haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve-turning task. An RGB-Depth camera gathers real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D-mapping voxel representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of the feedback modes is then explored experimentally through a user study, and the different modes are compared on the basis of objective and subjective metrics. Despite the simplicity of the task and the large number of evaluation metrics, results show that the haptic virtual fixture yielded significantly better collision avoidance than 3D visualization alone. The anticipated performance gains from moving from 2D to 3D visualization were also observed. The remaining comparisons lead to exploratory inferences that inform the direction of future focused, statistically significant studies.
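The guidance virtual fixture in the abstract above can be understood as an anisotropic filter on the operator's commanded motion: the component along the fixture's preferred direction passes through, while off-axis motion is attenuated. A minimal sketch of this idea, assuming a simple velocity-projection fixture (the function name and the `compliance` parameter are illustrative, not the paper's actual implementation):

```python
import numpy as np

def apply_guidance_fixture(v_cmd, direction, compliance=0.2):
    """Guidance virtual fixture: attenuate commanded motion off the preferred axis.

    direction  : preferred motion direction (e.g. the valve's rotation plane tangent)
    compliance : 0 = hard fixture (motion only along the axis), 1 = no fixture
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)               # unit preferred direction
    v = np.asarray(v_cmd, dtype=float)
    v_along = np.dot(v, d) * d              # component along the fixture
    v_off = v - v_along                     # off-axis component, to be attenuated
    return v_along + compliance * v_off

# A diagonal command is pulled toward the fixture axis (x here):
print(apply_guidance_fixture([1.0, 1.0, 0.0], [1.0, 0.0, 0.0], compliance=0.2))
```

With `compliance=0.2`, the off-axis component shrinks to 20 % of its commanded value, which is the soft-constraint behavior that helps avoid collisions while still letting the operator deviate deliberately.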
Visuo-haptic Command Interface for Control-Architecture Adaptable Teleoperation
Robotic teleoperation is the commanding of a remote robot. Depending on the operator involvement a teleoperation task requires, the remote site is more or less autonomous. On the operator site, input and display devices record and present control-related information from and to the operator, respectively. Kinaesthetic devices stimulate the haptic senses, conveying information through the sensing of displacement, velocity and acceleration within muscles, tendons and joints. These devices have been shown to excel in tasks with low autonomy, while touchscreen-based devices are beneficial in highly autonomous tasks. However, neither performs reliably over a broad range of autonomy levels.
This thesis examines the feasibility of the 'Motion Console Application for Novel Virtual, Augmented and Avatar Systems' (Motion CANVAAS), which unifies the input/display capabilities of kinaesthetic and visual touchscreen-based devices in order to bridge this gap. This work describes the design, construction and development of the Motion CANVAAS and conducts an initial validation. The Motion CANVAAS was evaluated in two pilot studies, each based on a different virtual environment: a modified Tetris application and a racing-kart simulator. The target research variables were the coupling of input/display capabilities and the effect of application-specific kinaesthetic feedback. Both studies proved the concept to be viable as a haptic input/output device and indicated potential advantages over current solutions. At the same time, some of the system's limitations were identified. With the insight gained from this work, both the benefits and the limitations will be addressed in future research. Additionally, a full user study will be conducted to shed light on the capabilities and performance of the device in teleoperation over a broad range of autonomy levels.
The classification and new trends of shared control strategies in telerobotic systems: A survey
Shared control, which permits a human operator and an autonomous controller to share control of a telerobotic system, can reduce the operator's workload and/or improve performance during the execution of tasks. Because of the great benefit of combining human intelligence with the superior power and precision of robots, shared control architectures occupy a wide spectrum of telerobotic systems. Although various shared control strategies have been proposed, a systematic overview that teases out the relations among them is still absent. This survey therefore aims to provide a big picture of existing shared control strategies. To achieve this, we propose a categorization method and classify shared control strategies into three categories, according to how control is shared between human operators and autonomous controllers: Semi-Autonomous Control (SAC), State-Guidance Shared Control (SGSC), and State-Fusion Shared Control (SFSC). Typical scenarios for each category are listed, and the advantages, disadvantages and open issues of each are discussed. Then, based on this overview of existing strategies, new trends in shared control, including “autonomy from learning” and “autonomy-levels adaptation,” are summarized and discussed.
On the Value of Estimating Human Arm Stiffness during Virtual Teleoperation with Robotic Manipulators
Teleoperated robotic systems are spreading widely across many fields, from hazardous-environment exploration to surgery. In teleoperation, users directly manipulate a master device to achieve task execution at the slave robot side; this interaction is fundamental to guaranteeing both system stability and task execution performance. In this work, we propose a non-disruptive method to study arm endpoint stiffness. We evaluate how users exploit the redundancy of the arm to achieve stability and precision during the execution of different tasks with different master devices. Four users were asked to perform two planar trajectory-following virtual tasks using both a serial-link and a parallel-link master device. The users' arm kinematics and muscular activation were acquired and combined with a user-specific musculoskeletal model to estimate the joint stiffness. Using the arm's kinematic Jacobian, the arm endpoint stiffness was derived. The proposed non-disruptive method is capable of estimating arm endpoint stiffness during the execution of virtual teleoperated tasks. The results obtained are in accordance with the existing human motor control literature and show, throughout the tested trajectory, a modulation of arm endpoint stiffness that is affected by task characteristics and by hand speed and acceleration.
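The step from joint stiffness to endpoint stiffness via the kinematic Jacobian follows the standard congruence mapping K_x = J^{-T} K_q J^{-1} (valid away from singularities, and neglecting the Jacobian-derivative term). A minimal sketch for a planar two-link arm; the link lengths, joint angles and stiffness values are illustrative, not the paper's data:

```python
import numpy as np

def planar_2link_jacobian(q, l1=0.3, l2=0.3):
    """Geometric Jacobian of a planar two-link arm (link lengths in metres)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def endpoint_stiffness(K_joint, J):
    """Map joint-space stiffness to Cartesian endpoint stiffness:
    K_x = J^{-T} K_q J^{-1} (J must be non-singular)."""
    J_inv = np.linalg.inv(J)
    return J_inv.T @ K_joint @ J_inv

K_q = np.diag([20.0, 10.0])            # illustrative joint stiffnesses (Nm/rad)
J = planar_2link_jacobian([0.4, 1.2])  # elbow bent, so J is well-conditioned
K_x = endpoint_stiffness(K_q, J)
print(np.allclose(K_x, K_x.T))  # endpoint stiffness stays symmetric → True
```

Because the mapping is a congruence transform, a symmetric positive-definite joint stiffness yields a symmetric positive-definite endpoint stiffness, whose eigenvectors give the principal axes of the stiffness ellipse the operator modulates along the trajectory.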
Force feedback stabilization for remote control of an assistive mobile robot
In this paper, we consider bilateral control of an assistive mobile robot over communication channels with constant or variable time delays. The mobile robot is used for exploring a domestic environment. The main purpose of the present work is to help the human operator better control the slave robot. In addition, the proposed control scheme improves the operator's perception of the remote environment. The human operator can actively control the mobile robot, using its intrinsic sensors, and "feel" the robot's environment. The haptic device is used like a joystick and controls the linear velocity and heading angle of the mobile robot. Many experiments have been performed to validate the proposed control scheme and to show, at the same time, the importance of force feedback in such applications and accessibility situations: doorways, obstacle exploration, wall tracking, etc.
A demonstration and comparative analysis of haptic performance using a Gough-Stewart platform as a wearable haptic feedback device
In many hazardous work environments, contact tasks ranging from manufacturing to disassembly to emergency response are performed by industrial manipulators. Due to the hazardous and complex nature of these environments, teleoperation is often employed. When that is the case, the operator is left to interpret a large amount of data during task completion, owing to the complexity of modern robotic systems and of the tasks themselves. This information is usually processed visually, which can lead to sensory overload. To mitigate this, the information can also be distributed across other sensory modalities, such as audition or haptics. The University of Texas at Austin's TeMoto hands-free interface reduces the burden of commanding remote systems by enabling gestural and verbal commands for a range of tasks, but the removal of a mechanical interaction device from the operator interface complicates the inclusion of haptic feedback. In this work, a standalone Gough-Stewart platform, previously configured as a wearable haptic feedback device for the Nuclear and Applied Robotics Group at the University of Texas at Austin, provides real-time haptic feedback to the unconstrained hand(s) of the operator. This haptic interface can thus be employed to enhance situational awareness and minimize operator stress by imparting forces and torques to the user based on those imparted on the end-effector of the industrial manipulator. While multiple technical and human-factors issues must be addressed, this effort focuses on integrating the system and evaluating its performance for various industrial manipulator designs and sensor modalities.
After testing various digital signal processing techniques, functionality was demonstrated on one series-elastic and two rigid industrial manipulators, each with different force/torque data-acquisition characteristics, and a comparative analysis of haptic performance was performed. Furthermore, the system was demonstrated with the TeMoto hands-free teleoperation system. Overall, the demonstrations and experiments performed in this work show the system to be a viable, hardware-agnostic means of haptic feedback and a strong basis for future efforts.