
    Pseudo-haptics survey: Human-computer interaction in extended reality & teleoperation

    Pseudo-haptic techniques are becoming increasingly popular in human-computer interaction. They replicate haptic sensations primarily through visual feedback rather than mechanical actuators, bridging the gap between the real and virtual worlds by exploiting the brain's ability to integrate visual and haptic information. Pseudo-haptic techniques are cost-effective, portable, and flexible: they eliminate the need to attach haptic devices to the body, which can be heavy and bulky and demand substantial power and maintenance. Recent research has focused on applying these techniques to extended reality and mid-air interactions. To better understand the potential of pseudo-haptic techniques, the authors developed a novel taxonomy encompassing tactile feedback, kinesthetic feedback, and combined categories in multimodal approaches, ground not covered by previous surveys. This survey highlights multimodal strategies and potential avenues for future studies, particularly regarding the integration of these techniques into extended reality and collaborative virtual environments.
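A canonical pseudo-haptic technique of the kind such surveys cover is weight simulation via control-display (C/D) ratio manipulation: rendering less cursor motion than the hand actually produced tends to make a virtual object feel heavier. A minimal sketch, with illustrative function names and gain values not taken from the survey:

```python
# Sketch of pseudo-haptic weight via control-display (C/D) ratio manipulation.
# The linear mass-to-gain mapping below is an assumed, illustrative choice.

def cd_gain(virtual_mass_kg, reference_mass_kg=1.0):
    """Lower display gain for heavier virtual objects: the rendered cursor
    lags the real hand, which users tend to perceive as added weight."""
    return reference_mass_kg / max(virtual_mass_kg, reference_mass_kg)

def displayed_motion(hand_delta_mm, virtual_mass_kg):
    """Scale the operator's real hand displacement before rendering it."""
    return hand_delta_mm * cd_gain(virtual_mass_kg)

# A 4 kg virtual object renders only a quarter of the real hand motion.
print(displayed_motion(10.0, 4.0))  # -> 2.5
```

Objects at or below the reference mass keep a 1:1 mapping, so the illusion only ever attenuates motion rather than amplifying it.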

    The Effects Of Video Frame Delay And Spatial Ability On The Operation Of Multiple Semiautonomous And Tele-operated Robots

    The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental in this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize the exposure of front-line soldiers, the future Army will utilize unmanned assets for both information gathering and, when necessary, engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while only allocating resources to the semi-autonomous robot during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results cannot confirm that frame delay has a positive effect on operator performance (statistical power may have been an issue), but they clearly show that spatial ability is a strong predictor of performance in robotic asset control, particularly with aerial vehicles. In operating the UAV, the high spatial group was, on average, 30% faster, lased 12% more targets, and made 43% more location reports than the low spatial group. This study indicates that system design should judiciously manage workload and capitalize on individual ability to improve performance; these implications are relevant to system designers, especially in the military community.

    Visuo-haptic Command Interface for Control-Architecture Adaptable Teleoperation

    Robotic teleoperation is the commanding of a remote robot. Depending on the operator involvement a teleoperation task requires, the remote site is more or less autonomous. On the operator site, input and display devices record and present control-related information from and to the operator, respectively. Kinaesthetic devices stimulate the haptic senses, conveying information through the sensing of displacement, velocity, and acceleration within muscles, tendons, and joints. These devices have been shown to excel in tasks with low autonomy, while touchscreen-based devices are beneficial in highly autonomous tasks. However, neither performs reliably over a broad range. This thesis examines the feasibility of the 'Motion Console Application for Novel Virtual, Augmented and Avatar Systems' (Motion CANVAAS), which unifies the input/display capabilities of kinaesthetic and visual touchscreen-based devices in order to bridge this gap. This work describes the design, construction, and development of the Motion CANVAAS and conducts an initial validation. The Motion CANVAAS was evaluated via two pilot studies, each based on a different virtual environment: a modified Tetris application and a racing kart simulator. The target research variables were the coupling of input/display capabilities and the effect of application-specific kinaesthetic feedback. Both studies proved the concept to be a viable haptic input/output device and indicated potential advantages over current solutions; they also revealed some of the system's limitations. With the insight gained from this work, both the benefits and the limitations will be addressed in future research. Additionally, a full user study will be conducted to shed light on the capabilities and performance of the device in teleoperation over a broad range of autonomy.

    Human operator performance of remotely controlled tasks: Teleoperator research conducted at NASA's George C. Marshall Space Flight Center

    The capabilities within the teleoperator laboratories to perform remote and teleoperated investigations for a wide variety of applications are described. Three major teleoperator issues are addressed: the human operator, the remote control and effecting subsystems, and the human/machine system performance results for specific teleoperated tasks.

    EVALUATION OF HAPTIC FEEDBACK METHODS FOR TELEOPERATED EXPLOSIVE ORDNANCE DISPOSAL ROBOTS

    This thesis reports on the effects of sensory substitution methods for force feedback during teleoperation of robotic systems used for Explosive Ordnance Disposal (EOD). Existing EOD robotic systems do not feature any type of haptic feedback, and it is currently unknown what benefits could be gained by supplying this information to the operator. To assess the benefits of additional feedback, a robotic gripper was procured and instrumented to display the forces the end effector applies to an object. In a contact-based event detection task, users were asked to slowly grasp an object as lightly as possible and stop when a grasp was achieved. The users were supplied with video feedback of the gripper and either (1) no haptic feedback, (2) surrogate visual feedback, or (3) surrogate vibrotactile feedback. The force information came exclusively from the current used to drive the gripper. Peak grasp forces were measured and compared across conditions. The improvement of vibrotactile feedback over no haptic feedback was statistically significant, reducing the average threshold at which event detection took place from 8.43 N to 5.97 N. Qualitative information from the users showed a significant preference for this type of feedback. Vibrotactile feedback was shown to be very useful, while surrogate visual force feedback was neither quantitatively helpful nor preferred by the users. This feedback would be inexpensive to implement and could easily be added to existing systems, thereby improving their capabilities for the EOD technician.
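The sensory-substitution pipeline described above (drive current to force estimate to vibrotactile cue) can be sketched as follows; the torque constant, gear gain, and thresholds are assumed illustrative values, not those used in the thesis:

```python
# Sketch: estimate grasp force from the gripper's drive current and map it
# to a vibrotactile amplitude. All constants below are assumed for
# illustration, not taken from the instrumented gripper in the thesis.

K_T = 0.05      # motor torque constant, N*m/A (assumed)
GEAR_GAIN = 80  # gearbox + linkage: motor torque -> fingertip force, 1/m (assumed)

def estimate_force_newtons(current_amps):
    """Current-based force estimate: no force sensor required."""
    return K_T * current_amps * GEAR_GAIN

def vibro_amplitude(force_n, threshold_n=0.5, saturation_n=10.0):
    """Map estimated force to a 0..1 vibration amplitude, silent below
    the contact threshold and saturating at a safe upper bound."""
    if force_n <= threshold_n:
        return 0.0
    return min((force_n - threshold_n) / (saturation_n - threshold_n), 1.0)

print(round(vibro_amplitude(estimate_force_newtons(1.5)), 3))
```

The deadband keeps motor idle current from producing a constant buzz, which is one reason current-based estimates need per-device calibration.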

    Robustness analysis and controller synthesis for bilateral teleoperation systems via IQCs


    An Arm-Mounted Accelerometer and Gyro-Based 3D Control System

    This thesis examines the performance of a wearable accelerometer/gyroscope-based system for capturing arm motions in 3D. Two experiments conforming to ISO 9241-9 specifications for non-keyboard input devices were performed. The first, modeled after the Fitts' law paradigm described in ISO 9241-9, compared the wearable system against joystick control and the user's arm for controlling a telemanipulator; the throughputs were 5.54 bits/s, 0.74 bits/s, and 0.80 bits/s, respectively. The second experiment utilized the wearable system to control a cursor in a 3D fish-tank virtual reality setup. The participants performed a 3D Fitts' law task with three selection methods: button clicks, dwell, and a twist gesture. Error rates were 6.82%, 0.00%, and 3.59%, respectively, and throughput ranged from 0.8 to 1.0 bits/s. The thesis includes detailed analyses of lag and other issues that present user interface challenges for systems that employ human-mounted sensor inputs to control a telemanipulator apparatus.
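The ISO 9241-9 effective-throughput measure reported in both experiments is conventionally computed as TP = ID_e / MT, where ID_e = log2(D / W_e + 1) and the effective width is W_e = 4.133 x SD of the endpoint deviations. A minimal sketch with made-up trial data:

```python
import math
import statistics

# Sketch of the ISO 9241-9 effective throughput computation. The distances,
# endpoint deviations, and movement times below are invented for illustration
# and are not data from the thesis.

def throughput(distance, endpoint_deviations, movement_times):
    """TP = ID_e / MT, with ID_e = log2(D / W_e + 1), W_e = 4.133 * SD."""
    w_e = 4.133 * statistics.stdev(endpoint_deviations)  # effective width
    id_e = math.log2(distance / w_e + 1)                 # effective index of difficulty, bits
    mt = statistics.mean(movement_times)                 # mean movement time, s
    return id_e / mt                                     # bits per second

tp = throughput(
    distance=200.0,                                   # target distance, mm
    endpoint_deviations=[-3.1, 2.4, 0.8, -1.6, 2.0],  # selection scatter, mm
    movement_times=[1.1, 0.9, 1.0, 1.2, 1.0],         # per-trial times, s
)
print(round(tp, 2))
```

Using the effective (post-hoc) width rather than the nominal target width is what lets throughputs from different devices and accuracy levels be compared on one scale.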

    Human Management of the Hierarchical System for the Control of Multiple Mobile Robots

    In order to take advantage of autonomous robotic systems, and yet ensure successful completion of all feasible tasks, we propose a mediation hierarchy in which an operator can interact at all system levels. Robotic systems are not robust in handling unmodeled events. Reactive behaviors may be able to guide the robot back into a modeled state so it can continue; reasoning systems may simply fail. Once a system has failed, it is difficult to restart the task from the failed state. Rather, the rule base is revised, programs are altered, and the task is re-tried from the beginning.

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance to effectively align the welding torch and constrain the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application-space technology that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
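A velocity-centric motion mapping of the kind described in this abstract can be sketched as a hand-offset-to-TCP-velocity law with a deadzone and a speed clamp; the gains and limits below are assumed for illustration and are not the thesis's values:

```python
# Sketch of a velocity-centric motion mapping: the operator's hand offset
# from a neutral pose in the MR subspace becomes a robot TCP velocity
# command. Deadzone, gain, and speed limit are assumed illustrative values.

DEADZONE_M = 0.02   # ignore small hand jitter around the neutral pose, m
GAIN = 1.5          # TCP velocity (m/s) per metre of hand offset
V_MAX = 0.25        # per-axis TCP speed limit, m/s

def tcp_velocity(hand_pos, neutral_pos):
    """Map a 3D hand offset to a clamped per-axis TCP velocity command."""
    cmd = []
    for h, n in zip(hand_pos, neutral_pos):
        offset = h - n
        if abs(offset) < DEADZONE_M:
            cmd.append(0.0)          # inside deadzone: hold still
        else:
            v = GAIN * offset
            cmd.append(max(-V_MAX, min(V_MAX, v)))  # clamp to speed limit
    return cmd

# Hand 10 cm right, 1 cm up (jitter), 30 cm back from the neutral pose.
print(tcp_velocity([0.10, 0.01, -0.30], [0.0, 0.0, 0.0]))
```

Commanding velocity rather than absolute position means the operator can re-center the hand (clutching) without the robot jumping, which suits the limited tracked volume of an MR workspace.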