32 research outputs found

    A laboratory breadboard system for dual-arm teleoperation

    The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is unspecific to any manipulator and can be used for the control of various robot arms with software modifications; and (2) the force feedback to the general-purpose master arm is derived from force-torque sensor data originating from the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication, and mathematical transformations. The computing system is modular, thus inherently extendable. The local control loops at both sites operate at a 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at a 200 Hz rate, each loop without interpolation. This provides high-fidelity control. This end-to-end system elevates teleoperation to a new level of capabilities via the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called Phantom Robot) is used for preview and predictive displays for planning and for real-time control under several-second communication time delay conditions. High-fidelity graphic simulation is obtained by using appropriate calibration techniques.
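
The fixed-rate, interpolation-free force-reflecting loop described above can be sketched in miniature. This is an illustrative toy (a 1-D slave arm contacting a stiff wall), not the paper's architecture; `slave_step`, the wall position, the stiffness, and the operator motion model are all assumptions.

```python
# Toy sketch of a force-reflecting bilateral loop running at a fixed rate.
# The master commands a pose; the slave's simulated force-torque reading
# is fed back to the master arm on every cycle, with no interpolation.

def slave_step(commanded_pose, environment_stiffness=50.0):
    """Simulated slave: contact force proportional to wall penetration."""
    penetration = max(0.0, commanded_pose - 1.0)   # rigid wall at x = 1.0
    return environment_stiffness * penetration      # reflected force

def bilateral_loop(target_pose, steps, rate_hz=200.0):
    """Run the end-to-end force-reflecting loop at a fixed rate."""
    dt = 1.0 / rate_hz
    pose, history = 0.0, []
    for _ in range(steps):
        pose = min(target_pose, pose + 0.5 * dt)    # operator moves toward target
        force = slave_step(pose)                    # slave-side sensor reading
        history.append((pose, force))               # force fed back each cycle
    return history

trace = bilateral_loop(target_pose=1.2, steps=600)
```

At 200 Hz each cycle is 5 ms, so sensor data reaches the master within one control period rather than being interpolated between slower updates.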

    Model Based Teleoperation to Eliminate Feedback Delay NSF Grant BCS89-01352 Second Report

    We are conducting research in the area of teleoperation with feedback delay. Delay occurs with earth-based teleoperation in space and with surface-based teleoperation with untethered submersibles when acoustic communication links are involved. The delay in obtaining position and force feedback from remote slave arms makes teleoperation extremely difficult, leading to very low productivity. We have combined computer graphics with manipulator programming to provide a solution to the problem. A teleoperator master arm is interfaced to a graphics-based simulator of the remote environment. The system is then coupled with a robot manipulator at the remote, delayed site. The operator's actions are monitored to provide both kinesthetic and visual feedback and to generate symbolic motion commands to the remote slave. The slave robot then executes these symbolic commands delayed in time. While much of a task proceeds error-free, when an error does occur, the slave system transmits data back to the master environment, which is then reset to the error state from which the operator continues the task.
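
The delayed symbolic-command pipeline described above can be sketched as a queue: the operator interacts with the local simulator immediately, while each command reaches the slave only after the communication delay. The function below is a hypothetical illustration in discrete time ticks, not the project's software; commands are assumed sorted by issue time.

```python
# Sketch of delayed symbolic-command delivery: commands issued against the
# local simulator are held "in flight" for the communication delay before
# the remote slave executes them.

from collections import deque

def run_delayed_slave(commands, delay_ticks):
    """Deliver each (issue_tick, command) to the slave delay_ticks later."""
    in_flight = deque()                 # (arrival_tick, command) pairs
    executed = []                       # (execution_tick, command) log
    last_tick = max(t for t, _ in commands) + delay_ticks
    cmd_iter = iter(commands)
    pending = next(cmd_iter, None)
    for tick in range(last_tick + 1):
        while pending is not None and pending[0] == tick:
            in_flight.append((tick + delay_ticks, pending[1]))  # send downlink
            pending = next(cmd_iter, None)
        while in_flight and in_flight[0][0] == tick:
            executed.append((tick, in_flight.popleft()[1]))     # slave executes
    return executed
```

For example, commands issued at ticks 0 and 2 with a 3-tick delay execute at ticks 3 and 5, while the operator has long since moved on in the local simulation.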

    Intent-Recognition-Based Traded Control for Telerobotic Assembly over High-Latency Telemetry

    As we deploy robotic manipulation systems into unstructured real-world environments, the tasks which those robots are expected to perform grow very quickly in complexity. These tasks require a greater number of possible actions, more variable environmental conditions, and larger varieties of objects and materials which need to be manipulated. This in turn leads to a greater number of ways in which elements of a task can fail. When the cost of task failure is high, such as in the case of surgery or on-orbit robotic interventions, effective and efficient task recovery is essential. Despite ever-advancing capabilities, however, the current and near-future state of the art in fully autonomous robotic manipulation is still insufficient for many tasks in these critical applications. Thus, successful application of robotic manipulation in many application domains still necessitates a human operator to directly teleoperate the robots over some communications infrastructure. However, any such infrastructure always incurs some unavoidable round-trip telemetry latency depending on the distances involved and the type of remote environment. While direct teleoperation is appropriate when a human operator is physically close to the robots being controlled, there are still many applications in which such proximity is infeasible. In applications which require a robot to be far from its human operator, this latency can approach the speed of the relevant task dynamics, and performing the task with direct telemanipulation can become increasingly difficult, if not impossible. For example, round-trip delays for ground-controlled on-orbit robotic manipulation can reach multiple seconds depending on the infrastructure used and the location of the remote robot. 
The goal of this thesis is to advance the state of the art in semi-autonomous telemanipulation under multi-second round-trip communications latency between a human operator and a remote robot in order to enable more telerobotic applications. We propose a new intent-recognition-based traded control (IRTC) approach which automatically infers operator intent and executes task elements which the human operator would otherwise be unable to perform. What makes our approach more powerful than current approaches is that we prioritize preserving the operator's direct manual interaction with the remote environment, trading control over to an autonomous subsystem only when the operator-local intent recognition system automatically determines what the operator is trying to accomplish. This enables operators to perform unstructured and a priori unplanned actions in order to quickly recover from critical task failures. Furthermore, this thesis also describes a methodology for introducing and improving semi-autonomous control in critical applications. Specifically, this thesis reports (1) the demonstration of a prototype system for IRTC-based grasp assistance in the context of transatlantic telemetry delays, (2) the development of a systems framework for IRTC in semi-autonomous telemanipulation, and (3) an evaluation of the usability and efficacy of that framework with an increasingly complex assembly task. The results from our human-subjects experiments show that, when incorporated with sufficient lower-level capabilities, IRTC is a promising approach to extend the reach and capabilities of on-orbit telerobotics and future in-space operations.
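
The traded-control idea can be sketched minimally: score candidate goals from recent operator motion, and hand control to autonomy only when normalized confidence crosses a threshold. The 1-D distance-based recognizer, goal names, and threshold below are illustrative assumptions, not the thesis's method.

```python
# Minimal sketch of intent-recognition-based traded control (IRTC):
# direct teleoperation continues until the operator-local recognizer is
# confident about the goal, at which point control is traded to autonomy.

def recognize_intent(motion_samples, goals):
    """Score each candidate goal by proximity of the latest motion sample."""
    cursor = motion_samples[-1]
    scores = {g: 1.0 / (1.0 + abs(cursor - pos)) for g, pos in goals.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())  # normalized confidence

def traded_control(motion_samples, goals, threshold=0.6):
    intent, confidence = recognize_intent(motion_samples, goals)
    if confidence >= threshold:
        return ("autonomous", intent)    # trade control: autonomy executes goal
    return ("direct", None)              # operator keeps direct control
```

Because recognition runs on the operator's side of the link, the trade decision does not itself incur the round-trip delay; only the resulting goal command does.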

    Mitigating User Frustration through Adaptive Feedback based on Human-Automation Etiquette Strategies

    The objective of this study is to investigate the effects of feedback and user frustration in human-computer interaction (HCI) and examine how to mitigate user frustration through feedback based on human-automation etiquette strategies. User frustration in HCI indicates a negative feeling that occurs when efforts to achieve a goal are impeded. User frustration impacts not only the communication with the computer itself, but also productivity, learning, and cognitive workload. Affect-aware systems have been studied to recognize user emotions and respond in different ways. Affect-aware systems need to be adaptive systems that change their behavior depending on users’ emotions. Adaptive systems have four categories of adaptations. Previous research has focused primarily on function allocation and, to a lesser extent, on information content and task scheduling. However, the fourth approach, changing interaction styles, is the least explored because of the interplay of human factors considerations. Three interlinked studies were conducted to investigate the consequences of user frustration and explore mitigation techniques. Study 1 showed that delayed feedback from the system led to higher user frustration, anger, cognitive workload, and physiological arousal. In addition, delayed feedback decreased task performance and system usability in a human-robot interaction (HRI) context. Study 2 evaluated a possible approach of mitigating user frustration by applying human-human etiquette strategies in a tutoring context. The results of Study 2 showed that changing etiquette strategies led to changes in performance, motivation, confidence, and satisfaction. The most effective etiquette strategies changed when users were frustrated. Based on these results, an adaptive tutoring system prototype was developed and evaluated in Study 3. 
By utilizing a rule set derived from Study 2, the tutor was able to use different automation etiquette strategies to target and improve motivation, confidence, satisfaction, and performance under different levels of user frustration. This work establishes that changing the interaction style alone of a computer tutor can affect a user's motivation, confidence, satisfaction, and performance. Furthermore, the beneficial effect of changing etiquette strategies is greater when users are frustrated. This work provides a basis for future work to develop affect-aware adaptive systems to mitigate user frustration.

    Model Driven Robotic Assistance for Human-Robot Collaboration

    While robots routinely perform complex assembly tasks in highly structured factory environments, it is challenging to apply completely autonomous robotic systems in less structured manipulation tasks, such as surgery and machine assembly/repair, due to the limitations of machine intelligence, sensor data interpretation, and environment modeling. A practical, yet effective approach to accomplish these tasks is through human-robot collaboration, in which the human operator and the robot form a partnership and complement each other in performing a complex task. We recognize that humans excel at determining task goals and recognizing constraints, if given sufficient feedback about the interaction between the tool (e.g., end-effector of the robot) and the environment. Robots are precise, unaffected by fatigue, and able to work in environments not suitable for humans. We hypothesize that by providing the operator with adequate information about the task, through visual and force (haptic) feedback, the operator can: (1) define the task model, in terms of task goals and virtual fixture constraints, through an interactive or immersive augmented reality interface, and (2) have the robot actively assist the operator to enhance the execution time, quality, and precision of the tasks. We validate our approaches through implementations of both cooperative (i.e., hands-on) control and telerobotic systems, for image-guided robotic neurosurgery and telerobotic manipulation tasks for satellite servicing under significant time delay.
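
One common formulation of a guidance virtual fixture, sketched here under the assumption of an attenuation-style constraint (not necessarily the authors' exact formulation), decomposes the operator's commanded velocity along a preferred task direction and attenuates off-axis motion so the robot assists precise execution:

```python
# Sketch of a guidance virtual fixture: motion along the preferred task
# direction passes through; motion violating the fixture is attenuated.

def apply_virtual_fixture(v_cmd, direction, off_axis_gain=0.1):
    """Filter a commanded 3-D velocity through a directional virtual fixture."""
    norm = sum(c * c for c in direction) ** 0.5
    d = [c / norm for c in direction]                       # unit task direction
    along = sum(v * c for v, c in zip(v_cmd, d))            # on-axis magnitude
    v_along = [along * c for c in d]                        # on-axis component
    v_off = [v - a for v, a in zip(v_cmd, v_along)]         # off-axis component
    return [a + off_axis_gain * o for a, o in zip(v_along, v_off)]
```

With `off_axis_gain=0` the fixture becomes a hard constraint; values between 0 and 1 yield the softer assistance typical of cooperative control.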

    Teleprogramming: Overcoming Communication Delays in Remote Manipulation (Dissertation Proposal)

    Modern industrial processes (nuclear, chemical industry), public service needs (firefighting, rescuing), and research interests (undersea, outer space exploration) have established a clear need to perform work remotely. Whereas a purely autonomous manipulative capability would solve the problem, its realization is beyond the state of the art in robotics [Stark et al., 1988]. Some of the problems plaguing the development of autonomous systems are: a) anticipation, detection, and correction of the multitude of possible error conditions arising during task execution, b) development of general strategy planning techniques transcending any particular limited task domain, c) providing the robot system with real-time adaptive behavior to accommodate changes in the remote environment, d) allowing for on-line learning and performance improvement through experience, etc. The classical approach to tackle some of these problems has been to introduce problem solvers and expert systems as part of the remote robot workcell control system. However, such systems tend to be limited in scope (to remain intellectually and implementationally manageable), too slow to be useful in real-time robot task execution, and generally fail to adequately represent and model the complexities of the real-world environment. These problems become particularly severe when only partial information about the remote environment is available.

    Spatial-Temporal Characteristics of Multisensory Integration

    Get PDF
    We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, or teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged stiffness to be stiffer when they interacted with virtual springs at faster speeds, and interaction speed was positively correlated with stiffness overestimation. In addition, it was found that participants could learn an association between visual and haptic inputs despite the fact that they were spatially separated, resulting in the improvement of typing performance. These results show the limitations of the Maximum-Likelihood Estimation model, suggesting that a Bayesian inference model should be used. (Doctoral Dissertation, Human Systems Engineering)
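
The Maximum-Likelihood Estimation model whose limitations the abstract discusses combines cues weighted by their inverse variances, so a noisier (e.g. delayed) visual channel shifts weight toward haptics. The numbers below are illustrative, not the study's data.

```python
# Standard MLE (inverse-variance-weighted) cue combination for two estimates.

def mle_combine(est_visual, var_visual, est_haptic, var_haptic):
    """Return the MLE-combined estimate and its (reduced) variance."""
    w_v = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_haptic)
    w_h = 1.0 - w_v
    combined = w_v * est_visual + w_h * est_haptic
    combined_var = 1.0 / (1.0 / var_visual + 1.0 / var_haptic)
    return combined, combined_var

# Visual cue: 100 N/m with variance 4; haptic cue: 120 N/m with variance 1.
stiffness, variance = mle_combine(100.0, 4.0, 120.0, 1.0)
```

Here the reliable haptic cue gets weight 0.8, pulling the combined stiffness estimate to 116 N/m, and the combined variance (0.8) is lower than either cue alone, which is exactly the prediction the dissertation's delay and separation results challenge.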