    Increasing Transparency and Presence of Teleoperation Systems Through Human-Centered Design

    Teleoperation allows a human to control a robot to perform dexterous tasks in remote, dangerous, or unreachable environments. A perfect teleoperation system would enable the operator to complete such tasks at least as easily as if he or she were completing them by hand. This ideal teleoperator must be perceptually transparent, meaning that the interface appears to be nearly nonexistent to the operator, allowing him or her to focus solely on the task environment rather than on the teleoperation system itself. Furthermore, the ideal teleoperation system must give the operator a high sense of presence, meaning that the operator feels as though he or she is physically immersed in the remote task environment. This dissertation seeks to improve the transparency and presence of robot-arm-based teleoperation systems through a human-centered design approach, specifically by leveraging scientific knowledge about the human motor and sensory systems.

    First, this dissertation aims to improve the forward (efferent) teleoperation control channel, which carries information from the human operator to the robot. The traditional method of calculating the desired position of the robot's hand simply scales the measured position of the human's hand. This commonly used motion mapping erroneously assumes that the human's produced motion identically matches his or her intended movement. Given that humans make systematic directional errors when moving the hand under conditions similar to those imposed by teleoperation, I propose a new paradigm of data-driven human-robot motion mappings for teleoperation. The mappings are determined by having the human operator mimic the target robot as it autonomously moves its arm through a variety of trajectories in the horizontal plane. Three data-driven motion mapping models are described and evaluated for their ability to correct for the systematic motion errors made in the mimicking task. Individually-fit and population-fit versions of the most promising motion mapping model are then tested in a teleoperation system that allows the operator to control a virtual robot. Results of a user study involving nine subjects indicate that the newly developed motion mapping model significantly increases the transparency of the teleoperation system.

    Second, this dissertation seeks to improve the feedback (afferent) teleoperation control channel, which carries information from the robot to the human operator. We aim to improve the teleoperation system by providing the operator with multiple novel modalities of haptic (touch-based) feedback. We describe the design and control of a wearable haptic device that provides kinesthetic grip-force feedback through a geared DC motor, and tactile fingertip-contact-and-pressure and high-frequency acceleration feedback through a pair of voice-coil actuators mounted at the tips of the thumb and index finger. Each included haptic feedback modality is known to be fundamental to direct task completion and can be implemented without great cost or complexity. A user study involving thirty subjects investigated how these three modalities of haptic feedback affect an operator's ability to control a real remote robot in a teleoperated pick-and-place task. This study's results strongly support the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and show more mixed effects of fingertip-contact-and-pressure feedback.
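
    A minimal sketch of the two mapping paradigms contrasted above, assuming a simple linear correction fit by least squares to horizontal-plane (2D) data; the function names and the linear form are illustrative stand-ins, not the dissertation's actual motion mapping models.

```python
import numpy as np

def scaled_mapping(hand_xy, scale=1.0):
    """Traditional mapping: the robot hand target is a scaled copy of the measured
    human hand position, assuming produced motion equals intended motion."""
    return scale * np.asarray(hand_xy, dtype=float)

def fit_correction(produced, intended):
    """Fit a data-driven correction from mimicking data: 'produced' holds hand
    positions recorded while the operator mimicked the robot, 'intended' holds
    the robot's reference positions.  Least-squares fit of intended ~ A @ produced + b."""
    X = np.hstack([produced, np.ones((len(produced), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, intended, rcond=None)
    return W[:-1].T, W[-1]          # A (2x2), b (2,)

def corrected_mapping(hand_xy, A, b, scale=1.0):
    """Data-driven mapping: compensate the operator's systematic directional
    errors before scaling the command sent to the robot."""
    return scale * (A @ np.asarray(hand_xy, dtype=float) + b)
```

    In this sketch, fit_correction would be run once on mimicking-task data (fit individually or pooled across operators), after which corrected_mapping replaces scaled_mapping at teleoperation time.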

    Motion Mappings for Continuous Bilateral Teleoperation

    Mapping operator motions to a robot is a key problem in teleoperation. Due to differences between workspaces, such as object locations, it is particularly challenging to derive smooth motion mappings that fulfill different goals (e.g., picking objects with different poses on the two sides or passing through key points). Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this paper, we propose a unified formulation for position, orientation, and velocity mappings based on the poses of objects of interest in the operator and robot workspaces. We apply it in the context of bilateral teleoperation. Two possible implementations to achieve the proposed mappings are studied: an iterative approach based on locally-weighted translations and rotations, and a neural network approach. Evaluations are conducted both in simulation and using two torque-controlled Franka Emika Panda robots. Our results show that, despite longer training times, the neural network approach provides faster mapping evaluations and lower interaction forces for the operator, which are crucial for continuous, real-time teleoperation.
    Comment: Accepted for publication in the IEEE Robotics and Automation Letters (RA-L).
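
    A rough sketch of the locally-weighted idea for the position part of such a mapping, assuming Gaussian distance weights around corresponding objects of interest in the two workspaces; the kernel, its width, and the function name are assumptions for illustration, not the paper's exact formulation (which also covers orientation and velocity).

```python
import numpy as np

def locally_weighted_position_map(p_op, op_objects, robot_objects, sigma=0.1):
    """Map an operator-side position p_op into the robot workspace by blending
    per-object translations, weighted by proximity to each operator-side object.
    op_objects / robot_objects: (K, 3) arrays of corresponding object positions."""
    p_op = np.asarray(p_op, dtype=float)
    op_objects = np.asarray(op_objects, dtype=float)
    offsets = np.asarray(robot_objects, dtype=float) - op_objects   # per-object translation
    d2 = np.sum((op_objects - p_op) ** 2, axis=1)                   # squared distances to objects
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum() + 1e-12                                            # normalized weights
    return p_op + w @ offsets                                       # smoothly blended target
```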

    Hand-worn Haptic Interface for Drone Teleoperation

    Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often limited to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system used to augment their awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages under limited visibility. In addition to the design and implementation of the wearable interface, we performed a systematic assessment of the system through three user studies (n = 36) evaluating the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard Human-Robot Interfaces.
    Comment: Accepted at the IEEE International Conference on Robotics and Automation (ICRA) 202
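
    A minimal sketch of how hand motion from a data glove could be turned into velocity commands for line-of-sight drone control; the tilt-to-velocity convention, gain, and deadband are assumptions for illustration, not the interface described in the paper.

```python
import numpy as np

def hand_to_velocity(roll, pitch, gain=1.5, deadband=0.05):
    """Map hand attitude (radians, e.g. from a glove-mounted IMU) to planar velocity
    commands; a deadband suppresses small involuntary hand motions."""
    cmd = gain * np.array([-pitch, roll])   # tilt forward -> fly forward, tilt right -> fly right
    cmd[np.abs(cmd) < deadband] = 0.0
    return cmd                              # (vx, vy) in m/s for the drone's velocity controller
```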

    Nonlinearity Compensation in a Multi-DoF Shoulder Sensing Exosuit for Real-Time Teleoperation

    The compliant nature of soft wearable robots makes them ideal for complex multiple-degree-of-freedom (DoF) joints, but it also introduces additional structural nonlinearities. Intuitive control of these wearable robots requires robust sensing to overcome the inherent nonlinearities. This paper presents a joint kinematics estimator for a bio-inspired multi-DoF shoulder exosuit that is capable of compensating for the encountered nonlinearities. To overcome the nonlinearities and hysteresis inherent to the soft and compliant nature of the suit, we developed a deep learning-based method to map the sensor data to the joint space. The experimental results show that the new learning-based framework outperforms recent state-of-the-art methods by a large margin while achieving a 12 ms inference time using only a GPU-based edge-computing device. The effectiveness of our combined exosuit and learning framework is demonstrated through real-time teleoperation with a simulated NAO humanoid robot.
    Comment: 8 pages, 7 figures, 3 tables. Accepted to be published in IEEE RoboSoft 202
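
    A small sketch of the kind of learned sensor-to-joint-space regressor described above, written here as a plain feed-forward network in PyTorch; the architecture, layer sizes, and sensor/joint counts are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class SensorToJointNet(nn.Module):
    """Regression network mapping exosuit sensor readings to shoulder joint angles."""
    def __init__(self, n_sensors=8, n_joints=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints),
        )

    def forward(self, x):
        return self.net(x)

# Trained on (sensor reading, joint angle) pairs collected against a motion-capture reference
model = SensorToJointNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```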

    Task Dynamics of Prior Training Influence Visual Force Estimation Ability During Teleoperation

    The lack of haptic feedback in Robot-assisted Minimally Invasive Surgery (RMIS) is a potential barrier to safe tissue handling during surgery. Bayesian modeling theory suggests that surgeons with experience in open or laparoscopic surgery can develop priors of tissue stiffness that translate to better force estimation abilities during RMIS compared to surgeons with no experience. To test whether prior haptic experience leads to improved force estimation ability in teleoperation, 33 participants were assigned to one of three training conditions: manual manipulation, teleoperation with force feedback, or teleoperation without force feedback, and learned to tension a silicone sample to a set of force values. They were then asked to perform the tension task, and a previously unencountered palpation task, to a different set of force values under teleoperation without force feedback. Compared to the teleoperation groups, the manual group had higher force error in the tension task outside the range of forces they had trained on, but showed better speed-accuracy functions in the palpation task at low force levels. This suggests that the dynamics of the training modality affect force estimation ability during teleoperation, with prior haptic experience remaining accessible only if it was formed under the same dynamics as the task.
    Comment: 12 pages, 8 figures
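
    The Bayesian account invoked above can be made concrete with the standard Gaussian prior-likelihood combination for an estimated tissue stiffness; this is the textbook formula, not a model fit in the paper.

```latex
% Posterior estimate of stiffness k given a prior N(mu_prior, sigma_prior^2)
% learned from earlier haptic experience and a noisy visual observation k_obs
% with variance sigma_obs^2: a precision-weighted average, so a well-calibrated
% prior pulls the estimate toward realistic values.
\hat{k}_{\mathrm{post}}
  = \frac{\sigma_{\mathrm{obs}}^{2}\,\mu_{\mathrm{prior}} + \sigma_{\mathrm{prior}}^{2}\,k_{\mathrm{obs}}}
         {\sigma_{\mathrm{prior}}^{2} + \sigma_{\mathrm{obs}}^{2}},
\qquad
\sigma_{\mathrm{post}}^{2}
  = \frac{\sigma_{\mathrm{prior}}^{2}\,\sigma_{\mathrm{obs}}^{2}}
         {\sigma_{\mathrm{prior}}^{2} + \sigma_{\mathrm{obs}}^{2}}
```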