
    Motion Mappings for Continuous Bilateral Teleoperation

    Mapping operator motions to a robot is a key problem in teleoperation. Due to differences between workspaces, such as object locations, it is particularly challenging to derive smooth motion mappings that fulfill different goals (e.g. picking objects with different poses on the two sides or passing through key points). Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this paper, we propose a unified formulation for position, orientation and velocity mappings based on the poses of objects of interest in the operator and robot workspaces. We apply it in the context of bilateral teleoperation. Two possible implementations to achieve the proposed mappings are studied: an iterative approach based on locally-weighted translations and rotations, and a neural network approach. Evaluations are conducted both in simulation and using two torque-controlled Franka Emika Panda robots. Our results show that, despite longer training times, the neural network approach provides faster mapping evaluations and lower interaction forces for the operator, which are crucial for continuous, real-time teleoperation. Comment: Accepted for publication in the IEEE Robotics and Automation Letters (RA-L).
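The locally-weighted approach can be pictured as blending per-object displacements with distance-based weights. The sketch below is only an illustration of that idea (the function name, Gaussian weights, and the translation-only restriction are our assumptions, not the paper's exact formulation, which also covers orientations and velocities):

```python
import numpy as np

def locally_weighted_translation(x, anchors_op, anchors_rob, sigma=0.2):
    """Map an operator-side point x into the robot workspace by blending
    per-object translations, weighted by proximity to the operator-side
    anchors. anchors_op/anchors_rob: (N, 3) arrays of corresponding
    object positions in the two workspaces."""
    d2 = np.sum((anchors_op - x) ** 2, axis=1)   # squared distances to anchors
    w = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian proximity weights
    w /= w.sum()                                 # normalize to a partition of unity
    offsets = anchors_rob - anchors_op           # per-object translation
    return x + w @ offsets                       # smooth weighted blend
```

Near an operator-side object the mapping approximately reproduces that object's displacement, while points in between are blended smoothly, which is the property that avoids the discontinuous mode switches the paper criticizes.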

    Shared Control Templates for Assistive Robotics

    Light-weight robotic manipulators can be used to restore the manipulation capability of people with a motor disability. However, manipulating the environment is a complex task, especially when the control interface is of low bandwidth, as may be the case for users with impairments. Therefore, we propose a constraint-based shared control scheme to define skills which provide support during task execution. This is achieved by representing a skill as a sequence of states, with specific user command mappings and different sets of constraints being applied in each state. New skills are defined by combining different types of constraints and conditions for state transitions, in a human-readable format. We demonstrate its versatility in a pilot experiment with three activities of daily living. Results show that even complex, high-dimensional tasks can be performed with a low-dimensional interface using our shared control approach.
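The skill representation described above, a sequence of states each pairing a user-command mapping with a set of constraints and a transition condition, can be sketched as a small data structure. All names here are illustrative, not the authors' human-readable skill format:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class State:
    """One state of a skill: its own command mapping, constraint set,
    and transition condition (illustrative stand-in, not the paper's API)."""
    name: str
    map_command: Callable[[list], list]        # low-DoF user input -> robot motion
    constraints: List[Callable[[list], list]]  # applied in order to the motion
    done: Callable[[Dict], bool]               # when to transition to the next state

def step(state: State, user_cmd: list, context: Dict):
    """Map the user command, enforce the state's constraints,
    and report whether the transition condition fired."""
    motion = state.map_command(user_cmd)
    for constraint in state.constraints:
        motion = constraint(motion)
    return motion, state.done(context)
```

A skill is then just an ordered list of such states; for example, an "approach" state could lock vertical motion by including a constraint that zeros the z-component of the commanded motion.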

    Dynamic movement primitives-based human action prediction and shared control for bilateral robot teleoperation

    This article presents a novel shared-control teleoperation framework that integrates imitation learning and bilateral control to achieve system stability based on a new dynamic movement primitives (DMPs) observer. First, a DMPs-based observer is created to capture human operational skills through offline human demonstrations. The learning results are then used to predict human action intention in teleoperation. Compared with other observers, the DMPs-based observer incorporates human operational features and can predict long-term actions with minor errors. A high-gain observer is established to monitor the robot’s status in real time on the leader side. Subsequently, two controllers on the follower and leader sides are constructed based on the outputs of the observers. The follower controller shares control authority to address accidents in real time and corrects prediction errors of the observer using delayed leader commands. The leader controller minimizes position-tracking errors through force feedback. The convergence of the DMPs-based observer's predictions under time delays and the stability of the teleoperation system are proved by constructing two Lyapunov functions. Finally, two groups of comparative experiments are conducted to verify the advantages over other methods and the effectiveness of the proposed framework in motion prediction with time delays and obstacle avoidance.
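As background on the primitive the observer builds on: a dynamic movement primitive models a trajectory as a stable spring-damper system perturbed by a learned forcing term. A minimal 1-D rollout with standard textbook gains is sketched below (the article's observer, gains, and learned forcing term will differ):

```python
import numpy as np

def dmp_rollout(y0, g, forcing, tau=1.0, dt=0.01, T=1.0,
                alpha=25.0, beta=25.0 / 4):
    """Integrate a 1-D discrete DMP from y0 toward goal g.
    `forcing(x)` is the learned nonlinear term (fit from demonstrations
    in practice); with forcing == 0 the system converges smoothly to g."""
    y, z, x = y0, 0.0, 1.0    # position, scaled velocity, canonical phase
    ax = 1.0                  # canonical-system decay rate (assumed value)
    traj = []
    for _ in range(int(T / dt)):
        f = forcing(x) * x * (g - y0)                  # phase-gated forcing
        dz = (alpha * (beta * (g - y) - z) + f) / tau  # spring-damper dynamics
        dy = z / tau
        dx = -ax * x / tau
        z, y, x = z + dz * dt, y + dy * dt, x + dx * dt
        traj.append(y)
    return np.array(traj)
```

The observer idea in the article rests on this structure: once the forcing term is learned offline from demonstrations, rolling the DMP forward yields a long-horizon prediction of the human's intended motion.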

    Shared Control of Mobile Robots Using Model Predictive Control

    With the world constantly driving towards complete autonomy, there is still a major question of safety when it comes to trusting a machine completely. Today's autonomous systems are also unable to perform flawlessly in a cluttered, unstructured environment. This calls for having a human operate the machine at all times, either remotely via tele-operation or by being physically present alongside the machine. With tele-operation of remote systems, the cognitive load required from the human operator is high, while the perception of the remote system's environment is low. This can cause many undesirable human errors and damage to machinery. For example, tele-operating a forestry machine in a forest can be a very daunting task, as there will be many trees and not all trees around the machine can be seen by the operator during remote tele-operation. In this context, several industries and sectors have started research into using shared control methodologies to aid their machines in tele-operation tasks. This thesis proposes a shared control methodology to provide a certain level of autonomy to the machine while still allowing the human operator to always be in control. The proposed methodology uses a model predictive controller (MPC) as the base controller to control the robot and perform obstacle avoidance tasks. The robot considered for implementation is a differential drive mobile robot, specifically the MiR 100 from Mobile Industrial Robots. The key motivation behind the thesis is to evaluate the performance of the shared control approach against a manual tele-operation task, to better understand the advantages and possible disadvantages of using a shared control strategy. The proposed strategy is implemented using the CasADi optimization toolbox in Matlab and tested through user trials. The results obtained from the user tests show that shared control can largely help in improving the safety of the system, but not so much with performance, at least not with the proposed methodology.
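The core shared-control idea, let the optimizer pick the feasible command closest to the operator's intent, can be illustrated with a brute-force sampled stand-in for the MPC. The thesis itself solves a CasADi NLP in Matlab; the grid search, horizon, and safety margin below are our simplifications for illustration:

```python
import numpy as np

def diff_drive_step(state, v, w, dt=0.1):
    """Unicycle kinematics for a differential-drive base; state = (x, y, theta)."""
    x, y, th = state
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def shared_mpc(state, user_vw, obstacles, horizon=10, dt=0.1, safe=0.5):
    """Among sampled (v, w) commands, return the one closest to the
    operator's command whose predicted rollout stays clear of all
    obstacles; returns None if every sampled command is unsafe."""
    best, best_cost = None, np.inf
    for v in np.linspace(0.0, 1.0, 6):
        for w in np.linspace(-1.0, 1.0, 9):
            s, feasible = state.copy(), True
            for _ in range(horizon):                     # forward-simulate
                s = diff_drive_step(s, v, w, dt)
                if any(np.hypot(s[0] - ox, s[1] - oy) < safe
                       for ox, oy in obstacles):
                    feasible = False
                    break
            if not feasible:
                continue
            cost = (v - user_vw[0]) ** 2 + (w - user_vw[1]) ** 2  # stay near user intent
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

With no obstacles the operator's command passes through essentially unchanged; with an obstacle directly ahead, the controller overrides the forward command, which mirrors the thesis's finding that shared control mainly improves safety.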

    Programming by demonstration for shared control with an application in teleoperation

    Shared control strategies can improve task performance in teleoperation. In such systems, automation guides or corrects a human operator. The amount of correction or guidance that is provided is denoted the level of automation. As the variety of teleoperation tasks is large, manually specifying the underlying automation is time consuming. In this letter, we present an approach to program this automated system by demonstration. Our approach determines the level of automation online, by combining the confidence of the automation and the teleoperator. We present particular implementations of our approach for haptic shared control and state shared control. The method is evaluated in a user study. Although the subjects indicated they preferred the learned shared control strategies, teleoperation performance did not improve according to our metric (task execution time).
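Determining the level of automation online from the two confidences can be sketched as a simple convex blend. The letter's actual arbitration laws for haptic and state shared control are more involved; this only captures the core idea:

```python
import numpy as np

def blend_commands(u_auto, u_human, c_auto, c_human):
    """Confidence-weighted arbitration: the more confident the automation
    is relative to the teleoperator, the higher the level of automation.
    Returns the blended command and the effective level of automation."""
    alpha = c_auto / (c_auto + c_human + 1e-9)   # level of automation in [0, 1]
    u = alpha * np.asarray(u_auto) + (1 - alpha) * np.asarray(u_human)
    return u, alpha
```

When the automation's confidence drops (e.g. in an unmodeled situation), alpha falls toward 0 and the operator's command dominates, so the human smoothly regains authority without an explicit mode switch.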
