4 research outputs found

    Contact aware robust semi-autonomous teleoperation of mobile manipulators

    In the context of human-robot collaboration, cooperation and teaming, mobile manipulators are widely used in applications involving environments that are unpredictable or hazardous for human operators, such as space operations, waste management, and search and rescue in disaster scenarios. In these applications the manipulator's motion is controlled remotely by specialized operators. Teleoperation of manipulators is not a straightforward task and in many practical cases is a common source of failures. Common issues during the remote control of manipulators are: control complexity that grows with the number of mechanical degrees of freedom; inadequate or incomplete feedback to the user (i.e. limited visualization or knowledge of the environment); and predefined motion directives that may be incompatible with constraints or obstacles imposed by the environment. In the latter case, part of the manipulator may get trapped or blocked by an obstacle, a failure that cannot easily be detected, isolated or counteracted remotely. While control complexity can be reduced by introducing motion directives or by abstracting the robot motion, the real-time constraint of the teleoperation task requires transferring as little data as possible over the system's network, which limits the number of physical sensors that can be used to model the environment. It is therefore fundamental to define alternative perceptive strategies that accurately characterize different interactions with the environment without relying on specific sensory technologies.

    In this work, we present a novel approach to safe teleoperation that takes advantage of model-based proprioceptive measurement of the robot dynamics to robustly identify unexpected collisions or contact events with the environment. Each identified collision is translated on-the-fly into a set of local motion constraints, allowing the system redundancies to be exploited in the computation of intelligent control laws for automatic reaction, without requiring human intervention and while minimizing the disturbance to task execution (or, equivalently, the operator's effort).

    More precisely, the described system consists of two building blocks: one for detecting unexpected interactions with the environment (perceptive block) and one for intelligent and autonomous reaction after the stimulus (control block). The perceptive block is responsible for contact event identification. In short, the approach is based on the claim that a sensorless collision detection method for robot manipulators can be extended to mobile manipulators by embedding it within a statistical learning framework. The control block handles the intelligent and autonomous reaction after the contact or impact with the environment occurs, and consists of a motion abstraction controller with a prioritized set of constraints, where the highest priority corresponds to the robot reconfiguration after a collision is detected; once all related dynamical effects have been compensated, the controller switches back to the basic control mode.
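    The abstract does not spell out the estimator behind the perceptive block; a generalized-momentum residual observer is a common choice for sensorless collision detection, and the minimal Python sketch below illustrates that idea under that assumption. The class name, gains and threshold test are illustrative, not taken from the paper, and the statistical learning layer used to extend the method to mobile manipulators is not reproduced here.

    import numpy as np

    class MomentumResidualObserver:
        """Generalized-momentum residual observer (a standard sensorless
        collision-detection scheme). The residual r approximates the external
        joint torque: it stays near zero in contact-free motion and grows
        when an unexpected collision occurs."""

        def __init__(self, n_joints, gain, dt):
            self.K = gain * np.eye(n_joints)    # observer gain (assumed diagonal)
            self.dt = dt
            self.r = np.zeros(n_joints)         # current residual estimate
            self.integral = np.zeros(n_joints)  # running integral term
            self.p0 = None                      # initial generalized momentum

        def update(self, dq, tau, M, C, g):
            """dq: joint velocities, tau: commanded torques,
            M, C, g: inertia, Coriolis and gravity terms of the robot model."""
            p = M @ dq                          # generalized momentum p = M(q) dq
            if self.p0 is None:
                self.p0 = p.copy()
            # Contact-free momentum derivative estimate: tau + C^T dq - g + r
            self.integral += (tau + C.T @ dq - g + self.r) * self.dt
            self.r = self.K @ (p - self.integral - self.p0)
            return self.r

        def collision_detected(self, threshold):
            """Simple per-joint threshold test; the paper instead embeds this
            decision in a statistical learning framework."""
            return bool(np.any(np.abs(self.r) > threshold))

    At each control cycle the observer is fed the commanded torques and the model terms; in the scheme described by the abstract, a residual exceeding its threshold would be handed to the control block as a new, highest-priority motion constraint.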

    Design and implementation of a new teleoperation control mode for differential drive UGVs

    In this paper, we propose and implement a new control mode for teleoperated unmanned ground vehicles (UGVs) that exploits the similarities between computer games and teleoperation robotics. Today, all teleoperated differential-drive UGVs use a control mode called Tank Control, in which the UGV chassis and the pan-tilt camera are controlled separately. This control mode was also the dominant choice when the computer game genre First Person Shooter (FPS) first appeared. However, the hugely successful FPS genre, including titles such as Doom, Half Life and Call of Duty, now uses a much more intuitive control mode, Free Look Control (FLC), in which rotation and translation of the character are decoupled and controlled separately. The main contribution of this paper is that we replace Tank Control with FLC in a real UGV. Using feedback linearization, the orientation of the UGV chassis is abstracted away, and the orientation and translation of the camera are decoupled, enabling the operator to use FLC when controlling the UGV. This decoupling is then experimentally verified. The developments in the gaming community indicate that FLC is more intuitive than Tank Control and reduces the well-known situational awareness problem. It furthermore reduces the need for operator training, since literally millions of future operators have already spent hundreds of hours using the interface.
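    The abstract names feedback linearization but not the specific construction; one common way to realize the described decoupling is to linearize about a point offset ahead of the differential-drive axle and close a pan loop that cancels the chassis rotation. The Python sketch below illustrates that idea; the offset distance, gains and function name are illustrative assumptions, not details from the paper.

    import numpy as np

    def flc_to_unicycle(v_des_world, yaw_des, theta, pan, d=0.15, k_pan=2.0):
        """Map Free Look Control (FLC) commands to a differential-drive UGV
        carrying a pan-tilt camera.

        v_des_world : desired translational velocity of the reference point,
                      expressed in the world frame, shape (2,)
        yaw_des     : operator's desired camera yaw (the 'look' direction)
        theta       : current chassis heading
        pan         : current camera pan angle relative to the chassis
        d           : offset of the feedback-linearization point ahead of the axle
        k_pan       : proportional gain of the pan loop

        Returns (v, omega, pan_rate): chassis linear and angular velocity plus
        a pan rate that holds the camera yaw at yaw_des regardless of how the
        chassis turns, so translation and view rotation appear decoupled.
        """
        # Feedback linearization of the point p = (x + d*cos(theta), y + d*sin(theta)):
        # dp = J(theta) @ [v, omega], with J invertible whenever d != 0.
        J = np.array([[np.cos(theta), -d * np.sin(theta)],
                      [np.sin(theta),  d * np.cos(theta)]])
        v, omega = np.linalg.solve(J, v_des_world)

        # Camera yaw is theta + pan; drive it toward yaw_des while feeding
        # forward -omega to cancel the chassis rotation seen by the operator.
        err = yaw_des - (theta + pan)
        yaw_err = np.arctan2(np.sin(err), np.cos(err))   # wrap to (-pi, pi]
        pan_rate = k_pan * yaw_err - omega
        return v, omega, pan_rate

    If the operator's stick input is expressed in the camera frame and rotated by yaw_des to obtain v_des_world, pushing forward always moves toward the current look direction, which is the FPS-style behaviour the paper aims for.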
