
    Sensorless Physical Human-robot Interaction Using Deep-Learning

    Physical human-robot interaction has been an area of interest for decades. Collaborative tasks, such as joint compliance, demand high-quality joint torque sensing. While external torque sensors are reliable, they come with the drawbacks of being expensive and vulnerable to impacts. To address these issues, studies have been conducted to estimate external torques using only internal signals, such as joint states and current measurements. However, insufficient attention has been given to friction hysteresis approximation, which is crucial for tasks involving extensive dynamic-to-static state transitions. In this paper, we propose a deep-learning-based method that leverages a novel long-term memory scheme to achieve dynamics identification, accurately approximating the static hysteresis. We also introduce modifications to the well-known Residual Learning architecture, retaining high accuracy while reducing inference time. The robustness of the proposed method is illustrated through a joint compliance and a task compliance experiment. Comment: 7 pages, ICRA 2024 submission.
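
    As an illustration of the kind of recurrent estimator the abstract describes, the following is a minimal sketch (not the authors' implementation) of a long short-term memory network that maps a window of internal signals (joint positions, velocities and motor currents) to an external joint-torque estimate; the layer sizes, input features and training step are assumptions for illustration only.

        # Minimal sketch of a sequence-to-one external-torque estimator
        # (hypothetical architecture; sizes and features are assumptions).
        import torch
        import torch.nn as nn

        class TorqueEstimator(nn.Module):
            def __init__(self, n_joints=7, hidden=128):
                super().__init__()
                # Inputs per time step: position, velocity and motor current per joint.
                self.lstm = nn.LSTM(input_size=3 * n_joints, hidden_size=hidden,
                                    num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden, n_joints)   # external torque per joint

            def forward(self, x):                 # x: (batch, window, 3*n_joints)
                out, _ = self.lstm(x)             # the hidden state carries "memory"
                return self.head(out[:, -1, :])   # estimate at the last time step

        # Toy training step on random data, only to show the intended usage.
        model = TorqueEstimator()
        optim = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.randn(32, 100, 21)              # 32 windows of 100 samples
        tau_ext = torch.randn(32, 7)              # "measured" external torque
        optim.zero_grad()
        loss = nn.functional.mse_loss(model(x), tau_ext)
        loss.backward()
        optim.step()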

    Development of a Virtual Collision Sensor for Industrial Robots

    Collision detection is a fundamental issue for the safety of a robotic cell. While several common methods require specific sensors or knowledge of the robot dynamic model, the proposed solution is a virtual collision sensor for industrial manipulators. It requires as inputs only the motor currents measured by the standard sensors that equip a manipulator and the estimated currents provided by an internal dynamic model of the robot (i.e., the one used inside its controller), whose structure, parameters and accuracy are not known. Collision detection is achieved by comparing the absolute value of the current residue with a time-varying, positive-valued threshold function that includes an estimate of the model error and a bias term corresponding to the minimum collision torque to be detected. The value of this term, which defines the sensor sensitivity, can simply be imposed as a constant, or automatically customized for a specific robotic application through a learning phase and a subsequent adaptation process, to achieve more robust and faster collision detection and to avoid false collision warnings, even in the case of slow variations of the robot behavior. Experimental results are provided to confirm the validity of the proposed solution, which is already adopted in some industrial scenarios.
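
    To make the detection rule concrete, here is a minimal sketch, not taken from the paper, of comparing the absolute current residue against a time-varying threshold built from a model-error estimate plus a sensitivity bias; the error estimator and the learning rule for the bias are simplified assumptions.

        # Sketch of a virtual collision sensor: compare the absolute current
        # residue against a time-varying threshold (model-error estimate + bias).
        # The signals and the error model below are illustrative assumptions.
        import numpy as np

        def collision_flags(i_measured, i_model, bias, alpha=0.1):
            """Return a boolean collision flag per sample for one joint.

            i_measured : measured motor current over time
            i_model    : current predicted by the controller's internal model
            bias       : minimum detectable 'collision current' (sensitivity)
            alpha      : gain of a crude model-error estimate tracking the residue
            """
            residue = np.abs(i_measured - i_model)
            # Low-pass estimate of the model error (stand-in for the paper's estimator).
            err_est = np.zeros_like(residue)
            for k in range(1, len(residue)):
                err_est[k] = (1 - alpha) * err_est[k - 1] + alpha * residue[k - 1]
            threshold = err_est + bias    # time-varying, positive-valued threshold
            return residue > threshold

        # Learning phase (assumption): set the bias from collision-free motion so
        # that no false warnings are raised, plus a safety margin.
        def learn_bias(i_measured, i_model, margin=1.2):
            residue = np.abs(i_measured - i_model)
            return margin * residue.max()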

    Contact force and torque estimation for collaborative manipulators based on an adaptive Kalman filter with variable time period.

    Contact force and torque sensing approaches enable manipulators to cooperate with humans and to interact appropriately with unexpected collisions. In this thesis, various moving averages are investigated, and Weighted Moving Averages and the Hull Moving Average are employed to generate a mode-switching moving average to support force sensing. The proposed moving averages with variable time period are used to reduce the effects of measured motor current noise and thus provide improved confidence in joint output torque estimation. The time period of the filter adapts continuously to achieve an optimal trade-off between response time and precision of estimation in real time. An adaptive Kalman filter that combines the proposed moving averages with the conventional Kalman filter is proposed. Calibration routines for the adaptive Kalman filter characterize the measured motor current noise and the errors in the speed data from the individual joints. The combination of the proposed adaptive Kalman filter with variable time period and its calibration method facilitates force and torque estimation without direct measurement via force/torque sensors. Contact force/torque sensing and response time assessments of the proposed approach are performed on both a single Universal Robot 5 manipulator and a collaborative UR5 arrangement (dual-arm robot) with differing unexpected end effector loads. The combined force and torque sensing method leads to a reduction of the estimation errors and response time in comparison with the pioneering method (55.2% and 20.8%, respectively), and the positive performance of the proposed approach further improves as the payload rises. The proposed method can potentially be applied to any robotic manipulator as long as the motor information (current, joint position, and joint velocity) is available. Consequently, the cost of implementation will be significantly lower than that of methods that require load cells.
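
    As a rough illustration of the moving averages involved, the following sketch (not the thesis code) implements a trailing Weighted Moving Average and a Hull Moving Average, plus a trivial rule that shortens the window when the signal changes quickly; the switching rule, window lengths and threshold are assumptions.

        # Sketch of the moving averages used for current smoothing (illustrative
        # only; the mode-switching and window-adaptation rules are simplified).
        import numpy as np

        def wma(x, n):
            """Trailing weighted moving average with linear weights 1..n."""
            w = np.arange(1, n + 1, dtype=float)
            out = np.full(len(x), np.nan)
            for k in range(n - 1, len(x)):
                out[k] = np.dot(x[k - n + 1:k + 1], w) / w.sum()
            return out

        def hma(x, n):
            """Hull moving average: WMA_sqrt(n)(2 * WMA_{n/2}(x) - WMA_n(x))."""
            raw = 2 * wma(x, max(n // 2, 1)) - wma(x, n)
            raw = np.nan_to_num(raw, nan=x[0])     # pad the warm-up samples
            return wma(raw, max(int(np.sqrt(n)), 1))

        def adaptive_period(x, n_fast=5, n_slow=25, change_thresh=0.5):
            """Pick a shorter window when the recent change is large (assumed rule)."""
            recent_change = np.abs(np.diff(x[-n_fast:])).sum()
            return n_fast if recent_change > change_thresh else n_slow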

    Self-Collision Avoidance Control of Dual-Arm Multi-Link Robot Using Neural Network Approach

    The problem of mutual collisions between the manipulators of a dual-arm multi-link robot (so-called self-collisions) arises during the performance of a cooperative technological operation. Self-collisions can lead to non-fulfillment of the technological operation or even to the failure of the manipulators. In this regard, it is necessary to develop a method for online detection and avoidance of self-collisions of manipulators. The article presents a method for detecting and avoiding self-collisions of multi-link manipulators using an artificial neural network, illustrated on the dual-arm robot SAR-401. A comparative analysis is carried out and an architecture of an artificial neural network for self-collision avoidance control of dual-arm robot manipulators is proposed. The novelty of the proposed approach lies in the fact that it is an alternative to the generally accepted methods of detecting self-collisions based on the numerical solution of inverse kinematics problems for manipulators in the form of nonlinear optimization problems. Experimental results obtained with a MATLAB model, the SAR-401 robot simulator, and the real robot itself confirmed the applicability and effectiveness of the proposed approach. It is shown that the detection of possible self-collisions using the proposed method based on an artificial neural network is performed approximately 10 times faster than approaches based on the numerical solution of the inverse kinematics problem, while maintaining the specified accuracy.
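
    As a sketch of how a neural network can replace the inverse-kinematics-based check, the snippet below (not the SAR-401 implementation) trains a small classifier that maps the joint angles of both arms to a collision / no-collision label; the joint count, dataset and network size are placeholder assumptions, and the labels would in practice come from an offline geometric check on the robot model.

        # Sketch of a learned self-collision checker: joint angles of both arms
        # in, collision / no-collision label out. Data and sizes are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        n_joints_total = 14                     # e.g. 7 joints per arm (assumption)
        rng = np.random.default_rng(0)

        # Placeholder training set: random configurations with dummy labels;
        # replace the labels with geometric ground truth from the robot model.
        q = rng.uniform(-np.pi, np.pi, size=(5000, n_joints_total))
        labels = rng.integers(0, 2, size=5000)

        clf = MLPClassifier(hidden_layer_sizes=(64, 64), activation="relu",
                            max_iter=300).fit(q, labels)

        def self_collision(q_now):
            """Fast online check intended for use inside the control loop."""
            return bool(clf.predict(q_now.reshape(1, -1))[0])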

    Contact aware robust semi-autonomous teleoperation of mobile manipulators

    In the context of human-robot collaboration, cooperation and teaming, mobile manipulators are widely used in applications involving environments that are unpredictable or hazardous for human operators, such as space operations, waste management, and search and rescue in disaster scenarios: applications where the manipulator's motion is controlled remotely by specialized operators. Teleoperation of manipulators is not a straightforward task and in many practical cases represents a common source of failures. Common issues during the remote control of manipulators are: control complexity that grows with the number of mechanical degrees of freedom; inadequate or incomplete feedback to the user (i.e., limited visualization or knowledge of the environment); and predefined motion directives that may be incompatible with constraints or obstacles imposed by the environment. In the latter case, part of the manipulator may get trapped or blocked by some obstacle in the environment, a failure that cannot easily be detected, isolated or counteracted remotely. While control complexity can be reduced by the introduction of motion directives or by abstraction of the robot motion, the real-time constraint of the teleoperation task requires transferring the least possible amount of data over the system's network, thus limiting the number of physical sensors that can be used to model the environment. It is therefore fundamental to define alternative perceptive strategies that accurately characterize different interactions with the environment without relying on specific sensory technologies. In this work, we present a novel approach for safe teleoperation that takes advantage of model-based proprioceptive measurement of the robot dynamics to robustly identify unexpected collisions or contact events with the environment. Each identified collision is translated on-the-fly into a set of local motion constraints, allowing the exploitation of the system redundancies to compute intelligent control laws for automatic reaction, without requiring human intervention and while minimizing the disturbance to the task execution (or, equivalently, the operator's effort). More precisely, the described system consists of two building blocks: a perceptive block, which detects unexpected interactions with the environment, and a control block, which provides intelligent and autonomous reaction after the stimulus. The perceptive block is responsible for contact event identification. In short, the approach is based on the claim that a sensorless collision detection method for robot manipulators can be extended to the field of mobile manipulators by embedding it within a statistical learning framework. The control block deals with the intelligent and autonomous reaction after the contact or impact with the environment occurs, and consists of a motion abstraction controller with a prioritized set of constraints, where the highest priority corresponds to the robot reconfiguration after a collision is detected; when all related dynamical effects have been compensated, the controller switches back to the basic control mode.
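
    A minimal structural sketch of the two building blocks described above is given below, with placeholder detection and reaction logic; the actual statistical detection and constraint formulation of the thesis are not reproduced here.

        # Structural sketch of the described architecture: a perceptive block that
        # flags contact events from proprioceptive residuals, and a control block
        # that turns each event into a temporary high-priority constraint. The
        # residual computation and the reaction law are placeholders.
        import numpy as np

        class PerceptiveBlock:
            def __init__(self, threshold):
                self.threshold = threshold          # could be learned statistically

            def detect(self, residual):
                """Return indices of joints whose residual looks anomalous."""
                return np.where(np.abs(residual) > self.threshold)[0]

        class ControlBlock:
            def __init__(self):
                self.constraints = []               # prioritized, highest first

            def react(self, collided_joints, q_dot_task):
                """Zero the commanded velocity of collided joints (placeholder)."""
                if len(collided_joints) > 0:
                    self.constraints.insert(0, ("reconfigure", tuple(collided_joints)))
                    q_dot_task = q_dot_task.copy()
                    q_dot_task[collided_joints] = 0.0
                else:
                    self.constraints = []           # back to the basic control mode
                return q_dot_task

        # One control cycle (illustrative):
        perception, control = PerceptiveBlock(threshold=2.0), ControlBlock()
        residual = np.array([0.1, 3.5, 0.2, 0.0, 0.1, 0.4])
        cmd = control.react(perception.detect(residual), np.ones(6))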

    On Sensorless Collision Detection and Measurement of External Forces in Presence of Modeling Inaccuracies

    The field of human-robot interaction has garnered significant interest in the last decade. Every form of human-robot coexistence must guarantee the safety of the user. Safety in human-robot interaction is being vigorously studied in areas such as collision avoidance, soft actuators, lightweight robots, computer vision techniques, soft tissue modeling, collision detection, etc. Despite these safety provisions, unwanted collisions can occur in case of system faults. In such cases, before post-collision strategies are triggered, it is imperative to effectively detect the collisions. Implementation of tactile sensors, vision systems, sonar and Lidar sensors, etc., allows for the detection of collisions. However, due to the cost of such methods, more practical approaches are being investigated. A general goal remains to develop methods for fast detection of external contacts using minimal sensory information. The availability of position data and command torques in manipulators permits the development of observer-based techniques to measure external forces/torques. The presence of disturbances and inaccuracies in the model of the robot presents challenges to the efficacy of observers in the context of collision detection. The purpose of this thesis is to develop methods that reduce the effects of modeling inaccuracies in external force/torque estimation and increase the efficacy of collision detection. It comprises the following four parts:
    1. The KUKA Light-Weight Robot IV+ is commonly employed for research purposes. The regressor matrix, minimal inertial parameters and the friction model of this robot are identified and presented in detail. To develop the model, relative weight analysis is employed for identification.
    2. Modeling inaccuracies and robot state approximation errors are considered simultaneously to develop model-based time-varying thresholds for collision detection. A metric is formulated to compare trajectories realizing the same task in terms of their collision detection and external force/torque estimation capabilities. A method for determining optimal trajectories with regard to accurate external force/torque estimation is also developed.
    3. The effects of velocity on external force/torque estimation errors are studied with and without the use of joint force/torque sensors. Velocity-based thresholds are developed and implemented to improve collision detection. The results are compared with the collision detection module integrated in the KUKA Light-Weight Robot IV+.
    4. An alternative joint-by-joint heuristic method is proposed to identify the effects of modeling inaccuracies on external force/torque estimation. Time-varying collision detection thresholds associated with the heuristic method are developed and compared with constant thresholds.
    In this work, the KUKA Light-Weight Robot IV+ is used for obtaining the experimental results. This robot is controlled via the Fast Research Interface and Visual C++ 2008. The experimental results confirm the efficacy of the proposed methodologies.
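
    For context, the sketch below shows a discretized version of a standard generalized-momentum observer for external torque estimation together with a simple velocity-dependent threshold. The dynamics callbacks and the threshold form are placeholders; this is not the thesis' identified KUKA LWR IV+ model.

        # Sketch of a generalized-momentum observer with a time-varying detection
        # threshold. M, C, g are placeholder model callbacks; the threshold model
        # is an illustrative assumption.
        import numpy as np

        def momentum_observer(q, dq, tau_m, M, C, g, K_O, dt):
            """Return the residual r_k, which approximates the external torque.

            q, dq, tau_m : arrays of shape (T, n) with positions, velocities, commands
            M, C, g      : callables returning inertia matrix, Coriolis matrix, gravity
            """
            T, n = q.shape
            r = np.zeros((T, n))
            integral = np.zeros(n)
            p0 = M(q[0]) @ dq[0]
            for k in range(1, T):
                beta = g(q[k]) - C(q[k], dq[k]).T @ dq[k]   # non-inertial terms
                integral += (tau_m[k] - beta + r[k - 1]) * dt
                p = M(q[k]) @ dq[k]                          # generalized momentum
                r[k] = K_O @ (p - p0 - integral)
            return r

        def time_varying_threshold(dq, base=1.0, vel_gain=0.5):
            """Velocity-dependent threshold (assumed form): larger at high speed."""
            return base + vel_gain * np.abs(dq)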

    Collision Detection and Reaction: A Contribution to Safe Physical Human-Robot Interaction

    In the framework of physical Human-Robot Interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring the safety of the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While a subjective “safety” feeling is experienced by users when they are able to naturally stop the robot in autonomous motion, a quantitative analysis of the different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reaction methods prove to work very reliably and are effective in reducing contact forces far below any level that is dangerous to humans. Evaluations of impacts between the robot and a human arm or chest at up to a maximum robot velocity of 2.7 m/s are presented.
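
    The abstract compares several post-collision reaction strategies; the snippet below is only a schematic dispatch of reactions commonly reported in this line of work (braking and holding, gravity-compensated floating, reflex retreat along the estimated contact torque), not the authors' controller, and the gains are arbitrary.

        # Schematic dispatch of typical post-collision reaction strategies.
        # Illustrative sketch only; gains and strategy names are assumptions.
        def reaction_torque(strategy, tau_cmd, gravity, residual, q, q_hold,
                            k_hold=100.0, k_reflex=2.0):
            """Commanded joint torque once a collision has been detected.

            tau_cmd  : nominal task torque command (arrays of length n)
            gravity  : gravity-compensation torque g(q)
            residual : estimated external torque (direction of the contact)
            q_hold   : joint position at the detection instant
            """
            if strategy == "stop":                    # brake and hold current posture
                return gravity + k_hold * (q_hold - q)
            if strategy == "zero_g":                  # float compliantly, gravity only
                return gravity
            if strategy == "reflex":                  # retreat along the contact torque
                return gravity + k_reflex * residual
            return tau_cmd                            # no reaction (baseline)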

    Variable Stiffness Link (VSL): Toward inherently safe robotic manipulators

    © 2017 IEEE. Nowadays, the field of industrial robotics focuses particularly on collaborative robots that are able to work closely together with a human worker in an inherently safe way. To detect and prevent harmful collisions, a number of solutions on both the actuation and sensing sides have been suggested. However, due to the rigid body structures of the majority of systems, the risk of harmful collisions with human operators in a collaborative environment remains. In this paper, we propose a novel concept for a collaborative robot made of Variable Stiffness Links (VSLs). The idea is to use a combination of silicone-based structures and fabric materials to create stiffness-controllable links that are pneumatically actuated. According to the application, it is possible to change the stiffness of the links by varying the pressure inside their structure. Moreover, the pressure readings from the sensors inside the regulators can be utilised to detect collisions between the manipulator body and, for instance, a human worker. A set of experiments is performed with the aim of assessing the performance of the VSL when embedded in a robotic manipulator. The effects of different loads and pressures on the workspace of the manipulator are evaluated together with the efficiency of the collision detection control system and hardware.
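
    As a simple illustration of the sensing principle described above, the sketch below flags a collision when the measured pressure inside a link deviates from the regulator set-point by more than a threshold; the threshold value, smoothing window and example numbers are assumptions.

        # Sketch of pressure-based collision detection for a variable-stiffness
        # link: a contact perturbs the internal pressure away from the regulator
        # set-point. Threshold and smoothing constants are assumptions.
        import numpy as np

        def detect_contact(p_measured, p_setpoint, threshold=0.05, window=10):
            """True when the smoothed pressure deviation exceeds the threshold (bar)."""
            deviation = np.abs(np.asarray(p_measured) - p_setpoint)
            smoothed = np.convolve(deviation, np.ones(window) / window, mode="valid")
            return bool((smoothed > threshold).any())

        # Example: a 0.1 bar bump on top of a 1.2 bar set-point triggers detection.
        trace = np.r_[np.full(50, 1.2), np.full(20, 1.3), np.full(30, 1.2)]
        print(detect_contact(trace, p_setpoint=1.2))     # True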