61 research outputs found

    Enhancing Human-Robot Collaboration Transportation through Obstacle-Aware Vibrotactile Feedback

    Full text link
    Transporting large and heavy objects can benefit from Human-Robot Collaboration (HRC), increasing the contribution of robots to our daily tasks and reducing the risk of injuries to the human operator. This approach usually posits the human collaborator as the leader, while the robot takes the follower role. Hence, it is essential for the leader to be aware of the environmental situation. However, when transporting a large object, the operator's situational awareness can be compromised, as the object may occlude different parts of the environment. This paper proposes a novel haptic-based environmental awareness module for a collaborative transportation framework that informs the human operator about surrounding obstacles. The robot uses two LiDARs to detect obstacles in its surroundings. The warning module alerts the operator through a haptic belt with four vibrotactile devices that provide feedback about the location and proximity of the obstacles. By enhancing the operator's awareness of the surroundings, the proposed module improves the safety of the human-robot team in co-carrying scenarios by preventing collisions. Experiments with two non-expert subjects in two different situations are conducted. The results show that the human partner can successfully lead the co-transportation system in an unknown environment with hidden obstacles thanks to the haptic feedback.
    Comment: 6 pages, 5 figures; for associated video, see https://youtu.be/UABeGPIIrH
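
    The core idea of the warning module, mapping LiDAR obstacle detections to the direction and intensity of four vibrotactile motors, can be illustrated with a short sketch. This is a minimal example under assumed parameters (motor layout, warning radius, and the inverse-distance intensity law are illustrative, not the authors' implementation):

```python
import math

# Four belt motors around the operator (illustrative layout, angles in the operator frame).
MOTOR_ANGLES = {"front": 0.0, "left": math.pi / 2, "rear": math.pi, "right": -math.pi / 2}
WARN_RADIUS = 1.5   # start warning below this distance [m] (assumed threshold)
MIN_RADIUS = 0.3    # distance at which intensity saturates [m] (assumed threshold)

def belt_command(obstacles):
    """Map obstacle points (x, y) in the operator frame to per-motor intensities in [0, 1]."""
    intensity = {name: 0.0 for name in MOTOR_ANGLES}
    for x, y in obstacles:
        dist = math.hypot(x, y)
        if dist > WARN_RADIUS:
            continue
        bearing = math.atan2(y, x)
        # Assign the obstacle to the motor whose direction is angularly closest.
        motor = min(MOTOR_ANGLES,
                    key=lambda m: abs(math.atan2(math.sin(bearing - MOTOR_ANGLES[m]),
                                                 math.cos(bearing - MOTOR_ANGLES[m]))))
        level = (WARN_RADIUS - max(dist, MIN_RADIUS)) / (WARN_RADIUS - MIN_RADIUS)
        intensity[motor] = max(intensity[motor], min(level, 1.0))
    return intensity

# Example: one obstacle 0.8 m ahead (triggers the front motor), one 2.0 m to the left (ignored).
print(belt_command([(0.8, 0.0), (0.0, 2.0)]))
```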

    Performance Analysis of Vibrotactile and Slide-and-Squeeze Haptic Feedback Devices for Limbs Postural Adjustment

    Get PDF
    Recurrent or sustained awkward body postures are among the most frequently cited risk factors for the development of work-related musculoskeletal disorders (MSDs). Wearable haptic devices may be the ideal solution both to prevent workers from adopting harmful configurations and to guide them toward more ergonomic ones. In this paper, a vibrotactile unit, called ErgoTac, and a slide-and-squeeze unit, called CUFF, were evaluated in a limbs postural correction setting. Their capability of providing single-joint (shoulder or knee) and multi-joint (shoulder and knee at once) guidance was compared in twelve healthy subjects, using quantitative task-related metrics and subjective quantitative evaluation. An integrated environment was also built to ease communication and data sharing between the involved sensor and feedback systems. Results show good acceptability and intuitiveness for both devices. ErgoTac appeared to be the more suitable feedback device for the shoulder, while the CUFF may be the more effective solution for the knee. This comparative study, although preliminary, was propaedeutic to the potential integration of the two devices for effective whole-body postural corrections, with the aim of developing a feedback and assistive apparatus that increases workers' awareness of risky working conditions and therefore prevents MSDs.
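
    A single-joint guidance cue of the kind compared here can be pictured as a mapping from the deviation between measured and target joint angles to a feedback direction and intensity. The following is a hypothetical sketch; the deadband, gain, and cue encoding are assumptions, not the values used in the study:

```python
def guidance_cue(measured_deg, target_deg, deadband_deg=5.0, gain=0.02):
    """Return a (direction, intensity) correction cue for one joint.

    direction: +1 = increase the joint angle, -1 = decrease it, 0 = no cue.
    intensity: value in [0, 1] driving the haptic unit (vibration amplitude or squeeze).
    Deadband and gain are illustrative, not the study's parameters.
    """
    error = target_deg - measured_deg
    if abs(error) <= deadband_deg:
        return 0, 0.0
    direction = 1 if error > 0 else -1
    intensity = min(abs(error) * gain, 1.0)
    return direction, intensity

# Multi-joint guidance simply evaluates the cue per joint (e.g. shoulder and knee).
print(guidance_cue(measured_deg=110.0, target_deg=90.0))   # -> (-1, 0.4)
```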

    A Self-Tuning Impedance-based Interaction Planner for Robotic Haptic Exploration

    Full text link
    This paper presents a novel interaction planning method that exploits impedance tuning techniques in response to environmental uncertainties and unpredictable conditions using haptic information only. The proposed algorithm plans the robot's trajectory based on the haptic interaction with the environment and adapts planning strategies as needed. Two approaches are considered: an Exploration strategy and a Bouncing strategy. The Exploration strategy takes the actual motion of the robot into account in planning, while the Bouncing strategy exploits the forces and the motion vector of the robot. Moreover, self-tuning impedance is performed according to the planned trajectory to ensure compliant contact and low contact forces. To show the performance of the proposed methodology, two experiments with a torque-controlled robotic arm are carried out. The first considers a maze exploration without obstacles, whereas the second includes obstacles. In both cases, the proposed method's performance is analyzed and compared against previously proposed solutions. Experimental results demonstrate that: i) the robot can successfully plan its trajectory autonomously in the most feasible direction according to the interaction with the environment, and ii) a compliant interaction with an unknown environment is achieved despite the uncertainties. Finally, a scalability demonstration is carried out to show the potential of the proposed method under multiple scenarios.
    Comment: 8 pages, 9 figures; accepted for IEEE Robotics and Automation Letters (RA-L) and IEEE/RSJ International Conference on Intelligent Robots and Systems 202
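
    As a rough illustration of the two ideas named above, the sketch below shows a force-driven stiffness adjustment step and a simplified "bouncing"-style direction update. Gains, limits, and the exact update rules are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def tune_stiffness(k_prev, force, f_desired=5.0, k_min=50.0, k_max=800.0, alpha=20.0):
    """One illustrative self-tuning step: lower Cartesian stiffness when the measured
    contact force exceeds the desired level, raise it when contact is light.
    Gains and limits are assumed, not the paper's values."""
    k_new = k_prev - alpha * (np.linalg.norm(force) - f_desired)
    return float(np.clip(k_new, k_min, k_max))

def bouncing_direction(force, motion):
    """Simplified bouncing-style cue: combine the contact-force direction and the
    current motion vector to pick the next exploration direction."""
    f = np.asarray(force, dtype=float)
    v = np.asarray(motion, dtype=float)
    d = v - 2.0 * f / (np.linalg.norm(f) + 1e-9)   # push away from the contact
    return d / (np.linalg.norm(d) + 1e-9)

k = 400.0
k = tune_stiffness(k, force=[12.0, 0.0, 0.0])        # stiffness drops after a hard contact
print(k, bouncing_direction([12.0, 0.0, 0.0], [0.0, 0.1, 0.0]))
```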

    Markerless 3D human pose tracking through multiple cameras and AI: Enabling high accuracy, robustness, and real-time performance

    Full text link
    Tracking 3D human motion in real time is crucial for numerous applications across many fields. Traditional approaches involve attaching artificial fiducial objects or sensors to the body, limiting their usability and comfort of use and consequently narrowing their application fields. Recent advances in Artificial Intelligence (AI) have allowed for markerless solutions. However, most of these methods operate in 2D, while those providing 3D solutions compromise accuracy and real-time performance. To address this challenge and unlock the potential of visual pose estimation methods in real-world scenarios, we propose a markerless framework that combines multi-camera views and 2D AI-based pose estimation methods to track 3D human motion. Our approach integrates a Weighted Least Squares (WLS) algorithm that computes 3D human motion from multiple 2D pose estimations provided by an AI-driven method. The method is integrated within the Open-VICO framework, allowing both simulation and real-world execution. Several experiments have been conducted, showing high accuracy and real-time performance, demonstrating a high level of readiness for real-world applications and the potential to revolutionize human motion capture.
    Comment: 19 pages, 7 figures
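
    A weighted least-squares triangulation of one keypoint from several calibrated cameras can be sketched with a standard DLT-style formulation, with per-view weights taken from the 2D detector's confidence scores. This is a generic sketch under those assumptions, not the paper's exact WLS formulation:

```python
import numpy as np

def wls_triangulate(proj_mats, pixels, confidences):
    """Weighted least-squares triangulation of one keypoint.

    proj_mats:    list of 3x4 camera projection matrices (assumed calibrated).
    pixels:       list of (u, v) 2D detections from the AI pose estimator.
    confidences:  per-detection weights (e.g. network confidence scores).
    Returns the 3D point in the common world frame.
    """
    rows, weights = [], []
    for P, (u, v), w in zip(proj_mats, pixels, confidences):
        P = np.asarray(P, dtype=float)
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
        weights.extend([w, w])
    A = np.asarray(rows) * np.sqrt(np.asarray(weights))[:, None]
    # Homogeneous solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

    Run per keypoint and per frame, this yields a 3D skeleton from any number of synchronized 2D detections, with low-confidence views naturally down-weighted.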

    Towards Autonomous Robotic Valve Turning

    No full text
    In this paper, an autonomous intervention robotic task for learning the skill of grasping and turning a valve is described. To address this challenge, a set of different techniques is proposed, each one realizing a specific task and sending information to the others in a Hardware-in-the-Loop (HIL) simulation. To improve the estimation of the valve position, an Extended Kalman Filter is designed. In addition, an Imitation Learning approach is used to learn the trajectory to be followed by the robotic arm. Furthermore, to perform the task safely, a fuzzy system that generates appropriate decisions is developed. Although the achieved skill is ultimately intended for an Autonomous Underwater Vehicle, as a first step the idea has been tested in a laboratory environment with an available robot and sensor.
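
    A minimal version of the filtering step mentioned above could look as follows, assuming a nearly static valve pose measured directly by a noisy vision sensor; the state, models, and noise levels are illustrative, not those of the paper:

```python
import numpy as np

class ValveEKF:
    """Minimal Extended Kalman Filter sketch for tracking a valve pose estimate.

    State: planar valve position and orientation [x, y, theta]. The motion model
    assumes a (nearly) static valve; measurements are noisy pose detections.
    """

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)
        self.P = np.asarray(P0, dtype=float)
        self.Q = np.asarray(Q, dtype=float)   # process noise covariance
        self.R = np.asarray(R, dtype=float)   # measurement noise covariance

    def predict(self):
        # Static valve: f(x) = x, so the Jacobian F is the identity.
        self.P = self.P + self.Q

    def update(self, z):
        # Direct pose measurement: h(x) = x, Jacobian H = identity.
        H = np.eye(len(self.x))
        y = np.asarray(z, dtype=float) - self.x          # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

ekf = ValveEKF(x0=[0, 0, 0], P0=np.eye(3), Q=1e-4 * np.eye(3), R=0.05 * np.eye(3))
for z in ([0.52, 0.31, 0.10], [0.49, 0.29, 0.12]):
    ekf.predict()
    ekf.update(z)
print(ekf.x)   # smoothed valve pose estimate
```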

    Robot-Assisted Navigation for Visually Impaired through Adaptive Impedance and Path Planning

    Full text link
    This paper presents a framework to navigate visually impaired people through unfamiliar environments by means of a mobile manipulator. The human-robot system consists of three key components: a mobile base, a robotic arm, and the human subject, who is guided by the robotic arm via physical coupling of their hand with the cobot's end-effector. These components, receiving a goal from the user, traverse a collision-free set of waypoints in a coordinated manner, while avoiding static and dynamic obstacles through an obstacle avoidance unit and a novel human guidance planner. To this aim, we also present a leg-tracking algorithm that utilizes 2D LiDAR sensors integrated into the mobile base to monitor the human pose. Additionally, we introduce an adaptive pulling planner responsible for guiding the individual back to the intended path if they veer off course. This is achieved by establishing a target arm end-effector position and dynamically adjusting the impedance parameters in real time through an impedance tuning unit. To validate the framework, we present a set of experiments both in laboratory settings with 12 healthy blindfolded subjects and a proof-of-concept demonstration in a real-world scenario.
    Comment: 7 pages, 7 figures; submitted to IEEE International Conference on Robotics and Automation; for associated video, see https://youtu.be/B94n3QjdnJ
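
    The adaptive pulling idea, stiffer arm impedance and a stronger set-point shift the further the person drifts from the path, can be pictured with a short sketch. The gains, limits, and linear scheduling below are assumptions, not the paper's parameters:

```python
import numpy as np

def pulling_command(human_pos, nearest_waypoint, k_min=50.0, k_max=400.0, d_max=1.0):
    """Illustrative adaptive pulling step.

    Returns a Cartesian stiffness and an end-effector set-point offset that both grow
    with the distance between the guided person and the nearest path waypoint.
    """
    error = np.asarray(nearest_waypoint, dtype=float) - np.asarray(human_pos, dtype=float)
    dist = np.linalg.norm(error)
    ratio = min(dist / d_max, 1.0)
    stiffness = k_min + (k_max - k_min) * ratio          # real-time impedance tuning
    target_offset = error * ratio                         # shift of the arm set-point
    return stiffness, target_offset

print(pulling_command(human_pos=[1.2, 0.3], nearest_waypoint=[1.0, 0.0]))
```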

    Exploring Teleimpedance and Tactile Feedback for Intuitive Control of the Pisa/IIT SoftHand

    Get PDF
    This paper proposes a teleimpedance controller with tactile feedback for more intuitive control of the Pisa/IIT SoftHand. With the aim of realizing a robust, efficient and low-cost hand prosthesis design, the SoftHand is developed based on the motor control principle of synergies, through which the immense complexity of the hand is simplified into distinct motor patterns. Due to the built-in flexibility of the hand joints, as the SoftHand grasps, it follows a synergistic path while allowing grasping of objects of various shapes using only a single motor. The DC motor of the hand incorporates a novel teleimpedance control in which the user's postural and stiffness synergy references are tracked in real time. In addition, for intuitive control of the hand, two tactile interfaces are developed. The first interface (mechanotactile) exploits a disturbance observer that estimates the interaction forces in contact with the grasped object. The estimated interaction forces are then converted and applied to the upper arm of the user via a custom-made pressure cuff. The second interface employs vibrotactile feedback based on surface irregularities and acceleration signals, and is used to provide the user with information about the surface properties of the object as well as with detection of object slippage while grasping. Grasp robustness and intuitiveness of hand control were evaluated in two sets of experiments. Results suggest that incorporating the aforementioned haptic feedback strategies, together with the user-driven compliance of the hand, facilitates the execution of safe and stable grasps, and that a low-cost, robust hand employing hardware-based synergies might be a good alternative to traditional myoelectric prostheses.
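
    A disturbance observer of the kind used for the mechanotactile interface can be sketched as a first-order momentum-based observer on the single drive motor: it estimates the external (interaction) torque from the commanded torque and the measured velocity. Model structure and parameters below are illustrative, not the SoftHand's actual values:

```python
class MotorDisturbanceObserver:
    """First-order momentum-based disturbance observer sketch for a single DC motor."""

    def __init__(self, inertia, damping, gain, dt):
        self.J, self.b, self.K, self.dt = inertia, damping, gain, dt
        self.integral = 0.0      # integral of the modelled momentum derivative
        self.residual = 0.0      # current external-torque estimate

    def step(self, tau_cmd, omega):
        """Update with the commanded torque and measured velocity; return the
        estimated external torque, which converges to the true value with rate K."""
        self.integral += (tau_cmd - self.b * omega + self.residual) * self.dt
        self.residual = self.K * (self.J * omega - self.integral)
        return self.residual

obs = MotorDisturbanceObserver(inertia=1e-4, damping=2e-3, gain=50.0, dt=0.001)
tau_ext_hat = obs.step(tau_cmd=0.02, omega=1.5)   # fed to the pressure-cuff mapping
```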

    A Learning-based Approach to the Real-time Estimation of the Feet Ground Reaction Forces and Centres of Pressure in Humans

    Get PDF
    The feet centres of pressure (CoP) and ground reaction forces (GRF) constitute essential information in the analysis of human motion. Such variables are representative of human dynamic behaviours, in particular when interactions with the external world are in place. Accordingly, in this paper we propose a novel approach for the real-time estimation of the human feet CoP and GRFs, using the whole-body CoP and the human body configuration. The method combines a simplified geometrical model of the whole-body CoP and a learning technique. Firstly, a statically equivalent serial chain (SESC) model, which enables the whole-body CoP estimation, is identified. Then, the estimated whole-body CoP and the simplified body pose information are used for the training and validation of the learning technique. The proposed feet CoP model is first validated experimentally in five subjects. Then, its real-time efficacy is assessed using dynamic data streamed on-line for one selected subject.
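
    The learning stage can be pictured as a multi-output regression from the SESC-estimated whole-body CoP plus simplified pose features to the per-foot CoPs and a load-sharing ratio. The feature/target layout, the synthetic data, and the choice of regressor below are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: [whole-body CoP (2), simplified pose features (6)] ->
# [left-foot CoP (2), right-foot CoP (2), left-foot vertical load share (1)].
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
Y = rng.normal(size=(500, 5))

# Multi-output regressor standing in for the paper's learning technique.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, Y)

# At run time, the same features streamed in real time give the feet CoPs and the load
# split; each foot's vertical GRF is then the body weight scaled by its load share.
feet_estimate = model.predict(X[:1])
```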

    The sensor-based biomechanical risk assessment at the base of the need for revising of standards for human ergonomics

    Get PDF
    Due to the epochal changes introduced by “Industry 4.0”, it is becoming harder to apply the existing approaches for biomechanical risk assessment of manual handling tasks considered within the International Standards for ergonomics, which are used to prevent work-related musculoskeletal disorders (WMSDs). In fact, innovative human-robot collaboration (HRC) systems are widening the range of work motor tasks that cannot be assessed. On the other hand, new sensor-based tools for biomechanical risk assessment could be used both for quantitative “direct instrumental evaluations” and for the “rating of standard methods”, allowing certain improvements over traditional methods. In this light, this Letter aims at highlighting the need for revising the standards for human ergonomics and biomechanical risk assessment by analyzing the WMSDs prevalence and incidence; additionally, the strengths and weaknesses of the traditional methods listed within the International Standards for manual handling activities, and the challenges to be addressed for their revision, are considered. As a representative example, the discussion refers to the lifting of heavy loads, where the revision should include the use of sensor-based tools for biomechanical risk assessment during lifting performed with the use of exoskeletons, by more than one person (team lifting), and when the traditional methods cannot be applied. The wearability of sensing and feedback devices, in addition to human augmentation technologies, allows for increasing workers' awareness of possible risks and enhancing effectiveness and safety during the execution of many manual handling activities.

    RiskStructures: A Design Algebra for Risk-Aware Machines

    Get PDF
    Machines, such as mobile robots and delivery drones, incorporate controllers responsible for a task while handling risk (e.g., anticipating and mitigating hazards, and preventing and alleviating accidents). We refer to machines with this capability as risk-aware machines. Risk awareness includes robustness and resilience, and complicates monitoring (i.e., introspection, sensing, prediction), decision making, and control. From an engineering perspective, risk awareness adds a range of dependability requirements to system assurance. Such assurance mandates a correct-by-construction approach to controller design, based on mathematical theory. We introduce RiskStructures, an algebraic framework for risk modelling intended to support the design of safety controllers for risk-aware machines. Using the concept of a risk factor as a modelling primitive, this framework provides facilities to construct, examine, and assure these controllers. We prove desirable algebraic properties of these facilities, and demonstrate their applicability by using them to specify key aspects of safety controllers for risk-aware automated driving and collaborative robots.
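
    To give an informal flavour of the risk-factor primitive, one can picture each factor as a small state machine whose states are combined across factors to drive the safety controller's mode. The sketch below is purely illustrative; the names, states, and composition rule are assumptions, not the formal RiskStructures definitions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RiskFactor:
    """A risk factor as a tiny state machine: inactive -> active -> mitigated."""
    name: str
    detect: Callable[[dict], bool]      # does the monitored situation activate the factor?
    mitigate: Callable[[dict], bool]    # has the mitigation taken effect?
    state: str = "inactive"

    def step(self, observation: dict) -> str:
        if self.state == "inactive" and self.detect(observation):
            self.state = "active"
        elif self.state == "active" and self.mitigate(observation):
            self.state = "mitigated"
        return self.state

def safety_mode(factors) -> str:
    """Compose factors: any active factor forces a degraded mode (simplified rule)."""
    return "degraded" if any(f.state == "active" for f in factors) else "nominal"

close_human = RiskFactor("human_too_close",
                         detect=lambda o: o["human_dist"] < 0.5,
                         mitigate=lambda o: o["speed"] == 0.0)
close_human.step({"human_dist": 0.3, "speed": 0.2})
print(safety_mode([close_human]))   # -> "degraded"
```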