99 research outputs found
An adaptive compliance Hierarchical Quadratic Programming controller for ergonomic human–robot collaboration
This paper proposes a novel Augmented Hierarchical Quadratic Programming (AHQP) framework for multi-tasking control in Human-Robot Collaboration (HRC), which integrates human-related parameters to optimize ergonomics. The aim is to combine parameters typical of industrial applications (e.g. cycle times, productivity) with those of human comfort (e.g. ergonomics, preference), to identify an optimal trade-off. The augmentation aspect removes the dependency on a fixed end-effector reference trajectory, which instead becomes part of the optimization variables and can be used to define a feasible workspace region in which physical interaction can occur. We then demonstrate that integrating the proposed AHQP in HRC permits the addition of human ergonomics and preferences. To achieve this, we develop a human ergonomics function, based on the mapping of an ergonomics score, that is compatible with the AHQP formulation. This allows the controller to identify the optimal Cartesian pose satisfying the active objectives and constraints, which are now linked to human ergonomics. In addition, we build an adaptive compliance framework that integrates both human preferences and intentions, which are finally tested in several collaborative experiments using the redundant MOCA robot. Overall, we achieve improved human ergonomics and health conditions, aiming at the potential reduction of work-related musculoskeletal disorders.
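The AHQP formulation itself is not reproduced in this abstract. Purely as an illustration of the prioritization idea behind hierarchical QP, the sketch below solves a two-level prioritized least-squares problem: the secondary task is resolved only in the null space of the primary one (the function name and the toy Jacobians are invented for this example):

```python
import numpy as np

def hierarchical_solve(J1, x1_dot, J2, x2_dot):
    """Two-priority task solve: track task 1 exactly (least squares),
    then satisfy task 2 only within the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    q_dot = J1_pinv @ x1_dot                  # primary task solution
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of task 1
    # secondary task resolved in the remaining redundancy only
    q_dot = q_dot + np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ q_dot)
    return q_dot

# a 1-DoF task on a 3-DoF arm leaves redundancy for a secondary objective
J1 = np.array([[1.0, 1.0, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
q_dot = hierarchical_solve(J1, np.array([1.0]), J2, np.array([0.5]))
print(np.allclose(J1 @ q_dot, [1.0]))  # primary task met exactly
```

In a full HQP stack, inequality constraints and further priority levels are handled the same way, each level optimized subject to not disturbing the levels above it.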
Design of an Energy-Aware Cartesian Impedance Controller for Collaborative Disassembly
Human-robot collaborative disassembly is an emerging trend in the sustainable
recycling process of electronic and mechanical products. It requires the use of
advanced technologies to assist workers in repetitive physical tasks and deal
with creaky and potentially damaged components. Nevertheless, when
disassembling worn-out or damaged components, unexpected robot behaviors may
emerge, so harmless and symbiotic physical interaction with humans and the
environment becomes paramount. This work addresses this challenge at the
control level by ensuring safe and passive behaviors in unplanned interactions
and contact losses. The proposed algorithm capitalizes on an energy-aware
Cartesian impedance controller, which features energy scaling and damping
injection, and an augmented energy tank, which limits the power flow from the
controller to the robot. The controller is evaluated in a real-world flawed
unscrewing task with a Franka Emika Panda and is compared to a standard
impedance controller and a hybrid force-impedance controller. The results
demonstrate the high potential of the algorithm in human-robot collaborative
disassembly tasks.
Comment: 7 pages, 6 figures, presented at the 2023 IEEE International
Conference on Robotics and Automation (ICRA). Video available at
https://www.youtube-nocookie.com/embed/SgYFHMlEl0
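The exact tank dynamics are not given in the abstract. The snippet below is a minimal, hypothetical sketch of the power-limiting idea: a virtual tank whose stored energy bounds how much power the controller may deliver to the robot, with a lower bound acting as a passivity guard (all names, limits, and gains are illustrative):

```python
def tank_limited_power(tank_energy, desired_power, dt,
                       e_min=0.1, p_max=5.0):
    """Limit controller-to-robot power flow with a virtual energy tank.
    Power drawn from the tank is clipped to p_max and driven to zero
    when the tank would fall below its lower bound e_min."""
    power = min(desired_power, p_max)          # saturate the power flow
    if tank_energy - power * dt < e_min:       # would deplete the tank
        power = max(0.0, (tank_energy - e_min) / dt)
    return power, tank_energy - power * dt

# a burst of demanded power drains the tank until the guard engages
energy, delivered = 1.0, []
for _ in range(10):
    p, energy = tank_limited_power(energy, desired_power=3.0, dt=0.1)
    delivered.append(p)
print(round(energy, 3))  # → 0.1, the tank never drops below e_min
```

Once the tank is empty, the controller can only dissipate energy (e.g. via damping injection), which is what keeps unplanned contact losses from producing unstable bursts of motion.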
Automatic Interaction and Activity Recognition from Videos of Human Manual Demonstrations with Application to Anomaly Detection
This paper presents a new method to describe spatio-temporal relations
between objects and hands, to recognize both interactions and activities within
video demonstrations of manual tasks. The approach exploits Scene Graphs to
extract key interaction features from image sequences, encoding at the same
time motion patterns and context. Additionally, the method introduces an
event-based automatic video segmentation and clustering, which allows grouping
similar events and also detecting on the fly whether a monitored activity is executed
correctly. The effectiveness of the approach was demonstrated in two
multi-subject experiments, showing the ability to recognize and cluster
hand-object and object-object interactions without prior knowledge of the
activity, as well as matching the same activity performed by different
subjects.
Comment: 8 pages, 8 figures, submitted to IEEE RAS International Symposium on
Robot and Human Interactive Communication (RO-MAN), for associated video see
https://youtu.be/Ftu_EHAtH4
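The scene-graph construction details are not given in the abstract. As an illustrative fragment only, the function below derives a coarse spatial-relation edge label for two detected bounding boxes from their axis-aligned separation (the label set, box format, and threshold are invented):

```python
def spatial_relation(box_a, box_b, touch_thresh=5.0):
    """Classify the relation between two axis-aligned boxes (x1, y1, x2, y2)
    as a coarse scene-graph edge label: overlapping, touching, or apart."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # gap along each axis (negative means the boxes overlap on that axis)
    dx = max(bx1 - ax2, ax1 - bx2)
    dy = max(by1 - ay2, ay1 - by2)
    if dx < 0 and dy < 0:
        return "overlapping"
    if max(dx, dy) <= touch_thresh:
        return "touching"
    return "apart"

hand = (0, 0, 50, 50)
cup = (48, 10, 90, 60)              # overlaps the hand box
print(spatial_relation(hand, cup))  # → overlapping
```

Tracking how such labels change over frames is what turns per-image relations into the event sequence that segmentation and clustering operate on.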
Human-Like Impedance and Minimum Effort Control for Natural and Efficient Manipulation
Humans incorporate and switch between learnt
neuromotor strategies while performing complex tasks. To this end,
kinematic redundancy is exploited in order
to achieve optimized performance. Inspired by the superior
motor skills of humans, in this paper, we investigate a combined
free motion and interaction controller in a certain class of
robotic manipulation. In this bimodal controller, kinematic
degrees of redundancy are adapted according to task-suitable
dynamic costs. The proposed algorithm gives high priority to the
minimum-effort controller while performing point-to-point free-space
movements. Once the robot comes into contact
with the environment, the Tele-Impedance, common mode
and configuration dependent stiffness (CMS-CDS) controller
will replicate the human’s estimated endpoint stiffness and
measured equilibrium position profiles in the slave robotic
arm, in real-time. Results of the proposed controller in contact
with the environment are compared with the ones derived
from Tele-Impedance implemented using torque based classical
Cartesian stiffness control. The minimum-effort and interaction
performance achieved highlights the possibility of adopting
human-like and sophisticated strategies in humanoid robots or
the ones with adequate degrees of redundancy, in order to
accomplish tasks in a certain class of robotic manipulation.
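The abstract does not detail the CMS-CDS law itself. Purely as a sketch of the standard configuration-dependent stiffness idea it builds on, the snippet below maps joint stiffness to endpoint stiffness through the congruence transform K_x = (J K_q^{-1} J^T)^{-1} for a toy 2-link planar arm (link lengths and stiffness values are invented):

```python
import numpy as np

def endpoint_stiffness(q1, q2, l1=0.3, l2=0.3, Kq=np.diag([50.0, 50.0])):
    """Endpoint stiffness of a 2-link planar arm from joint stiffness Kq
    via the congruence map K_x = (J Kq^{-1} J^T)^{-1}."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    return np.linalg.inv(J @ np.linalg.inv(Kq) @ J.T)

# the same joint stiffness yields different endpoint ellipses per posture
Kx_a = endpoint_stiffness(0.2, 1.2)
Kx_b = endpoint_stiffness(0.8, 0.4)
print(np.allclose(Kx_a, Kx_a.T))  # the stiffness matrix stays symmetric
```

This is why exploiting kinematic redundancy matters: changing the arm configuration reshapes the endpoint stiffness ellipse without touching the joint-level gains.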
Pushing in the Dark: A Reactive Pushing Strategy for Mobile Robots Using Tactile Feedback
For mobile robots, navigating cluttered or dynamic environments often
necessitates non-prehensile manipulation, particularly when faced with objects
that are too large, irregular, or fragile to grasp. The unpredictable behavior
and varying physical properties of these objects significantly complicate
manipulation tasks. To address this challenge, this manuscript proposes a novel
Reactive Pushing Strategy. This strategy allows a mobile robot to dynamically
adjust its base movements in real-time to achieve successful pushing maneuvers
towards a target location. Notably, our strategy adapts the robot's motion
based on changes in contact location, obtained through the tactile sensor
covering the base, avoiding dependence on object-related assumptions and on
the object's modeled behavior. The effectiveness of the Reactive Pushing Strategy was initially
evaluated in the simulation environment, where it significantly outperformed
the compared baseline approaches. Following this, we validated the proposed
strategy through real-world experiments, demonstrating the robot's capability to
push objects to the target points located in the entire vicinity of the robot.
In both simulation and real-world experiments, the object-specific properties
(shape, mass, friction, inertia) were altered along with the changes in target
locations to assess the robustness of the proposed method comprehensively.
Comment: 8 pages, 7 figures, submitted to IEEE Robotics and Automation
Letters, for associated video, see https://youtu.be/IuGxlNe246
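The abstract leaves the control law unspecified, so the following is an illustrative heuristic, not the authors' method: the base pushes along the sensed contact direction while turning so the contact bearing rotates toward the target bearing (the function name, gains, and steering rule are all invented):

```python
import math

def reactive_base_velocity(contact_angle, target_angle,
                           v_push=0.2, k_turn=1.0):
    """Toy reactive pushing: push along the sensed contact direction and
    steer so the contact point rotates toward the target bearing
    (angles in radians, expressed in the robot base frame)."""
    # wrapped angular error between target and contact bearings
    error = math.atan2(math.sin(target_angle - contact_angle),
                       math.cos(target_angle - contact_angle))
    vx = v_push * math.cos(contact_angle)  # translational push component
    vy = v_push * math.sin(contact_angle)
    wz = k_turn * error                    # realign contact with target
    return vx, vy, wz

vx, vy, wz = reactive_base_velocity(contact_angle=0.5, target_angle=0.0)
print(round(wz, 2))  # → -0.5, steer so the contact rotates toward the target
```

The appeal of such contact-driven rules is exactly what the abstract stresses: no object model is needed, only where the object currently touches the tactile skin.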
A Unified Architecture for Dynamic Role Allocation and Collaborative Task Planning in Mixed Human-Robot Teams
The growing deployment of human-robot collaborative processes in several
industrial applications, such as handling, welding, and assembly, unfolds the
pursuit of systems which are able to manage large heterogeneous teams and, at
the same time, monitor the execution of complex tasks. In this paper, we
present a novel architecture for dynamic role allocation and collaborative task
planning in a mixed human-robot team of arbitrary size. The architecture
capitalizes on a centralized reactive and modular task-agnostic planning method
based on Behavior Trees (BTs), in charge of actions scheduling, while the
allocation problem is formulated through a Mixed-Integer Linear Program (MILP),
that assigns dynamically individual roles or collaborations to the agents of
the team. Different metrics used as MILP cost allow the architecture to favor
various aspects of the collaboration (e.g. makespan, ergonomics, human
preferences). Human preferences are identified through a negotiation phase, in
which a human agent can accept or refuse to execute the assigned task. In
addition, bilateral communication between humans and the system is achieved
through an Augmented Reality (AR) custom user interface that provides intuitive
functionalities to assist and coordinate workers in different action phases.
The proposed methodology outperforms literature approaches in computational
complexity for industrial-sized jobs and teams (problems with up to 50 actions
and 20 agents in the team, including collaborations, are solved within 1 s).
The different roles allocated as the cost functions change highlight the
flexibility of the architecture with respect to several production
requirements. Finally, a subjective evaluation demonstrates the high usability
level and the suitability for the targeted scenario.
Comment: 18 pages, 20 figures, 2nd round review at Transactions on Robotics
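The paper formulates allocation as a MILP; to keep this sketch solver-free, the toy stand-in below brute-forces the same kind of role assignment on a tiny instance, using makespan as the cost metric (task names and durations are invented):

```python
from itertools import product

def allocate(tasks, agents, duration):
    """Exhaustive stand-in for the MILP: assign each task to one agent,
    minimizing makespan (the maximum per-agent workload).
    duration[(task, agent)] is that agent's execution time for the task."""
    best, best_cost = None, float("inf")
    for assignment in product(agents, repeat=len(tasks)):
        load = {a: 0.0 for a in agents}
        for task, agent in zip(tasks, assignment):
            load[agent] += duration[(task, agent)]
        cost = max(load.values())          # makespan of this assignment
        if cost < best_cost:
            best, best_cost = dict(zip(tasks, assignment)), cost
    return best, best_cost

tasks = ["pick", "weld", "inspect"]
agents = ["human", "robot"]
duration = {("pick", "human"): 4, ("pick", "robot"): 2,
            ("weld", "human"): 6, ("weld", "robot"): 3,
            ("inspect", "human"): 1, ("inspect", "robot"): 5}
plan, makespan = allocate(tasks, agents, duration)
print(plan, makespan)
```

Swapping the cost function, e.g. for a per-(task, agent) ergonomics or preference score, changes the resulting allocation in the same way different MILP costs do in the paper; a real MILP solver is what makes this scale to 50 actions and 20 agents.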
A reduced-complexity description of arm endpoint stiffness with applications to teleimpedance control
Effective and stable execution of a remote manipulation task in an uncertain environment requires that the task force and position trajectories of the slave robot be appropriately commanded. To achieve this goal, in teleimpedance control, a reference command which consists of the stiffness and position profiles of the master is computed and realized by the compliant slave robot in real-time. This highlights the need for a suitable and computationally efficient tracking of the human limb stiffness profile in real-time. In this direction, based on the observations in human neuromotor control which give evidence on the predominant use of the arm configuration in directional adjustments of the endpoint stiffness profile, and the role of muscular co-activations which contribute to a coordinated regulation of the task stiffness in all directions, we propose a novel and computationally efficient model of the arm endpoint stiffness behaviour. Real-time tracking of the human arm kinematics is achieved using an arm triangle monitored by three markers placed at the shoulder, elbow and wrist level. In addition, a co-contraction index is defined using muscular activities of a dominant antagonistic muscle pair. Calibration and identification of the model parameters are carried out experimentally, using perturbation-based arm endpoint stiffness measurements in different arm configurations and co-contraction levels of the chosen muscles. Results of this study suggest that the proposed model enables the master to naturally execute a remote task by modulating the direction of the major axes of the endpoint stiffness and its volume using arm configuration and the co-activation of the involved muscles, respectively
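The paper's calibrated model is not reproduced in the abstract. The snippet below is a simplified stand-in for the two reported effects: the arm geometry (here, the shoulder-to-wrist direction) sets the orientation of the endpoint stiffness ellipse, while the co-contraction index scales its volume (all gains and stiffness values are invented):

```python
import numpy as np

def stiffness_model(shoulder, wrist, cci,
                    k_major=400.0, k_minor=150.0, gain=2.0):
    """Simplified planar endpoint stiffness: the major axis follows the
    shoulder-to-wrist direction (arm configuration sets the geometry),
    while the co-contraction index cci in [0, 1] scales the volume."""
    d = np.asarray(wrist, float) - np.asarray(shoulder, float)
    u = d / np.linalg.norm(d)          # major-axis direction
    v = np.array([-u[1], u[0]])        # orthogonal minor axis
    scale = 1.0 + gain * cci           # co-activation stiffens all axes
    return scale * (k_major * np.outer(u, u) + k_minor * np.outer(v, v))

K_relaxed = stiffness_model([0, 0], [0.5, 0.2], cci=0.0)
K_tense = stiffness_model([0, 0], [0.5, 0.2], cci=0.8)
print(np.linalg.det(K_tense) > np.linalg.det(K_relaxed))  # → True
```

The appeal of such a reduced model for teleimpedance is computational: three markers and one antagonistic EMG pair suffice to stream a full stiffness reference in real time.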
Robot Trajectory Adaptation to Optimise the Trade-off between Human Cognitive Ergonomics and Workplace Productivity in Collaborative Tasks
In hybrid industrial environments, workers' comfort and positive perception of safety are essential requirements for successful acceptance and usage of collaborative robots. This paper proposes a novel human-robot interaction framework in which the robot behaviour is adapted online according to the operator's cognitive workload and stress. The method exploits the generation of B-spline trajectories in the joint space and formulation of a multi-objective optimisation problem to online adjust the total execution time and smoothness of the robot trajectories. The former ensures human efficiency and productivity of the workplace, while the latter contributes to safeguarding the user's comfort and cognitive ergonomics. The performance of the proposed framework was evaluated in a typical industrial task. Results demonstrated its capability to enhance the productivity of the human-robot dyad while mitigating the cognitive workload induced in the worker
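The authors' optimisation variables and weights are not specified in the abstract. As a hedged illustration of the trade-off, the sketch below scalarizes execution time against a smoothness penalty that grows with the operator's stress estimate; the 1/T**3 surrogate (jerk amplitude of a time-scaled path) and all weights are assumptions:

```python
def select_execution_time(stress, t_min=2.0, t_max=10.0,
                          w_time=1.0, k_stress=200.0):
    """Pick a trajectory execution time T trading productivity against
    smoothness: cost = w_time * T + w_s / T**3, where the smoothness
    weight w_s grows with the operator's stress estimate in [0, 1]."""
    w_s = k_stress * stress
    # grid search over candidate execution times
    candidates = [t_min + i * (t_max - t_min) / 100 for i in range(101)]
    return min(candidates, key=lambda T: w_time * T + w_s / T**3)

calm = select_execution_time(stress=0.1)
stressed = select_execution_time(stress=0.9)
print(calm <= stressed)  # higher workload → slower, smoother motion
```

Re-solving this small problem online each time the workload estimate updates is the mechanism by which the robot behaviour adapts during the task.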
Enhancing Human-Robot Collaboration Transportation through Obstacle-Aware Vibrotactile Feedback
Transporting large and heavy objects can benefit from Human-Robot
Collaboration (HRC), increasing the contribution of robots to our daily tasks
and reducing the risk of injuries to the human operator. This approach usually
posits the human collaborator as the leader, while the robot has the follower
role. Hence, it is essential for the leader to be aware of the environmental
situation. However, when transporting a large object, the operator's
situational awareness can be compromised as the object may occlude different
parts of the environment. This paper proposes a novel haptic-based
environmental awareness module for a collaborative transportation framework
that informs the human operator about surrounding obstacles. The robot uses two
LIDARs to detect the obstacles in the surroundings. The warning module alerts
the operator through a haptic belt with four vibrotactile devices that provide
feedback about the location and proximity of the obstacles. By enhancing the
operator's awareness of the surroundings, the proposed module improves the
safety of the human-robot team in co-carrying scenarios by preventing
collisions. Experiments with two non-expert subjects in two different
situations are conducted. The results show that the human partner can
successfully lead the co-transportation system in an unknown environment with
hidden obstacles thanks to the haptic feedback.
Comment: 6 pages, 5 figures, for associated video, see
https://youtu.be/UABeGPIIrH
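As a hedged sketch of such a warning module, the snippet below maps an obstacle's bearing and distance to one of four belt motors and a proximity-scaled vibration intensity (the sector layout, thresholds, and ramp are assumptions, not the paper's design):

```python
import math

def belt_feedback(obstacle_angle, distance, d_warn=2.0, d_min=0.3):
    """Map an obstacle (bearing in radians, distance in meters) to one of
    four belt motors (front/left/back/right) and a vibration intensity
    that ramps from 0 at d_warn up to 1 at d_min."""
    motors = ["front", "left", "back", "right"]
    # rotate by 45 degrees so each motor covers a 90-degree sector
    sector = int(((obstacle_angle + math.pi / 4) % (2 * math.pi))
                 // (math.pi / 2))
    intensity = min(1.0, max(0.0, (d_warn - distance) / (d_warn - d_min)))
    return motors[sector], intensity

print(belt_feedback(0.0, 0.5))  # obstacle dead ahead, strong vibration
```

Keeping the mapping this coarse is deliberate in such interfaces: four well-separated vibration sites are easier to localize on the torso than a finer angular resolution would be.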