Automatic Interaction and Activity Recognition from Videos of Human Manual Demonstrations with Application to Anomaly Detection
This paper presents a new method for describing spatio-temporal relations
between objects and hands, in order to recognize both interactions and activities
within video demonstrations of manual tasks. The approach exploits Scene Graphs
to extract key interaction features from image sequences, encoding motion
patterns and context at the same time. Additionally, the method introduces an
event-based automatic video segmentation and clustering, which makes it possible
to group similar events and to detect on the fly whether a monitored activity is executed
correctly. The effectiveness of the approach was demonstrated in two
multi-subject experiments, showing the ability to recognize and cluster
hand-object and object-object interactions without prior knowledge of the
activity, as well as to match the same activity performed by different
subjects.
Comment: 8 pages, 8 figures, submitted to the IEEE RAS International Symposium on
Robot and Human Interactive Communication (RO-MAN); for the associated video see
https://youtu.be/Ftu_EHAtH4
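To make the scene-graph idea above concrete, the following minimal Python sketch shows how per-frame hand-object relations could be encoded as graph edges and how interaction events could be detected from changes between consecutive graphs; the node names, the distance-based relations, and the threshold are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): encoding per-frame hand-object
# relations as a scene graph and detecting interaction events from changes
# between consecutive graphs. Node names, relations, and the threshold are
# assumptions for demonstration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str          # e.g. "hand", "screwdriver", "box"
    bbox: tuple        # (x, y, w, h) in image coordinates

def relation(a: Node, b: Node, touch_dist: float = 10.0) -> str:
    """Classify the spatial relation between two nodes from their boxes."""
    ax, ay, aw, ah = a.bbox
    bx, by, bw, bh = b.bbox
    # Horizontal/vertical gaps between the two boxes (negative means overlap).
    gap_x = max(bx - (ax + aw), ax - (bx + bw))
    gap_y = max(by - (ay + ah), ay - (by + bh))
    if gap_x < 0 and gap_y < 0:
        return "overlapping"
    if max(gap_x, gap_y) < touch_dist:
        return "touching"
    return "apart"

def scene_graph(nodes: list) -> set:
    """Build the set of (node, relation, node) edges for one frame."""
    edges = set()
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            edges.add((a.name, relation(a, b), b.name))
    return edges

def interaction_events(prev_graph: set, curr_graph: set) -> set:
    """An event is any edge that appeared or disappeared between frames."""
    return prev_graph ^ curr_graph
```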
Design of an Energy-Aware Cartesian Impedance Controller for Collaborative Disassembly
Human-robot collaborative disassembly is an emerging trend in the sustainable
recycling process of electronic and mechanical products. It requires the use of
advanced technologies to assist workers in repetitive physical tasks and deal
with creaky and potentially damaged components. Nevertheless, when
disassembling worn-out or damaged components, unexpected robot behaviors may
emerge, so harmless and symbiotic physical interaction with humans and the
environment becomes paramount. This work addresses this challenge at the
control level by ensuring safe and passive behaviors in unplanned interactions
and contact losses. The proposed algorithm capitalizes on an energy-aware
Cartesian impedance controller, which features energy scaling and damping
injection, and an augmented energy tank, which limits the power flow from the
controller to the robot. The controller is evaluated in a real-world flawed
unscrewing task with a Franka Emika Panda and is compared to a standard
impedance controller and a hybrid force-impedance controller. The results
demonstrate the high potential of the algorithm in human-robot collaborative
disassembly tasks.
Comment: 7 pages, 6 figures, presented at the 2023 IEEE International
Conference on Robotics and Automation (ICRA). Video available at
https://www.youtube-nocookie.com/embed/SgYFHMlEl0
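The energy-tank mechanism mentioned above can be illustrated with a small sketch: a scalar tank state limits the power the impedance controller may inject into the robot, scaling the commanded torques when the tank is close to depletion. The state names, limits, and Euler update below are assumptions for illustration, not the controller proposed in the paper.

```python
# Minimal sketch (assumptions, not the paper's controller): an energy tank
# that gates the power flowing from the impedance controller to the robot.
import numpy as np

def tank_step(x_t, tau_c, q_dot, dt, E_max=2.0, E_min=0.1, P_max=5.0):
    """One discrete update of the tank state x_t.

    tau_c : commanded controller torques
    q_dot : joint velocities
    The power flowing from controller to robot is P = tau_c . q_dot;
    it is clipped to P_max and only allowed while the tank holds energy.
    """
    P = float(np.dot(tau_c, q_dot))          # instantaneous power to the robot
    P = min(P, P_max)                        # limit the power flow
    E = 0.5 * x_t**2                         # tank energy
    if E - P * dt < E_min:                   # tank would deplete: scale the action
        scale = max(0.0, (E - E_min) / max(P * dt, 1e-9))
        tau_c = scale * tau_c
        P = float(np.dot(tau_c, q_dot))
    E = min(E - P * dt, E_max)               # drain or refill, saturate at E_max
    x_t_next = np.sqrt(max(2.0 * E, 0.0))
    return x_t_next, tau_c
```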
A Unified Architecture for Dynamic Role Allocation and Collaborative Task Planning in Mixed Human-Robot Teams
The growing deployment of human-robot collaborative processes in several
industrial applications, such as handling, welding, and assembly, calls for
systems that are able to manage large heterogeneous teams and, at the same
time, monitor the execution of complex tasks. In this paper, we
present a novel architecture for dynamic role allocation and collaborative task
planning in a mixed human-robot team of arbitrary size. The architecture
capitalizes on a centralized, reactive, modular, and task-agnostic planning method
based on Behavior Trees (BTs), in charge of action scheduling, while the
allocation problem is formulated as a Mixed-Integer Linear Program (MILP)
that dynamically assigns individual roles or collaborations to the agents of
the team. Different metrics used as the MILP cost allow the architecture to favor
various aspects of the collaboration (e.g., makespan, ergonomics, human
preferences). Human preferences are identified through a negotiation phase, in
which a human agent can accept or refuse to execute the assigned task. In
addition, bilateral communication between humans and the system is achieved
through an Augmented Reality (AR) custom user interface that provides intuitive
functionalities to assist and coordinate workers in different action phases.
In terms of computational complexity, the proposed methodology outperforms
literature approaches on industrial-sized jobs and teams (problems with up to
50 actions and 20 agents, including collaborations, are solved within 1 s). The
different roles allocated as the cost functions change highlight the flexibility
of the architecture with respect to several production requirements. Finally,
the subjective evaluation demonstrates the high usability level and the
suitability for the targeted scenario.
Comment: 18 pages, 20 figures, 2nd round review at Transactions on Robotics
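As a rough illustration of how a role-allocation MILP of the kind described above might look, the sketch below assigns tasks to agents with binary variables and a makespan-like cost using PuLP; the agents, tasks, and cost table are invented, and the formulation is an assumption rather than the paper's exact model.

```python
# Illustrative sketch of a role-allocation MILP (assumed formulation):
# x[a][t] = 1 if agent a performs task t, minimising the worst-case agent
# workload (a makespan-like cost). Requires PuLP (pip install pulp).
import pulp

agents = ["human", "robot1", "robot2"]
tasks = ["pick", "place", "screw", "inspect"]
# Assumed per-agent execution times; in the paper these would come from task
# models, ergonomics scores, or human preferences.
cost = {
    ("human", "pick"): 4, ("human", "place"): 4, ("human", "screw"): 6, ("human", "inspect"): 2,
    ("robot1", "pick"): 3, ("robot1", "place"): 3, ("robot1", "screw"): 5, ("robot1", "inspect"): 8,
    ("robot2", "pick"): 3, ("robot2", "place"): 3, ("robot2", "screw"): 5, ("robot2", "inspect"): 8,
}

prob = pulp.LpProblem("role_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (agents, tasks), cat=pulp.LpBinary)
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                    # objective: minimise the makespan
for t in tasks:                                     # each task assigned exactly once
    prob += pulp.lpSum(x[a][t] for a in agents) == 1
for a in agents:                                    # each agent's total time bounds the makespan
    prob += pulp.lpSum(cost[(a, t)] * x[a][t] for t in tasks) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
assignment = {t: a for a in agents for t in tasks if x[a][t].value() > 0.5}
print(assignment)
```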
A Passive Variable Impedance Control Strategy with Viscoelastic Parameters Estimation of Soft Tissues for Safe Ultrasonography
In the context of telehealth, robotic approaches have proven a valuable
alternative to in-person visits in remote areas, with decreased costs and
infection risks for patients. In particular, in ultrasonography, robots have the
potential to reproduce the skills required to acquire high-quality images while
reducing the sonographer's physical efforts. In this paper, we address the
control of the interaction of the probe with the patient's body, a critical
aspect of ensuring safe and effective ultrasonography. We introduce a novel
approach based on variable impedance control, allowing real-time optimisation
of the parameters of a compliant controller during ultrasound procedures. This
optimisation is formulated as a quadratic programming problem and incorporates
physical constraints derived from viscoelastic parameter estimations. Safety
and passivity constraints, including an energy tank, are also integrated to
minimise potential risks during human-robot interaction. The proposed method's
efficacy is demonstrated through experiments on a patient dummy torso,
highlighting its potential for achieving safe behaviour and accurate force
control during ultrasound procedures, even in cases of contact loss.
Comment: 7 pages, 7 figures, submitted to ICRA 202
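A minimal sketch of how the variable-impedance optimisation might be posed as a quadratic program is given below, assuming a spring-damper force model and bounds tied to the estimated viscoelastic parameters; the cost, constraints, and numbers are illustrative assumptions, and the paper's passivity and energy-tank constraints are not reproduced here. It uses cvxpy as a generic QP solver.

```python
# Minimal sketch (assumed formulation): choosing a variable stiffness K and
# damping D at each control step by solving a small QP.
# Requires cvxpy (pip install cvxpy).
import cvxpy as cp

def optimise_impedance(e_pos, e_vel, f_des, k_hat, d_hat, k_max=800.0, d_max=60.0):
    """e_pos, e_vel : probe position/velocity error along the contact normal
    f_des          : desired contact force
    k_hat, d_hat   : estimated tissue stiffness/damping (viscoelastic params)
    Returns the stiffness and damping to command at this step."""
    K = cp.Variable(nonneg=True)
    D = cp.Variable(nonneg=True)
    # Predicted controller force with a spring-damper law.
    f_pred = K * e_pos + D * e_vel
    objective = cp.Minimize(cp.square(f_pred - f_des))
    constraints = [
        K <= k_max, D <= d_max,            # actuator / safety bounds (assumed)
        K <= 2.0 * k_hat,                  # keep the commanded stiffness commensurate
        D >= 0.1 * d_hat,                  #   with the estimated tissue parameters
    ]
    cp.Problem(objective, constraints).solve()
    return float(K.value), float(D.value)
```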
When Prolog meets generative models: a new approach for managing knowledge and planning in robotic applications
In this paper, we propose a robot-oriented knowledge management system based
on the use of the Prolog language. Our framework hinges on a special
organisation of the knowledge base that enables: 1. its efficient population from
natural language texts using semi-automated procedures based on Large Language
Models, 2. the bumpless generation of temporal parallel plans for multi-robot
systems through a sequence of transformations, 3. the automated translation of
the plan into an executable formalism (behaviour trees). The framework is
supported by a set of open-source tools and is demonstrated on a realistic
application.
Comment: 7 pages, 4 figures, submitted to ICRA 202
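As an illustration of the last step mentioned above, the sketch below translates a nested sequential/parallel plan into a behaviour-tree structure of Sequence and Parallel composites; the plan format and node classes are invented for demonstration and do not reflect the paper's formalism or tools.

```python
# Illustrative sketch (assumptions only): translating a temporal parallel plan,
# here a nested structure of sequential and parallel steps, into a behaviour
# tree of Sequence and Parallel composites over action leaves.
class Action:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"Action({self.name})"

class Sequence:
    def __init__(self, children):
        self.children = children
    def __repr__(self):
        return f"Sequence({self.children})"

class Parallel:
    def __init__(self, children):
        self.children = children
    def __repr__(self):
        return f"Parallel({self.children})"

def plan_to_bt(plan):
    """plan is either a string (atomic action), ("seq", [...]) or ("par", [...])."""
    if isinstance(plan, str):
        return Action(plan)
    kind, steps = plan
    children = [plan_to_bt(s) for s in steps]
    return Sequence(children) if kind == "seq" else Parallel(children)

# Example: two robots fetch parts in parallel, then assembly follows.
bt = plan_to_bt(("seq", [("par", ["fetch_part_r1", "fetch_part_r2"]), "assemble"]))
print(bt)
```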
Ergonomic and Worker-Centric Human-Robot Collaboration: Strategies, Interfaces and Controllers
An emergent trend in flexible production lines is the attempt to pair human workers with cobots. Robotic agents offer high-precision motions and considerable power capacity, whereas human workers can complement the robots with their superior cognitive capabilities and task understanding. The greatest advantage of this match is achieved not only by allowing cobots to work side by side with humans (coexistence) but by envisioning fully cooperative and, if needed, even collaborative scenarios.
Several studies showed that safe physical coexistence with cobots is not only feasible but could also significantly improve the production process.
The researchers' goal in this field is to provide the cobot with intelligent algorithms and interfaces that allow fruitful collaboration and physical interaction.
The development of a collaborative solution can be split into different levels: at the task level, the specific production process is analysed and decomposed into a sequence of atomic actions. The decomposition is independent of the agents that compose the mixed team. The nature of the team affects the team level, whose strategies tackle problems like role allocation, which defines which agent is in charge of each action, and robotic action planning and scheduling. At the agent level, together with the robot motion control strategy, we need to ensure agent coordination and intuitive interaction.
This thesis aims to address these open problems, focusing on the strategies, interfaces, and controllers for Human-Robot Collaboration (HRC) in industrial environments, such as manufacturing and logistics.
Low-cost Scalable People Tracking System for Human-Robot Collaboration in Industrial Environment
Human-robot collaboration is one of the key elements in the Industry 4.0 revolution, aiming at close and direct collaboration between robots and human workers to reach higher productivity and improved ergonomics. The first step toward this kind of collaboration in the industrial context is the removal of the physical safety barriers usually surrounding standard robotic cells, so that human workers can approach and directly collaborate with robots. However, human safety must be guaranteed by avoiding possible collisions with the robot. In this work, we propose the use of a people tracking algorithm to monitor people moving around a robot manipulator and recognize when a person is too close to the robot while it performs a task. The system is implemented with a camera network positioned around the robot workspace and is thoroughly evaluated in different industry-like settings in terms of both tracking accuracy and detection delay.
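A minimal sketch of the proximity check such a tracking system could feed is shown below: tracked person positions on the floor plane are compared against a safety radius around the robot. The track format, the fixed robot position, and the threshold are assumptions, not the system described above.

```python
# Minimal sketch (illustrative only): flagging when any tracked person comes
# closer to the robot base than a safety radius.
import math

ROBOT_POSITION = (0.0, 0.0)      # robot base in the shared floor frame (metres)
SAFETY_RADIUS = 1.5              # distance below which the robot should slow or stop

def too_close(tracks, robot_xy=ROBOT_POSITION, radius=SAFETY_RADIUS):
    """tracks: dict of track_id -> (x, y) person positions on the floor plane.
    Returns the ids of people inside the safety radius."""
    rx, ry = robot_xy
    return [tid for tid, (x, y) in tracks.items()
            if math.hypot(x - rx, y - ry) < radius]

# Example: person 7 is about 0.9 m away and triggers the proximity flag.
print(too_close({3: (2.5, 1.0), 7: (0.6, 0.7)}))
```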
A Human-Aware Method to Plan Complex Cooperative and Autonomous Tasks using Behavior Trees
This paper proposes a novel human-aware method that generates robot plans for autonomous and human-robot cooperative tasks in industrial environments. We modify the standard Behavior Trees (BTs) formulation in order to take into account action-related costs, and design suitable metrics and cost functions to account for the cooperation with a worker, considering human availability, decisions, and ergonomics. The developed approach allows the robot to adapt its plan online to the human partner by choosing the tasks that minimize the execution cost(s). Through simulations, we first tuned the weights of the cost function for a realistic scenario. Subsequently, the developed method is validated through a proof-of-concept experiment representing the boxing of 4 different objects. The results show that the proposed cost-based BTs, along with the defined costs, enable the robot to react online and plan new tasks according to dynamic changes of the environment, in terms of human presence and intentions. Our results indicate that the proposed solution demonstrates high potential in increasing robot reactivity and flexibility while, at the same time, optimizing the decision-making process according to human actions.
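To illustrate the cost-based selection idea, the sketch below picks the robot's next task by minimising a weighted sum of execution time, human availability, and an ergonomics penalty; the cost terms and weights are invented for demonstration and are not the metrics defined in the paper.

```python
# Illustrative sketch (not the paper's implementation): a cost-based selection
# step in which the robot picks the next task minimising a weighted cost.
def task_cost(task, human_busy, weights=(1.0, 0.5, 0.8)):
    w_time, w_avail, w_ergo = weights
    # If the task needs the human but the human is busy, waiting adds cost.
    waiting = task["needs_human"] and human_busy
    return (w_time * task["duration"]
            + w_avail * (5.0 if waiting else 0.0)
            + w_ergo * task["ergonomic_load"])

def pick_next_task(candidates, human_busy):
    """Return the candidate task with the lowest total cost."""
    return min(candidates, key=lambda t: task_cost(t, human_busy))

candidates = [
    {"name": "box_item_A", "duration": 6.0, "needs_human": False, "ergonomic_load": 0.0},
    {"name": "handover_B", "duration": 3.0, "needs_human": True, "ergonomic_load": 1.0},
]
# With the human busy, the waiting penalty makes the autonomous task cheaper.
print(pick_next_task(candidates, human_busy=True)["name"])   # -> "box_item_A"
```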