61 research outputs found
Desenvolvimento de um sistema robótico de dois braços para imitação gestual (Development of a two-arm robotic system for gesture imitation)
Master's in Industrial Automation Engineering. Research in robotics has been playing an important role in the human-robot interaction
field. This interaction has evolved in several areas such as speech recognition, walking, gesture imitation, exploration and cooperative work. Imitation learning has several advantages over conventional programming methods because it allows the transfer of new skills to the robot through a more natural interaction. The work aims to implement a dual-arm manipulation system able to reproduce human gestures in real time. The robotic arms are fixed to a mechanical structure, similar to the human torso, developed for this purpose. The demonstrations are obtained from a human motion capture system based on the Kinect sensor. The captured movements are reproduced on two Cyton Gamma 1500 robotic arms while respecting physical constraints and workspace limits, as well as avoiding self-collisions and singular configurations. The kinematics study of the robot arms provides the basis for the implementation of kinematic control algorithms. The software development is supported by the Robot Operating System (ROS) framework, following the philosophy of modular and open-ended development. Several experimental tests are conducted to validate the proposed solutions and to evaluate the system's performance in different situations, including those related to physical joint limits, workspace limits, collisions and singularity avoidance.
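The limit handling described above (joint limits, velocity limits) can be illustrated with a minimal pre-command clamping step. This is a sketch of the general technique, not the thesis code; the limit values and 7-DOF layout below are placeholders, not actual Cyton Gamma 1500 specifications:

```python
import numpy as np

# Placeholder limits, NOT the real Cyton Gamma 1500 values.
Q_MIN = np.radians([-150, -105, -150, -105, -150, -105, -150])
Q_MAX = -Q_MIN                  # symmetric limits, for illustration only
QD_MAX = np.radians(60)         # assumed max joint speed [rad/s]
DT = 0.02                       # assumed control period [s]

def safe_target(q_current, q_target):
    """Clamp an imitation target to joint position and velocity limits."""
    q = np.clip(q_target, Q_MIN, Q_MAX)                        # position limits
    step = np.clip(q - q_current, -QD_MAX * DT, QD_MAX * DT)   # velocity limit
    return q_current + step

q_now = np.zeros(7)
q_goal = np.radians([90, 0, 0, 0, 0, 0, 0])
q_cmd = safe_target(q_now, q_goal)  # first limited step toward the goal
```

A real imitation loop would call `safe_target` once per control period, so fast human gestures are followed at the robot's own maximum speed instead of being rejected.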
Real-time gestural control of robot manipulator through Deep Learning human-pose inference
With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror in real time the movements of a human arm or hand.
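As an illustration of how skeletal pose keypoints can drive a mirroring controller, here is a minimal sketch (our own toy function, not the paper's code) that turns three 3D keypoints into an elbow flexion angle a joint controller could track:

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle at the elbow between upper arm and forearm, in radians."""
    u = shoulder - elbow
    v = wrist - elbow
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))  # clip guards rounding noise

# A fully extended arm gives an angle of pi (180 degrees).
a = elbow_angle(np.array([0.0, 0.0, 0.0]),
                np.array([0.3, 0.0, 0.0]),
                np.array([0.6, 0.0, 0.0]))
```

In a real pipeline the three keypoints would come from the pose-estimation network at each frame, and the resulting angle would be filtered before being sent to the cobot.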
Scaled Autonomy for Networked Humanoids
Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework.
The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark competition for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in the work have been tested in the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment.
Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
A Posture Sequence Learning System for an Anthropomorphic Robotic Hand
The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate the perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces and to perform complex interaction with the human operator.
Programming by Demonstration on Riemannian Manifolds
This thesis presents a Riemannian approach to Programming by Demonstration (PbD).
It generalizes an existing PbD method from Euclidean manifolds to Riemannian manifolds.
In this abstract, we review the objectives, methods and contributions of the presented
approach.
OBJECTIVES
PbD aims at providing a user-friendly method for skill transfer between human and
robot. It enables a user to teach a robot new tasks using few demonstrations. In order
to surpass simple record-and-replay, methods for PbD need to ‘understand’ what to
imitate; they need to extract the functional goals of a task from the demonstration data.
This is typically achieved through the application of statistical methods.
The variety of data encountered in robotics is large. Typical manipulation tasks involve
position, orientation, stiffness, force and torque data. These data are not solely
Euclidean. Instead, they originate from a variety of manifolds, curved spaces that are
only locally Euclidean. Elementary operations, such as summation, are not defined on
manifolds. Consequently, standard statistical methods are not well suited to analyze
demonstration data that originate from non-Euclidean manifolds. In order to effectively
extract what-to-imitate, methods for PbD should take into account the underlying geometry
of the demonstration manifold; they should be geometry-aware.
Successful task execution does not solely depend on the control of individual task
variables. By controlling variables individually, a task might fail when one is perturbed
and the others do not respond. Task execution also relies on couplings among task variables.
These couplings describe functional relations which are often called synergies. In
order to understand what-to-imitate, PbD methods should be able to extract and encode
synergies; they should be synergetic.
In unstructured environments, it is unlikely that tasks are found in the same scenario
twice. The circumstances under which a task is executed—the task context—are more
likely to differ each time it is executed. Task context does not only vary during task execution,
it also varies while learning and recognizing tasks. To be effective, a robot should
be able to learn, recognize and synthesize skills in a variety of familiar and unfamiliar
contexts; this can be achieved when its skill representation is context-adaptive.
THE RIEMANNIAN APPROACH
In this thesis, we present a skill representation that is geometry-aware, synergetic and
context-adaptive. The presented method is probabilistic; it assumes that demonstrations
are samples from an unknown probability distribution. This distribution is approximated
using a Riemannian Gaussian Mixture Model (GMM).
Instead of using the ‘standard’ Euclidean Gaussian, we rely on the Riemannian Gaussian—a
distribution akin to the Gaussian, but defined on a Riemannian manifold. A Riemannian
manifold is a manifold—a curved space which is locally Euclidean—that provides
a notion of distance. This notion is essential for statistical methods, as such methods
rely on a distance measure. Examples of Riemannian manifolds in robotics are: the
Euclidean space, which is used for spatial data, forces or torques; the spherical manifolds,
which can be used for orientation data defined as unit quaternions; and Symmetric Positive
Definite (SPD) manifolds, which can be used to represent stiffness and manipulability.
The Riemannian Gaussian is intrinsically geometry-aware. Its definition is based on
the geometry of the manifold, and therefore takes into account the manifold curvature.
In robotics, the manifold structure is often known beforehand. In the case of PbD, it follows
from the structure of the demonstration data. Like the Gaussian distribution, the
Riemannian Gaussian is defined by a mean and covariance. The covariance describes
the variance and correlation among the state variables. These can be interpreted as local
functional couplings among state variables: synergies. This makes the Riemannian
Gaussian synergetic. Furthermore, information encoded in multiple Riemannian Gaussians
can be fused using the Riemannian product of Gaussians. This feature allows us to
construct a probabilistic context-adaptive task representation.
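For reference, the Riemannian Gaussian discussed above is commonly written as follows; this is a standard tangent-space formulation in this line of work, with the usual approximate normalization (our summary, not quoted from the thesis):

```latex
\mathcal{N}_{\mathcal{M}}(\mathbf{x};\,\boldsymbol{\mu},\boldsymbol{\Sigma})
  = \frac{1}{\sqrt{(2\pi)^{d}\,\lvert\boldsymbol{\Sigma}\rvert}}
    \exp\!\left(-\tfrac{1}{2}\,
      \operatorname{Log}_{\boldsymbol{\mu}}(\mathbf{x})^{\top}
      \boldsymbol{\Sigma}^{-1}
      \operatorname{Log}_{\boldsymbol{\mu}}(\mathbf{x})\right)
```

where Log_mu maps a point of the manifold to the tangent space at the mean mu. The covariance Sigma is expressed in that tangent space, which is exactly where the synergies (correlations among state variables) described above are encoded.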
CONTRIBUTIONS
In particular, this thesis presents a generalization of existing methods of PbD, namely
GMM-GMR and TP-GMM. This generalization involves the definition of the Maximum Likelihood
Estimate (MLE), Gaussian conditioning and the Gaussian product for the Riemannian
Gaussian, and the definition of Expectation Maximization (EM) and Gaussian Mixture
Regression (GMR) for the Riemannian GMM. In this generalization, we contributed
by proposing to use parallel transport for Gaussian conditioning. Furthermore, we presented
a unified approach to solve the aforementioned operations using a Gauss-Newton
algorithm. We demonstrated how synergies, encoded in a Riemannian Gaussian, can be
transformed into synergetic control policies using standard methods for the Linear Quadratic
Regulator (LQR). This is achieved by formulating the LQR problem in a (Euclidean) tangent
space of the Riemannian manifold. Finally, we demonstrated how the context-adaptive
Task-Parameterized Gaussian Mixture Model (TP-GMM) can be used for context
inference—the ability to extract context from demonstration data of known tasks.
Our approach is the first attempt at context inference in the light of TP-GMM. Although
effective, we showed that it requires further improvements in terms of speed and reliability.
The efficacy of the Riemannian approach is demonstrated in a variety of scenarios.
In shared control, the Riemannian Gaussian is used to represent control intentions of a
human operator and an assistive system. Doing so, the properties of the Gaussian can
be employed to mix their control intentions. This yields shared-control systems that
continuously re-evaluate and assign control authority based on input confidence. The
context-adaptive TP-GMM is demonstrated in a Pick & Place task with changing pick and
place locations, a box-taping task with changing box sizes, and a trajectory tracking task
typically found in industry.
Smart Camera Robotic Assistant for Laparoscopic Surgery
The cognitive architecture also includes learning mechanisms to adapt the behavior of the robot to the different ways of working of surgeons, and to improve the robot behavior through experience, in a similar way as a human assistant would do.
The theoretical concepts of this dissertation have been validated both through in-vitro experimentation in the medical robotics labs of the University of Malaga and through in-vivo experimentation with pigs in the IACE Center (Instituto Andaluz de Cirugía Experimental), performed by expert surgeons. In the last decades, laparoscopic surgery has become a daily practice in operating rooms worldwide, and its evolution is tending towards less invasive techniques. In this scenario, robotics has found a wide field of application, from slave robotic systems that replicate the movements of the surgeon to autonomous robots able to assist the surgeon in certain maneuvers or to perform autonomous surgical tasks. However, these systems require the direct supervision of the surgeon, and their capacity for making decisions and adapting to dynamic environments is very limited.
This PhD dissertation presents the design and implementation of a smart camera robotic assistant to collaborate with the surgeon in a real surgical environment. First, it presents the design of a novel camera robotic assistant able to augment the capacities of current vision systems. This robotic assistant is based on an intra-abdominal camera robot, which is completely inserted into the patient's abdomen and can be freely moved along the abdominal cavity by means of magnetic interaction with an external magnet. To provide the camera with autonomy of motion, the external magnet is coupled to the end effector of a robotic arm, which controls the shift of the camera robot along the abdominal wall. In this way, the robotic assistant proposed in this dissertation has six degrees of freedom, which allow it to provide a wider field of view than traditional vision systems and to offer different perspectives of the operating area.
On the other hand, the intelligence of the system is based on a cognitive architecture specially designed for autonomous collaboration with the surgeon in real surgical environments. The proposed architecture simulates the behavior of a human assistant, with a natural and intuitive human-robot interface for the communication between the robot and the surgeon.
Design, modeling and implementation of a soft robotic neck for humanoid robots
International Mention in the doctoral degree. Soft humanoid robotics is an emerging field that combines the flexibility and safety of soft
robotics with the form and functionality of humanoid robotics. This thesis explores the potential
for collaboration between these two fields with a focus on the development of soft joints for the
humanoid robot TEO. The aim is to improve the robot’s adaptability and movement, which are
essential for an efficient interaction with its environment.
The research described in this thesis involves the development of a simple and easily transportable
soft robotic neck for the robot, based on a 2 Degree of Freedom (DOF) Cable Driven
Parallel Mechanism (CDPM). For its final integration into TEO, the proposed design is later
refined, resulting in an efficiently scaled prototype able to withstand significant payloads.
The nonlinear behaviour of the joints, due mainly to the elastic nature of their soft links,
makes their modeling a challenging issue, which is addressed in this thesis from two perspectives:
first, the direct and inverse kinematic models of the soft joints are analytically studied,
based on CDPM mathematical models; second, a data-driven system identification is performed
based on machine learning techniques. Both approaches are deeply studied and compared, both
in simulation and experimentally.
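As an illustration of the analytical CDPM-based inverse kinematics mentioned above, here is a toy model with invented geometry (not the actual dimensions of the TEO neck prototype): for a given platform orientation, each cable length is the distance between its base anchor and its rotated platform anchor.

```python
import numpy as np

# Invented geometry, for illustration only.
R_ANCHOR = 0.05    # cable anchor radius on base and platform [m]
H = 0.10           # nominal neck height [m]
ANGLES = np.radians([0, 120, 240])   # three cables, 120 degrees apart
BASE = np.array([[R_ANCHOR * np.cos(a), R_ANCHOR * np.sin(a), 0.0] for a in ANGLES])
PLAT = BASE.copy()                   # matching anchors on the moving platform

def rot_xy(pitch, roll):
    """Platform rotation: pitch about x, then roll about y."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return ry @ rx

def cable_lengths(pitch, roll):
    """Inverse kinematics: cable lengths for a platform orientation."""
    top = (rot_xy(pitch, roll) @ PLAT.T).T + np.array([0.0, 0.0, H])
    return np.linalg.norm(top - BASE, axis=1)

l0 = cable_lengths(0.0, 0.0)   # neutral pose: every cable spans the height H
```

The direct kinematics (orientation from measured cable lengths) has no such closed form for the soft links, which is one motivation for the data-driven identification the thesis compares against.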
In addition to the soft neck, this thesis also addresses the design and prototyping of a soft
arm capable of handling external loads. The proposed design is also tendon-driven and has a
morphology with two main bending configurations, which provides more versatility compared
to the soft neck.
In summary, this work contributes to the growing field of soft humanoid robotics through
the development of soft joints and their application to the humanoid robot TEO, showcasing the
potential of soft robotics to improve the adaptability, flexibility, and safety of humanoid robots.
The development of these soft joints is a significant achievement, and the research presented in this thesis paves the way for further exploration and development in this field.
Doctoral Program in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: Cecilia Elisabet García Cena; Secretary: Dorin Sabin Copaci; Member: Martin Fodstad Stole.
Study of Control Strategies for Robot Ball Catching
The thesis studies a possible scenario for catching a ball with a robotic arm using available technologies, considering two main problems: studying different control strategies for the robotic arm in order to catch the ball (predictive and prospective control), and implementing a ROS simulator of the real robot, including a vision system that recognizes and tracks the ball using the Microsoft Kinect sensor, with several simulations.
Humanoid Robots
For many years, human beings have tried in every way to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not totally satisfactory. However, with increasing technological advances based on theoretical and experimental research, man has managed, in a way, to copy or imitate some systems of the human body. This research was intended not only to create humanoid robots, a great part of them constituting autonomous systems, but also, in some way, to offer deeper knowledge of the systems that form the human body, with a view to possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics and Cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by various researchers worldwide, seeking to analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision and locomotion.
Reasoning about space for human-robot interaction
Human-Robot Interaction is a research area that has been growing exponentially in recent years. This brings new challenges to the robot's geometric reasoning and space-sharing abilities. The robot should not only reason on its own capacities but also consider the actual situation by looking through the human's eyes, thus "putting itself into the human's perspective".
In humans, the "visual perspective taking" ability begins to appear by 24 months of age and is used to determine if another person can see an object or not. The implementation of this kind of social abilities will improve the robot's cognitive capabilities and will help the robot to perform a better interaction with human beings.
In this work, we present a geometric spatial reasoning mechanism that employs psychological concepts of "perspective taking" and "mental rotation" in two general frameworks:
- Motion planning for human-robot interaction: where the robot uses "egocentric perspective taking" to evaluate several configurations where the robot is able to perform different tasks of interaction.
- A face-to-face human-robot interaction: where the robot uses the human's perspective as a geometric tool to understand human attention and intention in order to perform cooperative tasks.
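The "visual perspective taking" ability described above can be caricatured as a field-of-view test: is an object inside the human's gaze cone? This sketch is our own simplification (no occlusion handling), not the thesis implementation:

```python
import numpy as np

FOV_HALF_ANGLE = np.radians(60)   # assumed horizontal half field of view

def human_can_see(head_pos, gaze_dir, obj_pos):
    """True if obj_pos lies within the human's gaze cone (ignores occlusion)."""
    to_obj = obj_pos - head_pos
    to_obj = to_obj / np.linalg.norm(to_obj)
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    return float(np.dot(gaze, to_obj)) >= np.cos(FOV_HALF_ANGLE)

head = np.array([0.0, 0.0, 1.6])           # head at 1.6 m height
gaze = np.array([1.0, 0.0, 0.0])           # looking along +x
visible = human_can_see(head, gaze, np.array([2.0, 0.0, 1.6]))   # in front
hidden = human_can_see(head, gaze, np.array([-2.0, 0.0, 1.6]))   # behind
```

A full perspective-taking system would additionally cast rays against the scene geometry to detect occluding obstacles, as the thesis's geometric reasoner does from the human's viewpoint.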