Teams organization and performance analysis in autonomous human-robot teams
This paper proposes a theory of human control of robot teams based on how people coordinate across different task allocations. Our current work focuses on domains such as foraging, in which robots perform largely independent tasks. The present study addresses the interaction between automation and the organization of human teams in controlling large robot teams performing an Urban Search and Rescue (USAR) task. We identify three subtasks: perceptual search (visual search for victims), assistance (teleoperation to assist a robot), and navigation (path planning and coordination). For the studies reported here, navigation was selected for automation because it involves weak dependencies among robots, making it more complex, and because an earlier experiment showed it to be the most difficult subtask. This paper reports an extended analysis of two conditions from a larger four-condition study. In these two "shared pool" conditions, twenty-four simulated robots were controlled by teams of two participants. Sixty paid participants (30 teams) were recruited to perform the shared pool tasks, in which participants shared control of the 24 UGVs and viewed the same screens. Teams in the manual control condition issued waypoints to navigate their robots; in the autonomy condition, robots generated their own waypoints using distributed path planning. We identify three self-organizing team strategies in the shared pool conditions: joint control, in which operators share full authority over the robots; mixed control, in which one operator takes primary control while the other acts as an assistant; and split control, in which operators divide the robots, each controlling a sub-team. Automating path planning improved system performance, and the effects of team organization favored operator teams that shared authority for the pool of robots. © 2010 ACM
Simultaneous Learning of Contact and Continuous Dynamics
Robotic manipulation can greatly benefit from the data efficiency,
robustness, and predictability of model-based methods if robots can quickly
generate models of novel objects they encounter. This is especially difficult
when effects like complex joint friction lack clear first-principles models and
are usually ignored by physics simulators. Further, numerically-stiff contact
dynamics can make common model-building approaches struggle. We propose a
method to simultaneously learn contact and continuous dynamics of a novel,
possibly multi-link object by observing its motion through contact-rich
trajectories. We formulate a system identification process with a loss that
infers unmeasured contact forces, penalizing their violation of physical
constraints and laws of motion given current model parameters. Our loss is
unlike prediction-based losses used in differentiable simulation. Using a new
dataset of real articulated object trajectories and an existing cube toss
dataset, our method outperforms differentiable simulation and end-to-end
alternatives with greater data efficiency. See our project page for code,
datasets, and media: https://sites.google.com/view/continuous-contact-nets/home
Comment: 13 pages, 5 figures. Accepted to the Conference on Robot Learning (CoRL)
2023. Project webpage with code, datasets, media, and OpenReview link at
https://sites.google.com/view/continuous-contact-nets/home
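As an editor's illustrative aside: the abstract describes a system-identification loss that infers unmeasured contact forces and penalizes their violation of physical constraints and the laws of motion under the current model. A minimal sketch of that idea for a single point contact is given below; the function name, the ridge regularization, and the simple penalty terms are assumptions, not the authors' formulation.

```python
import numpy as np

def violation_loss(q_ddot_obs, M, bias, J_c, mu=0.8, reg=1e-3):
    """Infer an unmeasured contact force that best explains the observed
    acceleration under the current model parameters, then penalize (a) the
    remaining dynamics residual and (b) violations of physical constraints
    (non-negative normal force, Coulomb friction cone).

    q_ddot_obs : observed generalized acceleration          (n,)
    M          : current-model mass matrix                  (n, n)
    bias       : Coriolis/gravity/actuation terms           (n,)
    J_c        : contact Jacobian, rows = [normal; tangent] (2, n)
    """
    # Least-squares estimate of the contact force lambda from
    # M qdd + bias = J_c^T lambda (ridge-regularized for numerical stability).
    A = J_c.T                                   # (n, 2)
    b = M @ q_ddot_obs + bias                   # (n,)
    lam = np.linalg.solve(A.T @ A + reg * np.eye(2), A.T @ b)

    lam_n, lam_t = lam[0], lam[1]
    dynamics_residual = A @ lam - b

    # Physics violations: the normal force must push (>= 0) and the
    # tangential force must stay inside the friction cone |lam_t| <= mu*lam_n.
    pulling_violation = max(0.0, -lam_n)
    cone_violation = max(0.0, abs(lam_t) - mu * max(lam_n, 0.0))

    return (np.sum(dynamics_residual ** 2)
            + pulling_violation ** 2
            + cone_violation ** 2)
```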
Robotic system for garment perception and manipulation
International Mention in the doctoral degree. Garments are a key element of people’s daily lives, as many
domestic tasks, such as laundry, revolve around them. Performing
these generally dull and repetitive tasks means devoting
many hours of unpaid labor to them, hours that could be freed
through automation. Automating such tasks, however, has traditionally
been hard due to the deformable nature of garments, which
adds challenges to those already present in
object perception and manipulation. This thesis presents
a Robotic System for Garment Perception and Manipulation
that intends to address these challenges.
The laundry pipeline, as defined in this work, is composed
of four independent but sequential tasks: hanging, unfolding,
ironing, and folding. The aim of this work is the automation of
this pipeline through a robotic system able to work in domestic
environments as a robot household companion.
Laundry starts by washing the garments, which then need to
be dried, frequently by hanging them. As hanging is a complex
task requiring bimanipulation skills and dexterity, a simplified
approach is followed in this work as a starting point: a deep
convolutional neural network and a custom synthetic dataset
are used to study whether a robot can predict if a garment will
remain hung when dropped over a hanger, as a first step towards
a more complex controller.
After the garment is dry, it has to be unfolded to ease recognition
of its garment category for the next steps. The presented
model-less unfolding method uses only color and depth information
from the garment to determine the grasp and release
points of an unfolding action, which is repeated iteratively until
the garment is fully spread.
Before storage, wrinkles have to be removed from the garment.
For that purpose, a novel ironing method is proposed
that uses a custom wrinkle descriptor to locate the most prominent
wrinkles and generate a suitable ironing plan. The method
does not require precise control of the lighting conditions of
the scene, and it is able to iron using unmodified ironing tools
through a force-feedback-based controller.
Finally, the last step is to fold the garment to store it. One
key aspect when folding is to perform each folding operation precisely, as errors accumulate when several
folds are required. A neural folding controller is proposed that
uses visual feedback of the current garment shape, extracted
through a deep neural network trained with synthetic data, to
perform a fold accurately.
All the methods presented for the laundry pipeline
tasks have been validated experimentally on different robotic
platforms, including a full-body humanoid robot.

Clothes are a key element in people’s daily lives, not only as garments to wear, but also because many of the domestic tasks a person must carry out every day, such as doing the laundry, require interacting with them. These often tedious and repetitive tasks demand a large number of hours of unpaid work, which could be reduced through automation. Automating them, however, has traditionally been a challenge due to the deformable nature of garments, which adds to the difficulties already present when performing object perception and manipulation with robots. This thesis presents a robotic system oriented towards garment perception and manipulation that aims to address these challenges.

Laundry is a domestic task composed of several subtasks carried out sequentially. In this work, these subtasks are defined as hanging, unfolding, ironing, and folding. The objective of this work is to automate these tasks through a robotic system able to operate in domestic environments as a robotic household assistant.

Laundry begins by washing the garments, which then have to be dried, generally by hanging them outdoors, before the remaining subtasks can be performed with them. Hanging clothes is a complex task that requires bimanipulation and great dexterity when handling the garment. For this reason, a simplified version of the hanging task is addressed in this work as a starting point for more advanced research in the future. Using a deep convolutional neural network and a synthetic training dataset, a study was carried out on a robot's ability to predict the outcome of dropping a garment onto a hanger. This study, which serves as a first step towards a more advanced controller, resulted in a model able to predict whether the garment will remain hung from a depth image of the garment in the pose from which it will be dropped.

Once the garments are dry, and to ease their recognition by the robot for the following tasks, each garment must be unfolded. The unfolding method proposed in this work does not require a prior model of the garment and uses only depth and color information, obtained from an RGB-D sensor, to compute the grasp and release points of an unfolding action. This process is iterative and is repeated until the garment is fully spread out.

Before storing the garment, any wrinkles that arose during washing and drying must be removed. For this purpose, a new ironing algorithm is proposed that uses a wrinkle descriptor developed in this work to locate the most prominent wrinkles and generate an ironing plan suited to the state of the garment. Unlike other existing methods, it can be applied in a domestic environment, since it does not require precise control of the lighting conditions. Moreover, it is able to use the same ironing tools a person would use, without modifying them, through a controller that uses force feedback to apply constant pressure while ironing.

The last step of the laundry pipeline is folding the garment to store it. An important aspect when folding is to execute each of the required folds precisely, since any error made in one fold accumulates when the folding sequence consists of several consecutive folds. To carry out these folds with the required precision, a controller based on a neural network is proposed that uses visual feedback of the garment shape during each folding operation. This feedback is obtained through a deep neural network trained on a synthetic dataset, which estimates the 3D shape of the part to be folded from a monocular image of it.

All the methods described in this thesis have been successfully validated experimentally on several robotic platforms, including a humanoid robot.

Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática, Universidad Carlos III de Madrid. Committee: Chair: Abderrahmane Kheddar. Secretary: Ramón Ignacio Barber Castaño. Member: Karinne Ramírez-Amar
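To make the hanging-prediction step concrete, a minimal sketch of a binary depth-image classifier of the kind described (will the garment stay on the hanger or not) is shown below. The architecture, layer sizes, and training snippet are illustrative assumptions and not the network used in the thesis.

```python
import torch
import torch.nn as nn

class HangPredictor(nn.Module):
    """Binary classifier: given a depth image of a garment above a hanger,
    predict whether it will remain hung after being dropped.
    Architecture is illustrative, not the one used in the thesis."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)      # logit for "will hang"

    def forward(self, depth):             # depth: (B, 1, H, W)
        x = self.features(depth).flatten(1)
        return self.head(x)

# Training on synthetic depth renderings (hypothetical data loader):
# loss = nn.BCEWithLogitsLoss()(model(depth_batch), hang_labels)
```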
Collaborative autonomy in heterogeneous multi-robot systems
As autonomous mobile robots become increasingly connected and widely deployed across different domains, managing multiple robots and their interactions is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or in teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are numerous: significantly higher degrees of situational awareness and environmental understanding can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Sub-T competition.
This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms, and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework to achieve collaborative decision-making in distributed robotic systems.
Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner. I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new formation control algorithms that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning algorithms with UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
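To illustrate the kind of UWB-based localization such systems build on, here is a minimal multilateration sketch: a position is estimated from range measurements to known anchors by nonlinear least squares. This is a generic formulation, not the estimator developed in the thesis; the anchor layout and noise level in the example are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

def uwb_position_fix(anchors, ranges, x0=None):
    """Estimate a 3D position from UWB two-way-ranging distances to known
    anchors by nonlinear least squares (generic multilateration sketch).

    anchors : (N, 3) known anchor positions
    ranges  : (N,)   measured distances to each anchor
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    if x0 is None:
        x0 = anchors.mean(axis=0)          # start from the anchor centroid

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    return least_squares(residuals, x0).x

# Example: four anchors at known positions, noisy ranges to a tag.
anchors = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 3]]
true_pos = np.array([4.0, 6.0, 1.2])
ranges = np.linalg.norm(np.array(anchors) - true_pos, axis=1) \
         + np.random.normal(0, 0.05, 4)
print(uwb_position_fix(anchors, ranges))
```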
Sim2real transfer learning for 3D human pose estimation: motion to the rescue
Synthetic visual data can provide practically infinite diversity and rich
labels, while avoiding ethical issues with privacy and bias. However, for many
tasks, current models trained on synthetic data generalize poorly to real data.
The task of 3D human pose estimation is a particularly interesting example of
this sim2real problem, because learning-based approaches perform reasonably
well given real training data, yet labeled 3D poses are extremely difficult to
obtain in the wild, limiting scalability. In this paper, we show that standard
neural-network approaches, which perform poorly when trained on synthetic RGB
images, can perform well when the data is pre-processed to extract cues about
the person's motion, notably as optical flow and the motion of 2D keypoints.
Therefore, our results suggest that motion can be a simple way to bridge a
sim2real gap when video is available. We evaluate on the 3D Poses in the Wild
dataset, the most challenging modern benchmark for 3D pose estimation, where we
show full 3D mesh recovery that is on par with state-of-the-art methods trained
on real 3D sequences, despite training only on synthetic humans from the
SURREAL dataset.
Comment: Accepted at NeurIPS 2019
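The key preprocessing idea, replacing raw synthetic RGB with motion cues such as dense optical flow, can be sketched as follows. Farneback flow via OpenCV is used here purely as a stand-in; the paper's actual flow estimator and 2D-keypoint pipeline may differ.

```python
import cv2

def motion_cues(frame_prev, frame_next):
    """Pre-process a pair of RGB frames into a motion representation
    (dense optical flow), the kind of cue the paper argues transfers
    better from synthetic to real imagery than raw RGB appearance."""
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        g0, g1, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow   # (H, W, 2) per-pixel displacement, fed to the pose network
```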
Multimodal human machine interactions in industrial environments
This chapter will present a review of Human Machine Interaction techniques for
industrial applications. A set of recent HMI techniques will be provided with
emphasis on multimodal interaction with industrial machines and robots. This list
will include Natural Language Processing techniques and others that make use of
various complementary interfaces: audio, visual, haptic or gestural, to achieve a
more natural human-machine interaction. This chapter will also focus on providing examples and use cases in fields related to multimodal interaction in manufacturing, such as augmented reality. Accordingly, the chapter will present the use of
Artificial Intelligence and Multimodal Human Machine Interaction in the context
of STAR applications.
Robust and Versatile Bipedal Jumping Control through Reinforcement Learning
This work aims to push the limits of agility for bipedal robots by enabling a
torque-controlled bipedal robot to perform robust and versatile dynamic jumps
in the real world. We present a reinforcement learning framework for training a
robot to accomplish a large variety of jumping tasks, such as jumping to
different locations and directions. To improve performance on these challenging
tasks, we develop a new policy structure that encodes the robot's long-term
input/output (I/O) history while also providing direct access to a short-term
I/O history. In order to train a versatile jumping policy, we utilize a
multi-stage training scheme that includes different training stages for
different objectives. After multi-stage training, the policy can be directly
transferred to a real bipedal Cassie robot. Training on different tasks and
exploring more diverse scenarios lead to highly robust policies that can
exploit the diverse set of learned maneuvers to recover from perturbations or
poor landings during real-world deployment. Such robustness in the proposed
policy enables Cassie to succeed in completing a variety of challenging jump
tasks in the real world, such as standing long jumps, jumping onto elevated
platforms, and multi-axis jumps.
Comment: Accepted at Robotics: Science and Systems 2023 (RSS 2023). The
accompanying video is at https://youtu.be/aAPSZ2QFB-
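A minimal sketch of the described policy structure, encoding a long-term I/O history while giving the action head direct access to a short-term window, could look like the following. The LSTM encoder, layer sizes, and names are assumptions; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class DualHistoryPolicy(nn.Module):
    """Policy that encodes a long I/O history with an LSTM while also feeding
    a short recent I/O window directly to the action MLP, mirroring the
    long-term/short-term structure described in the abstract."""
    def __init__(self, io_dim, act_dim, short_len=4, hidden=128):
        super().__init__()
        self.long_encoder = nn.LSTM(io_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + short_len * io_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )
        self.short_len = short_len

    def forward(self, io_history):
        # io_history: (B, T, io_dim) of past observations and actions.
        _, (h_long, _) = self.long_encoder(io_history)          # (1, B, hidden)
        short = io_history[:, -self.short_len:, :].flatten(1)   # direct access
        return self.head(torch.cat([h_long[-1], short], dim=-1))
```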
Real-time control architecture for a multi UAV test bed
The purpose of this thesis is to develop a control architecture running in real time for a multi unmanned aerial vehicle test bed formed by three AscTec Hummingbird mini quadrotors. The reliable and reconfigurable architecture presented here has an FPGA-based embedded system as its main controller. Under the implemented control system, different practical applications have been performed in the MARHES Lab at the University of New Mexico as part of its research in cooperative control of mobile aerial agents. This thesis also covers the quadrotor modeling, the design of a position controller, the real-time architecture implementation, and the experimental flight tests. A hybrid approach combining first principles with system identification techniques is used for modeling the quadrotor due to the lack of information about the structure of the onboard controller designed by AscTec. The complete quadrotor model is formed by a black-box subsystem and a point-mass submodel. Experimental data have been gathered for system identification and black-box submodel validation, while the point-mass submodel is derived by applying rigid-body dynamics. Using the dynamical model, a position control block based on lead-lag and PI compensators is developed and simulated. Improvements in trajectory tracking performance are achieved by estimating the linear velocity of the aerial robot and incorporating velocity lead-lag compensators into the control approach. The velocity of the aerial robot is computed by numerical differentiation of position data. Simulation results for a variety of input signals, with the control block in cascade with the complete dynamic model of the quadrotor, are included. The control block together with the velocity estimation is fully programmed in the embedded controller. A graphical user interface (GUI), developed as part of the architecture, displays real-time position and orientation data streamed from the motion tracking system and provides useful user controls. This GUI allows a single operator to conduct and oversee all aspects of the different applications in which one or multiple quadrotors are used. Experimental tests have helped to tune the control parameters determined by simulation. The performance of the whole architecture has been validated through a variety of practical applications. Autonomous takeoff, hovering and landing, target surveillance, trajectory tracking, and suspended payload transportation are just some of the applications carried out using the real-time control architecture proposed in this thesis.
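Two of the building blocks described, velocity estimation by numerical differentiation of motion-capture positions and a lead-lag compensator, can be sketched as follows. The smoothing constant and the Tustin discretization are generic choices, not necessarily those used in the thesis.

```python
import numpy as np

def estimate_velocity(positions, dt, alpha=0.7):
    """Estimate velocity from motion-capture position samples (T, 3) by
    backward finite differences, smoothed with a first-order low-pass
    filter (the filter constant alpha is an illustrative assumption)."""
    v_raw = np.diff(positions, axis=0) / dt
    v_filt = np.zeros_like(v_raw)
    for k in range(1, len(v_raw)):
        v_filt[k] = alpha * v_filt[k - 1] + (1 - alpha) * v_raw[k]
    return v_filt

class LeadLag:
    """Discrete lead-lag compensator C(s) = K (s + z) / (s + p),
    discretized with the Tustin (bilinear) transform at sample time dt."""
    def __init__(self, K, z, p, dt):
        c = 2.0 / dt
        self.b0 = K * (c + z) / (c + p)
        self.b1 = K * (z - c) / (c + p)
        self.a1 = (p - c) / (c + p)
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, e):
        # Difference equation: u[k] = b0 e[k] + b1 e[k-1] - a1 u[k-1]
        u = self.b0 * e + self.b1 * self.e_prev - self.a1 * self.u_prev
        self.e_prev, self.u_prev = e, u
        return u
```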
Brain-Inspired Spiking Neural Network Controller for a Neurorobotic Whisker System
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bio-inspired spiking neural network model of the sensorimotor peripheral whisker system, modeling the trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Human Brain Project's Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behavior experimentally recorded in mice.
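As a minimal illustration of the spiking building block underlying such models, the snippet below simulates a single leaky integrate-and-fire neuron firing rhythmically under constant drive, the simplest element from which populations such as a central pattern generator can be assembled. Parameter values are illustrative and unrelated to the published whisker model.

```python
import numpy as np

def lif_spike_train(i_drive=1.5, t_sim=0.5, dt=1e-4,
                    tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron under constant
    drive; a supra-threshold drive produces periodic spiking."""
    v = v_rest
    spikes = []
    for step in range(int(t_sim / dt)):
        v += dt / tau * (-(v - v_rest) + i_drive)   # membrane leak + input
        if v >= v_thresh:
            spikes.append(step * dt)                # record spike time (s)
            v = v_reset                             # reset after firing
    return np.array(spikes)

print(lif_spike_train()[:5])   # first few spike times in seconds
```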