A Certified-Complete Bimanual Manipulation Planner
Planning motions for two robot arms to move an object collaboratively is a
difficult problem, mainly because of the closed-chain constraint, which arises
whenever two robot hands simultaneously grasp a single rigid object. In this
paper, we propose a manipulation planning algorithm to bring an object from an
initial stable placement (position and orientation of the object on the support
surface) towards a goal stable placement. The key feature of our algorithm
is that it is certified-complete: for a given object and a given environment,
we provide a certificate that the algorithm will find a solution to any
bimanual manipulation query in that environment whenever one exists. Moreover,
the certificate is constructive: at run-time, it can be used to quickly find a
solution to a given query. The algorithm is tested in software and hardware on
a number of large pieces of furniture.
Comment: 12 pages, 7 figures, 1 table
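One way to picture a constructive certificate (a hypothetical minimal sketch, not the paper's algorithm; all names here are illustrative) is a precomputed graph whose nodes are stable placements and whose edges are transfers accepted by a feasibility test. Any start-goal query is then answered by a path search over the graph, and an empty result certifies that no solution exists for that query.

```python
from collections import deque

# Hypothetical sketch, not the paper's algorithm: precompute a graph over
# stable placements, with edges given by a user-supplied feasibility test.
# The graph acts as a constructive certificate: a breadth-first search
# answers any start-goal query, and None certifies the query is infeasible.
def build_certificate(placements, feasible):
    return {p: [q for q in placements if q != p and feasible(p, q)]
            for p in placements}

def answer_query(graph, start, goal):
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        p = frontier.popleft()
        if p == goal:                       # reconstruct placement sequence
            path = [p]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for q in graph[p]:
            if q not in parent:
                parent[q] = p
                frontier.append(q)
    return None                             # certified: no solution exists
```

For example, with placements "upright", "side", and "flat", and transfers allowed only between upright/side and side/flat, a query from "upright" to "flat" returns the sequence through "side", while a query to an isolated placement returns None.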
Differentiable Algorithm Networks for Composable Robot Learning
This paper introduces the Differentiable Algorithm Network (DAN), a
composable architecture for robot learning systems. A DAN is composed of neural
network modules, each encoding a differentiable robot algorithm and an
associated model; and it is trained end-to-end from data. DAN combines the
strengths of model-driven modular system design and data-driven end-to-end
learning. The algorithms and models act as structural assumptions to reduce the
data requirements for learning; end-to-end learning allows the modules to adapt
to one another and compensate for imperfect models and algorithms, in order to
achieve the best overall system performance. We illustrate the DAN methodology
through a case study on a simulated robot system, which learns to navigate in
complex 3-D environments with only local visual observations and an image of a
partially correct 2-D floor map.
Comment: RSS 2019 camera ready. Video is available at
https://youtu.be/4jcYlTSJF4
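The composition idea can be loosely illustrated (a hypothetical minimal sketch, vastly simpler than a DAN; the module names are invented): two parametrized differentiable modules are chained, gradients flow through both via the chain rule, and joint training lets each module compensate for the other.

```python
# Hypothetical minimal sketch of end-to-end training through composed
# differentiable modules (far simpler than a DAN): each module exposes
# forward() and backward(), and gradients flow through the whole chain.
class Scale:                         # "model"-like module with parameter w
    def __init__(self, w): self.w = w
    def forward(self, x):
        self.x = x
        return self.w * x
    def backward(self, g):           # chain rule: store dL/dw, return dL/dx
        self.gw = g * self.x
        return g * self.w

class Shift:                         # "algorithm"-like module with parameter b
    def __init__(self, b): self.b = b
    def forward(self, x): return x + self.b
    def backward(self, g):
        self.gb = g
        return g

scale, shift = Scale(0.0), Shift(0.0)
x, target, lr = 2.0, 7.0, 0.05
for _ in range(200):                 # gradient descent on squared error
    y = shift.forward(scale.forward(x))
    scale.backward(shift.backward(2.0 * (y - target)))
    scale.w -= lr * scale.gw
    shift.b -= lr * shift.gb
```

After training, the composed output for x = 2.0 converges to the target 7.0, with the two parameters jointly absorbing the error rather than being tuned in isolation.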
An intelligent tutoring simulator for robotic operations: application to the Canadarm on the International Space Station
This thesis aims to develop an intelligent tutoring simulator for learning robotic manipulations, applicable to the Canadian robotic arm on the International Space Station. The simulator, called Roman Tutor, is a proof of concept for an autonomous, continuous-learning simulator for complex robotic manipulations. Such a concept is particularly relevant for future space missions to Mars or the Moon, even though the Canadarm itself is too complex to be suitable for those missions. Demonstrating that a simulator can, to some extent, give feedback similar to a human instructor's could inspire similar concepts for the simpler robots that will be used in upcoming space missions. Building this prototype involves developing and integrating three original components: first, a path planner for dynamic environments with both hard and soft constraints; second, an automatic task-demonstration generator, which uses the path planner to find a trajectory that solves an arm-displacement task and uses animation-planning techniques to film the resulting solution; and third, a pedagogical model implementing intervention strategies to assist an operator manipulating the SSRMS. The assistance Roman Tutor provides relies on task demonstrations produced by the automatic generator, and on the path planner to track the operator's progress on a task, offer help, and correct the operator when needed.
Advancing Robot Autonomy for Long-Horizon Tasks
Autonomous robots have real-world applications in diverse fields, such as
mobile manipulation and environmental exploration, and many such tasks benefit
from a hands-off approach in terms of human user involvement over a long task
horizon. However, the level of autonomy achievable by a deployment is limited
in part by the problem definition or task specification required by the system.
Task specifications often require technical, low-level information that is
unintuitive to describe and may result in generic solutions, burdening the user
technically both before and after task completion. In this thesis, we aim to
advance task specification abstraction toward the goal of increasing robot
autonomy in real-world scenarios. We do so by tackling problems that address
several different angles of this goal. First, we develop a method for
automatically discovering optimal transition points between subtasks in the
context of constrained mobile manipulation, removing the need for a human to
hand-specify these in the task specification. We further propose a way to
describe constraints on robot motion automatically from demonstration data
rather than from manually defined constraints. Then, within the context of
environmental exploration, we propose a flexible task specification framework
that requires from the user only a set of quantiles of interest, allowing the
robot to directly suggest locations in the environment for the user to study.
We next systematically study the effect of including a robot team in the task
specification and show that multirobot teams can improve performance under
certain specification conditions, including when inter-robot communication is
enabled. Finally, we propose methods for a communication protocol that
autonomously selects useful but limited information to share with the other
robots.
Comment: PhD dissertation. 160 pages
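A quantile-based specification can be loosely sketched as follows (a hypothetical illustration, not the thesis's method; all function names are invented): the user supplies only quantiles of interest, and the robot suggests the measured locations whose values best match those quantiles of the observed distribution.

```python
# Hypothetical sketch of a quantile-based task specification (illustrative
# only): the user supplies quantiles of interest, and the robot suggests
# the measured locations whose values best match those quantiles.
def quantile_value(sorted_vals, q):
    # linear interpolation between order statistics
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (idx - lo) * (sorted_vals[hi] - sorted_vals[lo])

def suggest_locations(samples, quantiles):
    # samples: list of (location, measured_value) pairs
    vals = sorted(v for _, v in samples)
    suggestions = []
    for q in quantiles:
        target = quantile_value(vals, q)
        loc, _ = min(samples, key=lambda s: abs(s[1] - target))
        suggestions.append(loc)
    return suggestions

samples = [((0, 0), 0.1), ((1, 0), 0.5), ((0, 1), 0.9), ((1, 1), 0.3)]
print(suggest_locations(samples, [0.0, 0.5, 1.0]))
# → [(0, 0), (1, 0), (0, 1)]
```

Here the user never specifies coordinates or low-level goals: asking for the 0th, 50th, and 100th percentiles yields the sites with the minimum, median, and maximum measured values.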