End-to-End Learning in Autonomous Driving Systems
Convolutional neural networks have advanced visual perception significantly in recent years. Two major ingredients behind this success are the composition of simple modules into a complex network and end-to-end optimization. However, this success has not yet revolutionized robotics as much as vision, even though robotics suffers from problems similar to those of traditional computer vision, namely the imperfection of manually designed pipelines. This thesis investigates end-to-end learning for autonomous driving, a concrete robotic application. End-to-end learning can produce reasonable driving behaviors, even in complex urban driving scenarios. Representation learning in end-to-end driving models is crucial, and auxiliary vision tasks such as semantic segmentation help form a more informative driving representation, especially when training data is limited. Naive convolutional neural networks are usually capable only of reactive control and cannot carry out complex reasoning about a particular scenario. This thesis therefore also studies how to handle scene-conditioned driving behavior, which goes beyond the capability of reactive control. Alongside the end-to-end structure, the learning method also plays a critical role. Imitation learning acquires meaningful behaviors, but the robot usually cannot master the skill. Reinforcement learning, on the contrary, either barely learns anything when the environment is too complex, or masters the skill otherwise. To get the best of both worlds, this thesis proposes an algorithmically unified method to learn from both demonstration data and the environment.
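The closing idea — a single update rule that learns from both demonstrations and the environment — can be sketched in miniature. The following numpy sketch mixes a behavioral-cloning term with an advantage-weighted regression term on a linear policy; the mixing weight `beta` and the specific weighting scheme are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def mixed_update(W, demo_obs, demo_act, env_obs, env_act, env_adv,
                 lr=0.1, beta=0.5):
    """One gradient step on a linear policy a = W @ o that mixes an
    imitation (behavioral-cloning) loss on demonstration data with an
    advantage-weighted regression term on environment rollouts.
    Names and the mixing scheme are illustrative assumptions."""
    # Imitation term: squared error to demonstrated actions.
    grad_il = (W @ demo_obs.T - demo_act.T) @ demo_obs / len(demo_obs)
    # Environment term: regress toward actions taken in the environment,
    # weighted by their (clipped, non-negative) advantage estimates.
    w = np.clip(env_adv, 0.0, None)[None, :]
    grad_rl = ((W @ env_obs.T - env_act.T) * w) @ env_obs / max(w.sum(), 1e-8)
    return W - lr * ((1 - beta) * grad_il + beta * grad_rl)
```

With `beta = 0` this reduces to pure behavioral cloning; with `beta = 1` it uses only environment experience, so a single knob trades off the two data sources.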
Humanoid Robots
For many years, humans have tried in every way to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not entirely satisfactory. However, with growing technological advances grounded in theoretical and experimental research, we have managed, to some degree, to copy or imitate certain systems of the human body. This research aims not only to create humanoid robots, a great part of them constituting autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with possible applications in rehabilitation technology for human beings, bringing together studies related not only to robotics but also to biomechanics, biomimetics, cybernetics, and other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.
Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning
Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate the complex luminal structures of the body, reducing invasiveness and improving patient outcomes. One of the major challenges in this field is accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, control of the instrument must be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while the surgeon retains overall control of the procedure. This approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments on simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
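The combination of RL-driven subtask automation with a safety layer that blocks risky actions can be illustrated with a toy example. The sketch below runs tabular Q-learning on a one-dimensional "lumen" navigation task where a hard action mask forbids listed (state, action) pairs; it is an illustrative stand-in for the thesis's DRL and safety framework, and every name and parameter here is an assumption.

```python
import numpy as np

def q_learn_masked(n_states=6, goal=5, unsafe=frozenset({(0, 0)}),
                   episodes=300, seed=0):
    """Tabular Q-learning on a toy 1-D lumen with a hard safety mask:
    (state, action) pairs in `unsafe` are never executed. Illustrative
    only -- not the thesis's actual DRL method or safety framework."""
    rng = np.random.default_rng(seed)
    moves = [-1, +1]                      # action 0: retract, action 1: advance
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # Safety layer: only consider actions the mask allows.
            allowed = [a for a in range(2) if (s, a) not in unsafe]
            if rng.random() < 0.2:        # epsilon-greedy exploration
                a = int(rng.choice(allowed))
            else:
                a = max(allowed, key=lambda i: Q[s, i])
            s2 = min(max(s + moves[a], 0), n_states - 1)
            r = 1.0 if s2 == goal else -0.01   # reward only at the target
            Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
            s = s2
            if s == goal:
                break
    return Q
```

Because the mask filters actions before both exploration and the greedy choice, the unsafe action is never tried, so its Q-value is never even updated — the "formal guarantee" here is simply that masked actions cannot be executed.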
Visual Dynamics Models for Robotic Planning and Control
For a robot to interact with its environment, it must perceive the world and understand how the world evolves as a consequence of its actions. This thesis studies a few methods that a robot can use to respond to its observations, with a focus on instances that can leverage visual dynamics models. In general, these are models of how the visual observations of a robot evolve as a consequence of its actions. Such a model can take the form of a predictive model that directly predicts the future in the space of image pixels, in the space of visual features extracted from those images, or in the space of compact learned latent representations. The three instances that this thesis studies are in the context of visual servoing, visual planning, and representation learning for reinforcement learning. In the first case, we combine learned visual features with learned single-step predictive dynamics models and reinforcement learning to learn visual servoing mechanisms. In the second case, we use a deterministic multi-step video prediction model to achieve various manipulation tasks through visual planning. In addition, we show that conventional video prediction models are unequipped to model uncertainty and multiple futures, which can limit the planning capabilities of the robot. To address this, we propose a stochastic video prediction model trained with a combination of variational, adversarial, and perceptual losses, and show that this model can predict futures that are more realistic, diverse, and accurate. Unlike the first two cases, in which the dynamics model is used to make predictions for decision-making, the third case learns the model solely for representation learning. We learn a stochastic sequential latent variable model, and then use its latent representation as an intermediate representation for reinforcement learning. We show that this approach improves final performance and sample efficiency.
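The core pattern — fit a dynamics model in some representation space, then plan against it — can be shown in a deliberately small form. The sketch below fits a single-step *linear* latent dynamics model by least squares and uses it for one-step planning; the thesis's models are deep networks, so treat this as a structural analogue under assumed linear dynamics.

```python
import numpy as np

def fit_latent_dynamics(Z, U, Z_next):
    """Fit a single-step linear latent dynamics model z' ~ A z + B u by
    least squares from observed transitions (rows of Z, U, Z_next).
    A toy stand-in for the learned visual dynamics models, kept linear
    so the example stays self-contained."""
    X = np.hstack([Z, U])                         # [z, u] per row
    # Solve min || X @ Theta - Z_next ||^2 for Theta = [A^T; B^T]
    Theta, *_ = np.linalg.lstsq(X, Z_next, rcond=None)
    dz = Z.shape[1]
    return Theta[:dz].T, Theta[dz:].T             # A: (dz, dz), B: (dz, du)

def plan_one_step(A, B, z, z_goal, candidates):
    """Pick the candidate action whose predicted next latent state is
    closest to the goal latent -- one-step planning in latent space."""
    preds = z @ A.T + candidates @ B.T            # one prediction per candidate
    return candidates[np.argmin(np.linalg.norm(preds - z_goal, axis=1))]
```

The same fit/plan split carries over to the deep setting: only the model family changes, while planning remains a search over actions scored by predicted futures.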
Generating whole-body movements for dynamic anthropomorphic systems under constraints
This thesis studies the question of whole-body motion generation for anthropomorphic systems. Within this work, the problem of modeling and control is considered by addressing the difficult issue of generating human-like motion. First, a dynamic model of the humanoid robot HRP-2 is elaborated based on the recursive Newton-Euler algorithm for spatial vectors. A new dynamic control scheme is then developed, adopting a cascade of quadratic programs (QPs) that optimize cost functions and compute the torque control while satisfying equality and inequality constraints. The cascade of quadratic programs is defined by a stack of tasks associated with a priority order. Next, we propose a unified formulation of the planar contact constraints, and we demonstrate that the proposed method allows taking into account multiple non-coplanar contacts and generalizes the common ZMP constraint when only the feet are in contact with the ground. Then, we link motion-generation algorithms from robotics to human motion capture tools by developing an original motion-generation method aimed at imitating human motion. This method is based on reshaping the captured data and editing the motion using the previously introduced hierarchical solver together with the definition of dynamic tasks and constraints. It allows adjusting a captured human motion so that it can be reliably reproduced on a humanoid while respecting the humanoid's own dynamics. Finally, in order to simulate movements resembling those of humans, we develop an anthropomorphic model with more degrees of freedom than HRP-2. The generic solver is used to simulate motion on this new model.
A sequence of tasks is defined to describe a scenario played by a human. Through a simple qualitative analysis of the motion, we demonstrate that taking the dynamics into account naturally increases the realism of the movement.
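The stack-of-tasks idea — lower-priority tasks resolved only in what freedom the higher-priority ones leave — can be sketched with the classical nullspace-projection scheme at the velocity level. This numpy sketch handles only equality tasks; the QP cascade described above additionally handles inequality constraints and torque-level dynamics, so this is a simplified analogue, not the thesis's solver.

```python
import numpy as np

def prioritized_velocities(tasks):
    """Resolve a prioritized stack of equality tasks (J_i, xdot_i),
    highest priority first: each task is solved in the nullspace of
    all tasks above it, so lower-priority tasks never disturb
    higher-priority ones. Equality tasks only -- a real QP cascade
    also enforces inequality constraints."""
    n = tasks[0][0].shape[1]              # number of joint velocities
    dq = np.zeros(n)
    N = np.eye(n)                         # projector onto remaining freedom
    for J, xd in tasks:
        Jbar = J @ N                      # task Jacobian restricted to nullspace
        # Correct the residual of this task without touching higher ones.
        dq = dq + np.linalg.pinv(Jbar) @ (xd - J @ dq)
        # Shrink the nullspace by this task's row space.
        N = N @ (np.eye(n) - np.linalg.pinv(Jbar) @ Jbar)
    return dq
```

For example, with a top-priority task fixing joint 0's contribution and a second task on joints 0 + 1, the solver satisfies the second task using joint 1 alone, leaving the first task's solution untouched.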