
    Whole-Body Dynamic Telelocomotion: A Step-to-Step Dynamics Approach to Human Walking Reference Generation

    Teleoperated humanoid robots hold significant potential as physical avatars for humans in hazardous and inaccessible environments, with the goal of channeling human intelligence and sensorimotor skills through these robotic counterparts. Precise coordination between humans and robots is crucial for accomplishing whole-body behaviors involving locomotion and manipulation, and successful progress requires dynamic synchronization between humans and humanoid robots. This work advances whole-body dynamic telelocomotion, addressing challenges in robustness. By embedding the hybrid and underactuated nature of bipedal walking into a virtual human walking interface, we achieve dynamically consistent walking gait generation. Additionally, we integrate a reactive robot controller into a whole-body dynamic telelocomotion framework, allowing telelocomotion behaviors to be realized on the full-body dynamics of a bipedal robot. Real-time telelocomotion simulation experiments validate the effectiveness of our methods, demonstrating that a trained human pilot can dynamically synchronize with a simulated bipedal robot, achieving sustained locomotion, controlling walking speeds within the range of 0.0 m/s to 0.3 m/s, and walking backward for distances of up to 2.0 m. This research contributes to advancing teleoperated humanoid robots and paves the way for future developments in synchronized locomotion between humans and bipedal robots. (Comment: 8 pages, 8 figures)
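    To make the step-to-step viewpoint concrete, here is a minimal sketch assuming the standard linear inverted pendulum (LIP) model with one-step deadbeat foot placement; the paper's actual reference generator, which embeds the human's gait, is certainly richer, and all constants below are illustrative.

```python
import numpy as np

# Step-to-step dynamics of a linear inverted pendulum (LIP):
# x'' = (g / z0) * (x - p), with p the stance-foot position.
# Over a step of duration T the CoM state (x, xdot) evolves linearly,
# so the state at the next touchdown is an affine function of the
# current state and the chosen foot placement p.

g, z0, T = 9.81, 0.9, 0.5           # gravity, CoM height, step time (assumed)
w = np.sqrt(g / z0)                 # LIP natural frequency

def step_to_step(x, xdot, p):
    """Analytic LIP flow over one step with stance foot at p."""
    c, s = np.cosh(w * T), np.sinh(w * T)
    x_next = p + (x - p) * c + (xdot / w) * s
    xdot_next = (x - p) * w * s + xdot * c
    return x_next, xdot_next

def foot_placement_for_speed(x, xdot, v_des):
    """Choose the next foot position so the post-step CoM velocity
    matches v_des (one-step deadbeat on velocity)."""
    c, s = np.cosh(w * T), np.sinh(w * T)
    # xdot_next = (x - p) * w * s + xdot * c  ->  solve for p
    return x - (v_des - xdot * c) / (w * s)

# Example: regulate from rest toward 0.3 m/s, the top of the reported range.
x, xdot = 0.0, 0.0
for _ in range(5):
    p = foot_placement_for_speed(x, xdot, 0.3)
    x, xdot = step_to_step(x, xdot, p)
    print(f"CoM velocity after step: {xdot:.3f} m/s")
```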

    Methods to improve the coping capacities of whole-body controllers for humanoid robots

    Current applications for humanoid robotics require autonomy in an environment specifically adapted to humans, and safe coexistence with people. Whole-body control is promising in this sense, having been shown to successfully achieve locomotion and manipulation tasks. However, robustness remains an issue: whole-body controllers can still hardly cope with unexpected disturbances, with changes in working conditions, or with performing a variety of tasks, without human intervention. In this thesis, we explore how whole-body control approaches can be designed to address these issues. Based on whole-body control, contributions have been developed along three main axes: joint limit avoidance, automatic parameter tuning, and generalizing whole-body motions achieved by a controller. We first establish a whole-body torque controller for the iCub, based on the stack-of-tasks approach and proposed feedback control laws in SE(3). From there, we develop a novel, theoretically guaranteed joint limit avoidance technique for torque control, through a parametrization of the feasible joint space. This technique allows the robot to remain compliant, while resisting external perturbations that push joints closer to their limits, as demonstrated with experiments in simulation and with the real robot. Then, we focus on the issue of automatically tuning parameters of the controller, in order to improve its behavior across different situations. We show that our approach for learning task priorities, combining domain randomization and carefully selected fitness functions, allows the successful transfer of results between platforms subjected to different working conditions. Following these results, we then propose a controller which allows for generic, complex whole-body motions through real-time teleoperation. This approach is notably verified on the robot to follow generic movements of the teleoperator while in double support, as well as to follow the teleoperator's upper-body movements while walking with footsteps adapted from the teleoperator's footsteps. The approaches proposed in this thesis therefore improve the capability of whole-body controllers to cope with external disturbances, different working conditions and generic whole-body motions.
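    As an illustration of the joint-limit-avoidance idea, the sketch below uses a smooth tanh-based bijection between an unbounded coordinate and the open interval between the joint limits; the thesis's actual parametrization and its theoretical guarantees may differ from this simple stand-in.

```python
import numpy as np

# Map an unbounded coordinate theta to a joint angle strictly inside
# (q_min, q_max) via a smooth bijection. Controlling in theta-space
# means a joint limit can never be crossed, and the mapping's
# Jacobian naturally stiffens the response near the limits.

def theta_to_q(theta, q_min, q_max):
    return q_min + (q_max - q_min) * 0.5 * (np.tanh(theta) + 1.0)

def q_to_theta(q, q_min, q_max):
    u = 2.0 * (q - q_min) / (q_max - q_min) - 1.0
    return np.arctanh(np.clip(u, -1 + 1e-9, 1 - 1e-9))

def jacobian(theta, q_min, q_max):
    # dq/dtheta: goes to zero as q approaches either limit, so torques
    # pushing toward a limit meet growing resistance.
    return (q_max - q_min) * 0.5 / np.cosh(theta) ** 2

# Example: resistance grows as the joint nears its upper limit.
q_min, q_max = -1.0, 1.0
for q in (0.0, 0.9, 0.999):
    th = q_to_theta(q, q_min, q_max)
    print(f"q={q:+.3f}  dq/dtheta={jacobian(th, q_min, q_max):.4f}")
```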

    Whole-Body Teleoperation of Humanoid Robots (Télé-opération Corps Complet de Robots Humanoïdes)

    This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial to send and control robots in environments that are dangerous or inaccessible for humans (e.g., disaster response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamical balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way. Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine learning approaches and stochastic optimizers can be used to automate the learning of some of the parameters. In this thesis, we proposed a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get some visual feedback. We first translated the human movements into equivalent robot ones by developing a motion retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements, by recurring to a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in few trials. To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this mode in the teleoperation system, which allows the user to switch between the two different modes. A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past.
    To do so, the robot continuously predicts future commands by querying a machine learning model that is trained on past trajectories and conditioned on the last received commands.
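    The delay-masking scheme in the last paragraph can be pictured as a small loop in which the robot executes predicted rather than received commands. In this sketch the class, the buffer sizes, and the constant-velocity fallback predictor are illustrative stand-ins for the thesis's learned model.

```python
from collections import deque

DELAY_STEPS = 20  # e.g. ~200 ms of round-trip delay at 100 Hz (assumed)

class DelayMaskingLoop:
    """Execute predicted future commands so the visual feedback appears
    synchronized to the operator despite the communication delay."""

    def __init__(self, predictor, horizon=DELAY_STEPS):
        self.predictor = predictor        # model: past commands -> future command
        self.horizon = horizon
        self.history = deque(maxlen=100)  # last received operator commands

    def on_command_received(self, cmd):
        # Commands arrive late by roughly `horizon` steps.
        self.history.append(cmd)

    def command_to_execute(self):
        # Query the model for the command the operator is issuing *now*,
        # i.e. `horizon` steps ahead of the last received one.
        return self.predictor(list(self.history), self.horizon)

def constant_velocity_predictor(history, horizon):
    # Trivial stand-in for the learned model: extrapolate the last
    # command with the most recent finite difference.
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + horizon * (history[-1] - history[-2])

loop = DelayMaskingLoop(constant_velocity_predictor)
```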

    Modularized Robotic Skin with Embedded Fiber Optic Force Sensors for Remote and Unmanned Robot Manipulation (광섬유 힘 센서가 내장된 로봇 원격 및 무인 조작을 위한 모듈화 로봇 스킨)

    Master's thesis -- Seoul National University Graduate School: Department of Mechanical Engineering, College of Engineering, August 2021. Advisor: Yong-Lae Park. Robots have been used to replace human workers for dangerous and difficult tasks that require human-like dexterity. To perform sophisticated tasks, force and tactile sensing is one of the key requirements for achieving dexterous manipulation. Robots equipped with a sensitive skin that plays the role of mechanoreception in animals will be able to perform tasks with high levels of dexterity. In this research, we propose a modularized robotic skin that is capable of not only localizing external contacts but also estimating the magnitudes of the contact forces. In order to acquire three pieces of key information on a contact, namely the contact location in the horizontal and vertical directions and the magnitude of the force, each skin module requires three degrees of freedom in sensing. In the proposed skin, force sensing is achieved by a custom-designed triangular beam structure. A force applied to the outer surface of the skin module is transmitted to the beam structure underneath, and bending of the beam is detected by fiber optic strain sensors called fiber Bragg gratings. The proposed skin shows resolutions of 1.45 N for force estimation and 1.85 mm and 1.91 mm for contact localization in the horizontal and vertical directions, respectively. We also demonstrate applications of the proposed skin for remote and autonomous operations of commercial robotic arms equipped with an array of the skin modules.
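    For context on the sensing principle, the sketch below writes out the standard fiber Bragg grating (FBG) wavelength-shift model and a rough least-squares contact estimate; the coefficients and the linear calibration matrix C are illustrative assumptions, not the thesis's experimentally identified parameters.

```python
import numpy as np

# Standard FBG strain/temperature model for the relative Bragg
# wavelength shift:  d(lambda)/lambda = (1 - p_e)*eps + k_T*dT
P_E = 0.22      # typical photo-elastic coefficient of silica fiber
K_T = 6.7e-6    # illustrative thermal sensitivity per deg C

def wavelength_shift_to_strain(dlam_over_lam, dT=0.0):
    """Invert the FBG model for mechanical strain after removing the
    temperature term (the thesis handles temperature compensation
    separately)."""
    return (dlam_over_lam - K_T * dT) / (1.0 - P_E)

def estimate_contact(strains, C):
    """Rough linear least-squares estimate of (u, v, F) -- horizontal
    location, vertical location, force -- given a calibration matrix C
    with C @ [u, v, F] ~= strains. The real strain-to-contact mapping
    is nonlinear and identified experimentally in the thesis."""
    x, *_ = np.linalg.lstsq(C, np.asarray(strains), rcond=None)
    return x
```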

    A robot learning method with physiological interface for teleoperation systems

    The human operator largely relies on the perception of remote environmental conditions to make timely and correct decisions in a prescribed task when the robot is teleoperated in a remote place. However, due to unknown and dynamic working environments, the manipulator's performance and the efficiency of human-robot interaction in the tasks may degrade significantly. In this study, a novel human-centric interaction method based on a physiological interface was presented to capture the information details of the remote operation environments. Simultaneously, in order to relieve the workload of the human operator and to improve the efficiency of the teleoperation system, an updated regression method was proposed to build a nonlinear model of demonstration for the prescribed task. Considering that the demonstration data were of various lengths, the dynamic time warping algorithm was employed first to synchronize the data over time before proceeding with other steps. The novelty of this method lies in the fact that both the task-specific information and the muscle parameters from the human operator have been taken into account in a single task; therefore, a more natural and safer interaction between the human and the robot could be achieved. The feasibility of the proposed method was demonstrated by experimental results.
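    Since the method relies on dynamic time warping (DTW) to synchronize demonstrations of various lengths, here is a minimal sketch of the classic dynamic-programming formulation; the study's distance metric and any path constraints are not specified, so a plain squared distance with no band constraint is assumed.

```python
import numpy as np

def dtw_align(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D
    signals; returns the total alignment cost and the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1) to recover the alignment path.
    path, i, j = [], n, m
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        steps = []
        if i > 1 and j > 1:
            steps.append((D[i - 1, j - 1], i - 1, j - 1))
        if i > 1:
            steps.append((D[i - 1, j], i - 1, j))
        if j > 1:
            steps.append((D[i, j - 1], i, j - 1))
        _, i, j = min(steps)   # cheapest predecessor
    path.append((0, 0))
    return D[n, m], path[::-1]

# Example: align two demonstrations of different lengths.
demo_a = np.sin(np.linspace(0, 3, 40))
demo_b = np.sin(np.linspace(0, 3, 55))
cost, path = dtw_align(demo_a, demo_b)
```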

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society are expanding little by little. Robots endowed with social skills are intended to be used for different applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in theater, or even as assistants in hotels and shopping centers. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning and computer vision. The work presented here aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities that a robot must show when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), showing distinctive personality and character, and learning social competencies. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How do social robots act with us? Along these lines, the work has been developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways that humans use to communicate with robots in a natural manner. In the second, we further investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, we focused on identifying and developing the basic behavior modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots has been developed that allows robots to express different types of emotions and show natural, human-like body language according to the task at hand and the environmental conditions. The validation of the different development stages of our social robots has been carried out through public performances. Exposing our robots to the public in these performances has become an essential tool to qualitatively measure the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability. (250 p.)

    Optimizing Programming by Demonstration for in-contact task models by Incremental Learning

    Despite the increasing usage of robots for industrial applications, many aspects prevent robots from being used in daily life. One of these aspects is that extensive knowledge in programming a robot is necessary to make the robot achieve a desired task. Conventional robot programming is complex, time-consuming and expensive, as every aspect of a task has to be considered. Novel, intuitive and easy-to-use methods to program robots are necessary to facilitate their usage in daily life. This thesis proposes an approach that allows a novice user to program a robot by demonstration and provides assistance to incrementally refine the trained skill. The user utilizes kinesthetic teaching to provide an initial demonstration to the robot. Based on the information extracted from this demonstration, the robot starts executing the demonstrated task. The assistance system allows the user to train the robot during the execution and thus refine the model of the task. Experiments with a KUKA LWR4+ industrial robot evaluate the performance of the assistance system and its advantages over unassisted approaches. Furthermore, a user study is performed to evaluate the interaction between a novice user and the robot.
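    One way to picture the incremental refinement loop (the thesis's actual in-contact task model is richer, covering forces as well as motion): time-normalize each demonstration and blend kinesthetic corrections into the current model with a learning rate. The class, names, and blending rule below are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def resample(traj, n=100):
    """Time-normalize a 1-D demonstration to n samples (linear interp)."""
    t = np.linspace(0, 1, len(traj))
    return np.interp(np.linspace(0, 1, n), t, traj)

class IncrementalSkill:
    """Trajectory model refined from kinesthetic corrections (sketch)."""

    def __init__(self, first_demo, alpha=0.3):
        self.model = resample(np.asarray(first_demo, dtype=float))
        self.alpha = alpha  # how strongly a new correction reshapes the skill

    def refine(self, correction):
        # Blend the (time-normalized) correction into the current model,
        # so each pass during execution nudges the skill toward the user.
        new = resample(np.asarray(correction, dtype=float))
        self.model = (1 - self.alpha) * self.model + self.alpha * new

    def execute(self):
        return self.model  # reference trajectory sent to the controller
```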

    Towards a Framework for Embodying Any-Body through Sensory Translation and Proprioceptive Remapping: A Pilot Study

    We address the problem of physical avatar embodiment and investigate the most general factors that may allow a person to “wear” another body, different from her own. A general approach is required to exploit the fact that an avatar can have any kind of body. With this pilot study we introduce a conceptual framework for the design of non-anthropomorphic embodiment, to foster immersion and user engagement. The person is interfaced with the avatar, a robot, through a system that induces a divergent internal sensorimotor mapping while controlling the avatar, to create an immersive experience. Together with the conceptual framework, we present two implementations: a prototype tested in the lab and an interactive installation exhibited to the general public. These implementations consist of a wheeled robot, and control and sensory feedback systems. The control system includes mechanisms that both detect and resist the user's movement, increasing the sense of connection with the avatar; the feedback system is a virtual reality (VR) environment representing the avatar's unique perception, combining sensor and control information to generate visual cues. Data gathered from users indicate that the systems implemented following the proposed framework create a challenging and engaging experience, thus providing solid ground for further developments.
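    The "detect and resist" coupling could be as simple as a velocity-dependent attenuation of the user's input; the sketch below is only a guess at such a mechanism, since the abstract does not detail the implementation.

```python
def resisted_command(user_vel, k_resist=0.6, deadband=1e-3):
    """Attenuate the user's motion so driving the avatar takes effort,
    which the study reports increases the sense of connection (sketch;
    gain and deadband are illustrative)."""
    detected = abs(user_vel) > deadband   # crude movement detection
    return user_vel * (1.0 - k_resist) if detected else 0.0
```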

    Predictive Whole-Body Control of Humanoid Robot Locomotion

    Humanoid robots are machines built with an anthropomorphic shape. Despite decades of research into the subject, it is still challenging to tackle the robot locomotion problem from an algorithmic point of view. For example, these machines cannot achieve a constant forward body movement without exploiting contacts with the environment. The reactive forces resulting from the contacts are subject to strong limitations, complicating the design of control laws. As a consequence, generating humanoid motions requires fully exploiting the mathematical model of the robot in contact with the environment, or resorting to approximations of it. This thesis investigates predictive and optimal control techniques for tackling humanoid robot motion tasks. These techniques generate control input values from the system model and objectives, often transposed into a cost function to minimize. In particular, this thesis tackles several aspects of the humanoid robot locomotion problem in a crescendo of complexity. First, we consider the single-step push recovery problem, namely maintaining the upright posture with a single step after a strong external disturbance. Second, we generate and stabilize walking motions. In addition, we adopt predictive techniques to perform more dynamic motions, like large step-ups. The above-mentioned applications use different simplifications or assumptions to facilitate the tractability of the corresponding motion tasks. Moreover, they consider first the foot placements and only afterward how to maintain balance. We attempt to remove all these simplifications. We model the robot in contact with the environment explicitly, comparing different methods. In addition, we are able to obtain whole-body walking trajectories automatically by only specifying the desired motion velocity and a moving reference on the ground. We exploit the contacts with the walking surface to achieve these objectives while keeping the robot balanced. Experiments are performed on real and simulated humanoid robots, such as the Atlas and iCub humanoid robots.
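    A minimal receding-horizon sketch of the predictive-control idea, using a discretized point-mass CoM model and an unconstrained least-squares solve in place of the constrained whole-body formulations the thesis studies; the model, horizon, and weights are all assumed values.

```python
import numpy as np

dt, N = 0.1, 20                        # sampling time and horizon (assumed)
A = np.array([[1.0, dt], [0.0, 1.0]])  # point-mass CoM: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x0, v_des, w_u=1e-3):
    """Minimize sum ||v_k - v_des_k||^2 + w_u*||u_k||^2 over the horizon
    (unconstrained) by stacking the dynamics into one least squares."""
    # Prediction matrices: x_{k+1} = A^(k+1) x0 + sum_j A^(k-j) B u_j
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2*k:2*k+2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    sel = np.zeros((N, 2 * N))          # select the velocity components
    for k in range(N):
        sel[k, 2*k + 1] = 1.0
    Als = np.vstack([sel @ G, np.sqrt(w_u) * np.eye(N)])
    bls = np.concatenate([v_des - sel @ Phi @ x0, np.zeros(N)])
    u, *_ = np.linalg.lstsq(Als, bls, rcond=None)
    return u[0]                          # apply only the first input

x = np.array([0.0, 0.0])
for _ in range(30):                      # receding-horizon loop
    u = mpc_step(x, v_des=np.full(N, 0.5))
    x = A @ x + (B * u).ravel()
print(f"CoM velocity: {x[1]:.2f} m/s")   # approaches the 0.5 m/s target
```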