
    Stable locomotion of humanoid robots based on mass concentrated model

    The study of humanoid robot locomotion is currently a very active area in robotics, since humans build robots to work cooperatively alongside people in human environments. Stability during walking is a critical factor in preventing the robot from falling, which could damage the robot itself or the people around it. This work addresses one part of the biped locomotion problem: the "gait generation" methods used to obtain stable walking. Mass-concentrated models are used for this purpose; specifically, the simple inverted pendulum model and the cart-table model are applied to achieve stable walking in humanoid robots.
    In the inverted pendulum model, the pendulum mass drives the motion of the humanoid robot's center of gravity (COG) while it walks; it will be shown that the COG moves like a free ball on a plane under the laws of a pendulum in a gravity field. In the cart-table model, the cart drives the COG motion during walking; the cart motion is treated as a servo control system, and the COG motion is obtained from the current and future reference states of the Zero Moment Point (ZMP). The proposed gait generation method is composed of several layers: global motion, local motion, motion pattern generation, inverse kinematics and inverse dynamics, and finally an off-line correction. The input to this method is the global goal (the final configuration of the robot in the walking environment), and the outputs are the joint motion patterns together with the ZMP reference pattern. In addition, an "Acyclic gait" method is proposed. This method covers whole-body dynamic stepping motion of the humanoid robot from any statically stable generic posture to another; its inputs are the initial and goal robot states (the initial and goal joint angles) and its outputs are the reference trajectories for each joint and for the ZMP. Successful results have been obtained both in simulation and on the real Rh-1 humanoid robot developed at the Robotics Lab of Universidad Carlos III de Madrid (Spain), and the innovative "Acyclic gait" motion has been successfully implemented on the HRP-2 humanoid robot platform (developed by AIST and Kawada Industries Inc., Japan). Finally, the results, contributions and future work are presented and discussed.
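    To make the mass-concentrated models concrete, below is a minimal sketch of the cart-table relation between the COG and the ZMP, p = x - (z_c/g)·ẍ, integrated under the linear inverted pendulum dynamics. The constants and the simple forward integration are illustrative assumptions, not the thesis's servo-controlled scheme.

```python
import numpy as np

# Minimal cart-table sketch: the ZMP p relates to the COG position x
# (at constant height z_c) by p = x - (z_c / g) * x_ddot. Here we
# integrate the linear inverted pendulum dynamics
# x_ddot = (g / z_c) * (x - p_ref) toward a desired ZMP reference.
# All constants are illustrative assumptions.

G = 9.81    # gravity [m/s^2]
Z_C = 0.8   # assumed constant COG height [m]
DT = 0.005  # integration step [s]

def simulate_lipm(x0, v0, p_ref):
    """Integrate COG motion under the linear inverted pendulum model.

    x0, v0 : initial COG position and velocity (1D, sagittal axis)
    p_ref  : array of desired ZMP positions, one per step
    Returns arrays of COG positions and the ZMP implied by the motion.
    """
    x, v = x0, v0
    cog, zmp = [], []
    for p in p_ref:
        a = (G / Z_C) * (x - p)        # LIPM dynamics
        cog.append(x)
        zmp.append(x - (Z_C / G) * a)  # cart-table equation: recovers p
        v += a * DT
        x += v * DT
    return np.array(cog), np.array(zmp)

# Example: hold the ZMP at the stance foot (p = 0) for 0.5 s
cog, zmp = simulate_lipm(x0=0.02, v0=0.0, p_ref=np.zeros(100))
print(f"final COG = {cog[-1]:.3f} m, ZMP stays at {zmp[-1]:.3f} m")
```

    In the thesis's cart-table formulation the cart (and hence the COG) is instead driven by a servo controller fed with current and future ZMP references; the sketch above only exposes the underlying model such a controller acts on.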

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. The robot, which initially only performed simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second section discusses robot control. The contents of the book present the outcomes of research conducted by scholars in robotics to accommodate the needs of society and industry.

    Computationally efficient deformable 3D object tracking with a monocular RGB camera

    Monocular RGB cameras are present in most scopes and devices, including embedded environments like robots, cars and home automation. Most of these environments have in common a significant presence of human operators with whom the system has to interact. This context provides the motivation to use the captured monocular images to improve the understanding of the operator and the surrounding scene for more accurate results and applications.
    However, monocular images do not have depth information, which is a crucial element in understanding the 3D scene correctly. Estimating the three-dimensional information of an object in the scene using a single two-dimensional image is already a challenge. The challenge grows if the object is deformable (e.g., a human body or a human face) and there is a need to track its movements and interactions in the scene.
    Several methods attempt to solve this task, including modern regression methods based on Deep Neural Networks. However, despite the great results, most are computationally demanding and therefore unsuitable for several environments. Computational efficiency is a critical feature for computationally constrained setups like embedded or onboard systems present in robotics and automotive applications, among others.
    This study proposes computationally efficient methodologies to reconstruct and track three-dimensional deformable objects, such as human faces and human bodies, using a single monocular RGB camera. To model the deformability of faces and bodies, it considers two types of deformations: non-rigid deformations for face tracking, and rigid multi-body deformations for body pose tracking. Furthermore, it studies their performance on computationally restricted devices like smartphones and onboard systems used in the automotive industry. The information extracted from such devices gives valuable insight into human behaviour, a crucial element in improving human-machine interaction.
    We tested the proposed approaches in different challenging application fields like onboard driver monitoring systems, human behaviour analysis from monocular videos, and human face tracking on embedded devices.
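    As a concrete illustration of the kind of model such trackers fit, here is a minimal sketch of a generic linear deformable-shape model with a weak-perspective projection. The basis shapes, dimensions and camera model are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

# Generic linear deformable-shape sketch: a 3D face is the mean shape
# plus a weighted sum of deformation bases, then rigidly transformed
# and projected with a weak-perspective camera. All sizes below are
# illustrative placeholders; a real system would use learned bases
# (e.g. from a 3D morphable model).

N_POINTS, N_BASES = 68, 10  # assumed landmark and basis counts
rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((N_POINTS, 3))      # placeholder mean
bases = rng.standard_normal((N_BASES, N_POINTS, 3))  # placeholder bases

def project(alphas, R, t, scale):
    """Deform, rotate, translate and project the model to 2D.

    alphas : (N_BASES,) non-rigid deformation coefficients
    R      : (3, 3) rotation matrix; t : (2,) image translation
    scale  : weak-perspective scale factor
    """
    shape3d = mean_shape + np.tensordot(alphas, bases, axes=1)
    rotated = shape3d @ R.T
    return scale * rotated[:, :2] + t  # drop depth: weak perspective

def residual(landmarks2d, alphas, R, t, scale):
    """Reprojection error a tracker would minimize frame to frame."""
    return np.linalg.norm(project(alphas, R, t, scale) - landmarks2d)

# Example: zero deformation, identity pose
pts = project(np.zeros(N_BASES), np.eye(3), np.zeros(2), scale=1.0)
print(pts.shape)  # (68, 2)
```

    A tracker built on such a model would update the pose and deformation coefficients each frame by minimizing the residual against detected 2D landmarks, which keeps the per-frame cost low enough for embedded devices.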

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users' ethnic preference towards a robot face.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.

    Industrial Robotics

    This book covers a wide range of topics relating to advanced industrial robotics, sensors and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics. The book covers topics such as networking, properties of manipulators, forward and inverse robot arm kinematics, motion path-planning, machine vision and many other practical topics too numerous to list here. The authors and editor of this book wish to inspire people, especially young ones, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.
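    To make one of the listed topics concrete, here is a minimal sketch of forward and inverse kinematics for a two-link planar arm, the textbook case behind the robot arm kinematics chapters. The link lengths and the choice of the elbow-down branch are illustrative assumptions, not taken from the book.

```python
import math

# Forward and inverse kinematics for a two-link planar arm.
# Link lengths are illustrative.

L1, L2 = 0.5, 0.3  # link lengths [m]

def forward(theta1, theta2):
    """End-effector (x, y) from joint angles [rad]."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """One (elbow-down) joint solution reaching (x, y), if reachable."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down branch
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

# Round-trip check: IK of an FK result recovers the same pose
t1, t2 = inverse(*forward(0.4, 0.9))
print(forward(t1, t2))  # ~ the original end-effector position
```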

    Motion Synthesis and Control for Autonomous Agents using Generative Models and Reinforcement Learning

    Imitating and predicting human motions have wide applications in both graphics and robotics, from developing realistic models of human movement and behavior in immersive virtual worlds and games to improving autonomous navigation for service agents deployed in the real world. Traditional approaches for motion imitation and prediction typically rely on pre-defined rules to model agent behaviors or use reinforcement learning with manually designed reward functions. Despite impressive results, such approaches cannot effectively capture the diversity of motor behaviors and the decision-making capabilities of human beings. Furthermore, manually designing a model or reward function to explicitly describe human motion characteristics often involves laborious fine-tuning and repeated experiments, and may suffer from generalization issues. In this thesis, we explore data-driven approaches using generative models and reinforcement learning to study and simulate human motions. Specifically, we begin with motion synthesis and control of physically simulated agents imitating a wide range of human motor skills, and then focus on improving the local navigation decisions of autonomous agents in multi-agent interaction settings.
    For physics-based agent control, we introduce an imitation learning framework built upon generative adversarial networks and reinforcement learning that enables humanoid agents to learn motor skills from a few examples of human reference motion data. Our approach generates high-fidelity motions and robust controllers without the need to manually design and fine-tune a reward function, while allowing interactive switching between different controllers based on user input. Based on this framework, we further propose a multi-objective learning scheme for composite and task-driven control of humanoid agents. Our multi-objective learning scheme adaptively balances the simultaneous learning of disparate motions from multiple reference sources and multiple goal-directed control objectives, enabling the training of efficient composite motion controllers. Additionally, we present a general framework for fast and robust learning of motor control skills. Our framework exploits particle filtering to dynamically explore and discretize the high-dimensional action space involved in continuous control tasks, and provides a multi-modal policy as a substitute for the commonly used Gaussian policies.
    For navigation learning, we leverage human crowd data to train a human-inspired collision avoidance policy by combining knowledge distillation and reinforcement learning. Our approach enables autonomous agents to take human-like actions during goal-directed steering in fully decentralized, multi-agent environments. To inform better control in such environments, we propose SocialVAE, a variational autoencoder based architecture that uses timewise latent variables with socially-aware conditions and a backward posterior approximation to perform agent trajectory prediction. Our approach improves current state-of-the-art performance on trajectory prediction tasks in daily human interaction scenarios and more complex scenes involving interactions between NBA players. We further extend SocialVAE by exploiting semantic maps as context conditions to generate map-compliant trajectory prediction. Our approach processes context conditions and social conditions occurring during agent-agent interactions in an integrated manner through the use of a dual-attention mechanism. We demonstrate the real-time performance of our approach and its ability to provide high-fidelity, multi-modal predictions on various large-scale vehicle trajectory prediction tasks.
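    As an illustration of the adversarial imitation idea at the core of the framework above, here is a minimal sketch that trains a discriminator to separate reference-motion transitions from policy transitions and reuses its score as an RL reward. The network sizes, state layout and exact reward mapping are generic assumptions, not the thesis's precise design.

```python
import torch
import torch.nn as nn

# GAN-style imitation reward sketch: a discriminator scores state
# transitions (s_t, s_{t+1}); reference-motion transitions are labeled
# real, policy rollouts fake, and the score becomes the RL reward.
# Sizes and the reward mapping are generic assumptions.

STATE_DIM = 32  # assumed size of one motion state (joint angles etc.)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # scores (s_t, s_{t+1}) pairs
            nn.Linear(2 * STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

disc = Discriminator()
opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def disc_step(ref_batch, policy_batch):
    """One discriminator update: reference transitions labeled 1,
    policy transitions labeled 0."""
    logits_ref = disc(*ref_batch)
    logits_pol = disc(*policy_batch)
    loss = bce(logits_ref, torch.ones_like(logits_ref)) + \
           bce(logits_pol, torch.zeros_like(logits_pol))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def imitation_reward(s, s_next):
    """Reward for the RL learner: high when the discriminator believes
    the transition came from the reference motion."""
    with torch.no_grad():
        return -torch.log(1 - torch.sigmoid(disc(s, s_next)) + 1e-6)

# Example shapes: a batch of 64 transitions
s = torch.randn(64, STATE_DIM); s2 = torch.randn(64, STATE_DIM)
print(disc_step((s, s2), (s2, s)), imitation_reward(s, s2).shape)
```

    An RL algorithm such as PPO would then maximize this imitation reward alongside any task objectives, alternating policy updates with discriminator updates.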