
    Distributed Bio-inspired Humanoid Posture Control

    This paper presents an innovative distributed bio-inspired posture control strategy for a humanoid, employing the DEC (Disturbance Estimation and Compensation) balance control system. Its inherently modular structure can lead to conflicts among modules, as already shown in the literature. A distributed control strategy is presented here, whose underlying idea is to let only one module at a time perform balancing, whilst the other joints are controlled to hold a fixed position. The modules agree, in a distributed fashion, on which module to enable by iterating a max-consensus protocol. Simulations performed with a triple-inverted-pendulum model show that this approach limits conflicts among modules while achieving the desired posture, and allows energy to be saved while performing the task, at the cost of a higher rise time.
    Comment: 2019 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
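The max-consensus step described above can be sketched as follows; the chain topology, the priority values, and the enabling rule are illustrative assumptions, not details from the paper.

```python
def max_consensus(priorities, neighbors, iterations):
    """Each module repeatedly replaces its value with the maximum over
    itself and its neighbours; on a connected graph, all modules agree on
    the global maximum after diameter-many iterations."""
    values = list(priorities)
    for _ in range(iterations):
        values = [max([values[i]] + [values[j] for j in neighbors[i]])
                  for i in range(len(values))]
    return values

# Three joint modules on a chain 0-1-2, as in a triple inverted pendulum.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
priorities = [0.3, 0.9, 0.5]           # e.g. local disturbance estimates
agreed = max_consensus(priorities, neighbors, iterations=2)
active = [i for i, v in enumerate(priorities) if v == agreed[i]]
# every module now holds 0.9, so all agree that module 1 performs balancing
```

Because only the module whose local priority equals the agreed maximum enables its balancer, the others can safely hold their joints at a fixed position.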

    Human-Likeness Indicator for Robot Posture Control and Balance

    Similarly to humans, humanoid robots require posture control and balance to walk and interact with the environment. In this work, posture control under perturbed conditions is evaluated as a performance test for humanoid control. A specific performance indicator is proposed: the score is based on a comparison between the body sway of the tested humanoid standing on a moving surface and the sway produced by healthy subjects performing the same experiment. The approach is oriented toward the evaluation of human-likeness. The measure is tested on a humanoid robot in order to demonstrate a typical usage of the proposed evaluation scheme and to give an example of how to improve robot control on the basis of such a performance indicator score.
    Comment: 16 pages, 5 figures. arXiv admin note: substantial text overlap with arXiv:2110.1439
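A sway-comparison score in the spirit of the proposed indicator might be sketched as below; the RMS deviation metric and its normalization are assumptions for illustration, not the paper's exact definition.

```python
import math

def human_likeness_score(robot_sway, human_sway):
    """Compare two body-sway traces (e.g. trunk angle over time on the
    moving platform); 1.0 means identical, lower means less human-like."""
    assert len(robot_sway) == len(human_sway)
    # Root-mean-square deviation between traces, normalized by the RMS of
    # the reference (healthy-subject) sway.
    dev = math.sqrt(sum((r - h) ** 2 for r, h in zip(robot_sway, human_sway))
                    / len(robot_sway))
    scale = math.sqrt(sum(h ** 2 for h in human_sway) / len(human_sway))
    return 1.0 / (1.0 + dev / scale)

human = [math.sin(0.1 * t) for t in range(100)]   # reference sway trace
print(human_likeness_score(human, human))         # identical traces score 1.0
```

A robot whose sway is attenuated or exaggerated relative to the human reference scores below 1.0, which is the kind of gap a controller could then be tuned to close.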

    Human inspired humanoid robots control architecture

    This PhD thesis presents a different point of view on the development of control architectures for humanoid robots. Specifically, it focuses on studying the human postural control system and on using this knowledge to develop a novel architecture for postural control in humanoid robots. The research carried out in this thesis shows that postural control has two types of components: a reactive one, and a predictive or anticipatory one. This work has focused on developing the second component through the implementation of a predictive system that complements the reactive one. The anticipatory control system has been analysed in the human case and extrapolated to the control architecture of the humanoid robot TEO. Its different components have been developed based on how humans work, without forgetting the tasks the robot has been designed for. This control system is based on the composition of sensorial perceptions, the evaluation of stimuli through the psychophysical theory of surprise, and the creation of events that can be used to activate reaction strategies (synergies). The control system developed in this thesis, as the human being does, processes information coming from different sensorial sources. It also composes the so-called perceptions, which depend on the type of task the postural control acts on. The value of those perceptions is obtained using bio-inspired evaluation techniques of sensorial inference. Once the sensorial input has been obtained, it must be processed in order to foresee possible disturbances that may cause the incorrect performance of a task. The system developed in this thesis evaluates the sensorial information, previously transformed into perceptions, through the “Surprise Theory”, and generates events called “surprises” that are used to predict the evolution of a task.
Finally, the anticipatory system for postural control can compose, if necessary, the proper reactions through the use of predefined movement patterns called synergies. Those reactions can complement or completely replace the normal performance of a task. The performance of the anticipatory system for postural control, as well as that of each of its components, has been tested through simulations and by applying the results on the humanoid robot TEO of the RoboticsLab research group in the Systems Engineering and Automation Department of the Carlos III University of Madrid.
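The surprise-driven event generation described above can be sketched as follows; the running statistics and the standard-deviation threshold are illustrative assumptions, not the detector actually used on TEO.

```python
class SurpriseDetector:
    """A perception that deviates strongly from its learned distribution
    raises a 'surprise' event, which could trigger a predefined reaction
    pattern (synergy)."""

    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0   # running stats (Welford)
        self.threshold = threshold                 # surprise level in std-devs

    def observe(self, perception):
        """Update the running statistics; return True on a surprise event."""
        self.n += 1
        delta = perception - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (perception - self.mean)
        if self.n < 10:                            # not enough history yet
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(perception - self.mean) > self.threshold * std

detector = SurpriseDetector()
readings = [0.0, 0.1, -0.1, 0.05, -0.05] * 4 + [5.0]   # sudden perturbation
events = [detector.observe(r) for r in readings]
# only the final, anomalous reading raises a surprise event
```

The event itself carries no reaction; in the architecture described above it would merely select and activate the appropriate synergy.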

    A comprehensive gaze stabilization controller based on cerebellar internal models

    Gaze stabilization is essential for clear vision; it is the combined effect of two reflexes relying on vestibular inputs: the vestibulocollic reflex (VCR), which stabilizes the head in space, and the vestibulo-ocular reflex (VOR), which stabilizes the visual axis to minimize retinal image motion. The VOR works in conjunction with the opto-kinetic reflex (OKR), a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work, we implement on a humanoid robot a model of gaze stabilization based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot and on the iCub simulator, validating the robustness of the proposed control method. The first set of experiments focused on the controller's response to a set of disturbance frequencies along the vertical plane. The second shows the performance of the system under three-dimensional disturbances. The last set of experiments was carried out to test the capability of the proposed model to stabilize the gaze in locomotion tasks. The results confirm that the proposed model is beneficial in all cases, reducing the retinal slip (the velocity of the image on the retina) and keeping the orientation of the head stable.
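The VOR/OKR combination described above can be sketched as a single-axis control law; the unit gains and the additive structure are illustrative assumptions, not the cerebellar internal-model controller of the paper.

```python
def eye_velocity(head_vel, retinal_slip, vor_gain=1.0, okr_gain=0.5):
    """Eye velocity command on one axis: the VOR term counter-rotates the
    eye against the measured head velocity, and the OKR term adds visual
    feedback proportional to the residual retinal slip."""
    return -vor_gain * head_vel + okr_gain * retinal_slip

# Stationary scene, head rotating at 0.2 rad/s: with unit VOR gain the eye
# counter-rotates exactly, and the image does not move on the retina.
head = 0.2
eye = eye_velocity(head, retinal_slip=0.0)
slip = -(head + eye)          # image motion of a stationary scene
# eye == -0.2 and slip == 0.0: the gaze is stabilized
```

When the VOR gain is imperfect, the residual slip is nonzero and the OKR term compensates; in the paper this adaptation is learned by the cerebellar internal models rather than fixed as here.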

    Modeling of human movement for the generation of humanoid robot motion

    Humanoid robotics is coming of age with faster and more agile robots.
To complement the physical complexity of humanoid robots, the robotics algorithms developed to derive their motion have also become progressively complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods, such as local optimization and task-based inverse kinematics, toward more realistic, human-like solutions. First, we look at the dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between “invariants” in neuroscience and “kinematic tasks” in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking-and-grasping task. These results are then normalized and generalized so that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans.
The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is to show that, while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
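The stack-of-tasks idea mentioned above can be sketched with a two-level null-space projection; the toy Jacobians below are assumptions for illustration, not the HRP-2 kinematic model.

```python
import numpy as np

def stack_of_tasks(J1, dx1, J2, dx2):
    """Two-level prioritized inverse kinematics: the secondary task is
    solved in the null space of the primary task, so it can never disturb
    the primary task."""
    J1p = np.linalg.pinv(J1)
    q1 = J1p @ dx1                          # joint velocities for task 1
    N1 = np.eye(J1.shape[1]) - J1p @ J1     # null-space projector of task 1
    q2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q1)
    return q1 + N1 @ q2

J1 = np.array([[1.0, 0.0, 0.0]])            # primary task uses joint 1 only
J2 = np.array([[0.0, 1.0, 1.0]])            # secondary task uses joints 2, 3
dq = stack_of_tasks(J1, np.array([0.5]), J2, np.array([1.0]))
# the primary task is met exactly (J1 @ dq == [0.5]); the secondary task is
# achieved within the remaining degrees of freedom
```

Extending the stack with more levels (e.g. balance above gaze above reaching) repeats the same projection pattern, which is what makes the formulation generalizable across anthropomorphic figures.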

    Fall Prediction and Controlled Fall for Humanoid Robots

    Humanoids, which resemble humans in their body structure and degrees of freedom, are anticipated to work like them within infrastructures and environments constructed for humans. In such scenarios, even humans, who have exceptional manipulation, balancing, and locomotion skills, are vulnerable to falling; humanoids, being their approximate imitators, are no exception. Furthermore, their high center of gravity relative to their small support polygon makes them more prone to falling than other robots such as quadrupeds. The consequences of these falls are so devastating that they can instantly destroy both the robot and its surroundings. This has become one of the major stumbling blocks humanoids have to overcome to operate in real environments. As a result, in this thesis we have strived to address the imminent fall over of humanoids by developing different control techniques. The fall over problem can be divided into three sub-issues: fall prediction, controlled fall, and recovery. The presented work addresses the first two, in three parts. First, we define what fall over means for humanoids, the different sources from which it can arise, the effect it has on both the robot and its surroundings, and how to deal with it. We then give a brief introduction to the overall system, including both the hardware and software components used throughout the work for varied purposes. Second, the first sub-issue is addressed by proposing a generic method to predict the falling over of humanoid robots in a reliable, robust, and agile manner across various terrains and amidst arbitrary disturbances. We strive to attain these characteristics by proposing a prediction principle inspired by the human balance sensory systems.
Accordingly, we consider the fusion of multiple sensors, such as the inertial measurement unit and gyroscope (IMU), foot pressure sensors (FPS), joint encoders, and a stereo vision sensor, which are equivalent to the human vestibular, proprioception, and vision systems. We first define a set of feature-based fall indicator variables (FIVs) from the different sensors, and the thresholds for those FIVs are extracted analytically for four major disturbance scenarios. Further, an online threshold interpolation technique and an impulse-adaptive counter limit are proposed to handle more generic disturbances. For the generalized prediction process, both the instantaneous value and the cumulative sum of each FIV are normalized, and a suitable value is set as the critical limit to predict the fall over. To determine the usefulness of multiple sensors and their best combination, the prediction performance is evaluated on four different types of terrain in three configurations: first, each feature individually with its respective FIVs; second, an intuitive performance-based (PF) technique; and finally, a Kalman-filter-based (KF) technique, the latter two involving multiple features. For the PF and KF techniques, prediction performance is evaluated with and without added noise. Overall, KF performs better than PF and the individual sensor features under different conditions. The method's ability to predict fall overs during the robot's simple dynamic motions is also tested and verified through simulations. Experimental verification of the proposed prediction method on flat and uneven terrains was carried out with the WALK-MAN humanoid robot. Finally, regarding the second sub-issue, i.e., the controlled fall, we propose two novel fall control techniques based on energy concepts, which can be applied online to mitigate the impact forces incurred during the falling over of humanoids.
Both techniques are inspired by break-fall motions, in particular the Ukemi motion practiced in martial arts. The first technique reduces the total energy using a nonlinear control tool called energy shaping (ES) and further distributes the reduced energy over multiple contacts by means of energy distribution polygons (EDP). We also include an effective orientation control to safeguard the end-effectors in the event of ground impacts. The performance of the proposed method is numerically evaluated in dynamic simulations of sudden falling over scenarios of the humanoid robot, for both lateral and sagittal falls. The effectiveness of the proposed ES and EDP concepts is verified by diverse comparative simulations regarding total energy, its distribution, and impact forces. Following the first technique, we propose another controller that generates an online rolling-over motion, based on the hypothesis that multi-contact motions can reduce the impact forces even further. To generate an efficient rolling motion, critical parameters are defined from the insights drawn from a study on rolling: the contact positions and attack angles. In addition, an energy-injection velocity is proposed as an auxiliary rolling parameter to ensure sequential multiple contacts in rolling. An online rolling controller is synthesized to compute the optimal values of the rolling parameters. The first two parameters construct a polyhedron by selecting suitable contacts around the humanoid's body. This polyhedron distributes the energy gradually across multiple contacts and is thus called the energy distribution polyhedron. The last parameter injects some additional energy into the system during the fall, to overcome energy drought and tip over successive contacts.
The proposed controller, incorporating the energy injection, minimization, and distribution techniques, results in a rolling-like motion that significantly reduces the impact forces, as verified in numerical experiments with a segmented planar robot and a full humanoid model.
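The threshold-based prediction logic described above can be sketched as follows; the normalization, the critical limits, and the sample stream are illustrative assumptions, not the thesis's calibrated values.

```python
def predict_fall(fiv_stream, threshold, inst_limit=1.0, cum_limit=3.0):
    """Return the sample index at which a fall is predicted, or None.
    Each fall indicator variable (FIV) sample is normalized by its
    analytically derived threshold; a fall is predicted when either the
    instantaneous value or the cumulative excess crosses its limit."""
    cumulative = 0.0
    for k, fiv in enumerate(fiv_stream):
        normalized = fiv / threshold
        cumulative += max(0.0, normalized - 1.0)   # accumulate only excess
        if normalized > inst_limit or cumulative > cum_limit:
            return k
    return None

# A trunk-pitch-like FIV: small oscillation, then a push that keeps growing.
stream = [0.1, 0.2, 0.15, 0.4, 0.9, 1.6, 2.5]
print(predict_fall(stream, threshold=1.2))   # predicted at sample index 5
```

Fusing several FIVs, as the thesis does with the performance-based and Kalman-filter techniques, would replace the single `fiv_stream` here with a weighted combination of sensor features.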

    On the mechanical contribution of head stabilization to passive dynamics of anthropometric walkers

    During steady gait, humans stabilize their head around the vertical orientation. While there are sensori-cognitive explanations for this phenomenon, its mechanical effect on the body dynamics remains unexplored. In this study, we take advantage of the similarities that human steady gait shares with the locomotion of passive dynamic robots. We introduce a simplified anthropometric model to reproduce a broad range of walking dynamics. In a previous study, we showed heuristically that the presence of a stabilized head-neck system significantly influences the dynamics of walking. This paper gives new insights that lead to an understanding of this mechanical effect. In particular, we introduce an original cart upper-body model that allows us to better understand the mechanical interest of head stabilization during walking, and we study how this effect is sensitive to the choice of control parameters.

    Active Vision for Scene Understanding

    Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots: a semantic model of the scene is created and extended by successively changing the robot's view in order to explore the interaction possibilities of the scene.