
    Unifying nonholonomic and holonomic behaviors in human locomotion

    Our motivation is to understand human locomotion in order to better control the locomotion of virtual systems (robots and mannequins). Human locomotion has so far been studied in several disciplines. We consider locomotion at the level of a body frame (in direction and orientation) instead of through the complexity of many-joint kinematic systems, as in other approaches. Our approach concentrates on the computational foundations of human locomotion, and the ultimate goal is to find a model that explains the shape of human locomotion in space. To do this, we first build on the behavior of trajectories on the ground during intentional locomotion. When humans walk, they put one foot in front of the other, and consequently the direction of motion is determined by the body orientation; this is what we call the nonholonomic behavior hypothesis. However, in the case of a sideways step, the body orientation is not coupled to the tangential direction of the trajectory and the hypothesis no longer holds: the locomotion behavior becomes holonomic. The aim of this thesis is to distinguish these two behaviors and to exploit them in neuroscience, robotics, and computer animation. The first part of the thesis determines the configurations of the holonomic behavior through an experimental protocol and an original analytical tool that segments any trajectory into nonholonomic and holonomic portions. In the second part, we present a model unifying the nonholonomic and holonomic behaviors. This model combines the three velocities generating human locomotion: forward, angular, and lateral. The experimental data from the first part are used in an inverse optimal control approach to find a multi-objective cost function that produces trajectories matching those of natural human locomotion. The last part is an application that uses the two behaviors to synthesize human locomotion in computer animation. Each locomotion is characterized by its three velocities and is therefore considered a point in a 3D control space. We collected a library of locomotions at different velocities, i.e., points in this 3D space, and structured these points into a cloud of tetrahedra. When a desired velocity is given, it is projected into the 3D space and the tetrahedron containing it is found. The new animation is then interpolated from the four locomotions corresponding to the vertices of the selected tetrahedron, as sketched below. We exhibit several animation scenarios on a virtual character.
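    The tetrahedral blending step can be made concrete with a short sketch. The following Python fragment is illustrative only: the function names, the representation of clips as time-aligned joint-angle arrays, and all numbers are our assumptions, and the search for the containing tetrahedron in the cloud is omitted.

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p in the tetrahedron tet (4x3 array).

    The four weights sum to 1; all lie in [0, 1] iff p is inside."""
    a, b, c, d = tet
    T = np.column_stack((b - a, c - a, d - a))   # 3x3 basis of the tetrahedron
    w = np.linalg.solve(T, p - a)                # weights of b, c, d
    return np.concatenate(([1.0 - w.sum()], w))  # weight of a completes the sum

def blend_locomotions(p, tet, clips):
    """Blend four example locomotion clips (assumed time-aligned joint-angle
    arrays) by the barycentric weights of the desired velocity point
    p = (forward, angular, lateral)."""
    w = barycentric_weights(np.asarray(p, float), np.asarray(tet, float))
    return sum(wi * np.asarray(ci, float) for wi, ci in zip(w, clips))

# Example with invented numbers: a desired velocity blended from four clips.
tet = [(1.0, 0.0, 0.0), (1.5, 0.2, 0.0), (1.2, 0.0, 0.3), (0.8, -0.1, 0.1)]
clips = [np.full((100, 30), v) for v in (0.0, 1.0, 2.0, 3.0)]  # placeholder data
frames = blend_locomotions((1.1, 0.05, 0.1), tet, clips)
```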

    Motor Control Insights on Walking Planner and its Stability

    The application of biomechanical and motor control models in the control of bipedal robots (humanoids and exoskeletons) has revealed limitations in our understanding of human locomotion. A recently proposed model uses the potential energy of bipedal structures to model bipedal dynamics, allowing the system dynamics to be predicted from its kinematics. This work proposes a task-space planner for human-like straight-line locomotion that targets applications in rehabilitation robotics and computational neuroscience. The proposed architecture is based on the potential energy model and employs locomotor strategies from human data as a reference for human behaviour. The model generates Centre of Mass (CoM) trajectories, foot swing trajectories, and the Base of Support (BoS) over time. The data show that the proposed architecture can generate behaviour in line with human walking strategies for both the CoM and the foot swing. Although the CoM vertical trajectory is not as smooth as a human trajectory, the proposed model significantly reduces the error in the estimation of the CoM vertical trajectory compared to inverted pendulum models. The proposed model is also able to assess stability based on the body-kinematics embedding currently used in clinical practice. However, the model also implies a shift in the interpretation of the spatiotemporal parameters of gait, which are now determined by the conditions for equilibrium and not vice versa. In other words, locomotion is a dynamic reaching movement in which the motor primitives are also determined by gravity.
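    For context, the inverted pendulum baseline that this abstract compares against can be summarized in a few lines. This is a generic linear inverted pendulum (LIP) sketch, not the thesis's potential energy model; the CoM height, time step, and initial conditions are illustrative assumptions.

```python
import numpy as np

# Minimal linear inverted pendulum (LIP) baseline: the CoM height z0 is held
# constant and the horizontal CoM obeys x_ddot = (g / z0) * (x - p), with p
# the stance-foot (pivot) position. All numbers are illustrative.
g, z0 = 9.81, 0.9          # gravity [m/s^2], assumed constant CoM height [m]
omega = np.sqrt(g / z0)    # natural frequency of the pendulum

def lip_step(x, xdot, p, dt):
    """One explicit-Euler integration step of the LIP horizontal dynamics."""
    xddot = omega**2 * (x - p)
    return x + xdot * dt, xdot + xddot * dt

# Simulate 0.4 s of single support with the pivot at the origin.
x, xdot, dt = -0.1, 0.4, 0.001
for _ in range(400):
    x, xdot = lip_step(x, xdot, 0.0, dt)
```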

    Dynamic Simulation and Neuromechanical Coordination of Subject-Specific Balance Recovery to Prevent Falls

    Falls are the leading cause of fatal and nonfatal injuries in elderly people, resulting in approximately $31 billion in medical costs annually in the U.S. These injuries motivate balance control studies focused on improving stability by identifying prevention strategies that reduce the number of fall events. Experiments provide data about subjects' kinematic responses to loss of balance. However, simulations offer additional insights and may be used to make predictions about the functional outcomes of interventions. Several approaches already exist in biomechanics research to generate accurate models on a subject-by-subject basis. However, these representations typically lack models of the central nervous system, which provides essential feedback that humans use to make decisions and alter movements. Interdisciplinary methods that merge biomechanics with other fields of study may fill this gap by developing models that accurately reflect human neuromechanics. Roboticists have developed control systems approaches for humanoid robots that simultaneously accomplish complex goals by coordinating component tasks under priority constraints. Concepts such as the zero-moment point and the extrapolated center of mass have been thoroughly evaluated and are commonly used in the design and execution of dynamic robotic systems in order to maintain stability. These established techniques can benefit biomechanical simulations by replacing the biological sensory feedback that is unavailable in the virtual environment. Subject-specific simulations can be generated by synthesizing techniques from both robotics and biomechanics and by creating comprehensive models of task-level coordination, including neurofeedback, of movement patterns from experimental data. In this work, we demonstrate how models built on robotic principles that emulate decision making in response to feedback can be trained on biomechanical motion capture data to produce a subject-specific fit. The resulting surrogate can predict a subject's particular solution to the movement goal of recovering balance by controlling component tasks. This research advances biomechanics simulations as we move closer to a tool capable of anticipating the results of rehabilitation interventions aimed at correcting movement disorders. The novel platform presented here marks the first step towards that goal and may benefit engineers, researchers, and clinicians interested in balance control and falls in human subjects.
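    The extrapolated center of mass mentioned above has a compact standard definition in the balance literature: XCoM = CoM + CoM velocity / omega0, with omega0 = sqrt(g/l) for an effective pendulum length l. A minimal sketch, with an axis-aligned rectangle standing in for the base of support and all numbers invented for illustration:

```python
import numpy as np

g = 9.81

def xcom(com, com_vel, l):
    """2D extrapolated centre of mass from planar CoM position/velocity."""
    omega0 = np.sqrt(g / l)
    return np.asarray(com, float) + np.asarray(com_vel, float) / omega0

def inside_bos(point, bos_min, bos_max):
    """True if the XCoM lies within a rectangular base of support."""
    return bool(np.all(point >= bos_min) and np.all(point <= bos_max))

# Example with made-up numbers: CoM near the rear of the BoS, moving forward.
p = xcom(com=[0.00, 0.05], com_vel=[0.30, 0.00], l=1.0)
stable = inside_bos(p, bos_min=np.array([-0.10, -0.05]),
                       bos_max=np.array([ 0.25,  0.15]))
```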

    Gaze control modelling and robotic implementation

    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we could not move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the array. Our ability to discriminate fine detail drops off markedly outside of the fovea, in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading, searching a visual array for a target, or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high-resolution part of the retina encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed, and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, this does not seem to be the case in everyday viewing. The separation between attention and fixation is attained only in very simple tasks; in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade, and the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements: depending on the task, there is considerable variability both in fixation durations and in saccade lengths. It is possible to outline five separate movement systems that put the fovea on a target and keep it there. Each of these movement systems shares the same effector pathway: the three bilateral groups of oculomotor neurons in the brain stem. These five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still on the retina during sustained head rotation and are driven by visual stimuli. All eye movements except vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts. Finally, there are times when the eye must stay still in the orbit so that it can examine a stationary object. Thus, a sixth system, the fixation system, holds the eye still during intent gaze. This requires active suppression of eye movement, for vision is most accurate when the eyes are still. When we look at an object of interest, a neural system of fixation actively prevents the eyes from moving. The fixation system is less active when we are doing something that does not require vision, for example, mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades.
    The purpose of the saccade is to move the eyes as quickly as possible. Saccades are highly stereotyped: they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second at speeds up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement. We can change the amplitude and direction of our saccades voluntarily, but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands ("look left"). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity; thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems. A coherent integration of these different eye movements, together with the other movements, essentially corresponds to a gating-like effect on the brain areas involved. Gaze control can thus be seen as one system that decides which action should be enabled and which should be inhibited, and another that improves the performance of the action while it is executed. It follows that the underlying guiding principle of gaze control is the kind of stimuli presented to the system, thereby linking gaze to the task that is going to be executed. This thesis aims at validating the strong relation between actions and gaze. In the first part, a gaze controller has been studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. The integration of eye movements raises the problem of which action should be selected when a new stimulus is presented. The action selection problem is solved by the basal ganglia, brain structures that react to the different salience values of the environment. In the second part of this work, gaze behaviour has been studied during a locomotion task. The final objective is to show how different tasks, such as the locomotion task, determine the salience values that drive the gaze.
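    The quantitative claims about saccades above (velocity fixed by target eccentricity, saturating near 900°/s) are commonly summarized by a "main sequence" fit. Below is a minimal sketch of one common saturating form; the constants are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Soft-saturating main-sequence fit: peak velocity grows with amplitude and
# saturates toward a ceiling, V(A) = V_MAX * (1 - exp(-A / C)).
V_MAX = 900.0   # deg/s, the saturation velocity quoted in the text
C = 14.0        # deg, illustrative saturation constant (an assumption)

def peak_velocity(amplitude_deg):
    """Peak saccade velocity (deg/s) for a given amplitude (deg)."""
    return V_MAX * (1.0 - np.exp(-amplitude_deg / C))

for amp in (2.0, 10.0, 30.0):
    print(f"{amp:5.1f} deg saccade -> ~{peak_velocity(amp):5.0f} deg/s peak")
```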

    Humanoid Robots

    For many years, human beings have been trying, in all manner of ways, to recreate the complex mechanisms that form the human body. Such a task is extremely complicated and the results are not yet fully satisfactory. However, with increasing technological advances based on theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research is intended not only to create humanoid robots, a great part of them constituting autonomous systems, but also, in some way, to offer deeper knowledge of the systems that form the human body, with a view to possible applications in rehabilitation technology for human beings, gathering together studies related not only to Robotics but also to Biomechanics, Biomimetics, Cybernetics, and other areas. This book presents a series of research efforts inspired by this ideal, carried out by various researchers worldwide, seeking to analyze and discuss diverse subjects related to humanoid robots. The presented contributions explore aspects of robotic hands, learning, language, vision, and locomotion.

    Generating whole body movements for dynamic anthropomorphic systems under constraints

    This thesis studies the question of whole-body motion generation for anthropomorphic systems. Within this work, the problem of modeling and control is considered by addressing the difficult issue of generating human-like motion. First, a dynamic model of the humanoid robot HRP-2 is elaborated based on the recursive Newton-Euler algorithm for spatial vectors. A new dynamic control scheme is then developed, adopting a cascade of quadratic programs (QPs) optimizing cost functions and computing the torque control while satisfying equality and inequality constraints. The cascade of quadratic programs is defined by a stack of tasks associated with a priority order. Next, we propose a unified formulation of planar contact constraints, and we demonstrate that the proposed method allows multiple non-coplanar contacts to be taken into account and generalizes the common ZMP constraint for the case where only the feet are in contact with the ground. Then, we link motion-generation algorithms from robotics to human motion capture tools by developing an original motion-generation method aimed at imitating human motion. This method is based on reshaping the captured data and editing the motion using the previously introduced hierarchical solver and the definition of dynamic tasks and constraints; it allows a captured human motion to be adjusted so that it is faithfully reproduced on a humanoid while respecting the robot's own dynamics. Finally, in order to simulate movements resembling those of humans, we develop an anthropomorphic model with a higher number of degrees of freedom than that of HRP-2. The generic solver is used to simulate motion on this new model, and a sequence of tasks is defined to describe a scenario played by a human. Through a simple qualitative analysis of the motion, we demonstrate that taking the dynamics into account naturally increases the realism of the movement.
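    The priority ordering in a stack of tasks can be illustrated compactly. The sketch below uses the classic velocity-level null-space projection between two tasks as a lightweight stand-in for the QP cascade described above (which additionally handles inequality constraints and torque-level control); all matrices are toy data.

```python
import numpy as np

def solve_two_task_priority(J1, e1, J2, e2, damping=1e-6):
    """Velocity-level resolution of two prioritized tasks J_i q_dot = e_i.

    Classic null-space projection: the secondary task is resolved only in
    the null space of the primary one, so it cannot disturb it.
    """
    J1_pinv = np.linalg.pinv(J1, rcond=damping)
    qdot1 = J1_pinv @ e1                       # primary task solution
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1    # null-space projector of task 1
    qdot2 = np.linalg.pinv(J2 @ N1, rcond=damping) @ (e2 - J2 @ qdot1)
    return qdot1 + qdot2

# Toy example: 6-DoF chain, 3D primary task, 2D secondary task.
rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((3, 6)), rng.standard_normal((2, 6))
qdot = solve_two_task_priority(J1, np.array([0.1, 0.0, -0.05]),
                               J2, np.array([0.02, 0.01]))
```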

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Indeed, when gathering information from the available sensors, the richness of visual data makes it possible to provide a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The huge amount of collected data allows one to consider both methods exploiting the totality of the data (dense approaches) and methods using a reduced set obtained from feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. In parallel, sparse visual data are extracted in the form of geometric primitives in order to implement a visual servoing control scheme that satisfies proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is of great importance and critical at the same time. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary; this offers no possibility to actively adapt input trajectories in order to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
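    The passive Kalman-based observer mentioned above follows the standard predict/update cycle. Below is a minimal linear sketch with an assumed constant-velocity state model and position-only measurements; the real needle-pose problem involves a nonlinear camera projection (calling for an EKF/UKF-style linearization omitted here), so this is illustrative rather than the thesis's actual filter.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for x_{k+1} = F x_k + w, z_k = H x_k + v."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 0.02                                     # assumed 50 Hz endoscope images
F = np.block([[np.eye(3), dt * np.eye(3)],    # constant-velocity state model
              [np.zeros((3, 3)), np.eye(3)]])
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position-only measurements
x, P = np.zeros(6), np.eye(6)
Q, R = 1e-4 * np.eye(6), 1e-2 * np.eye(3)     # illustrative noise covariances
x, P = kalman_step(x, P, z=np.array([0.01, -0.02, 0.10]), F=F, H=H, Q=Q, R=R)
```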

    Computer vision based behavior analysis

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Ph.D.), Bilkent University, 2009. Includes bibliographical references (leaves 111-124). In this thesis, the recognition and understanding of behavior based on visual inputs and automated decision schemes are investigated. Behavior analysis is carried out over a wide scope, ranging from animal behavior to human behavior. Due to this extensive coverage, we present our work in two main parts. Part I of the thesis investigates the locomotor behavior of lab animals with particular focus on drug-screening experiments, and Part II investigates the analysis of behavior in humans, with specific focus on visual attention. The animal behavior analysis method presented in Part I is composed of motion tracking based on background subtraction, determination of discriminative behavioral characteristics from the extracted path and speed information, summarization of these characteristics in feature vectors, and classification of the feature vectors. The experiments presented in Part I indicate that the proposed animal behavior analysis system proves very useful in behavioral and neuropharmacological studies as well as in drug-screening and toxicology studies, owing to its superior capability in detecting discriminative behavioral alterations in response to pharmacological manipulations. The human behavior analysis scheme presented in Part II proposes an efficient method to resolve attention fixation points in unconstrained settings, adopting a developmental-psychology point of view. The head of the experimenter is modeled as an elliptic cylinder. The head model is tracked using the Lucas-Kanade optical flow method and the pose values are estimated accordingly. The resolved poses are then transformed into the gaze direction and the depth of the attended object through two Gaussian regressors. The regression outputs are superposed to find initial estimates of the object center locations. These estimates are pooled to mimic human saccades realistically, and saliency is computed in the prospective region to determine the final estimates of the attention fixation points. Given the extensive generalization capabilities of the human behavior analysis method in Part II, we propose that rapid gaze estimation can also be achieved for establishing joint attention in interaction-driven robot communication. Yücel, Zeynep. Ph.D.
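    The tracking stage of the animal behavior pipeline (background subtraction followed by path and speed features) can be sketched with off-the-shelf components. The snippet below uses OpenCV's MOG2 subtractor as a stand-in for the thesis's own background model; the blur kernel, area threshold, and other choices are assumptions.

```python
import cv2

# MOG2 as a generic background model; parameters are illustrative.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def track_centroid(frame, min_area=50.0):
    """Foreground centroid (animal position) in one frame, or None."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)                        # suppress speckle
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # OpenCV 3/4 safe
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)                # largest blob = animal
    if cv2.contourArea(c) < min_area:
        return None
    m = cv2.moments(c)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])     # centroid (x, y)

# Path and speed features then follow from successive centroids, e.g.
# speed_k = ||p_k - p_{k-1}|| / dt, summarized into the feature vectors
# that the classifier consumes.
```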

    Modeling of human movement for the generation of humanoid robot motion

    Humanoid robotics is coming of age with faster and more agile robots. To complement the physical complexity of humanoid robots, the robotics algorithms being developed to derive their motion have also become progressively more complex. The work in this thesis spans two research fields, human neuroscience and humanoid robotics, and brings some ideas from the former to aid the latter. By exploring the anthropological link between the structure of a human and that of a humanoid robot, we aim to guide conventional robotics methods like local optimization and task-based inverse kinematics towards more realistic, human-like solutions. First, we look at the dynamic manipulation of human hand trajectories while playing with a yoyo. By recording human yoyo playing, we identify the control scheme used as well as a detailed dynamic model of the hand-yoyo system. Using optimization, this model is then used to implement stable yoyo playing within the kinematic and dynamic limits of the humanoid HRP-2. The thesis then extends its focus to human and humanoid locomotion. We take inspiration from human neuroscience research on the role of the head in human walking and implement a humanoid robotics analogy to it. By allowing a user to steer the head of a humanoid, we develop a control method to generate deliberative whole-body humanoid motion, including stepping, purely as a consequence of the head movement. This idea of understanding locomotion as a consequence of reaching a goal is extended in the final study, where we look at human motion in more detail. Here, we aim to draw a link between “invariants” in neuroscience and “kinematic tasks” in humanoid robotics. We record and extract stereotypical characteristics of human movements during a walking and grasping task. These results are then normalized and generalized such that they can be regenerated for other anthropomorphic figures with kinematic limits different from those of humans. The final experiments show a generalized stack of tasks that can generate realistic walking and grasping motion for the humanoid HRP-2. The general contribution of this thesis is in showing that, while motion planning for humanoid robots can be tackled by classical methods of robotics, the production of realistic movements necessitates combining these methods with the systematic and formal observation of human behavior.
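    The “kinematic tasks” referred to above are typically resolved with differential inverse kinematics. Below is a generic damped least-squares step of the kind used to track, for example, a user-steered head target; the Jacobian and error values are placeholder data, not from the thesis.

```python
import numpy as np

def dls_ik_step(J, err, lam=0.1):
    """One damped least-squares IK step:
    q_dot = J^T (J J^T + lam^2 I)^{-1} err,
    where J is the task Jacobian at the current posture and err is the
    task-space error to be reduced. The damping lam regularizes near
    singularities at the cost of slower convergence."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + lam**2 * np.eye(J.shape[0]), err)

# Toy example: a 10-DoF chain with a 6D head-pose task (position + orientation).
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 10))                    # placeholder task Jacobian
err = np.array([0.05, 0.0, 0.02, 0.0, 0.01, 0.0])   # assumed head-pose error
qdot = dls_ik_step(J, err)
```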