
    Body models in humans, animals, and robots: mechanisms and plasticity

    Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent - yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, over body image and body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman - "the man who lost his body" - in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I then detail how robots can inform the biological sciences dealing with body representations and, finally, I study which of the features of the "body in the brain" should be transferred to robots, giving rise to more adaptive, resilient, self-calibrating machines. Comment: 27 pages, 8 figures.

    Self-Contained and Automatic Calibration of a Multi-Fingered Hand Using Only Pairwise Contact Measurements

    A self-contained calibration procedure that can be performed automatically, without additional external sensors or tools, is a significant advantage, especially for complex robotic systems. Here, we show that the kinematics of a multi-fingered robotic hand can be precisely calibrated only by moving the tips of the fingers pairwise into contact. The only prerequisite is sensitive contact detection, e.g., by torque sensing in the joints (as in our DLR-Hand II) or by tactile skin. The measurement function for a given joint configuration is the distance between the modeled fingertip geometries, but the actual measurement is always zero. In an in-depth analysis, we prove that this contact-based calibration determines all quantities needed for manipulating objects with the hand, i.e., the difference vectors of the fingertips, and that it is as sensitive as a calibration using an external visual tracking system and markers. We describe the complete calibration scheme, including the selection of optimal sample joint configurations and search motions for the contacts despite the initial kinematic uncertainties. In a real-world calibration experiment for the torque-controlled four-fingered DLR-Hand II, the maximal error of 17.7 mm is reduced to only 3.7 mm. Comment: Presented at the 2023 IEEE-RAS International Conference on Humanoid Robots.
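
    The heart of the method is a standard nonlinear least-squares problem whose residual, for each recorded contact configuration, is the modeled distance between the touching fingertips, while the measured value is always zero. Below is a minimal sketch of that idea for two planar two-link "fingers" with unknown link lengths; the geometry, numbers, and function names are illustrative stand-ins, not the DLR-Hand II implementation:

        import numpy as np
        from scipy.optimize import least_squares

        def tip(base_x, l1, l2, q):
            # forward kinematics of one planar two-link finger
            return np.array([base_x + l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                             l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

        def ik(base_x, l1, l2, p):
            # closed-form inverse kinematics, used here only to synthesize contacts
            dx, dy = p[0] - base_x, p[1]
            c2 = np.clip((dx * dx + dy * dy - l1**2 - l2**2) / (2 * l1 * l2), -1.0, 1.0)
            q2 = np.arccos(c2)
            q1 = np.arctan2(dy, dx) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
            return np.array([q1, q2])

        TRUE = (0.060, 0.050, 0.065, 0.060)   # unknown link lengths [m]
        BASES = (0.0, 0.1)                    # known finger base positions [m]

        rng = np.random.default_rng(0)
        contacts = []
        for _ in range(30):
            qa = rng.uniform([0.1, 0.8], [0.6, 1.6])     # finger A configuration
            p = tip(BASES[0], TRUE[0], TRUE[1], qa)      # its fingertip position
            qb = ik(BASES[1], TRUE[2], TRUE[3], p)       # finger B moved to touch it
            contacts.append((qa, qb))

        def residuals(theta, contacts):
            # at every contact, the modeled fingertip distance must vanish
            l1a, l2a, l1b, l2b = theta
            return np.concatenate([tip(BASES[0], l1a, l2a, qa)
                                   - tip(BASES[1], l1b, l2b, qb)
                                   for qa, qb in contacts])

        sol = least_squares(residuals, x0=[0.05, 0.06, 0.07, 0.05], args=(contacts,))
        print(sol.x)   # converges to the true link lengths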

    Improving Dynamics Estimations and Low Level Torque Control Through Inertial Sensing

    In 1996, professors J. Edward Colgate and Michael Peshkin invented cobots as robotic equipment safe enough to interact with human workers. Twenty years later, collaborative robots are in high demand in the packaging industry and have already been widely adopted by companies struggling to meet customer demand. Meanwhile, cobots are still making their way into environments where value-added tasks require more complex interactions between robots and human operators. In other applications, such as a rescue mission in a disaster scenario, robots have to deal with highly dynamic environments and uneven terrain. All these applications require robust, fine, and fast control of the interaction forces, especially in the case of locomotion on uneven terrain in an environment where unexpected events can occur. For under-actuated systems, which mobile robots typically are, such interaction forces can only be modulated through the control of internal joint torques. For that purpose, efficient low-level joint torque control is a critical requirement, and it motivated the research presented here. This thesis provides a thorough model analysis of a typical low-level joint actuation subsystem, powered by a brushless DC motor and suitable for torque control. It then proposes improvements to the procedures for identifying model parameters, which is particularly challenging in the case of coupled joints, with a view to improving their control. Along with these procedures, it proposes novel methods for the calibration of inertial sensors, as well as the use of such sensors in the estimation of joint torques.
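
    To give a flavor of the identification step: when joint acceleration is available (for instance reconstructed from inertial sensors mounted on the links), the parameters of a simple single-joint actuation model enter the torque balance linearly and can be recovered by ordinary least squares. The model, numbers, and the assumption of a known external torque below are an illustrative sketch, not the procedure developed in the thesis:

        import numpy as np

        # single-joint model: k_t * i = J * ddq + b * dq + tau_c * sign(dq) + tau_ext
        rng = np.random.default_rng(0)
        n = 2000
        dq = rng.uniform(-2.0, 2.0, n)        # joint velocity [rad/s]
        ddq = rng.uniform(-20.0, 20.0, n)     # joint acceleration [rad/s^2]
        tau_ext = rng.uniform(-1.0, 1.0, n)   # known external torque [Nm]

        k_t, J, b, tau_c = 0.11, 0.004, 0.02, 0.15          # ground truth
        i = (J * ddq + b * dq + tau_c * np.sign(dq) + tau_ext) / k_t
        i += rng.normal(0.0, 0.01, n)                       # current-sensor noise

        # rearranged: tau_ext = [i, -ddq, -dq, -sign(dq)] @ [k_t, J, b, tau_c];
        # a known external torque anchors the scale (with tau_ext = 0, only the
        # ratios of the parameters would be identifiable)
        A = np.column_stack([i, -ddq, -dq, -np.sign(dq)])
        theta, *_ = np.linalg.lstsq(A, tau_ext, rcond=None)
        print(theta)   # close to [0.11, 0.004, 0.02, 0.15]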

    Learning body models: from humans to humanoids

    Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent. Yet, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. The mechanisms of operation of body models in the brain are largely unknown, and even less is known about how they are constructed from experience after birth. In collaboration with developmental psychologists, we conducted targeted experiments to understand how infants acquire their first "sensorimotor body knowledge". These experiments inform our work in which we construct embodied computational models on humanoid robots that address the mechanisms behind learning, adaptation, and operation of multimodal body representations. At the same time, we assess which of the features of the "body in the brain" should be transferred to robots to give rise to more adaptive, resilient, self-calibrating machines. We extend traditional robot kinematic calibration, focusing on self-contained approaches where no external metrology is needed: self-contact and self-observation. A problem formulation that allows several ways of closing the kinematic chain to be combined simultaneously is presented, along with a calibration toolbox and experimental validation on several robot platforms. Finally, next to models of the body itself, we study peripersonal space - the space immediately surrounding the body. Again, embodied computational models are developed and, subsequently, the possibility of turning these biologically inspired representations into safe human-robot collaboration is studied. Comment: 34 pages, 5 figures. Habilitation thesis, Faculty of Electrical Engineering, Czech Technical University in Prague (2021).
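
    The calibration formulation amounts to stacking the residuals of all available chain closures into one least-squares problem. Here is a toy version for a planar two-link arm with unknown link lengths, combining self-contact (the modeled fingertip must coincide with a known point on the body) with self-observation (a camera measures the fingertip position); everything below is an illustrative sketch, not the toolbox's API:

        import numpy as np
        from scipy.optimize import least_squares

        def tip(theta, q):
            l1, l2 = theta
            return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                             l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

        def ik(theta, p):
            # used only to synthesize self-contact configurations
            l1, l2 = theta
            c2 = np.clip((p @ p - l1**2 - l2**2) / (2 * l1 * l2), -1.0, 1.0)
            q2 = np.arccos(c2)
            q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
            return np.array([q1, q2])

        L_TRUE = (0.30, 0.25)
        touch_points = [np.array(p) for p in [(0.10, 0.0), (0.15, 0.05), (0.20, -0.05)]]

        rng = np.random.default_rng(1)
        contacts = [(ik(L_TRUE, p), p) for p in touch_points]      # self-contact
        vision = [(q, tip(L_TRUE, q) + rng.normal(0.0, 0.002, 2))  # self-observation
                  for q in rng.uniform(-1.5, 1.5, (20, 2))]

        def residuals(theta, contacts, vision):
            res = []
            for q, p in contacts:      # closure 1: fingertip on a known body point
                res.extend(tip(theta, q) - p)
            for q, p_obs in vision:    # closure 2: noisy camera measurement
                res.extend(tip(theta, q) - p_obs)
            return np.array(res)

        sol = least_squares(residuals, x0=[0.35, 0.20], args=(contacts, vision))
        print(sol.x)   # close to (0.30, 0.25)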

    Sensorimotor representation learning for an "active self" in robots: A model survey

    Safe human-robot interaction requires robots to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being given a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes of the mechanisms underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotic models of these sensory representations and robotic models of the self, and we compare these models with their human counterparts. Finally, we analyse what is missing from these robotic models and propose a theoretical computational framework that aims to allow the emergence of a sense of self in artificial agents by developing sensory representations through self-exploration.

    Calibration of an Elastic Humanoid Upper Body and Efficient Compensation for Motion Planning

    High absolute accuracy is an essential prerequisite for a humanoid robot to autonomously and robustly perform manipulation tasks while avoiding obstacles. We present for the first time a kinematic model for a humanoid upper body incorporating joint and transversal elasticities. These elasticities lead to significant deformations due to the robot's own weight, and the resulting model is implicitly defined via a torque equilibrium. We successfully calibrate this model for DLR's humanoid Agile Justin, including all Denavit-Hartenberg parameters and elasticities. The calibration is formulated as a combined least-squares problem with priors, based on measurements of the end-effector positions of both arms via an external tracking system. The absolute position error is massively reduced from 21 mm to 3.1 mm on average over the whole workspace. Using this complex and implicit kinematic model in motion planning is challenging. We show that for optimization-based path planning, integrating the iterative solution of the implicit model into the optimization loop leads to an elegant and highly efficient solution. For mildly elastic robots like Agile Justin, there is no performance impact, and even for a simulated highly flexible robot with 20 times higher elasticities, the runtime increases by only 30%.
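
    The implicit character of such a model can be seen in a one-joint toy example: the deflection of an elastic joint depends on the gravity torque, which itself depends on the deflected angle, so even the forward kinematics must be solved iteratively. A minimal fixed-point sketch with made-up parameters (the actual model and its planner integration are far richer):

        import numpy as np

        M, G, LC = 4.0, 9.81, 0.35   # link mass [kg], gravity [m/s^2], COM offset [m]
        K = 600.0                    # joint stiffness [Nm/rad]

        def deflected_angle(q_cmd, iters=20):
            # torque equilibrium: K * delta = -M * G * LC * cos(q_cmd + delta)
            delta = 0.0
            for _ in range(iters):
                delta = -(M * G * LC / K) * np.cos(q_cmd + delta)
            return q_cmd + delta

        print(deflected_angle(0.0))  # a horizontal link sags by roughly 23 mrad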

    Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes a hypothesis about the learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii), we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
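
    A minimal sketch of the association learning behind such receptive fields: each skin taxel keeps, per distance bin, the empirical probability that an approaching visual stimulus went on to produce contact. The bin layout and update rule below are illustrative simplifications, not the iCub implementation:

        import numpy as np

        EDGES = np.linspace(0.0, 0.4, 9)   # stimulus distance bins, 0-40 cm
        pos = np.zeros(8)                  # approaches that ended in contact
        neg = np.zeros(8)                  # approaches that did not

        def bin_of(distance):
            return int(np.clip(np.digitize(distance, EDGES) - 1, 0, 7))

        def update(distance, contact_followed):
            (pos if contact_followed else neg)[bin_of(distance)] += 1

        def contact_probability(distance):
            b = bin_of(distance)
            n = pos[b] + neg[b]
            return pos[b] / n if n else 0.5   # uninformed prior before any data

        rng = np.random.default_rng(2)
        for _ in range(500):                  # near stimuli almost always touch
            d = rng.uniform(0.0, 0.4)
            update(d, bool(d < 0.15 or rng.random() < 0.05))

        print(contact_probability(0.05), contact_probability(0.35))
        # high near the skin, low far away: a margin-of-safety profile that an
        # avoidance (or reaching) controller can threshold on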

    Adaptive sensorimotor peripersonal space representation and motor learning for a humanoid robot

    This thesis presents possible computational mechanisms by which a humanoid robot can develop a coherent representation of the space within its reach (its peripersonal space) and use it to control its movements. Those mechanisms are inspired by current theories of peripersonal space representation and motor control in humans, targeting a cross-fertilization between robotics on one side and cognitive science on the other. This research addresses the issue of adaptivity at the sensorimotor level, at the control level, and at the level of simple task learning. First, this work considers the concept of body schema and suggests a computational translation of this concept appropriate for controlling a humanoid robot. This model of the body schema is adaptive and evolves as a result of the robot's sensory experience. It suggests new avenues for understanding various psychophysical and neuropsychological phenomena of human peripersonal space representation, such as adaptation to distorted vision and tool use, fake-limb experiments, body-part-centered receptive fields, and multimodal neurons. Second, it is shown how the motor modality can be added to the body schema. The suggested controller is inspired by the dynamical systems theory of motor control and allows the robot to simultaneously and robustly control its limbs in joint angle space and in end-effector location space. This amounts to controlling the robot in both the proprioceptive and visual modalities. This multimodal control can benefit from the advantages offered by each modality and is better than traditional robotic controllers in several respects. It offers a simple and elegant solution to the singularity and joint-limit avoidance problems and can be seen as a generalization of the Damped Least Squares approach to robot control. The controller exhibits several properties of human reaching movements, such as quasi-straight hand paths, bell-shaped velocity profiles, and non-equifinality. In a third step, the motor modality is endowed with a statistical learning mechanism, based on Gaussian Mixture Models, that enables the humanoid to learn motor primitives from demonstrations. The robot is thus able to learn simple manipulation tasks and generalize them to various contexts, in a way that is robust to perturbations occurring during task execution. In addition to simulation results, the whole model has been implemented and validated on two humanoid robots, the Hoap3 and the iCub, enabling them to learn their arm and head geometries, perform reaching movements, adapt to unknown tools and visual distortions, and learn simple manipulation tasks in a smooth, robust, and adaptive way. Finally, this work hints at possible computational interpretations of the concepts of body schema, motor perception, and motor primitives.
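
    Since the abstract positions the controller as a generalization of Damped Least Squares, a minimal sketch of a plain DLS velocity step may help; it stays well behaved near the singular, outstretched configuration where a naive pseudoinverse blows up (the numbers are illustrative, not the thesis' controller):

        import numpy as np

        def dls_step(J, err, damping=0.05):
            # dq = J^T (J J^T + lambda^2 I)^-1 err  -- finite even at singularities
            JJt = J @ J.T
            return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), err)

        def jacobian(l1, l2, q):
            # planar two-link arm Jacobian
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                             [ l1 * c1 + l2 * c12,  l2 * c12]])

        q = np.array([0.3, 0.001])                       # nearly singular elbow
        dq = dls_step(jacobian(0.3, 0.25, q), np.array([0.01, 0.0]))
        print(dq)                                        # finite, damped joint step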