
    Multi-physics Modelling of a Compliant Humanoid Robot

    In this paper, we discuss some very important features for getting exploitable simulation results for multibody systems, relying on the example of a humanoid robot. First, we provide a comparison of simulation speed and accuracy for kinematics modeling relying on relative vs. absolute coordinates. This choice is particularly critical for mechanisms with long serial chains (e.g. legs and arms). Compliance in the robot actuation chain is also critical to enhance the robot safety and energy efficiency, but makes the simulator more sensitive to modeling errors. Therefore, our second contribution is to derive the full electro-mechanical model of the inner dynamics of the compliant actuators embedded in our robot. Finally, we report our reasoning for choosing an appropriate contact library. The recommended solution is to couple our simulator with an open-source contact library offering both accurate and fast full-body contact modeling.
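    The compliant actuation discussed above can be illustrated with a minimal series-elastic actuator model: a motor inertia driving a link inertia through a spring-damper. This is a generic sketch with illustrative parameters, not the electro-mechanical model derived in the paper:

```python
def simulate_sea(tau_motor, t_end=1.0, dt=1e-4,
                 J_m=1e-3, J_l=1e-2, k=100.0, b=0.1):
    """Forward-Euler simulation of a generic series-elastic actuator:
    a motor inertia coupled to a link inertia through a spring-damper.
    All parameters are illustrative, not the robot's identified values."""
    th_m = w_m = th_l = w_l = 0.0          # motor/link angle and rate
    for _ in range(int(t_end / dt)):
        # compliant coupling torque between motor and link
        tau_s = k * (th_m - th_l) + b * (w_m - w_l)
        w_m += dt * (tau_motor - tau_s) / J_m
        w_l += dt * tau_s / J_l
        th_m += dt * w_m
        th_l += dt * w_l
    return th_m, th_l
```

    With a constant motor torque, the link follows the motor through the spring with a small deflection, which is exactly the sensitivity to modeling errors the abstract mentions: the simulated behavior depends strongly on the stiffness and damping values.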

    Gaze stabilization of humanoid robots based on internal model

    Robotics, and more particularly humanoid robots, are envisioned as a solution to support humans in dangerous, repetitive and demeaning tasks. Indeed, humanoid robots, with their human shape, are perfectly tailored to integrate into our environment and use tools originally designed for humans. However, the unstabilized vision of such robots severely degrades their perception, thus preventing them from operating autonomously in unknown environments. In this context, gaze stabilization emerged as a promising way to overcome this limitation. It consists in actively controlling the motors of the robot head and eyes in order to stabilize the perceived images. In this thesis, driven by bio-inspiration, we explore the potential use of the concept of internal model to support humanoid robot gaze stabilization. This concept from neuroscience, internally simulating the sensorimotor system, is indeed known to play a central role in human motor control and could also benefit robot control. This doctoral dissertation starts by describing the tools developed to implement and test gaze stabilization controllers. More precisely, the dynamic modelling of humanoid robots and a middleware-based software architecture are addressed. After that, two beneficial uses of internal models for robotic gaze stabilization are demonstrated. First, an anticipatory gaze stabilization based on the concept of virtual linkage is proposed. Then, a multimodal control scheme based on the reafference principle is presented. It complements the first controller with visual and inertial reflexes. (FSA - Sciences de l'ingénieur) -- UCL, 201

    Gaze Stabilization of a Humanoid Robot based on Virtual Joints

    This abstract presents a gaze stabilization controller for humanoid robots using a virtual linkage method based on a criterion minimizing the optical flow. The neck and eye joints are controlled to stabilize a fixation point in the field of view. The possible motions of the joints are specified by a virtual spherical arm between the robot eye and the fixation point. This makes the control problem redundant and allows optimization. An estimation of the optical flow based on the kinematics of the virtual linkage model is presented. It makes it possible to resolve the redundancy with a criterion based on the minimization of this optical flow.

    Robotran-Yarp interface: a framework for real-time controller development based on multibody dynamics simulation

    Multibody dynamics simulation is widely used for testing and prototyping controllers. However, transferring controllers initially developed in simulation to real mechatronic platforms requires modifying the code in order to interface with sensors and actuators. Because of this coupling with the hardware, controller re-usability is severely impacted. In this work, we solve this issue by adding a middleware between the controller and the controlled platform (real or simulated). This framework decouples the controller from the hardware, which allows fast controller development and eases collaboration on large-scale projects. Moreover, it is then possible to simultaneously control the real and the simulated robot from a unique controller. This paper presents the interface of the Robotran multibody dynamics simulator with the YARP middleware. This framework is illustrated with applications on the COMAN and WALK-MAN humanoid robots.

    Robotran-YARP interface: a framework for real-time controller developments based on multibody dynamics simulations

    Multibody dynamics simulation is widely used for prototyping and testing controllers. However, transferring controllers initially developed in simulation to real mechatronic platforms requires updating the code in order to interface with physical sensors and actuators. Due to this strong coupling with specific hardware, controller re-usability is often severely compromised. In the present contribution, we solve this issue by adding a middleware between the controller and the controlled platform (either real or simulated). This framework decouples the controller from the hardware, allows fast controller development and eases collaboration on large-scale projects. Moreover, it offers the possibility to simultaneously control the real and the simulated robot from a unique controller. This paper presents the interface of the Robotran dynamic simulator with the YARP middleware. Robotran leverages symbolic generation of the multibody equations to provide fast and accurate simulations of multibody systems. The speed and accuracy of Robotran make it possible to test real-time controllers in a realistic simulation environment. This framework is illustrated with applications using the COMAN and WALK-MAN humanoid robots.
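    The decoupling pattern described above can be sketched with a hypothetical hardware-abstraction layer. Names such as `RobotInterface` and `SimulatedRobot` are illustrative, not the actual Robotran-YARP API:

```python
from abc import ABC, abstractmethod

class RobotInterface(ABC):
    """Hardware-abstraction layer: the controller only talks to this
    interface, so the same controller can drive either a simulator or
    a real platform. (Hypothetical names, not the Robotran-YARP API.)"""

    @abstractmethod
    def read_state(self):
        """Return (position, velocity) of a single joint."""

    @abstractmethod
    def send_torque(self, tau):
        """Apply a torque command to that joint."""

class SimulatedRobot(RobotInterface):
    """Trivial one-joint stand-in for the multibody simulator."""

    def __init__(self, inertia=1.0, dt=0.01):
        self.q = self.dq = 0.0
        self.inertia, self.dt = inertia, dt

    def read_state(self):
        return self.q, self.dq

    def send_torque(self, tau):
        # semi-implicit Euler step of a free rotating inertia
        self.dq += self.dt * tau / self.inertia
        self.q += self.dt * self.dq

def pd_controller(robot, q_des, steps=2000, kp=50.0, kd=10.0):
    """PD position controller written purely against RobotInterface:
    swapping in a real-hardware implementation needs no change here."""
    for _ in range(steps):
        q, dq = robot.read_state()
        robot.send_torque(kp * (q_des - q) - kd * dq)
```

    Because `pd_controller` depends only on the interface, the same function could drive the real robot through another implementation of `RobotInterface`, which is the point of the middleware layer.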

    Gaze stabilization of a humanoid robot based on virtual linkage

    Gaze stabilization is a fundamental function for humanoid robots. Stabilizing the image being perceived facilitates the processing and thus the interpretation of visual data. In parallel, fixation should also guarantee that the visual target remains centered in the image. Several approaches exist to address the problem of gaze stabilization: closed-loop algorithms processing the visual data or inferring head movements from kinematic measurements, and feed-forward algorithms anticipating head movements from the lower-body commands. In this contribution, we develop a feed-forward controller addressing both image stabilization and target fixation in a unified framework. The addition of a virtual linkage between the robot eye and the visual target allows the gaze control problem to be elegantly rephrased as the classical control of a redundant serial robot manipulator. Furthermore, a novel method to estimate the self-induced optical flow based on the robot kinematics - extended with this virtual linkage - is developed. It is then possible to resolve the redundancy (i.e. guaranteeing target fixation) through a minimization of the optical flow (i.e. achieving image stabilization). This method is validated in simulation with a model of the head of the ARMAR IV humanoid. It is shown that the proposed controller accurately estimates and minimizes the optical flow, while keeping the visual target exactly in the center of the image.
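    The redundancy resolution described above, a fixation task plus a secondary minimization objective, follows the standard velocity-level scheme for redundant manipulators. A generic sketch, not the paper's virtual-linkage formulation:

```python
import numpy as np

def redundant_ik_step(J, x_dot_des, q_dot_secondary, damping=1e-2):
    """One velocity-level IK step for a redundant chain. The primary
    task (keeping the target fixed, x_dot_des) is tracked with a damped
    pseudoinverse; the secondary objective (a descent direction on a
    flow-like cost) is projected into the task nullspace so it cannot
    disturb fixation. This is the generic textbook formulation, not the
    paper's exact equations."""
    m, n = J.shape
    # damped pseudoinverse of the fixation-task Jacobian
    J_pinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))
    # nullspace projector: motions in its range leave the task untouched
    N = np.eye(n) - J_pinv @ J
    return J_pinv @ x_dot_des + N @ q_dot_secondary
```

    The secondary velocity can move freely in the nullspace (e.g. to reduce optical flow) while the task-space velocity of the fixation point stays at its commanded value.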

    Autonomous view selection and gaze stabilization for humanoid robots

    To increase the autonomy of humanoid robots, the visual perception must support the efficient collection and interpretation of visual scene cues by providing task-dependent information. Active vision systems make it possible to extend the observable workspace by employing active gaze control, i.e. by shifting the gaze to relevant areas in the scene. When moving the eyes, stabilization of the camera images is crucial for successful task execution. In this paper, we present an active vision system for task-oriented selection of view directions and gaze stabilization to enable a humanoid robot to robustly perform vision-based tasks. We investigate the interaction between a gaze stabilization controller and view planning to select the next best view direction based on saliency maps which encode task-relevant information. We demonstrate the performance of the system in a real-world scenario, in which a humanoid robot performs vision-based grasping while moving, a task that would not be possible without the combination of view selection and gaze stabilization.
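    A toy version of saliency-driven view selection: score each cell of a discretized saliency map by its saliency minus a cost for large gaze shifts, and pick the maximum. The scoring is illustrative only; the paper's planner is task-dependent and more elaborate:

```python
import numpy as np

def next_best_view(saliency_map, current_view, move_cost=0.1):
    """Pick the next gaze direction as the cell of a discretized
    saliency map maximizing saliency minus a penalty on the gaze
    shift from the current view. (Illustrative scoring only.)"""
    h, w = saliency_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # penalize large gaze shifts away from the current direction
    dist = np.hypot(ys - current_view[0], xs - current_view[1])
    score = saliency_map - move_cost * dist
    return np.unravel_index(np.argmax(score), score.shape)
```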

    Multimodal gaze stabilization of a humanoid robot based on reafferences

    Gaze stabilization is fundamental for humanoid robots. By stabilizing vision, it enhances perception of the environment and keeps regions of interest inside the field of view. In this contribution, a multimodal gaze stabilization combining proprioceptive, inertial and visual cues is introduced. It integrates a classical inverse kinematic control with vestibulo-ocular and optokinetic reflexes. Inspired by neuroscience, our contribution implements a forward internal model that modulates the reflexes based on the reafference principle. This principle filters self-generated movements out of the reflexive feedback loop. The versatility and effectiveness of this method are experimentally validated on the ARMAR-III humanoid robot. We first demonstrate that all the stabilization mechanisms (inverse kinematics and reflexes) are complementary. Then, we show that our multimodal method, combining these three modalities with the reafference principle, provides a versatile gaze stabilizer able to handle a wide range of perturbations.
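    The reafference principle described above can be sketched in a few lines: an efference copy of the motor command feeds a forward model, whose prediction is subtracted from the inertial measurement so that only external perturbations drive the reflex. The identity-gain forward model used here is an assumption; the paper modulates the reflexes with the robot's kinematic model:

```python
def reafference_filter(measured_rate, commanded_rate, forward_gain=1.0):
    """Subtract the forward model's prediction of self-generated motion
    (from the efference copy of the command) from the measured rate,
    leaving only externally caused motion, the 'exafference'.
    The scalar-gain forward model is a simplifying assumption."""
    predicted_rate = forward_gain * commanded_rate  # efference copy
    return measured_rate - predicted_rate

def stabilizing_reflex(measured_rate, commanded_rate, k_reflex=1.0):
    """Reflex (e.g. vestibulo-ocular) driven only by the exafference,
    so deliberate, self-generated gaze shifts are not counteracted."""
    return -k_reflex * reafference_filter(measured_rate, commanded_rate)
```

    With a commanded head rate of 1.0 and a measured rate of 1.5, only the 0.5 of external disturbance reaches the reflex, which is what lets the reflexes coexist with voluntary gaze shifts.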