
    Whole-body MPC for highly redundant legged manipulators: experimental evaluation with a 37 DoF dual-arm quadruped

    Recent progress in legged locomotion has rendered quadruped manipulators a promising solution for performing tasks that require both mobility and manipulation (loco-manipulation). In the real world, task specifications and/or environment constraints may require the quadruped manipulator to be equipped with high redundancy as well as whole-body motion coordination capabilities. This work presents an experimental evaluation of a whole-body Model Predictive Control (MPC) framework achieving real-time performance on a dual-arm quadruped platform consisting of 37 actuated joints. To the best of our knowledge, this is the legged manipulator with the highest number of joints to be controlled with real-time whole-body MPC so far. The computational efficiency of the MPC, while considering the full robot kinematics and the centroidal dynamics model, builds upon an open-source DDP-variant solver and a state-of-the-art optimal control problem formulation. Differently from previous works on quadruped manipulators, the MPC is directly interfaced with the low-level joint impedance controllers, without the need to design an instantaneous whole-body controller. The feasibility on the real hardware is showcased using the CENTAURO platform for the challenging task of picking a heavy object from the ground. Dynamic stepping (trotting) is also showcased for the first time with this robot. The results highlight the potential of replanning with whole-body information in a predictive control loop. Comment: Accepted at the 2023 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2023), final version with video and acknowledgement.
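    The receding-horizon structure behind such an MPC loop can be illustrated with a toy sketch. Everything below is a stand-in, not the paper's method: a double-integrator model replaces the full-kinematics/centroidal-dynamics model, and a batch least-squares solve replaces the DDP-variant solver; only the plan, apply the first control, then replan pattern mirrors the framework.

```python
import numpy as np

def mpc_step(x0, A, B, N=20, r=0.1):
    """Solve a finite-horizon quadratic OCP by batch least squares
    and return only the first control of the plan (receding horizon)."""
    n, m = B.shape
    # Stacked prediction: X = F x0 + G U over the horizon
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # min ||F x0 + G U||^2 + r ||U||^2  ->  (G'G + rI) U = -G'F x0
    U = np.linalg.solve(G.T @ G + r * np.eye(N * m), -G.T @ F @ x0)
    return U[:m]  # apply the first control, then replan next step

# Toy double integrator standing in for the robot model
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
x = np.array([1.0, 0.0])
for _ in range(200):                 # closed loop: replan every step
    x = A @ x + B @ mpc_step(x, A, B)
```

The closed loop drives the toy state toward the origin; in the real framework the same replanning structure runs in real time over the 37-joint model, with the plan handed directly to the joint impedance controllers.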

    Real-Time 6DOF Pose Relocalization for Event Cameras with Stacked Spatial LSTM Networks

    We present a new method to relocalize the 6DOF pose of an event camera solely based on the event stream. Our method first creates the event image from a list of events that occur in a very short time interval, then a Stacked Spatial LSTM Network (SP-LSTM) is used to learn the camera pose. Our SP-LSTM is composed of a CNN to learn deep features from the event images and a stack of LSTMs to learn spatial dependencies in the image feature space. We show that the spatial dependency plays an important role in the relocalization task and that the SP-LSTM can effectively learn this information. The experimental results on a publicly available dataset show that our approach generalizes well and outperforms recent methods by a substantial margin. Overall, our proposed method reduces the position error by approx. 6 times and the orientation error by 3 times compared to the current state of the art. The source code and trained models will be released. Comment: 7 pages, 5 figures.
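    The data flow of a CNN-feature-plus-stacked-LSTM regressor can be sketched as below. All dimensions, the two-layer depth, and the 7-dimensional output (3-D position plus a quaternion) are illustrative assumptions, and the weights are random and untrained, so only the architecture is shown, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM cell step; W maps [x; h] to the four stacked gates."""
    z = W @ np.concatenate([x, h])
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return o * np.tanh(c), c

def sp_lstm(features, H=32, n_layers=2, out_dim=7):
    """Run a stack of LSTMs over CNN feature vectors treated as a
    'spatial' sequence, then regress a pose from the last hidden state."""
    seq_len, D = features.shape
    in_dims = [D] + [H] * (n_layers - 1)
    Ws = [0.1 * rng.standard_normal((4 * H, in_dims[l] + H))
          for l in range(n_layers)]
    seq = features
    for l in range(n_layers):
        h, c, outs = np.zeros(H), np.zeros(H), []
        for t in range(seq_len):
            h, c = lstm_step(seq[t], h, c, Ws[l])
            outs.append(h)
        seq = np.array(outs)          # feed hidden states to next layer
    W_out = 0.1 * rng.standard_normal((out_dim, H))
    return W_out @ seq[-1]            # e.g. 3-D position + quaternion

# An 8x8 CNN feature map flattened into 64 'spatial' feature vectors
feats = rng.standard_normal((64, 16))
pose = sp_lstm(feats)
```

Treating the spatial locations of the feature map as a sequence is what lets the LSTM stack capture the spatial dependencies the abstract highlights.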

    Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks

    We present a new method to translate videos to commands for robotic manipulation using Deep Recurrent Neural Networks (RNN). Our framework first extracts deep features from the input video frames with a deep Convolutional Neural Network (CNN). Two RNN layers with an encoder-decoder architecture are then used to encode the visual features and sequentially generate the output words as the command. We demonstrate that the translation accuracy can be improved by allowing a smooth transition between the two RNN layers and using a state-of-the-art feature extractor. The experimental results on our new challenging dataset show that our approach outperforms recent methods by a fair margin. Furthermore, we combine the proposed translation module with the vision and planning system to let a robot perform various manipulation tasks. Finally, we demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN.
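    The encoder-decoder pattern can be sketched as follows. The toy vocabulary, the simple tanh recurrence, and all dimensions are assumptions for illustration; the weights are untrained, so the emitted words are arbitrary and only the encode-then-greedily-decode flow is meaningful.

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = ["<eos>", "pick", "up", "the", "box"]   # toy vocabulary (assumed)
H, D, V = 16, 8, len(VOCAB)
W_enc = 0.5 * rng.standard_normal((H, H + D))
W_dec = 0.5 * rng.standard_normal((H, H + V))
W_out = rng.standard_normal((V, H))

def rnn_step(W, h, x):
    """Minimal recurrent update standing in for an LSTM/GRU layer."""
    return np.tanh(W @ np.concatenate([h, x]))

def video_to_command(frame_feats, max_words=6):
    """Encoder RNN summarizes CNN frame features; the decoder RNN then
    emits words greedily until <eos>."""
    h = np.zeros(H)
    for f in frame_feats:             # encode the visual sequence
        h = rnn_step(W_enc, h, f)
    words, prev = [], np.zeros(V)
    for _ in range(max_words):        # greedy decoding
        h = rnn_step(W_dec, h, prev)
        idx = int(np.argmax(W_out @ h))
        if idx == 0:                  # <eos>
            break
        words.append(VOCAB[idx])
        prev = np.eye(V)[idx]         # feed back the chosen word
    return words

cmd = video_to_command(rng.standard_normal((10, D)))
```

In the full system, the decoded word sequence is what the planning module consumes to execute the corresponding manipulation task.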

    Tele-impedance based assistive control for a compliant knee exoskeleton

    This paper presents a tele-impedance based assistive control scheme for a knee exoskeleton device. The proposed controller captures the user’s intent to generate task-related assistive torques by means of the exoskeleton in different phases of the subject’s normal activity. To do so, a detailed musculoskeletal model of the human knee is developed and experimentally calibrated to best match the user’s kinematic and dynamic behavior. Three dominant antagonistic muscle pairs are used in our model, from which electromyography (EMG) signals are acquired, processed and used for the estimation of the knee joint torque, trajectory and stiffness trend, in real time. The estimated stiffness trend is then scaled and mapped to a task-related stiffness interval to agree with the desired degree of assistance. The desired stiffness and equilibrium trajectories are then tracked by the exoskeleton’s impedance controller. As a consequence, while minimum muscular activity corresponds to low stiffness, i.e. highly transparent motion, higher co-contractions result in a stiffer joint and a greater level of assistance. To evaluate the robustness of the proposed technique, a study of the dynamics of the human–exoskeleton system is conducted, and the stability in steady-state and transient conditions is investigated. In addition, experimental results of standing-up and sitting-down tasks are demonstrated to further investigate the capabilities of the controller. The results indicate that the compliant knee exoskeleton, incorporating the proposed tele-impedance controller, can effectively generate assistive actions that are volitionally and intuitively controlled by the user’s muscle activity.
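    The core mapping can be sketched as a few lines of arithmetic. The stiffness interval, damping gain, and co-contraction measure below are hypothetical illustrative choices, not the paper's calibrated musculoskeletal model; the sketch only shows how co-contraction is scaled into a stiffness interval and fed to an impedance law.

```python
import numpy as np

def assistive_torque(emg_ago, emg_ant, q, q_des,
                     dq=0.0, k_min=5.0, k_max=60.0, d=1.0):
    """Map antagonistic EMG co-contraction to a joint stiffness in
    [k_min, k_max] (N m/rad, illustrative values) and compute the
    impedance torque tracking the desired equilibrium trajectory.

    emg_ago, emg_ant: normalized EMG envelopes in [0, 1] for the
    agonist/antagonist muscles of each pair."""
    # Co-contraction: the shared activation level across each pair
    cocontraction = np.minimum(emg_ago, emg_ant).mean()
    k = k_min + np.clip(cocontraction, 0.0, 1.0) * (k_max - k_min)
    tau = k * (q_des - q) - d * dq    # impedance law around q_des
    return tau, k

# Moderate co-contraction across three antagonistic pairs
tau, k = assistive_torque(np.array([0.8, 0.7, 0.9]),
                          np.array([0.75, 0.8, 0.85]),
                          q=0.2, q_des=0.5)
```

Relaxed muscles keep `k` near `k_min` (transparent motion), while strong co-contraction pushes it toward `k_max`, reproducing the behavior the abstract describes.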

    A multi-DOF robotic exoskeleton interface for hand motion assistance

    This paper outlines the design and development of a robotic exoskeleton based rehabilitation system. A portable, direct-driven, optimized hand exoskeleton system has been proposed. The optimization procedure, primarily based on matching the exoskeleton and finger workspaces, guided the system design. The selection of actuators for the proposed system emerged as a result of experiments with users of different hand sizes. Using commercial sensors, various hand parameters, e.g. maximum and average force levels, have been measured. The results of these experiments have been mapped directly to the mechanical design of the system. An under-actuated optimal mechanism has been analysed, followed by the design and realization of the first prototype. The system provides both position and force feedback sensory information, which can improve the outcomes of a professional rehabilitation exercise. © 2011 IEEE.