
    Impact-Aware Task-Space Quadratic-Programming Control

    Generating on-purpose impacts with rigid robots is challenging, as impacts may lead to severe hardware failures due to abrupt changes in velocities and torques. Without dedicated hardware and controllers, robots typically operate at near-zero velocity in the vicinity of contacts. We assume that the amount of impact the hardware can absorb is known, and focus solely on the controller aspects. The novelty of our approach is twofold: (i) it uses the task-space inverse dynamics formalism, which we extend by seamlessly integrating impact tasks; (ii) it does not require separate models with switches or a reset map to operate the robot undergoing impact tasks. Our main idea lies in integrating post-impact state prediction and impact-aware inequality constraints into our existing general-purpose whole-body controller. To achieve such prediction, we formulate task-space impacts and their propagation along the kinematic tree of a floating-base robot, with the resulting joint velocity and torque jumps. As a result, the feasible solution set accounts for the various constraints induced by expected impacts. In multi-contact situations with under-actuated legged robots subject to multiple impacts, we also enforce standing stability margins. By design, our controller does not require precise knowledge of impact location and timing. We assessed our formalism with the humanoid robot HRP-4, generating maximum contact velocities while neither breaking established contacts nor damaging the hardware.
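
    The post-impact prediction at the heart of this approach follows the standard rigid-body impact map. Below is a minimal sketch of that map in Python/NumPy, assuming a known joint-space inertia matrix and contact Jacobian; the function name and the restitution coefficient are illustrative, not the paper's implementation, which embeds this prediction inside the whole-body QP.

        import numpy as np

        def post_impact_velocity(M, J, qd_minus, e=0.0):
            """Predict the joint-velocity jump caused by an impact.

            M        : (n, n) joint-space inertia matrix
            J        : (m, n) Jacobian of the impacting contact point
            qd_minus : (n,) pre-impact joint velocities
            e        : restitution coefficient (0 = perfectly inelastic)
            """
            Minv = np.linalg.inv(M)
            # Contact-space inverse inertia (Delassus operator).
            Lambda_inv = J @ Minv @ J.T
            # Impulse mapping the contact velocity J @ qd_minus to -e * J @ qd_minus.
            impulse = np.linalg.solve(Lambda_inv, -(1.0 + e) * J @ qd_minus)
            # Post-impact joint velocities and the induced jump.
            qd_plus = qd_minus + Minv @ J.T @ impulse
            return qd_plus, qd_plus - qd_minus

    A controller can then tighten its joint-velocity and torque bounds by the predicted jump, so that the QP's feasible set remains valid across the expected impact.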

    Bipedal Hopping: Reduced-order Model Embedding via Optimization-based Control

    This paper presents the design and validation of a controller for hopping on the 3D bipedal robot Cassie. A spring-mass model is identified from the kinematics and compliance of the robot. The spring stiffness and damping are encapsulated by the leg length; thus, actuating the leg length can create and control hopping behaviors. Trajectory optimization via direct collocation is performed on the spring-mass model to plan jumping and landing motions. The leg length trajectories are used as desired outputs to synthesize a control Lyapunov function based quadratic program (CLF-QP). Centroidal angular momentum, taken as an additional output in the CLF-QP, is also stabilized in the jumping phase to prevent whole-body rotation in the underactuated flight phase. The solution to the CLF-QP is a nonlinear feedback control law that achieves dynamic jumping behaviors on bipedal robots with compliance. The framework presented in this paper is verified experimentally on the bipedal robot Cassie.
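
    A CLF-QP of this kind has a compact generic form: minimize control effort subject to a Lyapunov decrease condition that is affine in the input. A minimal sketch using CVXPY follows; the Lie-derivative terms LfV and LgV and the relaxation weight are placeholders for quantities computed from the robot's output dynamics, not the paper's exact formulation, which also carries torque limits and contact constraints.

        import cvxpy as cp

        def clf_qp_step(LfV, LgV, V, gamma=10.0, relax_weight=1e3):
            """One relaxed CLF-QP solve: min ||u||^2 + p*d^2
            s.t. LfV + LgV @ u <= -gamma * V + d  (exponential decrease)."""
            u = cp.Variable(LgV.shape[0])   # control inputs (e.g., motor torques)
            d = cp.Variable(nonneg=True)    # relaxation keeping the QP feasible
            objective = cp.Minimize(cp.sum_squares(u) + relax_weight * cp.square(d))
            constraints = [LfV + LgV @ u <= -gamma * V + d]
            cp.Problem(objective, constraints).solve()
            return u.value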

    Orientation-Aware Model Predictive Control with Footstep Adaptation for Dynamic Humanoid Walking

    This paper proposes a novel orientation-aware model predictive control (MPC) scheme for dynamic humanoid walking that can plan footstep locations online. Instead of a point-mass model, this work uses the augmented single rigid body model (aSRBM) to enable the MPC to leverage orientation dynamics and stepping strategy within a unified optimization framework. With the footstep location as part of the decision variables in the aSRBM, the MPC can reason about stepping within the kinematic constraints. A task-space controller (TSC) tracks the body pose and swing-leg references output by the MPC, while exploiting the full-order dynamics of the humanoid. The proposed control framework is suitable for real-time applications, since both the MPC and the TSC are formulated as quadratic programs. Simulation investigations show that the orientation-aware MPC-based framework is more robust against external torque disturbances than state-of-the-art controllers using the point-mass model, especially when the torso undergoes large angular excursions. The same control framework also enables the MIT Humanoid to overcome uneven terrains, such as traversing a wave field.
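
    In outline, such an MPC stacks the linearized rigid-body dynamics over a horizon and adds the next footstep as a decision variable bounded by leg kinematics. The sketch below shows that structure for generic matrices A, B, C; the names, the affine coupling of the footstep into the dynamics, and the box-shaped reachability limit are simplifying assumptions, not the aSRBM formulation itself.

        import cvxpy as cp

        def footstep_mpc(A, B, C, x0, x_ref, p_nom, reach=0.3, N=10):
            """Linear MPC over N steps with the footstep p as a decision
            variable, kept in a box of half-width `reach` around the
            nominal footstep p_nom (a crude kinematic limit)."""
            n, m = B.shape
            X = cp.Variable((n, N + 1))
            U = cp.Variable((m, N))
            p = cp.Variable(2)              # footstep location on the ground plane
            cost = cp.sum_squares(p - p_nom)
            constraints = [X[:, 0] == x0]
            for k in range(N):
                cost += cp.sum_squares(X[:, k + 1] - x_ref)
                cost += 1e-3 * cp.sum_squares(U[:, k])
                # The footstep enters the dynamics through the affine term C @ p.
                constraints += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k] + C @ p]
            constraints += [cp.abs(p - p_nom) <= reach]
            cp.Problem(cp.Minimize(cost), constraints).solve()
            return U[:, 0].value, p.value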

    Dynamic Walking: Toward Agile and Efficient Bipedal Robots

    Dynamic walking on bipedal robots has evolved from an idea in science fiction to a practical reality. This is due to continued progress in three key areas: a mathematical understanding of locomotion, the computational ability to encode this mathematics through optimization, and hardware capable of realizing this understanding in practice. In this context, this review article outlines the end-to-end process of methods that have proven effective in the literature for achieving dynamic walking on bipedal robots. We begin by introducing mathematical models of locomotion, from reduced-order models that capture essential walking behaviors to hybrid dynamical systems that encode the full-order continuous dynamics along with the discrete footstrike dynamics. These models form the basis for gait generation via (nonlinear) optimization problems. Finally, models and their generated gaits merge in the context of real-time control, wherein walking behaviors are translated to hardware. The concepts presented are illustrated throughout in simulation, and experimental instantiations on multiple walking platforms are highlighted to demonstrate the ability to realize dynamic walking on bipedal robots that is agile and efficient.
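
    The hybrid-systems view sketched above alternates continuous dynamics with a discrete reset at footstrike. The following toy example, a one-dimensional hopper rather than a full biped, shows the pattern with SciPy's event detection: integrate the flight dynamics until the guard fires, then apply the reset map and continue.

        import numpy as np
        from scipy.integrate import solve_ivp

        def flight(t, x):
            # Continuous ballistic dynamics: x = [height, vertical velocity].
            return [x[1], -9.81]

        def touchdown(t, x):
            return x[0]                     # guard: height crosses zero
        touchdown.terminal = True
        touchdown.direction = -1            # trigger only on the way down

        def reset(x, alpha=0.8):
            # Discrete footstrike map: instantaneous velocity loss at impact.
            return np.array([1e-6, -alpha * x[1]])

        x = np.array([1.0, 0.0])
        for hop in range(5):
            sol = solve_ivp(flight, (0.0, 10.0), x, events=touchdown, max_step=1e-2)
            x = reset(sol.y[:, -1])         # apply the reset map at the guard
            print(f"hop {hop}: impact speed {abs(sol.y[1, -1]):.2f} m/s")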

    Bridging Vision and Dynamic Legged Locomotion

    Legged robots have demonstrated remarkable advances in robustness and versatility over the past decades. The questions that need to be addressed in this field increasingly focus on reasoning about the environment and autonomy rather than locomotion alone. To answer some of these questions, visual information is essential. If a robot has information about the terrain, it can plan and take preventive actions against potential risks. However, building a model of the terrain is often computationally costly, mainly because of the dense nature of visual data. On top of the mapping problem, robots need feasible body trajectories and contact sequences to traverse the terrain safely, which may also require heavy computations. This computational cost has limited the use of visual feedback to contexts that guarantee (quasi-)static stability, or to planning schemes where contact sequences and body trajectories are computed before starting to execute motions. In this thesis we propose a set of algorithms that reduce the gap between visual processing and dynamic locomotion. We use machine learning to speed up visual data processing and model predictive control to achieve locomotion robustness. In particular, we devise a novel foothold adaptation strategy that uses a map of the terrain built from on-board vision sensors. This map is sent to a foothold classifier based on a convolutional neural network that allows the robot to adjust the landing position of the feet in a fast and continuous fashion. We then use the convolutional neural network based classifier to provide safe future contact sequences to a model predictive controller that optimizes target ground reaction forces in order to track a desired center-of-mass trajectory. We perform simulations and experiments on the hydraulic quadruped robots HyQ and HyQReal. For all experiments, the contact sequences, the foothold adaptations, the control inputs and the map are computed and processed entirely on-board. The various tests show that the robot is able to leverage the visual terrain information to handle complex scenarios in a safe, robust and reliable manner.
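
    The foothold classifier described above scores candidate landing positions from a local heightmap patch. Below is a minimal PyTorch sketch of such a convolutional classifier; the architecture, the 32x32 patch size and the nine candidate offsets are assumptions for illustration, not the network deployed on HyQ.

        import torch
        import torch.nn as nn

        class FootholdClassifier(nn.Module):
            """Toy CNN scoring foothold candidates on a heightmap patch.

            Input : (B, 1, 32, 32) local terrain heightmap around the
                    nominal foothold.
            Output: (B, K) logits over K candidate foothold offsets.
            """
            def __init__(self, num_candidates=9):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                       # 32 -> 16
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),                       # 16 -> 8
                    nn.Flatten(),
                    nn.Linear(32 * 8 * 8, num_candidates),
                )

            def forward(self, patch):
                return self.net(patch)

        # Usage: pick the safest landing offset for one patch.
        model = FootholdClassifier()
        patch = torch.randn(1, 1, 32, 32)
        best = model(patch).argmax(dim=1)  # index of the chosen offset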