
    Walking Stabilization Using Step Timing and Location Adjustment on the Humanoid Robot, Atlas

    While humans are highly capable of recovering from external disturbances and uncertainties that result in large tracking errors, humanoid robots have yet to reliably mimic this level of robustness. Essential to this is the ability to combine traditional "ankle strategy" balancing with step timing and location adjustment techniques, so that the robot can step quickly to the location needed to continue walking. In this work, we present a new swing speed-up algorithm to adjust the step timing, allowing the robot to set the foot down more quickly to recover from errors in the direction of the current capture point dynamics, and a new algorithm to adjust the desired footstep, expanding the base of support to utilize the center of pressure (CoP)-based ankle strategy for balance. We then utilize the desired centroidal moment pivot (CMP) to calculate the momentum rate of change for our inverse-dynamics-based whole-body controller. We present simulation and experimental results and discuss performance limitations and potential improvements.
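    Both adjustments above are driven by the capture point of the linear inverted pendulum model. A rough sketch of the idea (not the paper's implementation; the one-dimensional simplification, function names, and gains are assumptions):

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum:
    xi = x + xdot/omega, with omega = sqrt(g/z)."""
    omega = math.sqrt(g / com_height)
    return com_pos + com_vel / omega

def adjusted_swing_time(nominal_time, xi, support_edge, gain=0.5, t_min=0.2):
    """Shorten the remaining swing time as the capture point drifts past
    the edge of the support polygon (a simplified 1-D stand-in for a
    swing speed-up rule; gain and minimum time are illustrative)."""
    overshoot = max(0.0, xi - support_edge)
    return max(t_min, nominal_time - gain * overshoot)

# CoM 1 m high, pushed to 0.5 m/s: the capture point lands ~0.16 m
# ahead of the CoM, so the swing is sped up to catch it.
xi = capture_point(com_pos=0.0, com_vel=0.5, com_height=1.0)
t_swing = adjusted_swing_time(nominal_time=0.6, xi=xi, support_edge=0.10)
```

    The further the capture point escapes the support polygon, the shorter the remaining swing time, down to a fixed minimum.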

    Straight-Leg Walking Through Underconstrained Whole-Body Control

    We present an approach for achieving a natural, efficient gait on bipedal robots using straightened legs and toe-off. Our algorithm avoids complex height planning by allowing a whole-body controller to determine the straightest possible leg configuration at run time. The controller solutions are biased towards a straight-leg configuration by projecting leg joint-angle objectives into the null space of the other quadratic-program motion objectives. To allow the legs to remain straight throughout the gait, toe-off is utilized to increase the kinematic reachability of the legs. The toe-off motion is achieved by underconstraining the foot position, allowing it to emerge naturally. We applied this approach of under-specifying the motion objectives to the Atlas humanoid, allowing it to walk over a variety of terrain. We present both experimental and simulation results and discuss performance limitations and potential improvements. Comment: Submitted to the 2018 IEEE International Conference on Robotics and Automation
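    The null-space projection described here is standard in velocity-resolved whole-body control. A minimal sketch, assuming a simple pseudoinverse-based resolution rather than the paper's full QP formulation (names and dimensions are illustrative):

```python
import numpy as np

def nullspace_projected_velocity(J, xdot_task, qdot_posture):
    """Resolve a primary task velocity with the pseudoinverse and add a
    secondary joint objective (e.g. knee straightening) projected into
    the task null space, so it cannot disturb the primary objectives."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ xdot_task + N @ qdot_posture

# Toy 3-DoF leg with a 1-D task: the posture term only moves joints
# that do not affect the task.
J = np.array([[1.0, 0.0, 0.0]])           # task Jacobian (illustrative)
qdot = nullspace_projected_velocity(J, np.array([1.0]),
                                    np.array([0.0, 0.0, 1.0]))
```

    Because the posture objective lives entirely in the null space of `J`, the task velocity `J @ qdot` is exactly the commanded one.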

    Functional Electrical Stimulation of a Quadriceps Muscle Using a Neural-Network Adaptive Control Approach

    Functional electrical stimulation (FES) has been used to help persons with paralysis restore their motor functions. In particular, FES-based devices apply electrical current pulses to stimulate the intact peripheral nerves and produce artificial contraction of paralyzed muscles. The aim of this work is to develop a model reference adaptive controller of the shank movement via FES. A mathematical model is adopted that describes the relationship between the stimulation pulsewidth and the active joint torque produced by the stimulated muscles in non-isometric conditions. A direct adaptive control strategy is used to address the nonlinearities that are linearly parameterized (LP). Since the torque due to the joint stiffness component is non-LP, a neural network (NN) is applied to approximate it. A backstepping approach is developed to guarantee the stability of the closed-loop system. To address saturation of the control input, a model reference adaptive control approach is used to provide good tracking performance without jeopardizing closed-loop stability. Simulation results are provided to validate the proposed work.
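    To give a flavor of the model reference adaptive control idea behind the LP part of the design (the NN stiffness approximation and backstepping are omitted), here is a scalar MRAC sketch with a Lyapunov-based adaptation law. The plant, gains, and reference model are invented for illustration:

```python
# Scalar MRAC sketch: plant  xdot = a*x + b*u  with a, b unknown to the
# controller, reference model  xm_dot = -am*xm + am*r, control law
# u = kx*x + kr*r with Lyapunov-based gain adaptation. All numbers are
# illustrative; the paper's controller additionally handles the non-LP
# stiffness torque with an NN, which this sketch omits.
a, b = -1.0, 2.0         # true (unknown) plant parameters
am, gamma = 4.0, 20.0    # reference-model pole and adaptation gain
dt, r = 1e-3, 1.0        # Euler step size and constant reference
x = xm = 0.0
kx = kr = 0.0            # adaptive gains (ideal values: -1.5 and 2.0)
for _ in range(20_000):  # 20 s of simulated time
    u = kx * x + kr * r
    e = x - xm                       # tracking error
    kx -= gamma * e * x * dt         # adaptation law: kx_dot = -gamma*e*x
    kr -= gamma * e * r * dt         # adaptation law: kr_dot = -gamma*e*r
    x += (a * x + b * u) * dt        # Euler step of the plant
    xm += (-am * xm + am * r) * dt   # Euler step of the reference model
# After adaptation, the plant output tracks the reference model (x ~ xm ~ r).
```

    The Lyapunov argument guarantees the tracking error decays even though the individual gains need not converge to their ideal values without persistent excitation.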

    Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator

    Many state-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation. Model-free PID control methods are more prone to instability due to nonlinearities in the SEA, whereas cascaded model-based robust controllers can remove these effects to achieve stable force control. However, these model-based methods require detailed investigations to characterize the system accurately. Deep reinforcement learning (DRL) has proved to be an effective model-free method for continuous control tasks, though few works demonstrate learning directly on hardware. This paper describes the process of training a DRL policy on the hardware of an SEA pendulum system to track force-control trajectories from 0.05 to 0.35 Hz at 50 N amplitude using the Proximal Policy Optimization (PPO) algorithm. Safety mechanisms are developed and utilized so the policy can train for 12 hours (overnight) without an operator present during the full 21-hour training period. The tracking performance is evaluated, showing an improvement of 25 N in mean absolute error when comparing the first 18 min of training to the full 21 hours for a 50 N amplitude, 0.1 Hz sinusoid desired force trajectory. Finally, the DRL policy exhibits better tracking and stability margins than a model-free PID controller for a 50 N chirp force trajectory. Comment: 8 pages, 5 figures, submitted to IROS 202
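    At the core of PPO is a clipped surrogate objective that keeps each policy update conservative, a useful property when every rollout is collected on physical hardware. A minimal sketch of that objective (not the authors' training code; a full implementation would add value-function and entropy terms):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (to be maximized): the probability
    ratio pi_new/pi_old is clipped to [1-eps, 1+eps], and the pessimistic
    minimum is taken, so a single update cannot move the policy far from
    the one that collected the data."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A large ratio with positive advantage is capped at (1+eps)*A:
obj_pos = ppo_clip_objective(1.5, 1.0)    # -> 1.2
# A shrinking ratio with negative advantage takes the pessimistic bound:
obj_neg = ppo_clip_objective(0.5, -1.0)   # -> -0.8
```

    The `min` makes the bound one-sided: the objective never rewards moving the ratio further than the clip range in the advantageous direction.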

    Hierarchical and Safe Motion Control for Cooperative Locomotion of Robotic Guide Dogs and Humans: A Hybrid Systems Approach

    This paper presents a hierarchical control strategy based on hybrid systems theory, nonlinear control, and safety-critical systems to enable cooperative locomotion of robotic guide dogs and visually impaired people. We address high-dimensional and complex hybrid dynamical models that represent collaborative locomotion. At the high level of the control scheme, local, nonlinear baseline controllers, based on the virtual constraints approach, are designed to induce exponentially stable dynamic gaits. The baseline controller for the leash is assumed to be a nonlinear controller that keeps the human at a safe distance from the dog while following it. At the lower level, a real-time quadratic program (QP) is solved to modify the baseline controllers of the robot and the leash so as to avoid obstacles. In particular, the QP framework is set up based on control barrier functions (CBFs) to compute optimal control inputs that guarantee safety while remaining close to the baseline controllers. The stability of the complex periodic gaits is investigated through the Poincaré return map. To demonstrate the power of the analytical foundation, the control algorithms are transferred into an extensive numerical simulation of a complex model that represents cooperative locomotion of a quadrupedal robot, referred to as Vision 60, and a human model. The complex model has 16 continuous-time domains with 60 state variables and 20 control inputs.
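    For a single affine CBF constraint, the safety-filtering QP described here has a closed-form solution: project the baseline input onto the safe half-space. A sketch under that simplification (names are illustrative; the paper's QP handles the robot and leash jointly under multiple constraints):

```python
import numpy as np

def cbf_qp_filter(u_des, Lfh, Lgh, h, alpha=1.0):
    """Minimal control barrier function (CBF) safety filter:
        min_u ||u - u_des||^2   s.t.   Lfh + Lgh.u + alpha*h >= 0.
    With one affine constraint, the QP reduces to the clamped
    projection below: keep the baseline input when it is already
    safe, otherwise apply the minimum-norm correction."""
    u_des = np.asarray(u_des, dtype=float)
    Lgh = np.asarray(Lgh, dtype=float)
    slack = Lfh + Lgh @ u_des + alpha * h
    if slack >= 0.0:
        return u_des                      # baseline input already safe
    # minimum-norm correction onto the constraint boundary
    return u_des - slack * Lgh / (Lgh @ Lgh)

# h < 0 (safety violated): the filter moves u from 0 to 1 so the
# constraint holds with equality.
u_corrected = cbf_qp_filter(u_des=[0.0], Lfh=0.0, Lgh=[1.0], h=-1.0)
```

    The filter is least-restrictive: it leaves the baseline controller untouched whenever the CBF condition already holds.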

    The Instruction, Doing, and Evaluating Learning Model (MPIDE) with Physics Phenomenon Videos in Physics Instruction at SMA

    This research examines the Instruction, Doing, and Evaluating learning model (MPIDE) with physics phenomenon videos in physics instruction at SMA (senior high school). The purpose of the research is to determine student activity, the effectiveness of the model, and the retention of student learning achievement. This is an action research study using a one-group pretest-posttest design for testing. The research was conducted in an SMA Class XI, with data collected through observation, interviews, and tests. Data were analyzed as percentages and then described. Student activity, the effectiveness of the model, and the retention of learning achievement improved in each cycle. Average student activity rose from 69.17% in cycle one to 73.33% in cycle two, in the active category. The average effectiveness of the model rose from 0.68 (moderately effective) in cycle one to 0.75 (effective) in cycle two. The average retention of learning achievement rose from 92.56% in cycle one to 93.19% in cycle two, in the high category. The research concludes that the model can improve student activity, the effectiveness of the model, and the retention of learning achievement when the model is supplemented with dubbing on the videos.