Human Like Adaptation of Force and Impedance in Stable and Unstable Tasks
Abstract: This paper presents a novel human-like learning controller for interacting with unknown environments. Strictly derived from the minimization of instability, motion error, and effort, the controller compensates for environmental disturbances in interaction tasks by adapting feedforward force and impedance. In contrast with conventional learning controllers, the new controller can deal with unstable situations that are typical of tool use and gradually acquires a desired stability margin. Simulations show that this controller is a good model of human motor adaptation. Robotic implementations further demonstrate its capability to optimally adapt interaction with dynamic environments and humans on joint-torque-controlled robots and variable impedance actuators, without requiring interaction force sensing. Index Terms: Feedforward force, human motor control, impedance, robotic control.
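A minimal sketch of this style of trial-by-trial adaptation, where feedforward force and stiffness grow with tracking error and a forgetting term penalizes effort. All gains and the toy one-dimensional plant below are invented for illustration; they are not the paper's model.

```python
import numpy as np

# Hypothetical adaptation gains; not taken from the paper.
ALPHA_FF, ALPHA_K, GAMMA = 0.5, 0.8, 0.1

def adapt(ff, K, err):
    """One trial of error-driven adaptation with an effort-penalizing term.

    ff  : feedforward force
    K   : stiffness (impedance)
    err : signed tracking error on this trial
    """
    ff_new = ff + ALPHA_FF * err - GAMMA * ff              # compensate bias, limit effort
    K_new = max(0.0, K + ALPHA_K * abs(err) - GAMMA * K)   # stiffen when error persists
    return ff_new, K_new

ff, K = 0.0, 5.0
for _ in range(50):
    err = 2.0 - ff          # toy plant: a constant 2.0 N disturbance, reduced by ff
    ff, K = adapt(ff, K, err)
print(round(ff, 2))
```

Note that the forgetting term leaves a small residual error at convergence: the controller trades off error against effort rather than compensating the disturbance exactly, in line with the minimization of error and effort described above.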
Momentum Control of Humanoid Robots with Series Elastic Actuators
Humanoid robots may require a degree of compliance at the joint level for
improving efficiency, shock tolerance, and safe interaction with humans. The
presence of joint elasticity, however, complicates the design of balancing and
walking controllers. This paper proposes a control framework for extending
momentum based controllers developed for stiff actuators to the case of series
elastic actuators. The key point is to consider the motor velocities as an
intermediate control input, and then apply high-gain control to stabilise the
desired motor velocities, thereby achieving momentum control. Simulations carried
out on a model of the robot iCub verify the soundness of the proposed approach.
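The cascade structure above can be sketched on a toy single-degree-of-freedom series elastic joint: an outer loop turns the momentum error into a desired spring torque, and the motor velocity is commanded with a high gain so that the spring torque tracks it. All numerical values are illustrative, not the paper's.

```python
import numpy as np

# Toy 1-DoF series elastic joint, forward-Euler simulation (assumed values).
I, K_SPRING, DT = 1.0, 100.0, 1e-3   # link inertia, spring stiffness, time step
KP_MOM, K_HIGH = 5.0, 200.0          # outer momentum gain, inner high gain

q = qd = theta = 0.0                 # link position/velocity, motor position
h_des = 1.0                          # desired link momentum I*qd

for _ in range(5000):
    h = I * qd
    tau_des = KP_MOM * (h_des - h)                       # outer loop: desired torque
    tau = K_SPRING * (theta - q)                         # actual spring torque
    theta_dot = qd + K_HIGH / K_SPRING * (tau_des - tau) # motor velocity as input
    theta += theta_dot * DT
    qd += (tau / I) * DT                                 # link driven by the spring
    q += qd * DT

print(round(I * qd, 3))
```

With the high inner gain, the spring torque tracks its desired value on a fast time scale, so the slow outer loop behaves as if it commanded the link torque directly, which is the essence of treating motor velocity as an intermediate control input.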
Push recovery with stepping strategy based on time-projection control
In this paper, we present a simple control framework for on-line push
recovery with dynamic stepping properties. Due to relatively heavy legs in our
robot, we need to take swing dynamics into account and thus use a linear model
called 3LP which is composed of three pendulums to simulate swing and torso
dynamics. Based on 3LP equations, we formulate discrete LQR controllers and use
a particular time-projection method to continuously adjust the next footstep
location on-line during the motion. This adjustment, based on both pelvis and
swing-foot tracking errors, naturally takes the swing
dynamics into account. Suggested adjustments are added to the Cartesian 3LP
gaits and converted to joint-space trajectories through inverse kinematics.
Fixed and adaptive foot lift strategies also ensure enough ground clearance in
perturbed walking conditions. The proposed structure is robust, yet uses very
simple state estimation and basic position tracking. We rely on the physical
series elastic actuators to absorb impacts while introducing simple laws to
compensate their tracking bias. Extensive experiments demonstrate the
functionality of different control blocks and prove the effectiveness of
time-projection in extreme push recovery scenarios. We also show self-produced
and emergent walking gaits when the robot is subject to continuous dragging
forces. These gaits owe their dynamic robustness to the relatively soft
springs in the ankles and to the absence of any Zero Moment Point (ZMP) control
in our proposed architecture.
Comment: 20 pages, journal paper
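The discrete LQR footstep adjustment can be illustrated on a much simpler step-to-step model than the paper's 3LP: a scalar divergent error that grows between steps and is corrected by shifting the next footstep. The model, weights, and numbers below are assumed for illustration only.

```python
import numpy as np

# Illustrative step-to-step error model (not the paper's 3LP): the divergent
# component of the tracking error x evolves over one step of duration T as
#   x[n+1] = a*x[n] + b*u[n],   u = footstep adjustment.
OMEGA, T = 3.13, 0.4            # sqrt(g/z0) for z0 ~ 1 m; step duration (assumed)
a = float(np.exp(OMEGA * T))
b = 1.0 - a                     # shifting the foot by u shifts the error

Q, R = 1.0, 0.1                 # LQR state/control weights (assumed)
P = Q
for _ in range(200):            # scalar discrete-time Riccati iteration
    K = b * P * a / (R + b * P * b)
    P = Q + a * P * (a - b * K)

def footstep_adjustment(x_err):
    """Discrete LQR feedback on the tracking error, recomputed on-line."""
    return -K * x_err

x = 0.2                         # push-induced error of 20 cm
for _ in range(6):              # six recovery steps
    x = a * x + b * footstep_adjustment(x)
```

The open-loop error grows by a factor of roughly 3.5 per step, yet the LQR feedback drives it near zero within a few steps; the time-projection method in the paper extends this kind of per-step gain to adjustments applied continuously during the swing.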
Learning Dynamic Robot-to-Human Object Handover from Human Feedback
Object handover is a basic, but essential capability for robots interacting
with humans in many applications, e.g., caring for the elderly and assisting
workers in manufacturing workshops. It appears deceptively simple, as humans
perform object handover almost flawlessly. The success of humans, however,
belies the complexity of object handover as collaborative physical interaction
between two agents with limited communication. This paper presents a learning
algorithm for dynamic object handover, for example, when a robot hands over
water bottles to marathon runners passing by the water station. We formulate
the problem as contextual policy search, in which the robot learns object
handover by interacting with the human. A key challenge here is to learn the
latent reward of the handover task under noisy human feedback. Preliminary
experiments show that the robot learns to hand over a water bottle naturally
and that it adapts to the dynamics of human motion. One challenge for the
future is to combine the model-free learning algorithm with a model-based
planning approach and enable the robot to adapt to human preferences and
object characteristics, such as shape, weight, and surface texture.
Comment: Appears in the Proceedings of the International Symposium on Robotics
Research (ISRR) 201
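Learning from noisy human feedback can be sketched with a toy discrete version of the problem: the robot explores release-timing offsets per context (say, runner speed), receives binary labels that are occasionally flipped, and keeps the empirically best arm per context. The contexts, arms, latent reward, and flip rate below are all invented for illustration and are not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual policy search from noisy binary feedback (illustrative only).
OFFSETS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # candidate timing offsets
BEST = {"slow": 0.25, "fast": 0.75}               # latent optimum per context
FLIP = 0.1                                         # human mislabels 10% of trials

wins = {c: np.zeros(len(OFFSETS)) for c in BEST}
tries = {c: np.zeros(len(OFFSETS)) for c in BEST}

for _ in range(4000):
    c = "slow" if rng.random() < 0.5 else "fast"
    i = rng.integers(len(OFFSETS))                 # uniform exploration
    success = OFFSETS[i] == BEST[c]                # latent task reward
    label = success ^ (rng.random() < FLIP)        # noisy human feedback
    wins[c][i] += label
    tries[c][i] += 1

policy = {c: OFFSETS[np.argmax(wins[c] / tries[c])] for c in BEST}
print(policy)
```

Even with 10% of labels flipped, a few hundred trials per arm are enough for the empirical success rates to separate cleanly, which is the basic reason a latent reward can be recovered from noisy feedback.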
Simultaneously encoding movement and sEMG-based stiffness for robotic skill learning
Transferring human stiffness-regulation strategies to robots enables them to effectively and efficiently acquire adaptive impedance control policies for dealing with uncertainties during physical contact tasks in unstructured environments. In this work, we develop a physical human-robot interaction (pHRI) system that allows robots to learn variable impedance skills from human demonstrations. Specifically, biological signals, i.e., surface electromyography (sEMG), are utilized to extract human arm stiffness features during the task demonstration. The estimated human arm stiffness is then mapped into a robot impedance controller. The dynamics of both movement and stiffness are simultaneously modeled by combining a hidden semi-Markov model (HSMM) with Gaussian mixture regression (GMR). More importantly, the correlation between movement and stiffness is encoded in a systematic manner. This approach captures uncertainties over time and space and allows the robot to satisfy both position and stiffness requirements in a task by modulating the impedance controller. An experimental study validated the proposed approach.
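The correlated retrieval of position and stiffness can be sketched with plain Gaussian mixture regression: a joint Gaussian mixture over time, position, and stiffness is conditioned on time to yield a position command and a matching stiffness command for the impedance controller. The two-component mixture and all its parameters below are invented for illustration; in the paper they are learned with an HSMM from sEMG-annotated demonstrations.

```python
import numpy as np

# Invented 2-component mixture over [time, position, stiffness].
means = [np.array([0.25, 0.1, 200.0]),    # free-motion phase: low stiffness
         np.array([0.75, 0.5, 600.0])]    # contact phase: high stiffness
covs = [np.diag([0.02, 0.01, 100.0]),
        np.diag([0.02, 0.01, 100.0])]
priors = [0.5, 0.5]

def gmr(t):
    """E[position, stiffness | time] under the mixture (GMR)."""
    h = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()                                   # component responsibilities
    out = np.zeros(2)
    for w, m, c in zip(h, means, covs):
        out += w * (m[1:] + c[1:, 0] / c[0, 0] * (t - m[0]))  # conditional means
    return out

x_d, k_d = gmr(0.75)                 # desired position and stiffness at t = 0.75
tau = k_d * (x_d - 0.45)             # variable-impedance command at position 0.45
```

Because position and stiffness sit in one joint model, conditioning on time retrieves them coherently: when the query time falls in the contact phase, both the position target and the high stiffness of that phase are returned together.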