
    Jointly learning trajectory generation and hitting point prediction in robot table tennis

    This paper proposes a combined learning framework for a table tennis robot. In a typical robot table tennis setup, a single striking point is predicted for the robot from the ball's initial state; the desired Cartesian racket state and the desired joint states at the striking time are then determined, and finally the robot joint trajectories are generated. Instead of predicting a single striking point, we propose to construct a ball trajectory prediction map, which predicts the ball's entire rebound trajectory from its initial state. We also construct a robot trajectory generation map, which predicts the robot's joint movement pattern and movement duration from Cartesian racket trajectories without the need for inverse kinematics; a correlation function adapts these joint movement parameters to the ball's flight trajectory. With the joint movement parameters, we can generate joint trajectories directly. Additionally, we introduce a reinforcement learning approach that modifies the robot joint trajectories so that the robot returns balls well. We validate this new framework on both simulated and real robotic systems and show that a seven degree-of-freedom Barrett WAM robot performs well.
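    The abstract includes no code, but the two learned maps it describes can be outlined concretely. Below is a minimal sketch, assuming simple ridge-regression maps and a basis-function parameterization of joint movements; every name, dimension, and model choice is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two learned maps the abstract
# describes; regressor choice, shapes, and basis functions are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_map(X, Y, reg=1e-3):
    # Ridge regression: W = (X^T X + reg*I)^-1 X^T Y
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)

# --- Ball trajectory prediction map: initial ball state -> full rebound path.
# A real system would fit this on recorded ball data with a richer regressor.
n_samples, state_dim, n_way = 200, 6, 50            # 50 waypoints in 3-D (assumed)
X_ball = rng.normal(size=(n_samples, state_dim))    # initial (position, velocity)
Y_traj = rng.normal(size=(n_samples, 3 * n_way))    # flattened rebound trajectories
W_ball = fit_linear_map(X_ball, Y_traj)

def predict_rebound(ball_state):
    """Predict the ball's entire rebound trajectory from its initial state."""
    return (ball_state @ W_ball).reshape(n_way, 3)

# --- Robot trajectory generation map: joint movement parameters -> trajectory.
# Joint trajectories come directly from per-joint basis-function weights plus a
# movement duration, so no inverse kinematics is needed at execution time.
n_joints, n_basis = 7, 10
centers = np.linspace(0, 1, n_basis)

def generate_joint_trajectory(weights, duration, dt=0.01):
    """weights: (n_joints, n_basis); returns (T, n_joints) joint positions."""
    t = np.arange(0, duration, dt) / duration        # normalized movement phase
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / 0.02)
    phi /= phi.sum(axis=1, keepdims=True)            # normalized radial bases
    return phi @ weights.T

rebound = predict_rebound(rng.normal(size=state_dim))
weights = rng.normal(size=(n_joints, n_basis)) * 0.1  # would be learned, not random
q_traj = generate_joint_trajectory(weights, duration=0.8)
print(rebound.shape, q_traj.shape)                   # (50, 3) (80, 7)
```

    In this sketch the RL refinement the abstract mentions would act on the weight matrix, nudging the generated joint trajectories toward successful returns.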

    Optimal Stroke Learning with Policy Gradient Approach for Robotic Table Tennis

    Learning to play table tennis is a challenging task for robots, as a wide variety of strokes is required. Recent advances have shown that deep reinforcement learning (RL) can successfully learn the optimal actions in a simulated environment. However, the applicability of RL in real scenarios remains limited due to the high exploration effort. In this work, we propose a realistic simulation environment in which multiple models are built for the dynamics of the ball and the kinematics of the robot. Instead of training an end-to-end RL model, a novel policy gradient approach with a TD3 backbone is proposed to learn the racket strokes based on the predicted state of the ball at hitting time. In the experiments, we show that the proposed approach significantly outperforms existing RL methods in simulation. Furthermore, to cross the domain from simulation to reality, we adopt an efficient retraining method and test it in three real scenarios. The resulting success rate is 98% and the distance error is around 24.9 cm. The total training time is about 1.5 hours.
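    As a point of reference for the TD3 backbone the abstract names, here is a generic TD3 update step in PyTorch. The stroke-specific state and action definitions, reward, network sizes, and hyperparameters are assumptions; only the TD3 mechanics (target policy smoothing, clipped double-Q targets, delayed policy updates) follow the standard algorithm, and nothing here is claimed to be the paper's implementation.

```python
# Generic TD3 update sketch; state = predicted ball state at hitting time,
# action = racket-stroke parameters (dimensions assumed for illustration).
import copy
import torch
import torch.nn as nn

S, A, MAX_A = 9, 6, 1.0                              # assumed dimensions/bounds

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, o))

actor = nn.Sequential(mlp(S, A), nn.Tanh())          # action in [-1, 1], scaled by MAX_A
critic1, critic2 = mlp(S + A, 1), mlp(S + A, 1)      # twin critics for clipped double-Q
actor_t, critic1_t, critic2_t = map(copy.deepcopy, (actor, critic1, critic2))
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam([*critic1.parameters(), *critic2.parameters()], lr=3e-4)

def td3_update(batch, step, gamma=0.99, tau=0.005, noise=0.2, clip=0.5, delay=2):
    s, a, r, s2, done = batch                        # tensors of shape (B, ...)
    with torch.no_grad():
        # Target policy smoothing: perturb the target action with clipped noise.
        eps = (torch.randn_like(a) * noise).clamp(-clip, clip)
        a2 = (actor_t(s2) * MAX_A + eps).clamp(-MAX_A, MAX_A)
        # Clipped double-Q: bootstrap from the smaller of the two target critics.
        q_t = torch.min(critic1_t(torch.cat([s2, a2], 1)),
                        critic2_t(torch.cat([s2, a2], 1)))
        y = r + gamma * (1 - done) * q_t
    sa = torch.cat([s, a], 1)
    loss_c = ((critic1(sa) - y) ** 2).mean() + ((critic2(sa) - y) ** 2).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    if step % delay == 0:                            # delayed policy updates
        loss_a = -critic1(torch.cat([s, actor(s) * MAX_A], 1)).mean()
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)  # Polyak averaging
```

    One plausible simplification in this setting, since the policy maps a single predicted ball state to a single stroke, is to treat each exchange as a one-step episode (done = 1), which removes the bootstrapped term; the abstract does not say whether the authors do this.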

    Robot Composite Learning and the Nunchaku Flipping Challenge

    Advanced motor skills are essential for robots to physically coexist with humans. Much research on robot dynamics and control has achieved success on advanced robot motor capabilities, but mostly through heavily case-specific engineering. Meanwhile, in terms of robots acquiring skills in a ubiquitous manner, robot learning from human demonstration (LfD) has made great progress, but it still has limitations in handling dynamic skills and compound actions. In this paper, we present a composite learning scheme that goes beyond LfD and integrates robot learning from human definition, demonstration, and evaluation. The method tackles advanced motor skills that require dynamic, time-critical maneuvers, complex contact control, and the handling of objects that are partly soft and partly rigid. We also introduce the "nunchaku flipping challenge", an extreme test that places hard requirements on all three of these aspects. Continuing from our previous presentations, this paper introduces the latest update of the composite learning scheme and the physical success of the nunchaku flipping challenge.
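    The abstract gives no algorithmic detail, so the following is only a loose sketch of how the three information sources could interact: a demonstration initializes a trajectory distribution, human-defined limits constrain it, and evaluation scores drive a reward-weighted refinement (a CEM-like update chosen purely for illustration; every quantity below is an assumption).

```python
# Heavily hedged sketch of combining human definition, demonstration, and
# evaluation; the projection, prior, and update rule are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
T, D = 100, 7                                    # horizon, joint count (assumed)

# Human definition: hard limits the skill must respect (e.g., joint bounds).
q_min, q_max = -2.0, 2.0
def apply_definition(traj):
    return np.clip(traj, q_min, q_max)           # project onto the defined limits

# Human demonstration: a recorded trajectory serves as the initial policy mean.
demo = np.cumsum(rng.normal(scale=0.01, size=(T, D)), axis=0)
mean, std = demo.copy(), 0.05

# Human evaluation: a scalar score per rollout; here a synthetic stand-in
# for a human-provided rating of the executed skill.
def evaluate(traj):
    return -np.abs(traj[-1]).sum()               # e.g., "end near the target pose"

for iteration in range(20):
    rollouts = [apply_definition(mean + rng.normal(scale=std, size=(T, D)))
                for _ in range(16)]
    scores = np.array([evaluate(tr) for tr in rollouts])
    w = np.exp(scores - scores.max()); w /= w.sum()
    mean = sum(wi * tr for wi, tr in zip(w, rollouts))  # reward-weighted mean
print("final score:", evaluate(apply_definition(mean)))
```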