
    Making Humanoid Robots More Acceptable Based on the Study of Robot Characters in Animation

    Get PDF
    In this paper we take the view that humanoid robots should not be judged by how realistically they resemble human beings in appearance, but by how they act and react like humans, which makes them more believable to people. Following this approach, we study robot characters in animated movies and discuss what makes some of them perceived as mere moving bodies, while others are believable as living beings. The goal of this paper is to derive a set of rules that describe friendly, socially acceptable, kind, cute, and similar robot characters; to this end we review example robots in popular animated movies. The extracted rules and features can be used to make real robots more acceptable.

    Nonprehensile Dynamic Manipulation: A Survey

    Get PDF
    Nonprehensile dynamic manipulation can reasonably be considered the most complex manipulation task, and it might be argued that it is still rather far from being fully solved and applied in robotics. This survey collects the results achieved so far by the research community on planning and control in the nonprehensile dynamic manipulation domain. Current open issues are discussed as well.

    Reactive Motions In A Fully Autonomous CRS Catalyst 5 Robotic Arm Based On RGBD Data

    Get PDF
    This study proposes a method to estimate velocity from motion blur in a single image frame, along the x and y axes of the camera coordinate system, and to intercept a moving object with a robotic arm. It is shown that velocity estimation from a single image frame improves the system's performance. The majority of previous studies in this area require at least two image frames to measure the target's velocity, and they mostly employ specialized equipment able to generate high torques and accelerations. Our setup consists of a 5-degree-of-freedom robotic arm and a Kinect camera. The RGBD (red, green, blue and depth) camera provides the RGB and depth information used to detect the position of the target. Because the object moves during a single image frame, the image contains motion blur. To recognize and differentiate the object from the blurred area, the image intensity profiles are studied, and the blur parameters are determined from the changes in the intensity profile. These blur parameters are the length of the object and the length of the partial blur. From the motion blur, the velocities along the x and y camera coordinate axes are estimated; however, since the depth frame cannot record motion blur, the velocity along the z axis of the camera coordinate frame is initially unknown. The position and velocity vectors are transformed into the world coordinate frame, and the prospective position of the object after a predefined time interval is predicted. To intercept the object, the end-effector of the robotic arm must reach this predicted position within the same time interval. The robot's joint angles and accelerations are determined through inverse kinematics, and the robotic arm then starts its motion. Once the second depth frame is obtained, the object's velocity along the z axis can be calculated as well; the predicted position of the object is then recalculated, and the motion of the manipulator is modified. The proposed method is compared with existing methods that need at least two image frames to estimate the velocity of the target, and it is shown that under identical kinematic conditions the functionality of the system is improved by times for our setup. In addition, the experiment is repeated for times and the velocity data are recorded. According to the experimental results, there are two major limitations in our system and setup. First, the system cannot determine the velocity along the z axis of the camera coordinate system from the initial image frame; consequently, if the object travels faster along this axis, interception becomes more susceptible to failure. Second, our manipulator is unspecialized equipment that is not designed to produce high torques and accelerations, which makes the task more challenging. The main cause of error in the experiments was the operator's throw: the object must pass through the working volume of the robot and must still be inside the working volume after the predefined time interval, yet it is possible for the operator to throw the object into the designated working volume and for it to leave earlier than the specified time interval.
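
    A minimal sketch of the single-frame velocity idea described above, assuming a hypothetical pinhole camera model, a known exposure time, and blur parameters (partial-blur length and blur direction) already extracted from the intensity profiles; names and numbers are illustrative, not the study's implementation.

```python
import numpy as np

def velocity_from_blur(blur_len_px, exposure_s, depth_m, focal_px, blur_dir_xy):
    """Estimate the in-plane velocity (m/s) of the target from motion blur in a
    single RGB frame: the partial-blur length approximates the distance travelled
    while the shutter was open, converted to metres with a pinhole model and the
    depth reading from the aligned depth image."""
    disp_m = blur_len_px * depth_m / focal_px          # pixels -> metres at that depth
    speed = disp_m / exposure_s                        # metres per second
    return speed * np.asarray(blur_dir_xy, dtype=float)

def predict_interception_point(p_xy, v_xy, dt):
    """Constant-velocity prediction of the object position after dt seconds; the
    z velocity is unknown from the first frame and is refined once a second depth
    frame becomes available."""
    return np.asarray(p_xy, dtype=float) + np.asarray(v_xy, dtype=float) * dt

# Example: 18 px of blur, 20 ms exposure, object at 0.9 m, focal length 525 px.
v_xy = velocity_from_blur(blur_len_px=18, exposure_s=0.02, depth_m=0.9,
                          focal_px=525.0, blur_dir_xy=(1.0, 0.0))
print(predict_interception_point((0.10, 0.30), v_xy, dt=0.5))
```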

    A physics-based Juggling Simulation using Reinforcement Learning

    Get PDF
    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, 2019. 2. Lee, Jehee. Juggling is a physical skill that consists in keeping one or several objects in continuous motion in the air by tossing and catching them. Jugglers need great dexterity to control their throws and catches, which require speed, accuracy and synchronization, and the more balls are juggled, the stronger those qualities must be. This thesis follows a previous project by Lee et al. [1], where juggling was performed to demonstrate their method. In this work, we want to generalize the juggling skill and create a real-time simulation using machine learning. One reason to choose this skill is that studying the ability to toss and catch balls and rings provides insight into human coordination, robotics and mathematics, as noted in the article Science of Juggling [2]. Juggling is therefore a good challenge for realistic physics-based simulation, both to improve our knowledge of these fields and to help jugglers evaluate the feasibility of their tricks. To do so, we have to understand the different notations used in juggling and apply the mathematical theory of juggling to reproduce it. In this thesis we present an approach to learning juggling. We first remove the need to synchronize both hands by dividing our character in two. We then divide juggling into two subtasks, catching and throwing a ball, and present a deep reinforcement learning method for each of them. Finally, we apply these tasks sequentially on both sides of the body to recreate the whole juggling process. As a result, our character learns to catch balls randomly thrown to it and to throw them at the desired velocity. After combining both subtasks, our juggler is able to react accurately and with enough speed and power to juggle up to 6 balls, even with external forces applied to it.
    Contents: I. Introduction; II. Juggling theory (2.1 Notation and Parameters, 2.2 Juggling patterns); III. Approach to learn juggling (3.1 Juggling sequence, 3.2 Reinforcement learning: Definition, Advantages, 3.3 Rewards for Juggling: Catching, Throwing); IV. Experiments and Results (4.1 Experiments: States, Actions, Environment of our Simulation, 4.2 Subtasks results: Throwing, Catching, 4.3 Performing juggling: Results, Add new ball while juggling); V. Toward a 3D juggling (5.1 Catching, 5.2 Throwing, 5.3 Results); VI. Discussion and Conclusion; References; Acknowledgements.
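
    A rough illustration of how the two subtask rewards mentioned above (catching and throwing) might be shaped, assuming hypothetical distance- and velocity-based terms and weights; the thesis's exact reward formulation, states, and actions are not reproduced here.

```python
import numpy as np

def catch_reward(hand_pos, ball_pos, caught, w_dist=2.0):
    """Hypothetical catching reward: dense shaping on the hand-ball distance,
    plus a bonus once the ball is secured in the hand."""
    dist = np.linalg.norm(np.asarray(hand_pos) - np.asarray(ball_pos))
    return np.exp(-w_dist * dist) + (5.0 if caught else 0.0)

def throw_reward(release_vel, target_vel, released, w_vel=1.5):
    """Hypothetical throwing reward: compare the ball's velocity at release
    with the velocity required by the desired juggling pattern."""
    if not released:
        return 0.0
    err = np.linalg.norm(np.asarray(release_vel) - np.asarray(target_vel))
    return np.exp(-w_vel * err)

# Example: a ball released slightly slower than the pattern requires.
print(throw_reward(release_vel=(0.0, 2.7, 3.9), target_vel=(0.0, 3.0, 4.0),
                   released=True))
```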

    Dynamic Bat-Control of a Redundant Ball Playing Robot

    Get PDF
    This thesis presents a control algorithm for a ball-batting task performed by an entertainment robot. The robot, named Doggy after its dog-like costume, is a three-jointed robot with a redundant degree of freedom; its design, mechanics and electronics were developed by us. DC motors drive the tooth-belt-driven joints, which introduces elasticities between motor and link. Redundancy and elasticity have to be taken into account by the developed controller and make the control task demanding. In this thesis we show the structure of the ball-playing robot and how this structure can be described as a model. We distinguish two models: one that includes a flexible bearing and one that does not. Both models are calibrated using the toolkit Sparse Least Squares on Manifolds (SLOM), i.e. the model parameters are determined, and both calibrated models are compared to measurements of the real system. The model with the flexible bearing is used to implement a state estimator, based on a Kalman filter, on a microcontroller, which ensures real-time estimation of the robot states. The estimated states are compared with the measurements and assessed; they represent the measurements well. At the core of this work we develop a Task Level Optimal Controller (TLOC), a model-predictive optimal controller based on the principles of a Linear Quadratic Regulator (LQR). The aim is to return a ball to an opponent precisely, and we show how the task of playing a ball at a desired time with a desired velocity at a desired position can be embedded into the LQR principle, using cost functions for the task description. In simulations we show the functionality of the control concept, which consists of a linear part (on a microcontroller) and a nonlinear part (PC software); the linear part uses feedback gains calculated by the nonlinear part. The ball-batting controller with precalculated feedback gains is evaluated on the robot and shows successful batting motions. The entertainment aspect was tested on the Open Campus Day at the University of Bremen and is briefly summarized here, as is a jointly developed audience interaction based on the recognition of distinctive sounds. In this thesis we answer the question of whether it is possible to define a rebound task for our robot within a controller and show the necessary steps to do so.
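
    A simplified sketch of how a batting task can be embedded into a finite-horizon LQR via time-varying costs, in the spirit described above: the deviation from the desired impact state is penalized heavily only at the desired hitting time. The linearized model A, B, the weights, and the double-integrator example are assumptions for illustration, not the thesis's actual TLOC.

```python
import numpy as np

def batting_gains(A, B, horizon, t_hit, q_hit=1e4, q_run=1e-2, r=1e-3):
    """Finite-horizon discrete-time LQR: large state cost at the impact step
    t_hit, small cost elsewhere, so the time-varying gains drive the bat
    through the desired position/velocity exactly when the ball arrives
    (the desired impact state is treated as an equilibrium for simplicity)."""
    n, m = B.shape
    R = r * np.eye(m)
    P = q_run * np.eye(n)                # terminal cost
    gains = [None] * horizon
    for k in reversed(range(horizon)):   # backward Riccati recursion
        Q = (q_hit if k == t_hit else q_run) * np.eye(n)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains[k] = K
    return gains  # apply u_k = -gains[k] @ (x_k - x_desired) online

# Toy double-integrator (one joint: position and velocity), 10 ms steps.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = batting_gains(A, B, horizon=100, t_hit=80)
print(K[0], K[80])   # gains far from vs. at the impact time
```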

    High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

    Get PDF
    Robots that can learn in the physical world will be important for enabling robots to escape their stiff and pre-programmed movements. For dynamic high-acceleration tasks, such as juggling, learning in the real world is particularly challenging, as one must push the limits of the robot and its actuation without harming the system, amplifying the need for sample efficiency and safety in robot learning algorithms. In contrast to prior work, which mainly focuses on the learning algorithm, we propose a learning system that directly incorporates these requirements in the design of the policy representation, initialization, and optimization. We demonstrate that this system enables the high-speed Barrett WAM manipulator to learn to juggle two balls from 56 minutes of experience with a binary reward signal, and to finally juggle continuously for up to 33 minutes, or about 4500 repeated catches. The videos documenting the learning process and the evaluation can be found at https://sites.google.com/view/jugglingbo
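
    The abstract stresses the design of the policy representation, initialization, and optimization rather than a specific algorithm. Below is a generic, hypothetical sketch of episodic policy search with a binary per-catch reward, using a cross-entropy-style update over a low-dimensional parameter vector initialized near a safe hand-tuned policy; it is not the paper's actual method, only an illustration of learning from sparse success signals.

```python
import numpy as np

def episodic_search(rollout, dim, mean0, std0=0.05, iters=50, pop=20, elite=5):
    """Sample policy parameters from a Gaussian, run one juggling episode per
    sample, and refit the Gaussian to the best-scoring samples. A small initial
    std keeps exploration close to the safe initialization."""
    mean, std = np.array(mean0, dtype=float), std0 * np.ones(dim)
    for _ in range(iters):
        thetas = mean + std * np.random.randn(pop, dim)
        # rollout() returns the number of successful catches in the episode.
        scores = np.array([rollout(t) for t in thetas])
        best = thetas[np.argsort(scores)[-elite:]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mean

# Toy stand-in for the robot: catches depend on closeness to an unknown optimum.
target = np.array([0.3, -0.1, 0.25, 0.0])
rollout = lambda t: int(50 * np.exp(-20 * np.linalg.norm(t - target) ** 2))
print(episodic_search(rollout, dim=4, mean0=[0.25, 0.0, 0.2, 0.05]))
```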

    Dynamic Handover: Throw and Catch with Bimanual Hands

    Full text link
    Humans throw and catch objects all the time. However, such a seemingly common skill introduces many challenges for robots: they need to perform these dynamic actions at high speed, collaborate precisely, and interact with diverse objects. In this paper, we design a system with two multi-finger hands attached to robot arms to solve this problem. We train our system using Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer to deploy it on the real robots. To overcome the Sim2Real gap, we provide multiple novel algorithmic designs, including learning a trajectory prediction model for the object. Such a model helps the robot catcher maintain a real-time estimate of where the object is heading, so it can react accordingly. We conduct experiments with multiple objects on the real-world system and show significant improvements over multiple baselines. Our project page is available at https://binghao-huang.github.io/dynamic_handover/. Comment: Accepted at CoRL 2023. https://binghao-huang.github.io/dynamic_handover
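
    To illustrate the role the trajectory predictor plays for the catcher, here is a simple stand-in: a ballistic least-squares fit over the last few tracked positions, used to predict where the object will be. The paper learns its predictor in simulation instead; this fit, its names, and its numbers are only illustrative.

```python
import numpy as np

def fit_ballistic(times, positions):
    """Fit p(t) = p0 + v0*t + 0.5*a*t^2 per axis by least squares to recent
    tracked positions of the thrown object (a stand-in for a learned predictor)."""
    t = np.asarray(times, dtype=float)
    X = np.stack([np.ones_like(t), t, 0.5 * t ** 2], axis=1)
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(positions, dtype=float), rcond=None)
    return coeffs  # rows: p0, v0, a (each a 3-vector)

def predict(coeffs, t):
    """Predicted object position t seconds after the first observation."""
    return coeffs[0] + coeffs[1] * t + 0.5 * coeffs[2] * t ** 2

# Example: noisy observations of a throw, then predict where to place the catcher.
ts = np.linspace(0.0, 0.2, 6)
obs = np.stack([0.8 - 2.0 * ts, 0.1 * ts, 0.3 + 2.5 * ts - 4.905 * ts ** 2], axis=1)
c = fit_ballistic(ts, obs + 0.002 * np.random.randn(*obs.shape))
print(predict(c, 0.45))   # predicted position 0.45 s after the throw
```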

    Motion planning and control methods for nonprehensile manipulation and multi-contact locomotion tasks

    Get PDF
    Many existing works in the robotics literature deal with the problem of nonprehensile dynamic manipulation; however, a unified control framework does not yet exist. One of the ambitious goals of this Thesis is to help identify planning and control frameworks that solve classes of nonprehensile dynamic manipulation tasks, dealing with the nonlinearity of their dynamic models and, consequently, with the resulting design complexity. In addition, while drawing a number of connections between dynamic nonprehensile manipulation and legged locomotion, the Thesis presents novel methods for generating walking motions in multi-contact situations.