1,901 research outputs found

    Dynamic Movement Primitives and Reinforcement Learning for Adapting a Learned Skill

    Traditionally, robots have been preprogrammed to execute specific tasks. This approach works well in industrial settings where robots have to execute highly accurate movements, such as welding. However, preprogramming a robot is also expensive, error-prone and time-consuming, because every feature of the task has to be considered. In some cases, where a robot has to execute complex tasks such as playing the ball-in-a-cup game, preprogramming it might even be impossible due to unknown features of the task. With this in mind, this thesis examines the possibility of combining a modern learning framework, known as Learning from Demonstrations (LfD), with subsequent Reinforcement Learning (RL): the robot is first taught how to play the ball-in-a-cup game by demonstrating the movement to it, and then improves this skill by itself. The skill the robot has to learn is demonstrated through kinesthetic teaching, modelled as a dynamic movement primitive, and subsequently improved with the RL algorithm Policy Learning by Weighted Exploration with the Returns (PoWER). Experiments performed on the industrial robot KUKA LWR4+ showed that robots are capable of successfully learning a complex skill such as playing the ball-in-a-cup game.
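    For orientation, the sketch below illustrates the two ingredients named in the abstract: a one-dimensional dynamic movement primitive and a simplified PoWER-style reward-weighted update of its forcing-term weights. It is a minimal illustration under stated assumptions, not the thesis implementation; the DMP gains, basis-function heuristics, exploration noise, and the reward_fn interface are all assumed here.

```python
# Minimal sketch: a 1-DoF dynamic movement primitive (DMP) whose forcing-term
# weights are refined with a simplified PoWER-style reward-weighted update.
# Gains, basis heuristics, noise scale and reward_fn are illustrative assumptions.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0, tau=1.0):
        self.n_basis, self.alpha_z, self.beta_z = n_basis, alpha_z, beta_z
        self.alpha_x, self.tau = alpha_x, tau
        self.centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers in phase space
        self.widths = n_basis ** 1.5 / self.centers                   # heuristic basis widths
        self.w = np.zeros(n_basis)                                    # forcing-term weights (the policy)

    def rollout(self, y0, g, dt=0.01, T=1.0, w=None):
        """Integrate the DMP from start y0 to goal g and return the trajectory."""
        w = self.w if w is None else w
        y, yd, x, traj = y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            psi = np.exp(-self.widths * (x - self.centers) ** 2)
            f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)        # nonlinear forcing term
            ydd = self.alpha_z * (self.beta_z * (g - y) - yd) + f     # transformation system
            yd += ydd * dt / self.tau
            y += yd * dt / self.tau
            x += -self.alpha_x * x * dt / self.tau                    # canonical system (phase decay)
            traj.append(y)
        return np.array(traj)

def power_update(dmp, reward_fn, n_rollouts=10, sigma=20.0):
    """One reward-weighted (PoWER-style) update; rewards are assumed non-negative."""
    eps = [np.random.randn(dmp.n_basis) * sigma for _ in range(n_rollouts)]
    R = np.array([reward_fn(dmp.rollout(0.0, 1.0, w=dmp.w + e)) for e in eps])
    dmp.w = dmp.w + sum(r * e for r, e in zip(R, eps)) / (R.sum() + 1e-10)
```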

    Using Monte Carlo Search With Data Aggregation to Improve Robot Soccer Policies

    RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained on an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated with previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals. Alongside this better performance, the whole team achieves an overall more efficient positioning within the field.
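    As a rough illustration of the MCSDA loop described above, the sketch below aggregates (state, action) pairs from the best-scoring Monte Carlo rollouts with an initial expert dataset and retrains a discrete-action classifier each epoch. The simulate_episode interface, the k-NN policy class, and all hyperparameters are assumptions for the sake of the example, not taken from the paper.

```python
# Sketch of a Monte Carlo search + data aggregation (MCSDA-style) training loop.
# simulate_episode(policy, n_actions) is an assumed simulator hook that runs the
# current policy (with exploration) and returns (states, actions, episode_return).
from sklearn.neighbors import KNeighborsClassifier

def mcsda(expert_states, expert_actions, simulate_episode, n_actions,
          epochs=5, rollouts_per_epoch=20, keep_fraction=0.3):
    X, y = list(expert_states), list(expert_actions)           # aggregated dataset D
    policy = KNeighborsClassifier(n_neighbors=5).fit(X, y)     # initial supervised policy
    for _ in range(epochs):
        scored = []
        for _ in range(rollouts_per_epoch):
            states, actions, ret = simulate_episode(policy, n_actions)
            scored.append((ret, states, actions))
        scored.sort(key=lambda t: t[0], reverse=True)          # keep the best rollouts
        for _, s, a in scored[: max(1, int(keep_fraction * len(scored)))]:
            X.extend(s)                                        # data aggregation step
            y.extend(a)
        policy = KNeighborsClassifier(n_neighbors=5).fit(X, y) # retrain on aggregated data
    return policy
```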

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the latter being the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static (walls, stopped robots) and dynamic (moving robots) obstacles. Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution of cooperating robot teams.
    • 
