33 research outputs found

    Multi-Objective Optimization for Speed and Stability of a Sony Aibo Gait

    Locomotion is a fundamental facet of mobile robotics that many higher-level capabilities rely on. However, it is not a simple problem for legged robots with many degrees of freedom, and for this reason machine learning techniques have been applied to the domain. Although impressive results have been achieved, a fundamental problem remains with most machine learning methods: the learning algorithms usually require a large dataset, which is prohibitively hard to collect on an actual robot; learning in simulation has had limited success transitioning to the real world; and many learning algorithms optimize a single fitness function, neglecting the effects on other parts of the system. As part of the RoboCup 4-legged league, many researchers have worked on increasing the walking/gait speed of Sony AIBO robots. Recently, the effort shifted from developing a quick gait to developing a gait that also provides a stable sensing platform. To date, however, optimization of both velocity and camera stability has only been performed with a single fitness function that combines the two objectives using a weighting that defines the desired tradeoff between them. Because the Pareto front has never been charted, the true nature of this tradeoff is not understood, so this a priori decision is uninformed. This project applies the Nondominated Sorting Genetic Algorithm-II (NSGA-II) to find a Pareto set of fast, stable gait parameters, allowing a user to select the best tradeoff between balance and speed for a given application. Three fitness functions are defined: one speed measure and two stability measures. A plot of evolved gaits shows a Pareto front indicating that speed and stability are indeed conflicting goals. Interestingly, the results also show that tradeoffs exist between the different measures of stability.
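    The abstract does not include code, but as a rough sketch of the setup it describes (NSGA-II over gait parameters with one speed objective and two stability objectives), something like the following pymoo-based example could be used. The parameter bounds and the analytic surrogate objectives below are invented placeholders standing in for real gait trials on the robot.

```python
# Hypothetical sketch: NSGA-II over gait parameters with one speed and two
# stability objectives, in the spirit of the abstract above. The "trial"
# below is an invented analytic surrogate; the real work evaluates each
# parameter set on a physical AIBO.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


class GaitProblem(ElementwiseProblem):
    def __init__(self):
        # 5 gait parameters with made-up normalized bounds
        # (e.g. step length, step height, period, ...)
        super().__init__(n_var=5, n_obj=3, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Placeholder objectives (all minimized): negative speed plus two
        # stability costs such as camera pitch/roll variance during a trial.
        speed = x[0] + 0.5 * x[1]            # faster with longer, higher steps
        pitch_var = x[0] ** 2 + 0.1 * x[2]   # but long steps shake the camera
        roll_var = x[1] ** 2 + 0.1 * x[3]
        out["F"] = [-speed, pitch_var, roll_var]


res = minimize(GaitProblem(), NSGA2(pop_size=40), ("n_gen", 50),
               seed=1, verbose=False)
# res.F holds the evolved Pareto front: each row is one
# (speed, stability, stability) trade-off a user could pick from.
print(res.F[:5])
```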

    Applying reinforcement learning in playing Robosoccer using the AIBO

    "Robosoccer is a popular test bed for AI programs around the world in which AIBO entertainments robots take part in the middle sized soccer event. These robots need a variety of skills to perform in a semi-real environment like this. The three key challenges are manoeuvrability, image recognition and decision making skills. This research is focussed on the decision making skills ... The work focuses on whether reinforcement learning as a form of semi supervised learning can effectively contribute to the goal keeper's decision making when a shot is taken." -Master of Computing (by research

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of the humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands, and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work enabled five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.

    Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning

    We investigate whether Deep Reinforcement Learning (Deep RL) is able to synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot that can be composed into complex behavioral strategies in dynamic environments. We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game. We first trained individual skills in isolation and then composed those skills end-to-end in a self-play setting. The resulting policy exhibits robust and dynamic movement skills such as rapid fall recovery, walking, turning, kicking and more, and transitions between them in a smooth, stable, and efficient manner, well beyond what is intuitively expected from the robot. The agents also developed a basic strategic understanding of the game and learned, for instance, to anticipate ball movements and to block opponent shots. The full range of behaviors emerged from a small set of simple rewards. Our agents were trained in simulation and transferred to real robots zero-shot. We found that a combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training in simulation enabled good-quality transfer, despite significant unmodeled effects and variations across robot instances. Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way. Indeed, even though the agents were optimized for scoring, in experiments they walked 156% faster, took 63% less time to get up, and kicked 24% faster than a scripted baseline, while efficiently combining the skills to achieve the longer-term objectives. Examples of the emergent behaviors and full 1v1 matches are available on the supplementary website. Project website: https://sites.google.com/view/op3-socce
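    As a generic illustration of the dynamics randomization mentioned above (not the authors' code), per-episode randomization of physical parameters might look like the following; the parameter names and ranges are invented for the sketch.

```python
# Generic sketch of per-episode dynamics randomization for sim-to-real
# transfer; the parameters and ranges below are invented, not the paper's
# actual values.
import random
from dataclasses import dataclass


@dataclass
class DynamicsParams:
    joint_friction: float        # resistive torque in each actuated joint
    motor_strength_scale: float  # multiplier on commanded torques
    ground_friction: float       # foot-ground contact friction
    push_force_n: float          # magnitude of random perturbation pushes


def sample_dynamics() -> DynamicsParams:
    """Drawn once per training episode so the policy never overfits one model."""
    return DynamicsParams(
        joint_friction=random.uniform(0.01, 0.10),
        motor_strength_scale=random.uniform(0.8, 1.2),
        ground_friction=random.uniform(0.4, 1.0),
        push_force_n=random.uniform(0.0, 20.0),
    )


if __name__ == "__main__":
    # e.g. applied at every simulator reset before rolling out the policy
    print(sample_dynamics())
```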

    Robot Localization Using Visual Image Mapping

    One critical step in providing the Air Force the capability to explore unknown environments is for an autonomous agent to be able to determine its location. The calculation of the robot's pose is an optimization problem that uses the robot's internal navigation sensors and data fusion of range sensor readings to find the most likely pose. This data fusion process requires the simultaneous generation of a map, which the autonomous vehicle can then use to avoid obstacles, communicate with other agents in the same environment, and locate targets. Our solution entails mounting a Class 1 laser on an ERS-7 AIBO. The laser projects a horizontal line onto obstacles in the AIBO camera's field of view. Range readings are determined by capturing and processing multiple image frames, resolving the laser line relative to the horizon, and extracting distance information to each obstacle. This range data is then used in conjunction with mapping and localization software to accurately navigate the AIBO.
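    The thesis's exact image-processing pipeline is not reproduced here, but the underlying laser-line triangulation can be sketched as follows: with the laser mounted at a known vertical offset from the camera and projecting roughly parallel to the optical axis, the image row at which the line lands on an obstacle gives the viewing angle, and hence the range. The intrinsics and baseline below are assumed values, not the actual ERS-7 calibration.

```python
# Illustrative laser-line triangulation; geometry and numbers are assumptions,
# not the thesis's calibration.
import math


def range_from_laser_row(row_px: float,
                         principal_row_px: float = 80.0,  # assumed image centre row
                         focal_length_px: float = 200.0,  # assumed focal length
                         baseline_m: float = 0.05) -> float:
    """Estimate obstacle distance from the image row where the laser line appears."""
    # Angle between the optical axis and the ray through the laser-line pixel.
    angle = math.atan2(row_px - principal_row_px, focal_length_px)
    if angle <= 0.0:
        raise ValueError("laser line must appear below the principal point")
    # Similar triangles: baseline / range = tan(angle)  =>  range = baseline / tan(angle)
    return baseline_m / math.tan(angle)


print(round(range_from_laser_row(150.0), 3), "m")
```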

    Multi-robot coordination using flexible setplays: applications in RoboCup's simulation and middle-size leagues

    Doctoral thesis (Tese de Doutoramento) in Informatics Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Vision-based legged robot navigation: localisation, local planning, learning

    The recent advances in legged locomotion control have made it possible for legged robots to walk up staircases, go deep into underground caves, and walk in forests. Nevertheless, achieving these tasks autonomously is still a challenge. Navigating and accomplishing missions in the wild relies not only on robust low-level controllers but also on higher-level representations and perceptual systems that are aware of the robot's capabilities. This thesis addresses the navigation problem for legged robots. The contributions are four systems designed to exploit the unique characteristics of these platforms, from the sensing setup to their advanced mobility over different terrain. The systems address localisation, scene understanding, and local planning, and advance the capabilities of legged robots in challenging environments. The first contribution tackles localisation with the multi-camera setups available on legged platforms. It proposes a strategy to actively switch between the cameras and stay localised while operating in a visual teach-and-repeat context, in spite of transient changes in the environment. The second contribution focuses on local planning, effectively adding a safety layer for robot navigation. The approach uses a local map built on the fly to generate efficient vector field representations that enable fast and reactive navigation. The third contribution demonstrates how to improve local planning in natural environments by learning robot-specific traversability from demonstrations. The approach leverages classical and learning-based methods to enable online, onboard traversability learning. These systems are demonstrated through robot deployments in industrial facilities, underground mines, and parklands. The thesis concludes by presenting a real-world application: an autonomous forest inventory system with legged robots. This last contribution presents a mission planning system for autonomous surveying as well as a data analysis pipeline to extract forestry attributes. The approach was experimentally validated in a field campaign in Finland, evidencing the potential that legged platforms offer for future applications in the wild.
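    As an illustration of the vector-field style of reactive local planning mentioned in the second contribution, here is a simplified classical attractive/repulsive formulation, not the thesis's actual method; the gains and influence radius are invented.

```python
# Minimal attractive/repulsive vector field for reactive local planning;
# a simplified stand-in for the thesis's approach, with invented gains.
import numpy as np


def command_velocity(robot_xy, goal_xy, obstacles_xy,
                     k_goal=1.0, k_obs=0.5, influence_radius=1.5):
    """Sum an attractive pull toward the goal and repulsive pushes away from
    nearby obstacles, and return a speed-bounded velocity command."""
    robot_xy = np.asarray(robot_xy, dtype=float)
    goal_xy = np.asarray(goal_xy, dtype=float)
    field = k_goal * (goal_xy - robot_xy)             # attractive component
    for obs in np.atleast_2d(obstacles_xy):
        away = robot_xy - obs
        dist = np.linalg.norm(away)
        if 1e-6 < dist < influence_radius:            # only nearby obstacles repel
            field += k_obs * (1.0 / dist - 1.0 / influence_radius) * away / dist**2
    speed = np.linalg.norm(field)
    return field / speed * min(speed, 1.0) if speed > 1e-6 else field


# Robot at the origin, goal 3 m ahead, one obstacle slightly off the direct path.
print(command_velocity([0.0, 0.0], [3.0, 0.0], [[1.0, 0.2]]))
```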