23 research outputs found

    Autonomous sensor and action model learning for mobile robots

    No full text
    Autonomous mobile robots have the potential to be extremely beneficial to society due to their ability to perform tasks that are difficult or dangerous for humans. These robots will necessarily interact with their environment through the two fundamental processes of acting and sensing. Robots learn about the state of the world around them through their sensations, and they influence that state through their actions. However, in order to interact with their environment effectively, these robots must have accurate models of their sensors and actions: knowledge of what their sensations say about the state of the world and how their actions affect that state. A mobile robot’s action and sensor models are typically tuned manually, a brittle and laborious process. The robot’s actions and sensors may change either over time from wear or because of a novel environment’s terrain or lighting. It is therefore valuable for the robot to be able to autonomously learn these models. This dissertation presents a methodology that enables mobile robots to learn their action and sensor models starting without an accurate estimate of either model. This methodology is instantiated in three robotic scenarios. First, an algorithm is presented that enables an autonomous agent to learn its action and sensor models in a class of one-dimensional settings. Experimental tests are performed on a four-legged robot, the Sony Aibo ERS-7, walking forward and backward at different speeds while facing a fixed landmark. Second, a probabilistically motivated model learning algorithm is presented that operates on the same robot walking in two dimensions with arbitrary combinations of forward, sideways, and turning velocities. Finally, an algorithm is presented to learn the action and sensor models of a very different mobile robot, an autonomous car.

    A model-based approach to robot joint control

    No full text
    Abstract. Despite efforts to design precise motor controllers, robot joints do not always move exactly as desired. This paper introduces a general model-based method for improving the accuracy of joint control. First, a model that predicts the effects of joint requests is built based on empirical data. Then this model is approximately inverted to determine the control requests that will most closely lead to the desired movements. We implement and validate this approach on a popular, commercially available robot, the Sony Aibo ERS-210A. Keywords: Sensor-Motor Control; Mobile Robots and Humanoids
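    The build-then-invert recipe described in this abstract can be sketched in a few lines (a minimal illustration, not the paper's implementation; the data, the cubic model form, and all names are assumptions):

```python
import numpy as np

# Hypothetical empirical data: requested joint velocities vs. the
# velocities the joint actually achieved (stand-ins for measurements).
requested = np.linspace(-1.0, 1.0, 21)
achieved = 0.8 * requested + 0.05 * requested**3

# Step 1: fit a forward model predicting the effect of each request.
coeffs = np.polyfit(requested, achieved, deg=3)

def predict(request):
    """Predicted actual movement for a given control request."""
    return np.polyval(coeffs, request)

# Step 2: approximately invert the model by searching for the request
# whose predicted effect is closest to the desired movement.
def invert(desired, candidates=np.linspace(-1.0, 1.0, 2001)):
    return candidates[np.argmin(np.abs(predict(candidates) - desired))]

request = invert(0.4)  # command whose predicted effect best matches 0.4
```

    The grid search stands in for whatever inversion scheme the paper actually uses; any root-finder over the fitted model would serve the same role.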

    Towards autonomous sensor and actuator model induction on a mobile robot

    No full text
    This article presents a novel methodology for a robot to autonomously induce models of its actions and sensors called asami (Autonomous Sensor and Actuator Model Induction). While previous approaches to model learning rely on an independent source of training data, we show how a robot can induce action and sensor models without any well-calibrated feedback. Specifically, the only inputs to the asami learning process are the data the robot would naturally have access to: its raw sensations and knowledge of its own action selections. From the perspective of developmental robotics, our robot’s goal is to obtain self-consistent internal models, rather than to perform any externally defined tasks. Furthermore, the target function of each model-learning process comes from within the system, namely the most current version of another internal system model. Concretely realizing this model-learning methodology presents a number of challenges, and we introduce a broad class of settings in which solutions to these challenges are presented. asami is fully implemented and tested, and empirical results validate our approach in a robotic testbed domain using a Sony Aibo ERS-7 robot.
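    As a toy illustration of this bootstrapping idea (not the asami algorithm itself), the sketch below fits a 1-D action model and sensor model from raw data alone, each using the other's current estimates as training targets; the linear model forms, noise level, and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D setting: a robot walks relative to a landmark. The true
# models below are unknown to the learner and only generate the raw data.
true_velocity = lambda c: 0.5 * c        # action model: command -> velocity
true_reading = lambda d: 2.0 * d + 1.0   # sensor model: distance -> reading

commands = rng.uniform(-1.0, 1.0, 200)
distance = 10.0 + np.cumsum(true_velocity(commands))
readings = true_reading(distance) + rng.normal(0.0, 0.05, 200)

# No calibrated feedback: start from a rough action-model guess and let
# each model supply the other's training targets.
gain_guess = 1.0                                    # true gain is 0.5
est_dist = 10.0 + np.cumsum(gain_guess * commands)  # action model labels...
a, b = np.polyfit(est_dist, readings, 1)            # ...train the sensor model
sensor_dist = (readings - b) / a                    # sensor model labels...
steps = np.diff(sensor_dist)
learned_gain, _ = np.polyfit(commands[1:], steps, 1)  # ...train the action model

# The two learned models end up mutually consistent even though their shared
# scale differs from ground truth: self-consistency, not external calibration.
predicted = a * (10.0 + np.cumsum(learned_gain * commands)) + b
residual = np.mean(np.abs(predicted - readings))  # on the order of sensor noise
```

    Note that the models converge to agreement with each other, not with the true gain of 0.5, mirroring the article's point that the target of each learning process is the other internal model.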

    Simultaneous calibration of action and sensor models on a mobile robot

    No full text
    Abstract — This paper presents a technique for the simultaneous calibration of action and sensor models on a mobile robot.

    A comparison of two approaches for vision and self-localization on a mobile robot

    No full text
    Abstract — This paper considers two approaches to the problem of vision and self-localization on a mobile robot. In the first approach, the perceptual processing is primarily bottom-up, with visual object recognition entirely preceding localization. In the second, significant top-down information is incorporated, with vision and localization being intertwined. That is, the processing of vision is highly dependent on the robot’s estimate of its location. The two approaches are implemented and tested on a Sony Aibo ERS-7 robot, localizing as it walks through a color-coded test-bed domain. This paper’s contributions are an exposition of two different approaches to vision and localization on a mobile robot, an empirical comparison of the two methods, and a discussion of the relative advantages of each method.