198 research outputs found

    Protosymbols that integrate recognition and response

    We explore two controversial hypotheses through robotic implementation: (1) processes involved in recognition and response are tightly coupled both in their operation and in their epigenesis; and (2) processes involved in symbol emergence should respect the integrity of recognition and response while exploiting the periodicity of biological motion. To that end, this paper proposes a method of recognizing and generating motion patterns based on nonlinear principal component neural networks that are constrained to model both periodic and transitional movements. The method is evaluated by examining its ability to segment and generalize different kinds of soccer-playing activity during a RoboCup match.
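
    A minimal sketch of the kind of nonlinear principal component network described above, assuming a PyTorch autoencoder whose two bottleneck units are constrained to the unit circle so that the latent variable behaves like a motion phase; the layer sizes, data shapes, and training loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CircularNLPCA(nn.Module):
    """Autoassociative network with a circular bottleneck (illustrative sketch).
    The two bottleneck units are normalized onto the unit circle, so the latent
    variable is a phase angle suited to periodic, gait-like motion patterns."""
    def __init__(self, n_dof, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_dof, hidden), nn.Tanh(), nn.Linear(hidden, 2))
        self.dec = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, n_dof))

    def forward(self, x):
        z = self.enc(x)
        z = z / (z.norm(dim=-1, keepdim=True) + 1e-8)   # constrain latent code to the unit circle
        phase = torch.atan2(z[..., 1], z[..., 0])        # latent phase of the motion
        return self.dec(z), phase

# training sketch: minimize reconstruction error over recorded joint-angle frames
model = CircularNLPCA(n_dof=12)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(1000, 12)            # placeholder for real motion data
for _ in range(200):
    reconstruction, _ = model(frames)
    loss = ((reconstruction - frames) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

    After training, the recovered phase variable can be used both to segment a motion stream and to regenerate the learned pattern by sweeping the phase and decoding.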

    Evolution of Prehension Ability in an Anthropomorphic Neurorobotic Arm

    In this paper we show how a simulated anthropomorphic robotic arm controlled by an artificial neural network can develop effective reaching and grasping behaviour through a trial-and-error process in which the free parameters encode the control rules that regulate the fine-grained interaction between the robot and the environment, and variations of the free parameters are retained or discarded on the basis of their effects on the global behaviour exhibited by the robot situated in the environment. The obtained results demonstrate how the proposed methodology allows the robot to produce effective behaviours thanks to its ability to exploit the morphological properties of the robot's body (i.e. its anthropomorphic shape, the elastic properties of its muscle-like actuators, and the compliance of its actuated joints) and the properties which arise from the physical interaction between the robot and the environment mediated by appropriate control rules.
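
    The trial-and-error process described above can be illustrated with a minimal (1+1) evolution strategy over the controller's free parameters; the mutation scheme, parameter count, and stand-in fitness function below are assumptions made for the sake of a runnable sketch, not the authors' evolutionary setup.

```python
import numpy as np

def evolve(fitness, n_params, generations=500, sigma=0.05, seed=0):
    """Minimal (1+1) evolution strategy: perturb the free parameters (e.g. the
    connection weights of the neural controller), evaluate the robot's global
    behaviour, and keep a variation only if it does not decrease fitness."""
    rng = np.random.default_rng(seed)
    genotype = rng.normal(0.0, 0.1, n_params)
    best = fitness(genotype)
    for _ in range(generations):
        candidate = genotype + rng.normal(0.0, sigma, n_params)
        score = fitness(candidate)        # would run the simulated arm and score the behaviour
        if score >= best:                 # retain or discard the variation
            genotype, best = candidate, score
    return genotype, best

# usage with a stand-in fitness; a real setup would measure reaching/grasping success
toy_fitness = lambda w: -float(np.sum((w - 0.3) ** 2))
weights, score = evolve(toy_fitness, n_params=20)
```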

    Body models in humans, animals, and robots: mechanisms and plasticity

    Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth and failures, and using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent - yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that makes it possible to control the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, over body image and body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman - "the man who lost his body" - in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I will then detail how robots can inform the biological sciences dealing with body representations and, finally, I will study which of the features of the "body in the brain" should be transferred to robots, giving rise to more adaptive, resilient, self-calibrating machines. Comment: 27 pages, 8 figures.

    Torque-Based Operational Space Control Considering Disturbance and Torque Bandwidth Limitation

    Thesis (Ph.D.) -- Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergent Systems), Seoul National University, August 2021. Advisor: Jaeheung Park. The thesis aims to improve the control performance of the torque-based operational space controller under disturbance and torque bandwidth limitation. Torque-based robot controllers command the desired torque as an input signal to the actuator. Since torque is at force level, a torque-controlled robot is more compliant to external forces from the environment or from people than a position-controlled robot. Therefore, it can be used effectively for tasks involving contact, such as legged locomotion or human-robot interaction. Operational space control strengthens this advantage for redundant robots because of the inherent compliance in the null space of the given tasks. However, high-level torque-based controllers have not been widely used for traditional robots such as industrial manipulators because of their low precise-control performance. One of the reasons is the uncertainty or disturbance in the kinematic and dynamic properties of the robot model, which leads to inaccurate computation of the desired torque and deteriorates control stability and performance. To estimate and compensate for the disturbance using only proprioceptive sensors, a disturbance observer based on inverse dynamics has been developed. It requires joint acceleration information, which is noisy because of the numerical error in the second-order derivative of the joint position. In this work, a contact-consistent disturbance observer for a floating-base robot is proposed. The method uses the fixed contact position of the supporting foot as a kinematic constraint to estimate the joint acceleration error. The estimated error is incorporated into the dynamics model to reduce its effect on the disturbance torque solution, which makes the observer less dependent on the low-pass filter design. Another reason for the low precise-control performance is the torque bandwidth limitation. Torque bandwidth is determined by the relationship between the input torque commanded to the actuator and the torque actually transmitted to the link. It can be limited by various factors such as the inner torque feedback loop, actuator dynamics, and joint elasticity, which deteriorates control stability and performance. Operational space control is especially prone to this problem, since the limited bandwidth of a single actuator can degrade the performance of all related tasks simultaneously. In this work, an intuitive way to penalize low-performance actuators is proposed for the operational space controller. The basic concept is to add joint torques only to high-performance actuators recursively, which has the physical meaning of a joint-weighted torque solution that takes each actuator's performance into account. By penalizing the low-performance actuators, the torque transmission error is reduced and the task performance is significantly improved. In addition, no joint trajectory is required, which allows compliance in the redundant degrees of freedom. The results of the thesis were verified by experiments on the 12-DOF biped robot DYROS-RED and the 7-DOF robot manipulator Franka Emika Panda.
    Contents: 1 Introduction; 2 Backgrounds; 3 Contact-Consistent Disturbance Observer for Floating-Base Robots; 4 Operational Space Control under Actuator Bandwidth Limitation; 5 Conclusion; Abstract (in Korean).
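
    For context, the momentum-based disturbance observer that this line of work builds on is commonly written as follows; the notation and the observer gain $K_O$ are the standard textbook form, not necessarily the thesis' exact formulation.

```latex
% Generalized-momentum disturbance observer (standard background form;
% notation and gain K_O are illustrative, not the thesis' exact equations).
\begin{align}
  p &= M(q)\,\dot q, \qquad
      \dot p = \tau + C(q,\dot q)^{\mathsf T}\dot q - g(q) + \tau_{\mathrm{ext}},\\
  r(t) &= K_O\!\left[\, p(t)
          - \int_0^t \bigl(\tau + C^{\mathsf T}\dot q - g(q) + r\bigr)\,\mathrm{d}s
          - p(0) \right],\\
  \dot r &= K_O\bigl(\tau_{\mathrm{ext}} - r\bigr).
\end{align}
```

    Here $r$ tracks the disturbance torque $\tau_{\mathrm{ext}}$ through first-order dynamics whose bandwidth is set by $K_O$, which is why the observer is sensitive to how the joint-velocity and acceleration signals are filtered.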

    Evolution of Grasping Behaviour in Anthropomorphic Robotic Arms with Embodied Neural Controllers

    The work reported in this thesis focuses upon synthesising neural controllers for anthropomorphic robots that are able to manipulate objects through an automatic design process based on artificial evolution. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, and the robot's skills evolve as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of the current and previously experienced sensor states (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios that are significantly more complex than those to which it is typically applied (in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task). The obtained results indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This would also support the hypothesis that the human central nervous system (CNS) does not necessarily have internal models of the limbs (not excluding the fact that it might possess such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers of such fundamental skills would be less complex, and more complex behaviours would be easier to design because the underlying controller of the arm/hand structure is less complex. Moreover, the obtained results also show how evolved robots exploit sensory-motor coordination in order to accomplish their tasks.
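
    A toy illustration of the equilibrium-point idea mentioned above: the controller only shifts a desired posture, and simple spring-damper "muscle" dynamics pull the joints toward it, so no inverse model of the arm is required. The function name, gains, and dimensions are hypothetical, not taken from the thesis.

```python
import numpy as np

def muscle_like_torque(q, dq, q_eq, stiffness=5.0, damping=0.5):
    """Equilibrium-point style actuation sketch: the controller output sets a
    desired equilibrium posture q_eq, and spring-damper dynamics generate the
    joint torques that pull the arm toward it (gains are illustrative)."""
    return stiffness * (q_eq - q) - damping * dq

# example: a (here constant) network output defines the equilibrium posture
q, dq = np.zeros(7), np.zeros(7)
q_eq = np.full(7, 0.4)
tau = muscle_like_torque(q, dq, q_eq)
```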

    Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning

    The next generation of intelligent robots will need to be able to plan reaches. Not just ballistic point-to-point reaches, but reaches around things such as the edge of a table, a nearby human, or any other known object in the robot's workspace. Planning reaches may seem easy to us humans, because we do it so intuitively, but it has proven to be a challenging problem, which continues to limit the versatility of what robots can do today. In this document, I propose a novel intrinsically motivated RL system that draws on both Path/Motion Planning and Reactive Control. Through Reinforcement Learning, it tightly integrates these two previously disparate approaches to robotics. The RL system is evaluated on a task which is as yet unsolved by roboticists in practice: to put the palm of the iCub humanoid robot on arbitrary target objects in its workspace, starting from arbitrary initial configurations. Such motions can be generated by planning, or searching the configuration space, but this typically results in some kind of trajectory, which must then be tracked by a separate controller, and such an approach offers a brittle runtime solution because it is inflexible. Purely reactive systems are robust to many problems that render a planned trajectory infeasible, but lacking the capacity to search, they tend to get stuck behind constraints, and therefore do not replace motion planners. The planner/controller proposed here is novel in that it deliberately plans reaches without the need to track trajectories. Instead, reaches are composed of sequences of reactive motion primitives, implemented by my Modular Behavioral Environment (MoBeE), which provides (fictitious) force control with reactive collision avoidance by way of a realtime kinematic/geometric model of the robot and its workspace. Thus, to the best of my knowledge, mine is the first reach planning approach to simultaneously offer the best of both the Path/Motion Planning and Reactive Control approaches. By controlling the real, physical robot directly, and feeling the influence of the constraints imposed by MoBeE, the proposed system learns a stochastic model of the iCub's configuration space. Then, the model is exploited as a multiple-query path planner to find sensible pre-reach poses from which to initiate reaching actions. Experiments show that the system can autonomously find practical reaches to target objects in the workspace and offers excellent robustness to changes in the workspace configuration as well as noise in the robot's sensory-motor apparatus.
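
    As a rough illustration of the multiple-query planning step, the sketch below builds a conventional roadmap over sampled configurations and searches it with Dijkstra; the collision predicate, the sampling, and the search are generic assumptions that stand in for the learned configuration-space model and the MoBeE machinery described above.

```python
import numpy as np
import heapq

def build_roadmap(samples, radius, collision_free):
    """Multiple-query roadmap sketch: nodes are sampled arm configurations and
    edges connect nearby configurations whose connecting motion passes a
    user-supplied collision_free(q_a, q_b) check (a hypothetical predicate)."""
    edges = {i: [] for i in range(len(samples))}
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            d = float(np.linalg.norm(samples[i] - samples[j]))
            if d < radius and collision_free(samples[i], samples[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))
    return edges

def shortest_path(edges, start, goal):
    """Dijkstra search over the roadmap; the resulting node sequence could then
    be executed as a chain of reactive motion primitives."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    if goal not in prev and goal != start:
        return None                      # goal not reachable through the roadmap
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```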

    Ecological active vision: four bio-inspired principles to integrate bottom-up and adaptive top-down attention tested with a simple camera-arm robot

    Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: (1) the agent needs to learn where to look based on its goals; (2) manipulation causes learning feedback in areas of space possibly outside the attention focus; (3) good visual actions are needed to guide manipulation actions, but only the latter generate learning feedback; and (4) a limited fovea causes aliasing problems. We then propose a computational architecture ("BITPIC") to overcome the four problems, integrating four bio-inspired key ingredients: (1) reinforcement-learning, fovea-based top-down attention; (2) a strong vision-manipulation coupling; (3) bottom-up, periphery-based attention; and (4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob "objects." The results show that the architecture solves the problems, and hence the tasks, very efficiently, and they highlight how the architecture's principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.
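
    A minimal sketch of how learned top-down attention can be mixed with bottom-up saliency when choosing fixations, loosely following the ingredients listed above; the grid of fixation cells, the epsilon-greedy mixing, and the value update are assumptions rather than the BITPIC architecture itself.

```python
import numpy as np

class TopDownAttention:
    """Sketch of reinforcement-learned, fovea-based top-down attention combined
    with bottom-up saliency. The fixation-cell grid and update rule are
    illustrative assumptions, not the paper's implementation."""
    def __init__(self, n_cells, lr=0.1, eps=0.1, seed=0):
        self.values = np.zeros(n_cells)   # learned relevance of each fixation cell
        self.lr, self.eps = lr, eps
        self.rng = np.random.default_rng(seed)

    def choose_fixation(self, saliency):
        if self.rng.random() < self.eps:           # bottom-up: look at the salient periphery
            return int(np.argmax(saliency))
        return int(np.argmax(self.values))         # top-down: look where reward was found

    def update(self, cell, reward):
        # reward comes from the manipulation action taken after the fixation
        self.values[cell] += self.lr * (reward - self.values[cell])
```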

    A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot's shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot's tip. However, the robot's unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot's shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which is obtainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on sensing of the robot's tip position. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles.
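
    The tip-sensing correction mentioned above can be illustrated with a generic damped-least-squares step that converts a sensed tip-position error into a small configuration adjustment; the Jacobian source and damping value are assumptions, not the authors' controller.

```python
import numpy as np

def damped_least_squares_step(jacobian, tip_error, damping=0.05):
    """Generic damped-least-squares correction: maps a sensed tip-position error
    to a small configuration update, one common way to mitigate kinematic model
    error when tip sensing is available. A sketch, not the authors' method."""
    J = np.asarray(jacobian, dtype=float)
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, tip_error)

# usage sketch: q_next = q + damped_least_squares_step(J_numeric, p_goal - p_tip_sensed)
```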