Enhanced robotic hand-eye coordination inspired from human-like behavioral patterns
Robotic hand-eye coordination is recognized as an important skill for dealing with complex real environments. Conventional robotic hand-eye coordination methods merely transfer stimulus signals from robotic visual space to hand actuator space. This paper introduces a reverse method: building another channel that transfers stimulus signals from robotic hand space to visual space. Based on the reverse channel, a human-like behavior pattern, “Stop-to-Fixate”, is imparted to the robot, giving it an enhanced reaching ability. A visual processing system inspired by the human retina structure compresses visual information so as to reduce the robot’s learning complexity. In addition, two constructive neural networks establish the two sensory delivery channels. The experimental results demonstrate that the robotic system gradually acquires a reaching ability. In particular, when the robotic hand touches an unseen object, the reverse channel successfully drives the visual system to notice it.
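The reverse channel described above amounts to letting an unexpected touch event redirect gaze to the hand. A toy sketch of that “Stop-to-Fixate” rule (function and variable names are hypothetical, not from the paper):

```python
# Toy sketch of the reverse (hand -> vision) channel: an unexpected
# touch drives the gaze target to the hand's current position, so the
# camera fixates the unseen object. Names are illustrative only.
def stop_to_fixate(hand_pos, touch_detected, gaze_target):
    if touch_detected:
        return hand_pos       # stop and fixate where the hand is
    return gaze_target        # otherwise keep the current gaze goal

print(stop_to_fixate((0.3, 0.1), True, (0.0, 0.0)))   # -> (0.3, 0.1)
```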
Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
The young infant explores its body, its sensorimotor system, and the
immediately accessible parts of its environment, over the course of a few
months creating a model of peripersonal space useful for reaching and grasping
objects around it. Drawing on constraints from the empirical literature on
infant behavior, we present a preliminary computational model of this learning
process, implemented and evaluated on a physical robot. The learning agent
explores the relationship between the configuration space of the arm, sensing
joint angles through proprioception, and its visual perceptions of the hand and
grippers. The resulting knowledge is represented as the peripersonal space
(PPS) graph, where nodes represent states of the arm, edges represent safe
movements, and paths represent safe trajectories from one pose to another. In
our model, the learning process is driven by intrinsic motivation. When
repeatedly performing an action, the agent learns the typical result, but also
detects unusual outcomes, and is motivated to learn how to make those unusual
results reliable. Arm motions typically leave the static background unchanged,
but occasionally bump an object, changing its static position. The reach action
is learned as a reliable way to bump and move an object in the environment.
Similarly, once a reliable reach action is learned, it typically makes a
quasi-static change in the environment, moving an object from one static
position to another. The unusual outcome is that the object is accidentally
grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically
with the hand. Learning to make grasps reliable is more complex than for
reaches, but we demonstrate significant progress. Our current results are steps
toward autonomous sensorimotor learning of motion, reaching, and grasping in
peripersonal space, based on unguided exploration and intrinsic motivation. Comment: 35 pages, 13 figures
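The PPS graph described above, with nodes as arm configurations, edges as safe movements, and paths as safe trajectories, can be sketched as a plain adjacency structure. This minimal version is my illustration, not the authors' code:

```python
from collections import defaultdict, deque

class PPSGraph:
    """Minimal sketch of a peripersonal-space graph: nodes are arm
    configurations (tuples of joint angles), edges are movements
    observed to be safe, and paths are safe trajectories."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_safe_move(self, a, b):
        # A movement observed to be safe is recorded symmetrically.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def safe_path(self, start, goal):
        # Breadth-first search returns a shortest safe trajectory.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None   # no known safe trajectory

g = PPSGraph()
g.add_safe_move((0.0, 0.0), (0.1, 0.0))
g.add_safe_move((0.1, 0.0), (0.1, 0.2))
print(g.safe_path((0.0, 0.0), (0.1, 0.2)))
# -> [(0.0, 0.0), (0.1, 0.0), (0.1, 0.2)]
```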
Learning to reach and reaching to learn: a unified approach to path planning and reactive control through reinforcement learning
The next generation of intelligent robots will need to be able to plan reaches: not just ballistic point-to-point reaches, but reaches around things such as the edge of a table, a nearby human, or any other known object in the robot’s workspace. Planning reaches may seem easy to us humans, because we do it so intuitively, but it has proven to be a challenging problem, which continues to limit the versatility of what robots can do today. In this document, I propose a novel intrinsically motivated RL system that draws on both Path/Motion Planning and Reactive Control. Through Reinforcement Learning, it tightly integrates these two previously disparate approaches to robotics. The RL system is evaluated on a task that is as yet unsolved by roboticists in practice: to put the palm of the iCub humanoid robot on arbitrary target objects in its workspace, starting from arbitrary initial configurations. Such motions can be generated by planning, or searching the configuration space, but this typically results in some kind of trajectory, which must then be tracked by a separate controller, and such an approach offers a brittle runtime solution because it is inflexible. Purely reactive systems are robust to many problems that render a planned trajectory infeasible, but, lacking the capacity to search, they tend to get stuck behind constraints and therefore do not replace motion planners. The planner/controller proposed here is novel in that it deliberately plans reaches without the need to track trajectories. Instead, reaches are composed of sequences of reactive motion primitives, implemented by my Modular Behavioral Environment (MoBeE), which provides (fictitious) force control with reactive collision avoidance by way of a realtime kinematic/geometric model of the robot and its workspace.
Thus, to the best of my knowledge, mine is the first reach-planning approach to simultaneously offer the best of both the Path/Motion Planning and Reactive Control approaches. By controlling the real, physical robot directly, and feeling the influence of the constraints imposed by MoBeE, the proposed system learns a stochastic model of the iCub’s configuration space. The model is then exploited as a multiple-query path planner to find sensible pre-reach poses from which to initiate reaching actions. Experiments show that the system can autonomously find practical reaches to target objects in the workspace and offers excellent robustness to changes in the workspace configuration as well as noise in the robot’s sensory-motor apparatus.
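As a rough illustration of the division of labor described above, the one-dimensional toy below (all names hypothetical, not from the thesis) queries a learned success model for a pre-reach pose and then hands control to a reactive attractor instead of tracking a planned trajectory:

```python
# Sketch: pick a pre-reach pose by querying a stochastic model of the
# configuration space, then let a reactive primitive do the final reach.
def best_prereach(poses, success_prob):
    # success_prob maps pose -> estimated probability a reach from it works
    return max(poses, key=success_prob)

def reactive_reach(pose, target, step=0.5):
    # Fictitious-force style attraction toward the target (1-D toy).
    while abs(target - pose) > 1e-3:
        pose += step * (target - pose)
    return pose

poses = [0.0, 2.0, 5.0]
prob = {0.0: 0.2, 2.0: 0.9, 5.0: 0.4}   # invented success estimates
start = best_prereach(poses, prob.get)   # -> 2.0
print(start, round(reactive_reach(start, 3.0), 3))
```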
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Therefore, continuous online
adaptation for lifelong learning, and sample-efficient mechanisms to
adapt to changes in the environment, the constraints, the tasks, or the robot
itself, are crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. By using learning signals
that mimic the intrinsic motivation signal of cognitive dissonance, together
with a mental replay strategy to intensify experiences, the stochastic
recurrent network can learn from few physical interactions and adapt to novel
environments in seconds. We evaluate our online planning and adaptation
framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is
shown by learning unknown workspace constraints sample-efficiently from few
physical interactions while following given waypoints. Comment: accepted in Neural Networks
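A drastically simplified caricature of the adaptation loop described above, with a scalar prediction error standing in for the cognitive-dissonance signal and a buffer of re-used experiences standing in for mental replay (all names hypothetical, not from the paper):

```python
import random

class ReplayAdapter:
    """Toy sketch: a scalar predictor adapted online. The learning
    signal is the prediction error ("dissonance"), and each physical
    interaction is mentally replayed several times to intensify it."""

    def __init__(self, lr=0.2, replays=10):
        self.estimate = 0.0
        self.lr = lr
        self.replays = replays
        self.buffer = []

    def interact(self, observation):
        self.buffer.append(observation)
        # Mental replay: reuse stored experiences instead of new rollouts.
        for _ in range(self.replays):
            sample = random.choice(self.buffer)
            dissonance = sample - self.estimate   # intrinsic error signal
            self.estimate += self.lr * dissonance

random.seed(0)
model = ReplayAdapter()
for obs in [1.0, 1.0, 1.0]:      # only three physical interactions
    model.interact(obs)
print(round(model.estimate, 3))  # close to 1.0 despite few interactions
```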
Towards adaptive and autonomous humanoid robots: from vision to actions
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these raise a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only a few training images (it was tested with 5 to 10 images per experiment) and is fast, scalable and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables use of the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
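CGP-IP itself uses an image-processing function set; the sketch below shows only the generic Cartesian Genetic Programming idea it builds on, with arithmetic functions standing in for image filters (my illustration, not the CGP-IP implementation):

```python
import operator

# Toy Cartesian Genetic Programming evaluator: a genotype is a list of
# (function, input_a, input_b) nodes, each indexing earlier nodes or
# the program inputs, so the genotype encodes a feed-forward graph.
FUNCS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def evaluate(genotype, inputs, output_node):
    values = list(inputs)             # node values, inputs first
    for fn, a, b in genotype:
        values.append(FUNCS[fn](values[a], values[b]))
    return values[output_node]

# Encodes (x + y) * x: node 2 = x + y, node 3 = node2 * x.
genotype = [("add", 0, 1), ("mul", 2, 0)]
print(evaluate(genotype, [3, 4], 3))  # -> 21
```

In CGP-IP the function set would contain image operations instead of arithmetic, and evolution searches over genotypes; the evaluation mechanism is the same.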
Object manipulation by a humanoid robot via single camera pose estimation
Humanoid robots are designed to be used in daily life as assistance robots for people. They are expected to fill jobs that require physical labor, and are also considered for the healthcare sector. The ultimate goal in humanoid robotics is to reach a point where robots can truly communicate with people and be part of the labor force. The usual daily environment of a person contains objects with different geometric and texture features. Such objects should be easily recognized, located and manipulated by a robot when needed. These tasks require a large amount of information about the environment. The field of Computer Vision is concerned with the extraction and use of visual cues for computer systems. Relative to other sensors, visual data captured with cameras contains most of the information about the environment needed for high-level tasks. Most high-level tasks on humanoid robots require the target object to be segmented in the image and located in the 3D environment. Also, the object should be kept in the image so that information about it can be retrieved continuously. This can be achieved by gaze-control schemes that use visual feedback to drive the neck motors of the robot. In this thesis, an object manipulation algorithm is proposed for a humanoid robot. A white object with a red square marker is used as the target object. The object is segmented by color information. The corners of the red marker are found and used for the pose estimation algorithm and gaze control. The pose information is used for navigation to the object and for the grasping action. The described algorithm is implemented on the humanoid experiment platform SURALP (Sabanci University ReseArch Laboratory Platform).
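Full pose estimation from the four marker corners is beyond a short sketch, but the underlying pinhole-camera relation can be illustrated for the distance component alone (the parameter values below are assumed for illustration, not taken from the thesis):

```python
# Pinhole-camera sketch: the apparent size of the square marker in
# pixels shrinks linearly with distance, so distance can be recovered
# from the focal length and the marker's known physical side length.
def marker_distance(focal_px, marker_side_m, side_px):
    return focal_px * marker_side_m / side_px

# e.g. 600 px focal length, a 5 cm marker seen as 50 px wide
print(marker_distance(600.0, 0.05, 50.0))  # -> 0.6 (metres)
```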
Computational Methods for Cognitive and Cooperative Robotics
In the last decades, design methods in control engineering have made substantial progress in
the areas of robotics and computer animation. Nowadays these methods incorporate the
newest developments in machine learning and artificial intelligence. But the problems
of flexible and online-adaptive combination of motor behaviors remain challenging for
human-like animation and for humanoid robotics. In this context, biologically motivated
methods for the analysis and re-synthesis of human motor programs provide new insights
into, and models for, anticipatory motion synthesis.
This thesis presents the author’s achievements in the areas of cognitive and developmental robotics, cooperative and humanoid robotics, and intelligent and machine learning methods in computer graphics. The first part of the thesis, in the chapter “Goal-directed Imitation for Robots”, considers imitation learning in cognitive and developmental robotics.
The work presented here details the author’s progress in the development of hierarchical
motion recognition and planning inspired by recent discoveries of the functions of mirror-neuron cortical circuits in primates. The overall architecture is capable of “learning for
imitation” and “learning by imitation”. The complete system includes a low-level real-time
capable path-planning subsystem for obstacle avoidance during arm reaching. The learning-based path-planning subsystem is universal for all types of anthropomorphic robot arms, and is capable of knowledge transfer at the level of individual motor acts.
Next, the problems of learning and synthesis of motor synergies, the spatial and spatio-temporal combinations of motor features in sequential multi-action behavior, and the
problems of task-related action transitions are considered in the second part of the thesis,
“Kinematic Motion Synthesis for Computer Graphics and Robotics”. In this part, a new
approach to modeling complex full-body human actions by mixtures of time-shift invariant
motor primitives is presented. The online-capable full-body motion generation architecture,
based on dynamic movement primitives driving the time-shift invariant motor synergies,
was implemented as an online-reactive adaptive motion synthesis for computer graphics
and robotics applications.
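For readers unfamiliar with dynamic movement primitives, the sketch below shows only the unforced transformation system, a critically damped attractor pulled toward a goal; the learned time-shift invariant synergies discussed above would enter as a forcing term, omitted here. Parameter values are conventional choices, not taken from the thesis:

```python
# Minimal discrete dynamic movement primitive (no learned forcing term):
# a critically damped point attractor driving the state y toward goal g,
# integrated with semi-implicit Euler steps.
def dmp_rollout(y0, g, alpha=25.0, dt=0.001, steps=1000):
    y, dy = y0, 0.0
    beta = alpha / 4.0                      # critical damping
    for _ in range(steps):
        ddy = alpha * (beta * (g - y) - dy)
        dy += ddy * dt
        y += dy * dt
    return y

print(round(dmp_rollout(0.0, 1.0), 2))  # -> 1.0 (converged to the goal)
```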
The last chapter of the thesis, entitled “Contraction Theory and Self-organized Scenarios
in Computer Graphics and Robotics”, is dedicated to optimal control strategies in multi-agent scenarios of large crowds of agents expressing highly nonlinear behaviors. This last
part presents new mathematical tools for stability analysis and synthesis of multi-agent
cooperative scenarios.