17 research outputs found

    Learning to Reach after Learning to Look: a Study of Autonomy in Learning Sensorimotor Transformations

    Towards endowing collaborative robots with fast learning for minimizing tutors’ demonstrations: what and when to do?

    Programming by demonstration allows non-experts in robot programming to train robots in an intuitive manner. However, this learning paradigm requires multiple demonstrations of the same task, which can be time-consuming and annoying for the human tutor. To overcome this limitation, we propose a fast learning system, based on neural dynamics, that permits collaborative robots to memorize sequential information from a single task demonstration by a human tutor. Importantly, the learning system memorizes not only long sequences of sub-goals in a task but also the time intervals between them. We implement this learning system in Sawyer (a collaborative robot from Rethink Robotics) and test it in a construction task, where the robot observes several human tutors with different preferences for the order in which to perform the task and with different behavioral time scales. After learning, memory recall (of what sub-task to do and when to do it) allows the robot to instruct inexperienced human workers in a particular human-centered task scenario. POFC - Programa Operacional Temático Factores de Competitividade (POCI-01-0247-FEDER-024541
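
    As a rough, purely illustrative abstraction of what such a one-shot memory has to provide (the paper realizes it with neural dynamics; the Python sketch below and all names in it are hypothetical), the robot must store, from a single demonstration, both the ordered sub-goals and the intervals between them, and recall both when instructing:

        from dataclasses import dataclass, field

        @dataclass
        class SequenceMemory:
            """Stores what to do (sub-goals) and when (inter-goal intervals) from one demonstration."""
            subgoals: list = field(default_factory=list)   # ordered sub-goals observed from the tutor
            intervals: list = field(default_factory=list)  # seconds elapsed before each sub-goal
            last_t: float = None

            def observe(self, subgoal, t):
                """Called once per demonstrated sub-goal; a single pass suffices (one-shot memorization)."""
                self.intervals.append(0.0 if self.last_t is None else t - self.last_t)
                self.subgoals.append(subgoal)
                self.last_t = t

            def recall(self):
                """Returns (wait_time, sub-goal) pairs so the robot can pace its instructions."""
                return list(zip(self.intervals, self.subgoals))

        # One demonstration by the tutor: (sub-goal, timestamp in seconds)
        demo = [("place_base", 0.0), ("attach_column", 4.0), ("mount_top", 10.0)]
        mem = SequenceMemory()
        for sg, t in demo:
            mem.observe(sg, t)
        print(mem.recall())  # [(0.0, 'place_base'), (4.0, 'attach_column'), (6.0, 'mount_top')]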

    Universal neural field computation

    Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine, obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square that is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider the functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as the feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a dynamic field automaton. Comment: 21 pages; 6 figures. arXiv admin note: text overlap with arXiv:1204.546
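
    To make the NDA construction concrete, here is a minimal sketch (an assumed illustration only: the actual partition and coefficients would be derived from the Gödel encoding of a specific Turing machine, whereas the 2x2 partition and values below are arbitrary placeholders chosen so the map stays inside the unit square):

        import numpy as np

        # Piecewise affine-linear map on the unit square: the square is partitioned
        # into rectangular cells, and each cell (i, j) carries its own affine map
        # x -> Lambda_ij @ x + a_ij.
        x_split, y_split = 0.5, 0.5  # 2x2 partition of [0,1]^2

        params = {  # (scale matrix, offset vector) per cell; placeholder values
            (0, 0): (np.diag([0.5, 2.0]), np.array([0.00, 0.00])),
            (1, 0): (np.diag([2.0, 0.5]), np.array([-1.00, 0.50])),
            (0, 1): (np.diag([2.0, 0.5]), np.array([0.00, -0.25])),
            (1, 1): (np.diag([0.5, 2.0]), np.array([0.25, -1.00])),
        }

        def cell_of(p):
            """Index of the rectangular cell containing point p."""
            return (0 if p[0] < x_split else 1, 0 if p[1] < y_split else 1)

        def nda_step(p):
            """One NDA iteration: apply the affine map of the cell containing p."""
            lam, a = params[cell_of(p)]
            return lam @ p + a

        p = np.array([0.3, 0.7])
        for _ in range(5):
            p = nda_step(p)
            print(p)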

    A dynamic neural field approach to natural and efficient human-robot collaboration

    A major challenge in modern robotics is the design of autonomous robots that are able to cooperate with people in their daily tasks in a human-like way. We address the challenge of natural human-robot interactions by using the theoretical framework of dynamic neural fields (DNFs) to develop processing architectures that are based on neuro-cognitive mechanisms supporting human joint action. By explaining the emergence of self-stabilized activity in neuronal populations, dynamic field theory provides a systematic way to endow a robot with crucial cognitive functions such as working memory, prediction and decision making. The DNF architecture for joint action is organized as a large-scale network of reciprocally connected neuronal populations that encode in their firing patterns specific motor behaviors, action goals, contextual cues and shared task knowledge. Ultimately, it implements a context-dependent mapping from observed actions of the human onto adequate complementary behaviors that takes into account the inferred goal of the co-actor. We present results of flexible and fluent human-robot cooperation in a task in which the team has to assemble a toy object from its components. The present research was conducted in the context of the fp6-IST2 EU-IP Project JAST (proj. nr. 003747) and partly financed by the FCT grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001. We would like to thank Luis Louro, Emanuel Sousa, Flora Ferreira, Eliana Costa e Silva, Rui Silva and Toni Machado for their assistance during the robotic experiment.
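
    For reference, architectures of this kind build on the Amari dynamic neural field equation; a standard form (the specific kernels and couplings of the joint-action architecture are not reproduced here) is

        \tau \, \frac{\partial u(x,t)}{\partial t} = -u(x,t) + h + S(x,t) + \int w(x - x')\, f\big(u(x',t)\big)\, dx'

    where u(x,t) is the field activation over the feature dimension x, h < 0 is the resting level, S(x,t) is the external input, f is a sigmoidal output nonlinearity, and w is a lateral-interaction kernel with local excitation and surround inhibition; the self-stabilized activity peaks mentioned above are attractor solutions of this recurrent dynamics.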

    Simultaneous planning and action: neural-dynamic sequencing of elementary behaviors in robot navigation

    A technique for Simultaneous Planning and Action (SPA) based on Dynamic Field Theory (DFT) is presented. The model builds on previous work on the representation of sequential behavior as attractors in dynamic neural fields. Here, we demonstrate how chains of competing attractors can be used to represent dynamic plans towards a goal state. The present work can be seen as an addition to a growing body of work that demonstrates the role of DFT as a bridge between low-level reactive approaches and high-level symbol processing mechanisms. The architecture is evaluated on a set of planning problems using a simulated e-puck robot, including analysis of the system's behavior in response to noise and temporary blockages of the planned route. The system makes no explicit distinction between planning and execution phases, allowing continuous adaptation of the planned path. The proposed architecture exploits the DFT property of stability in relation to noise and changes in the environment. The neural dynamics are also exploited such that stay-or-switch action selection emerges where blockage of a planned path occurs: stay until the transient blockage is removed versus switch to an alternative route to the goal. Neural Dynamics, 7th framework of the EU, #27024
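
    The stay-or-switch behavior can be illustrated with a minimal competitive neural dynamics (an illustrative sketch, not the paper's architecture; parameters are arbitrary, and a blocked route is modeled here as inhibitory input to its node):

        import numpy as np

        def f(u):  # sigmoidal output nonlinearity
            return 1.0 / (1.0 + np.exp(-10.0 * u))

        h, tau, dt = -1.0, 0.05, 0.001   # resting level, time constant, Euler step
        c_exc, c_inh = 2.0, 3.0          # self-excitation and mutual inhibition
        u = np.full(3, h)                # three candidate routes to the goal

        def step(u, s):
            out = f(u)
            du = -u + h + s + c_exc * out - c_inh * (out.sum() - out)
            return u + dt / tau * du

        s = np.array([1.6, 1.4, 1.2])           # route 0 is slightly preferred and wins ("stay")
        for t in range(4000):
            if t == 1500:                       # route 0 becomes blocked: its node is inhibited
                s = np.array([-2.0, 1.4, 1.2])
            u = step(u, s)
        print(np.argmax(u))                     # 1: the dynamics has switched to the next-best route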

    Learning the condition of satisfaction of an elementary behavior in dynamic field theory

    In order to proceed along an action sequence, an autonomous agent has to recognize that the intended final condition of the previous action has been achieved. In previous work, we have shown how a sequence of actions can be generated by an embodied agent using a neural-dynamic architecture for behavioral organization, in which each action has an intention and a condition of satisfaction. These components are represented by dynamic neural fields, and are coupled to the motors and sensors of the robotic agent. Here, we demonstrate how the mappings between intended actions and their resulting conditions may be learned, rather than pre-wired. We use reward-gated associative learning, in which, over many instances of externally validated goal achievement, the conditions that are expected to result from goal achievement are learned. After learning, the external reward is no longer needed to recognize that the expected outcome has been achieved. This method was implemented using dynamic neural fields and tested on a real-world E-Puck mobile robot and a simulated NAO humanoid robot.
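
    A minimal sketch of the reward-gated associative learning idea (illustrative only; the dimensions, names and matching criterion below are assumptions, not the paper's field-based implementation):

        import numpy as np

        n_actions, n_features = 3, 8
        rng = np.random.default_rng(0)
        W = np.zeros((n_actions, n_features))   # learned mapping: action -> expected outcome features
        eta = 0.5                               # learning rate
        outcome_of = rng.random((n_actions, n_features))  # stand-in for the sensed outcome of each action

        # Learning phase: the associative update is gated by an external reward signal
        # that marks validated goal achievement; without reward, nothing is learned.
        for a in list(range(n_actions)) * 7:                 # several validated repetitions per action
            features = outcome_of[a] + 0.05 * rng.standard_normal(n_features)
            reward = 1.0
            W += eta * reward * np.outer(np.eye(n_actions)[a], features)

        # After learning, the expected outcome is recalled from W, so the external
        # reward is no longer needed to detect the condition of satisfaction.
        a = 1
        expected = W[a] / np.linalg.norm(W[a])
        observed = outcome_of[a] + 0.05 * rng.standard_normal(n_features)
        match = expected @ (observed / np.linalg.norm(observed))
        print("condition of satisfaction reached:", bool(match > 0.95))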