34 research outputs found

    Reinforcement learning to adjust parametrized motor primitives to new situations

    Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the required meta-parameters to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks, i.e., the generalization of throwing movements in darts, of hitting movements in table tennis, and of throwing balls, where the tasks are learned on several different real physical robots, i.e., a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi, and a Kuka KR 6.
    European Community
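As an illustrative sketch of the reward-weighted regression idea underlying the algorithm (without the kernelization, and with an invented one-dimensional reward function, not the paper's robot tasks), sampled meta-parameters can be re-averaged with their rewards as weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy task: find the meta-parameter value that maximizes a reward
# peaked at 2.0; the real tasks (darts, table tennis) are far richer.
def reward(theta):
    return np.exp(-(theta - 2.0) ** 2)

mean, std = 0.0, 1.0                                 # Gaussian exploration policy
for _ in range(50):
    thetas = mean + std * rng.standard_normal(100)   # sample candidate meta-parameters
    w = reward(thetas)                               # reward weights
    mean = np.sum(w * thetas) / np.sum(w)            # reward-weighted regression update
```

The policy mean drifts toward high-reward meta-parameters; the kernelized version in the paper additionally conditions this update on the state describing the situation.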

    A new content addressable memory model utilizing high order neurons.

    No full text

    Emergent emotion via neural computational energy conservation on a humanoid robot

    No full text
    Due to copyright restrictions, access to the full text of this article is only available via subscription.
    This paper presents our initial work on how emotion-based behaviors may emerge through computational mechanisms. We hold that, in addition to basic emotions such as anger and fear that serve the bodily well-being of the organism, high-level emotions such as boredom and affection have evolved to facilitate low-cost brain computations. Higher-level emotions can be considered the affective state of the organism, or mood, rather than reflex-like, physiologically triggered emotional responses such as fear and anger. In large and complex brains (e.g., primate brains), the neuronal energy consumption for cognition is non-negligible. We propose that for such organisms computational regulatory mechanisms for decision making give rise to behaviors that can be explained by various emotional states. As a proof of concept for this idea, we envision a robotic cognitive system and select a function to which we assign a neural cost for its operation. To be concrete, we use a small humanoid robot platform (Darwin-OP) and implement a neural network (a Hopfield network) that allows the robot to recall learned patterns that it sees through its camera. As a model of neural computational energy consumption, we postulate that a change in the state of a neural unit of the network consumes one unit of (neural) energy. Therefore, the total computational energy consumed is determined by the incoming stimuli. The robot is programmed to avoid high energy consumption by showing aversive behavior when the energy consumption is high. Otherwise, the robot demonstrates engaging behavior. For an external observer, these responses may be perceived as the robot's having a certain emotional (affectional) preference for input stimuli.
    In this article, in addition to robot experiments, we also emphasize the biological support for our proposal and provide a detailed exposition of the biological background and its relevance for the hypothesis that (certain) emotions may emerge through computational mechanisms.
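A minimal sketch of the postulated energy model (all sizes and the stored pattern are invented): during Hopfield recall, each unit flip is counted as one unit of neural energy, so familiar stimuli cost little and unfamiliar ones cost much more.

```python
import numpy as np

rng = np.random.default_rng(1)

# Store one pattern with the Hebbian rule, then recall from a probe and count
# unit flips as the postulated "one unit of neural energy per state change".
N = 64
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def recall(probe, steps=10):
    state = probe.copy()
    flips = 0
    for _ in range(steps):
        for i in range(N):                       # asynchronous unit updates
            new = 1 if W[i] @ state >= 0 else -1
            if new != state[i]:
                flips += 1                       # one unit of neural energy
                state[i] = new
    return state, flips

# A probe close to the stored pattern costs little energy; a random one costs more.
noisy = pattern.copy(); noisy[:4] *= -1
_, cheap = recall(noisy)
_, costly = recall(rng.choice([-1, 1], size=N))
behavior = "engaging" if cheap < costly else "aversive"
```

With this rule the energy consumed is determined entirely by the incoming stimulus, matching the abstract's premise that familiar inputs trigger engaging behavior and costly ones aversive behavior.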

    On the co-absence of input terms in higher order neuron representation of Boolean functions

    No full text
    Boolean functions (BFs) can be represented by polynomial functions when −1 and +1 are used to represent true and false, respectively. The coefficients of the representing polynomial can be obtained by exact interpolation given the truth table of the BF. A more parsimonious representation can be obtained with the so-called polynomial sign representation, where exact interpolation is relaxed so that only the sign of the polynomial needs to match the BF value of true or false. This corresponds exactly to the higher order neuron, or sigma-pi unit, model of biological neurons. It is of interest to know the minimal set of monomials, or input lines, that is sufficient to represent a BF. In this study, we approach the problem by investigating the (small) subsets of monomials that cannot be absent as a whole from the representation of a given BF. With numerical investigations, we study low-dimensional BFs and introduce a graph representation to visually describe whether two-element monomial subsets can be absent from any sign representation. Finally, we prove that for any n-variable BF, a three-element monomial set cannot be absent as a whole if and only if all the pairs from that set have the same property. The results and direction taken in the study may lead to more efficient algorithms for finding higher order neuron representations with close-to-minimal input terms for Boolean functions.
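A small sketch of the sign-representation condition (the example function and names are illustrative, not from the paper): a polynomial sign-represents a BF over the ±1 encoding when its sign agrees with the function value at every input point.

```python
from itertools import product
from math import prod

# Check whether a polynomial, given as {monomial index tuple: coefficient},
# sign-represents a Boolean function over {-1, +1}^n (inputs and outputs
# both encoded as -1/+1, as in the abstract's convention).
def sign_represents(poly, f, n):
    for x in product((-1, 1), repeat=n):
        val = sum(c * prod(x[i] for i in mono) for mono, c in poly.items())
        if val == 0 or (val > 0) != (f(x) > 0):
            return False          # a tie or a sign mismatch disqualifies it
    return True

# 3-variable majority: the three linear monomials suffice (3 terms, not 2**3).
maj = lambda x: 1 if sum(x) > 0 else -1
three_terms = {(0,): 1, (1,): 1, (2,): 1}
ok = sign_represents(three_terms, maj, 3)
```

Here the monomial set {x0, x1, x2} cannot shrink further for majority, illustrating the kind of "cannot be absent as a whole" question the paper studies.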

    Robotic grasping and manipulation through human visuomotor learning

    No full text
    A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on the development of a framework which exploits the human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop, where he/she can intuitively control the robot and, by practice, learn to perform the target task with the robot. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to autonomously perform the task. First, we introduce this framework with the ball-swapping task, where a robot hand has to swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In the experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping with the robot. The collected data are then analyzed to infer the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.
    The Global COE Program, Center of Human-Friendly Robotics Based on Cognitive Neuroscience, Ministry of Education, Culture, Sports, Science and Technology, Japan

    Simultaneous human-robot adaptation for effective skill transfer

    No full text
    In this paper, we propose and implement a human-in-the-loop robot skill synthesis framework that involves simultaneous adaptation of the human and the robot. In this framework, the human demonstrator learns to control the robot in real time to make it perform a given task. At the same time, the robot learns from the human-guided control, creating a non-trivial coupled dynamical system. The research question we address is how this system can be tuned to facilitate faster skill transfer or to improve the performance level of the transferred skill. In the current paper, we report our initial work on the latter. At the beginning of the skill transfer session, the human demonstrator controls the robot exclusively, as in teleoperation. As the task performance improves, the robot takes an increasingly larger share of the control, eventually reaching full autonomy. The proposed framework is implemented and shown to work on a physical cart-pole setup. To assess whether simultaneous learning has an advantage over standard sequential learning (where the robot learns from observing the human but does not interfere with the control), experiments with two groups of subjects were performed. The results indicate that the final autonomous controller obtained via simultaneous learning has a higher performance, measured as the average deviation from the upright posture of the pole.
    European Commission
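A minimal sketch of the control-sharing schedule described above (the blending rule, threshold, and step size are invented for illustration, not the paper's actual controller): the command sent to the robot blends human and robot outputs, with the robot's share growing from pure teleoperation toward full autonomy as performance improves.

```python
# Blend human and robot commands; alpha = 0 is pure teleoperation,
# alpha = 1 is full autonomy.
def blended_command(u_human, u_robot, alpha):
    return (1.0 - alpha) * u_human + alpha * u_robot

# Hypothetical rule: give the robot a larger share once performance is high.
def update_share(alpha, performance, threshold=0.8, step=0.05):
    if performance > threshold:
        alpha = min(1.0, alpha + step)
    return alpha

alpha = 0.0
for perf in [0.5, 0.7, 0.85, 0.9, 0.95]:   # invented per-trial performance scores
    alpha = update_share(alpha, perf)
u = blended_command(u_human=1.0, u_robot=-1.0, alpha=alpha)
```

The coupling is what makes the system non-trivial: the human adapts to a robot whose share of control is itself changing in response to the human's performance.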

    Human-robot co-working by HMM-based 3D human motion recognition

    No full text
    We offer a system for human-robot cooperation with natural communication, in order to make the use of robots in human-interactive tasks easier and more effective. The system includes 3D motion-capture cameras and motion recognition using Hidden Markov Models (HMMs). We implemented the solution in simulation, using real data collected for this experiment.
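A toy sketch of the HMM recognition step (two invented two-state models over symbolic observations, not the paper's motion-capture features): each candidate gesture model scores the observation sequence with the scaled forward algorithm, and the most likely model wins.

```python
import numpy as np

# Scaled forward algorithm: log-likelihood of an observation sequence under a
# discrete HMM (pi: initial, A: transition, B: emission probabilities).
def forward_loglik(pi, A, B, obs):
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()   # rescale to avoid numerical underflow
    return loglik

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
models = {
    "wave":  np.array([[0.9, 0.1], [0.9, 0.1]]),   # emits symbol 0 most often
    "point": np.array([[0.1, 0.9], [0.1, 0.9]]),   # emits symbol 1 most often
}
obs = [0, 0, 1, 0, 0]
best = max(models, key=lambda m: forward_loglik(pi, A, models[m], obs))
```

In the real system the observations would be features extracted from the 3D motion-capture stream rather than hand-picked symbols.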

    Minimal sign representation of Boolean functions: algorithms and exact results for low dimensions

    No full text
    Boolean functions (BFs) are central in many fields of engineering and mathematics, such as cryptography, circuit design, and combinatorics. Moreover, they provide a simple framework for studying neural computation mechanisms of the brain. Many representation schemes for BFs exist to satisfy the needs of the domains they are used in. In neural computation, it is of interest to know how many input lines a neuron would need to represent a given BF. A common BF representation used to study this is the so-called polynomial sign representation, where −1 and +1 are associated with true and false, respectively. The polynomial is treated as a real-valued function evaluated at the ±1-encoded inputs, and the sign of the polynomial is then taken as the function value. The number of input lines for the modeled neuron is exactly the number of terms in the polynomial. This letter investigates the minimum number of terms, that is, the minimum threshold density, that is sufficient to represent a given BF, and more generally aims to find the maximum of this quantity over all BFs in a given dimension. With this work, exact results for four- and five-variable BFs are obtained for the first time, and strong bounds for six-variable BFs are derived. In addition, some connections are derived between the sign representation framework and bent functions, which are generally studied for their desirable cryptographic properties.

    Algorithms for obtaining parsimonious higher order neurons

    No full text
    Most neurons in the central nervous system exhibit all-or-none firing behavior. This makes Boolean functions (BFs) tractable candidates for representing the computations performed by neurons, especially at finer time scales, even though BFs may fail to capture some of the richness of neuronal computation, such as temporal dynamics. One biologically plausible way to realize BFs is to compute a weighted sum of products of inputs and pass it through a Heaviside step function. This representation is called a Higher Order Neuron (HON). A HON can trivially represent any n-variable BF with 2^n product terms. Several algorithms have been proposed for obtaining representations with fewer product terms. In this work, we propose improvements over previous algorithms for obtaining parsimonious HON representations and present numerical comparisons. In particular, we improve the algorithm proposed by Sezener and Oztop [1], cutting down its time complexity drastically, and develop a novel hybrid algorithm by combining metaheuristic search with the deterministic algorithm of Oztop.
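A minimal sketch of the HON model itself (the two-input example is invented): a Heaviside step over a weighted sum of input products. Under the −1 = true encoding used in these abstracts, XOR needs only the single product term x0*x1 rather than the trivial 2^2 terms.

```python
from itertools import product
from math import prod

# Higher Order Neuron: Heaviside step over a weighted sum of input products.
# Monomials are index tuples; () would be the constant term.
def hon(weights, x):
    s = sum(w * prod(x[i] for i in mono) for mono, w in weights.items())
    return 1 if s >= 0 else 0   # Heaviside step

# XOR over {-1,+1} inputs: x0*x1 is -1 exactly when the inputs differ, so a
# single negatively weighted product term makes the neuron fire on XOR-true.
xor_hon = {(0, 1): -1.0}
outputs = {x: hon(xor_hon, x) for x in product((-1, 1), repeat=2)}
```

The parsimony question the paper studies is exactly this: how few product terms (here, one) suffice for a given BF.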