112,997 research outputs found

    Real-Time Motion Adaptation using Relative Distance Space Representation


    Gaussian-Process-based Robot Learning from Demonstration

    Endowed with higher levels of autonomy, robots are required to perform increasingly complex manipulation tasks. Learning from demonstration is emerging as a promising paradigm for transferring skills to robots. It allows task constraints to be learned implicitly by observing the motion executed by a human teacher, which can enable adaptive behavior. We present a novel Gaussian-Process-based learning from demonstration approach. This probabilistic representation makes it possible to generalize over multiple demonstrations and to encode variability along the different phases of the task. In this paper, we address how Gaussian Processes can be used to effectively learn a policy from trajectories in task space. We also present a method to efficiently adapt the policy to fulfill new requirements and to modulate the robot's behavior as a function of task variability. The approach is illustrated through a real-world application using the TIAGo robot. Comment: 8 pages, 10 figures
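
    The core ingredient — regressing task-space trajectories against a phase variable with a Gaussian Process so that the predictive variance reflects variability across demonstrations — can be sketched as follows. This is a minimal illustration assuming a time-normalized scalar phase, an RBF kernel, and scikit-learn as the GP implementation; it is not the paper's exact formulation.

```python
# Minimal sketch: a GP policy learned from multiple task-space demonstrations.
# Assumptions (not from the paper): scalar time-normalized phase as input,
# RBF + white-noise kernel, scikit-learn as the GP implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_lfd_gp(demos):
    """demos: list of (N_i, D) arrays, each one demonstrated task-space trajectory."""
    X, Y = [], []
    for traj in demos:
        phase = np.linspace(0.0, 1.0, len(traj))          # time-normalized phase
        X.append(phase[:, None])
        Y.append(traj)
    kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-3)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(
        np.vstack(X), np.vstack(Y))

def rollout(gp, n_steps=100):
    """Mean trajectory plus per-phase standard deviation (task variability)."""
    s = np.linspace(0.0, 1.0, n_steps)[:, None]
    mean, std = gp.predict(s, return_std=True)
    return mean, std      # std can be used to modulate the robot's compliance
```

    In such a sketch, the phase-dependent predictive standard deviation is the natural quantity for modulating the robot's behavior as a function of task variability, in the spirit of the abstract above.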

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the result gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
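
    The spatial manipulation used to test the Gestalt account — displacing each rectangle radially along the imaginary spoke joining it to central fixation — is easy to make concrete. The sketch below assumes positions expressed in degrees of visual angle with fixation at the origin; the abstract does not specify the display geometry.

```python
# Sketch of the spoke-shift manipulation: move each rectangle radially toward
# or away from central fixation. Coordinates in degrees of visual angle with
# fixation at the origin are an assumption; the abstract gives no geometry.
import numpy as np

def shift_along_spokes(positions, shift_deg=1.0, outward=True):
    positions = np.asarray(positions, dtype=float)             # (n, 2) array
    radii = np.linalg.norm(positions, axis=1, keepdims=True)
    unit = positions / radii                                   # radial unit vectors
    return positions + (1 if outward else -1) * shift_deg * unit

# Example: eight rectangles on a 4-degree ring, each pushed 1 degree outward
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
ring = 4.0 * np.column_stack([np.cos(angles), np.sin(angles)])
second_frame = shift_along_spokes(ring, shift_deg=1.0, outward=True)
```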

    Optimal measurement of visual motion across spatial and temporal scales

    Sensory systems use limited resources to mediate the perception of a great variety of objects and events. Here a normative framework is presented for exploring how the problem of efficient allocation of resources can be solved in visual perception. Starting with a basic property of every measurement, captured by Gabor's uncertainty relation about the location and frequency content of signals, prescriptions are developed for the optimal allocation of sensors for reliable perception of visual motion. This study reveals that a large-scale characteristic of human vision (the spatiotemporal contrast sensitivity function) is similar to the optimal prescription, and it suggests that some previously puzzling phenomena of visual sensitivity, adaptation, and perceptual organization have simple principled explanations. Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional and Intelligent Paradigms, Intelligent Systems Reference Library, Springer-Verlag, Berlin
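
    The Gabor uncertainty relation invoked here bounds the product of a signal's spread in time (or space) and its spread in frequency below by 1/(4π), with equality for Gaussian-windowed signals. The short numerical check below illustrates the bound; the sampling grid and Gaussian test signal are assumptions for illustration, not material from the chapter.

```python
# Numerical check of Gabor's uncertainty relation: RMS width in time times
# RMS width in frequency is bounded below by 1/(4*pi), with equality for a
# Gaussian. The sampling grid and test signal are illustrative assumptions.
import numpy as np

def uncertainty_product(signal, dt):
    t = np.arange(len(signal)) * dt
    p_t = np.abs(signal) ** 2
    p_t /= p_t.sum()
    t0 = (t * p_t).sum()
    sigma_t = np.sqrt(((t - t0) ** 2 * p_t).sum())

    spectrum = np.fft.fft(signal)
    f = np.fft.fftfreq(len(signal), dt)
    p_f = np.abs(spectrum) ** 2
    p_f /= p_f.sum()
    f0 = (f * p_f).sum()
    sigma_f = np.sqrt(((f - f0) ** 2 * p_f).sum())
    return sigma_t * sigma_f

t = np.linspace(-1.0, 1.0, 4001)
gaussian = np.exp(-t ** 2 / (2 * 0.05 ** 2))
print(uncertainty_product(gaussian, t[1] - t[0]))   # ~0.0796 = 1/(4*pi)
```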

    Two brains in action: joint-action coding in the primate frontal cortex

    Daily life often requires the coordination of our actions with those of another partner. After sixty years (1968-2018) of behavioral neurophysiology of motor control, the neural mechanisms that allow such coordination in primates remain unknown. We studied this issue by recording cell activity simultaneously from the dorsal premotor cortex (PMd) of two male interacting monkeys trained to coordinate their hand forces to achieve a common goal. We found a population of 'joint-action cells' that discharged preferentially when the monkeys cooperated in the task. This modulation was predictive in nature, since in most cells the neural activity led in time the changes in both the monkey's own and the partner's behavior. These neurons encoded joint performance more accurately than 'canonical action-related cells', which were activated by the action per se, regardless of the individual vs. interactive context. A decoding of joint action was obtained by combining the activities of the two brains, using cells whose directional properties were distinct from those associated with the 'solo' behaviors. Action-observation-related activity, studied when one monkey observed the consequences of the partner's behavior (i.e., the cursor's motion on the screen), did not sharpen the accuracy of the 'joint-action cells' representation, suggesting that it plays no major role in encoding joint action. When monkeys performed with a non-interactive partner, such as a computer, the 'joint-action cells' representation of the "other" (non-cooperative) behavior was significantly degraded. These findings provide evidence of how premotor neurons integrate the time-varying representation of the monkey's own action with that of a co-actor, thus offering a neural substrate for successful visuo-motor coordination between individuals. SIGNIFICANCE STATEMENT: The neural bases of inter-subject motor coordination were studied by recording cell activity simultaneously from the frontal cortex of two interacting monkeys trained to coordinate their hand forces to achieve a common goal. We found a new class of cells, preferentially active when the monkeys cooperated rather than when the same action was performed individually. These 'joint-action neurons' provided a neural representation of joint behavior that was far more accurate than that provided by canonical action-related cells, which were modulated by the action per se regardless of the individual/interactive context. A neural representation of joint performance was obtained by combining the activity recorded from the two brains. Our findings offer the first evidence concerning the neural mechanisms underlying interactive visuo-motor coordination between co-acting agents.
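
    As a rough illustration of the cross-brain decoding step — classifying the joint behavior from the simultaneously recorded activity of the two animals' premotor populations — one could fit a cross-validated linear classifier on concatenated trial-wise firing-rate vectors. The data shapes, labels, and classifier below are assumptions for the sketch, not the authors' analysis pipeline.

```python
# Sketch of joint-action decoding from two simultaneously recorded populations.
# rates_a, rates_b: (n_trials, n_cells) firing-rate matrices from monkey A and B;
# labels: per-trial joint-action condition (e.g., force-direction combination).
# The classifier and cross-validation scheme are assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def joint_decoding_accuracy(rates_a, rates_b, labels, folds=5):
    X = np.hstack([rates_a, rates_b])          # combine the two brains' activity
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X, labels, cv=folds).mean()

def single_brain_accuracy(rates, labels, folds=5):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, rates, labels, cv=folds).mean()
```

    Comparing joint_decoding_accuracy against single_brain_accuracy for each animal mirrors, in spirit, the comparison between combined-brain and single-brain representations described above.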

    Method and apparatus for configuration control of redundant robots

    A method and apparatus for controlling a robot or manipulator configuration over the entire motion, based on augmentation of the manipulator forward kinematics, is disclosed. A set of kinematic functions is defined in Cartesian or joint space to reflect the desirable configuration that will be achieved in addition to the specified end-effector motion. The user-defined kinematic functions and the end-effector Cartesian coordinates are combined to form a set of task-related configuration variables as generalized coordinates for the manipulator. A task-based adaptive scheme is then utilized to directly control the configuration variables so as to achieve tracking of desired reference trajectories throughout the robot motion. This accomplishes the basic task of desired end-effector motion, while utilizing the redundancy to achieve any additional task through the desired time variation of the kinematic functions. The present invention can also be used for optimization of any kinematic objective function, or for satisfaction of a set of kinematic inequality constraints, as in an obstacle avoidance problem. In contrast to pseudoinverse-based methods, the configuration control scheme ensures cyclic motion of the manipulator, which is an essential requirement for repetitive operations. The control law is simple and computationally very fast, and requires neither the complex manipulator dynamic model nor the complicated inverse kinematic transformation. The configuration control scheme can alternatively be implemented in joint space.
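
    The augmentation idea can be illustrated at the differential-kinematics level: stacking the end-effector Jacobian on top of the Jacobian of the user-defined kinematic functions yields a square system that is solved directly for joint velocities, with no pseudoinverse. The sketch below is only that illustration; the function names, gains, and simple proportional law are assumptions, and the patented task-based adaptive scheme is not reproduced.

```python
# Illustrative velocity-level sketch of configuration control by kinematic
# augmentation: the m end-effector coordinates are augmented with (n - m)
# user-defined kinematic functions so the combined Jacobian is square.
# Function names, gains, and the proportional law are assumptions; the
# patent's task-based adaptive control law is not shown here.
import numpy as np

def config_control_step(q, x_des, phi_des, fwd_kin, jac_x, phi, jac_phi,
                        k_x=1.0, k_phi=1.0, dt=0.01):
    """One control step for an n-joint redundant manipulator.

    q        : (n,) joint angles
    x_des    : (m,) desired end-effector coordinates
    phi_des  : (n - m,) desired values of the kinematic functions
    fwd_kin  : q -> (m,) end-effector coordinates
    jac_x    : q -> (m, n) end-effector Jacobian
    phi      : q -> (n - m,) kinematic functions (e.g., elbow height)
    jac_phi  : q -> (n - m, n) Jacobian of the kinematic functions
    """
    error = np.concatenate([k_x * (x_des - fwd_kin(q)),
                            k_phi * (phi_des - phi(q))])
    J_aug = np.vstack([jac_x(q), jac_phi(q)])     # square (n, n) augmented Jacobian
    dq = np.linalg.solve(J_aug, error)            # singularities not handled here
    return q + dt * dq
```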

    Approaching stimuli bias attention in numerical space

    Increasing evidence suggests that common mechanisms underlie the direction of attention in physical space and in numerical space, along the mental number line. The small leftward bias (pseudoneglect) found in paper-and-pencil line bisection is also observed when participants ‘bisect’ number pairs, estimating (without calculating) the number midway between two others. Here we investigated the effect of stimulus motion on attention in numerical space. A two-frame apparent-motion paradigm manipulating stimulus size was used to produce the impression that pairs of numbers were approaching (size increase from first to second frame), receding (size decrease), or not moving (no size change). The magnitude of pseudoneglect increased for approaching numbers, even when the final stimulus size was held constant. This result is consistent with previous findings that pseudoneglect in numerical space (as in physical space) increases as stimuli are brought closer to the participant. It also suggests that the perception of stimulus motion modulates attention over the mental number line and provides further support for a connection between the neural representations of physical space and number.
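
    The bias measure underlying these results — how far each ‘midpoint’ estimate deviates from the true midpoint of the number pair, with negative values indicating a shift toward the smaller number — can be sketched as follows; the example data are invented purely for illustration.

```python
# Sketch of a number-bisection bias measure: signed deviation of each estimate
# from the true midpoint of the pair. A negative mean indicates pseudoneglect
# (estimates shifted toward the smaller number). Example data are invented.
import numpy as np

def bisection_bias(pairs, estimates):
    pairs = np.asarray(pairs, dtype=float)          # (n_trials, 2) number pairs
    estimates = np.asarray(estimates, dtype=float)  # (n_trials,) responses
    return estimates - pairs.mean(axis=1)           # signed deviation per trial

pairs = [(11, 19), (22, 36), (41, 57)]
approaching_bias = bisection_bias(pairs, [14.0, 28.0, 48.0]).mean()  # more negative
static_bias = bisection_bias(pairs, [14.7, 28.7, 48.7]).mean()
```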