
    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare the different approaches to learning and encoding movements with dynamical systems that the AMARSi partners have developed (both previously and during the first six months of the project), and (2) to analyze their suitability as adaptive modules, i.e. as building blocks for the complete architecture to be developed in the project. The document presents a total of eight approaches in two groups: modules for discrete movements (i.e. with a clear goal at which the movement stops) and modules for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with illustrative simulation results. Key characteristics such as the type of dynamical behavior, the learning algorithm, generalization properties, and stability analysis are then discussed for each approach. Finally, we compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
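
    A minimal sketch of the first family above, assuming a standard discrete Dynamic Movement Primitive formulation; the gains, the canonical system, and the forcing term f are illustrative textbook choices, not the deliverable's exact equations:

```python
# Sketch: a 1-D discrete dynamical-systems module for movements that stop at
# a clear goal. tau * dv/dt = alpha * (beta * (g - y) - v) + f(x).
import numpy as np

def dmp_rollout(y0, goal, f, tau=1.0, alpha=25.0, beta=6.25,
                alpha_x=8.0, dt=0.001, T=1.0):
    y, v, x = y0, 0.0, 1.0              # position, velocity, phase
    traj = []
    for _ in range(int(T / dt)):
        fx = f(x) * x * (goal - y0)     # phase-gated, goal-scaled forcing term
        v += dt / tau * (alpha * (beta * (goal - y) - v) + fx)
        y += dt / tau * v
        x += dt / tau * (-alpha_x * x)  # canonical system decays to 0
        traj.append(y)
    return np.array(traj)

# With f == 0 the module is a stable point attractor: the movement stops at
# the goal, the defining property of the "discrete" group of modules.
traj = dmp_rollout(y0=0.0, goal=1.0, f=lambda x: 0.0)
print(traj[-1])  # ~1.0
```

    A rhythmic module would replace the decaying phase variable with a limit-cycle oscillator, so the learned pattern repeats instead of terminating.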

    Learning Human-Robot Collaboration Insights through the Integration of Muscle Activity in Interaction Motion Models

    Recent progress in human-robot collaboration makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (ProMPs) model human trajectories through motion-capture systems. However, such a representation does not properly model tasks where similar motions handle different objects: under current approaches, the robot would not adapt its pose and dynamics for proper handling. We integrate Electromyography (EMG) into the Interaction ProMP framework and use muscular signals to augment the human observation representation. The contribution of our paper is increased task discernment when trajectories are similar but tools differ and require the robot to adjust its pose for proper handling. Interaction ProMPs are used with an augmented vector that integrates muscle activity. Augmented time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared: in one experiment only joint angles were recorded, while in the other EMG signals were additionally integrated, and task recognition was computed for both. Observation state vectors augmented with EMG signals were able to completely identify differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the ability of the system to recognize nuances in the tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration also opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field.

    Comment: 7 pages, 2 figures, 2 tables. As submitted to Humanoids 201
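
    The augmentation step lends itself to a short sketch. The following is a minimal illustration, assuming radial-basis features, ridge regression for per-demonstration weights, and Gaussian task models over those weights; the dimensions and helper names (`rbf_features`, `fit_weights`, `recognize`) are hypothetical, not the paper's implementation:

```python
# Sketch: stack joint angles and EMG channels into one observation vector,
# project each time-normalized demonstration onto basis functions, and
# recognize the task by the likelihood of the fitted weights.
import numpy as np

def rbf_features(T, n_basis=15, width=0.02):
    z = np.linspace(0, 1, T)[:, None]            # normalized time
    c = np.linspace(0, 1, n_basis)[None, :]      # basis centers
    phi = np.exp(-(z - c) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)  # (T, n_basis)

def fit_weights(traj_joint, traj_emg, lam=1e-6):
    """Ridge-regress an augmented [joints | EMG] trajectory onto the basis."""
    aug = np.hstack([traj_joint, traj_emg])      # (T, d_joint + d_emg)
    phi = rbf_features(aug.shape[0])
    w = np.linalg.solve(phi.T @ phi + lam * np.eye(phi.shape[1]), phi.T @ aug)
    return w.ravel()                             # one weight vector per demo

def recognize(w, task_models):
    """Pick the task whose Gaussian over weights best explains w, e.g.
    task_models = {"hand_over_tool": (mu1, cov1), "hand_over_box": (mu2, cov2)}."""
    def loglik(mu, cov):
        d = w - mu
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.linalg.slogdet(cov)[1])
    return max(task_models, key=lambda k: loglik(*task_models[k]))
```

    When two tasks share nearly identical joint trajectories, the EMG columns of the augmented vector are what separates their weight distributions, which is the effect the paper measures.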

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual-reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, i.e. the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth-pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion-processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering, and it quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.

    Funding: Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
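
    The attractor/repeller interaction can be sketched as a one-dimensional heading dynamic, in the spirit of behavioral steering-dynamics models; the gains and decay constants below are illustrative assumptions, not the model's fitted parameters:

```python
# Sketch: the goal attracts the heading angle phi while each obstacle repels
# it, with repulsion fading with angular error and with obstacle distance.
import numpy as np

def heading_rate(phi, goal_angle, obstacles, k_g=3.0, k_o=6.0,
                 c_d=0.5, c_a=1.0):
    """d(phi)/dt for a goal angle and a list of (angle, distance) obstacles."""
    dphi = -k_g * (phi - goal_angle)  # goal acts as an attractor of heading
    for psi_o, d_o in obstacles:
        err = phi - psi_o             # obstacles act as repellers
        dphi += k_o * err * np.exp(-c_a * abs(err)) * np.exp(-c_d * d_o)
    return dphi

# Steer toward a goal at 0 rad while an obstacle sits at +0.2 rad, 3 m away.
phi = 0.0
for _ in range(200):
    phi += 0.05 * heading_rate(phi, goal_angle=0.0, obstacles=[(0.2, 3.0)])
print(phi)  # settles near -0.1 rad: a detour to the far side of the obstacle
```

    Route selection emerges from such dynamics because the heading settles into whichever attractor basin the goal and obstacle terms jointly carve out.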

    An oscillatory interference model of grid cell firing

    We expand upon our proposal that the oscillatory interference mechanism proposed for the phase-precession effect in place cells underlies the grid-like firing pattern of dorsomedial entorhinal grid cells (O'Keefe and Burgess (2005) Hippocampus 15:853-866). The original one-dimensional interference model is generalized to an appropriate two-dimensional mechanism. Specifically, dendritic subunits of layer II medial entorhinal stellate cells provide multiple linear interference patterns along different directions, with their product determining the firing of the cell. Connection of appropriate speed- and direction-dependent inputs onto dendritic subunits could result from an unsupervised learning rule which maximizes postsynaptic firing (e.g. competitive learning). These inputs cause the intrinsic oscillation of subunit membrane potential to increase above theta frequency by an amount proportional to the animal's speed of running in the "preferred" direction. The phase difference between this oscillation and a somatic input at theta frequency essentially integrates velocity, so that the interference of the two oscillations reflects distance traveled in the preferred direction. The overall grid pattern is maintained in environmental location by phase reset of the grid cell by place cells receiving sensory input from the environment, and from environmental boundaries in particular. We also outline possible variations on the basic model, including the generation of grid-like firing via the interaction of multiple cells rather than via multiple dendritic subunits. Predictions of the interference model are given for the frequency composition of EEG power spectra and for temporal autocorrelograms of grid cell firing as functions of the speed and direction of running and the novelty of the environment. (C) 2007 Wiley-Liss, Inc.
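
    The two-dimensional mechanism reduces to a compact computation: each subunit contributes an interference pattern along one preferred direction, and the cell fires on the (thresholded) product. A minimal sketch, with the spatial scale b and the 60-degree direction spacing as the usual modeling assumptions rather than fitted values:

```python
# Sketch: grid firing as the thresholded product of three linear interference
# patterns along directions 60 degrees apart; since the interference of each
# subunit's oscillation with somatic theta integrates velocity, its envelope
# depends only on displacement along that subunit's preferred direction.
import numpy as np

def grid_rate(x, y, b=0.3, theta0=0.0, thresh=0.1):
    """Firing rate at position (x, y); b is the grid spatial scale in meters."""
    dirs = theta0 + np.deg2rad([0, 60, 120])  # preferred directions
    rate = 1.0
    for th in dirs:
        d = x * np.cos(th) + y * np.sin(th)   # distance traveled along th
        rate *= 0.5 * (1 + np.cos(2 * np.pi * d / b))  # interference envelope
    return max(rate - thresh, 0.0)

# Evaluating over positions yields the hexagonal (grid-like) firing pattern.
xs = np.linspace(0, 1, 100)
rates = [[grid_rate(x, y) for x in xs] for y in xs]
```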

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how, and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification-image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
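
    The classification-image logic itself is simple to sketch: random masks sample the stimulus on each trial, and the diagnostic information is estimated by contrasting masks that led to correct versus incorrect categorizations. The mask model and trial counts below are illustrative stand-ins, not the study's data:

```python
# Sketch: a reverse-correlation (classification-image) estimate of which
# stimulus pixels drive correct categorization.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
masks = rng.random((1000, H, W))  # per-trial random sampling masks
correct = rng.random(1000) > 0.5  # stand-in behavioral responses

# Classification image: mean mask on correct trials minus incorrect trials.
# Pixels with large positive values carried information useful for behavior.
cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
```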

    Robot control based on qualitative representation of human trajectories

    A major challenge for future social robots is the high-level interpretation of human motion and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of the qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are outlined.
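
    A minimal sketch of the quantitative-to-qualitative step, assuming a basic QTC-style reduction of successive positions to approach/stable/recede symbols; the threshold and the example mapping to a command are illustrative, not the paper's calculus or rule base:

```python
# Sketch: reduce successive (x, y) coordinates of human and robot to
# qualitative symbols ('-': approaching, '0': stable, '+': receding).
import numpy as np

def qtc_symbol(p_prev, p_now, q, eps=0.01):
    """Is point p moving toward (-), away from (+), or steady (0) w.r.t. q?"""
    d_prev = np.linalg.norm(np.subtract(q, p_prev))
    d_now = np.linalg.norm(np.subtract(q, p_now))
    if d_now < d_prev - eps:
        return '-'
    if d_now > d_prev + eps:
        return '+'
    return '0'

# Human approaches the robot while the robot holds position -> ('-', '0');
# an inference engine could map this qualitative state to a control command
# such as "slow down and yield".
human_prev, human_now = (2.0, 0.0), (1.8, 0.0)
robot_prev, robot_now = (0.0, 0.0), (0.0, 0.0)
state = (qtc_symbol(human_prev, human_now, robot_now),
         qtc_symbol(robot_prev, robot_now, human_now))
print(state)  # ('-', '0')
```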