29 research outputs found

    On the development of intention understanding for joint action tasks

    Our everyday, common-sense ability to discern the intentions of others from their motions is fundamental for successful cooperation in joint action tasks. In this modeling study we address the question of how the ability to understand complex goal-directed action sequences may develop during learning and practice. The model architecture reflects recent neurophysiological findings that suggest the existence of chains of mirror neurons associated with specific goals. These chains may be activated by external events to simulate the consequences of observed actions. Using the mathematical framework of dynamical neural fields to model the dynamics of different neural populations representing goals, action means and contextual cues, we show that such chains may develop based on a local, Hebbian learning rule. We validate the functionality of the learned model in a joint action task in which an observer robot infers the intention of a partner in order to choose a complementary action sequence. Fundação para a Ciência e a Tecnologia (FCT); European Commission (EC).
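The two ingredients named in the abstract — a dynamical (Amari-type) neural field whose lateral interactions sustain a localized activity bump, and a local Hebbian rule linking co-active populations — can be sketched as follows. This is a minimal illustration, not the paper's implementation; all parameter values and the Heaviside output function are assumptions.

```python
import numpy as np

def interaction_kernel(size, a_exc=1.0, sigma_exc=2.0, g_inh=0.5):
    """Lateral interactions: local Gaussian excitation over global inhibition."""
    x = np.arange(size) - size // 2
    return a_exc * np.exp(-x**2 / (2.0 * sigma_exc**2)) - g_inh

def simulate_field(external_input, steps=200, dt=0.1, tau=1.0, h=-1.0):
    """Amari field dynamics: tau * du/dt = -u + h + S(x) + w * f(u)."""
    size = external_input.shape[0]
    w = interaction_kernel(size)
    u = np.full(size, h)                        # field starts at resting level h
    for _ in range(steps):
        f = (u > 0).astype(float)               # Heaviside output rate
        lateral = np.convolve(f, w, mode="same")
        u += dt / tau * (-u + h + external_input + lateral)
    return u

def hebbian_update(W, pre, post, lr=0.05):
    """Local Hebbian rule: strengthen connections between co-active populations."""
    return W + lr * np.outer(post, pre)

# A localized cue drives a stable activity bump in the field; co-activation
# of, say, a "goal" field and an "action" field would then be linked by the
# Hebbian rule above, building up a chain over repeated trials.
cue = np.zeros(100)
cue[45:55] = 3.0
u = simulate_field(cue)
```

The balance of narrow excitation against broad inhibition is what confines the bump to the cued region while suppressing the rest of the field.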

    Neuronal Chains for Actions in the Parietal Lobe: A Computational Model

    The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, the hand and the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention).

    Sentence Processing: Linking Language to Motor Chains

    A growing body of evidence in cognitive science and neuroscience points towards the existence of a deep interconnection between cognition, perception and action. According to this embodied perspective, language is grounded in the sensorimotor system and language understanding is based on a mental simulation process (Jeannerod, 2007; Gallese, 2008; Barsalou, 2009). This means that the comprehension of action words and sentences recruits the same perception, action, and emotion mechanisms engaged during interaction with objects. Among the neural underpinnings of this simulation process, an important role is played by a sensorimotor matching system known as the mirror neuron system (Rizzolatti and Craighero, 2004). Despite a growing number of studies, the precise dynamics underlying the relation between language and action are not yet well understood. In fact, experimental results are not always consistent: some studies report that language processing interferes with action execution, while others find facilitation. In this work we present a detailed neural network model capable of reproducing experimentally observed influences of the processing of action-related sentences on the execution of motor sequences. The proposed model is based on three main points. The first is that the processing of action-related sentences causes the resonance of motor and mirror neurons encoding the corresponding actions. The second is that there exists a varying degree of crosstalk between neuronal populations depending on whether they encode the same motor act, the same effector or the same action goal. The third is that neuronal populations' internal dynamics, which result from the combination of multiple processes taking place at different time scales, can facilitate or interfere with successive activations of the same or of partially overlapping pools.
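The third point — that the same crosstalk can either speed up or slow down a subsequent activation — can be illustrated with a toy leaky-integrator population. This is not the paper's network; it is a minimal sketch in which sentence processing is assumed to leave either residual activation (shared motor act: facilitation) or residual inhibition from a refractory, overlapping pool (interference). All names and parameter values are illustrative.

```python
def steps_to_threshold(residual_activation=0.0, residual_inhibition=0.0,
                       drive=1.0, dt=0.05, tau=1.0, theta=0.8,
                       max_steps=10_000):
    """Leaky motor population charged toward a response threshold.

    The population starts from the net residual state left by prior
    sentence processing and integrates a constant motor drive; the
    return value (steps to threshold) stands in for reaction time.
    """
    a = residual_activation - residual_inhibition
    for step in range(max_steps):
        if a >= theta:
            return step
        a += dt / tau * (drive - a)   # leaky integration toward the drive
    return max_steps

baseline   = steps_to_threshold()
primed     = steps_to_threshold(residual_activation=0.3)   # same motor act
refractory = steps_to_threshold(residual_inhibition=0.3)   # overlapping pool
# primed < baseline < refractory: the same crosstalk mechanism yields
# facilitation or interference depending on the sign of the residual.
```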

    Learning through imitation : a biological approach to robotics

    Doctoral thesis in Industrial Electronics (field of Automation and Control). Fundação para a Ciência e a Tecnologia (FCT) - LEMI: "Learning to read the motor intention of the other: towards socially intelligent robots" (POCI/V.5/A0119/2005). European projects: IST-2000-29689 ARTESIMIT "Artefact Structural Learning Through Imitation"; IST-003747-IP JAST "Joint Action Science and Technology".

    A Programmer-Interpreter Neural Network Architecture for Prefrontal Cognitive Control

    There is wide consensus that the prefrontal cortex (PFC) is able to exert cognitive control on behavior by biasing processing toward task-relevant information and by modulating response selection. This idea is typically framed in terms of top-down influences within a cortical control hierarchy, where prefrontal-basal ganglia loops gate multiple input-output channels, which in turn can activate or sequence motor primitives expressed in (pre-)motor cortices. Here we advance a new hypothesis, based on the notion of programmability and an interpreter-programmer computational scheme, on how the PFC can flexibly bias the selection of sensorimotor patterns depending on internal goal and task contexts. In this approach, multiple elementary behaviors representing motor primitives are expressed by a single multi-purpose neural network, which is seen as a reusable area of "recycled" neurons (interpreter). The PFC thus acts as a "programmer" that, without modifying the network connectivity, feeds the interpreter network with specific input parameters encoding the programs (corresponding to network structures) to be interpreted by the (pre-)motor areas. Our architecture is validated in a standard test for executive function: the 1-2-AX task. Our results show that this computational framework provides a robust, scalable and flexible scheme that can be iterated at different hierarchical layers, supporting the realization of multiple goals. We discuss the plausibility of the "programmer-interpreter" scheme to explain the functioning of prefrontal-(pre)motor cortical hierarchies.
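For readers unfamiliar with the benchmark, the 1-2-AX task's input-output mapping can be written down directly; an architecture like the one above must reproduce it by maintaining the outer-loop digit and the previous letter in working memory. This reference implementation is a sketch of the standard task definition, not of the paper's model.

```python
def target_responses(stimuli):
    """Reference input-output mapping for the 1-2-AX task.

    A digit (outer loop) selects which inner-loop pair is relevant:
    after a 1, respond 'R' to an X immediately preceded by an A;
    after a 2, respond 'R' to a Y immediately preceded by a B;
    respond 'L' to every other stimulus.
    """
    rule, prev, out = None, None, []
    for s in stimuli:
        if s in "12":
            rule = s            # outer-loop context, held until the next digit
            out.append("L")
        elif (rule == "1" and prev == "A" and s == "X") or \
             (rule == "2" and prev == "B" and s == "Y"):
            out.append("R")
        else:
            out.append("L")
        prev = s                # inner-loop memory of the last stimulus
    return "".join(out)

# e.g. target_responses("1BXAX2BYAX") -> "LLLLRLLRLL"
```

The task is a useful test of hierarchical control precisely because the same pair (A-X or B-Y) is a target under one outer-loop context and a non-target under the other.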

    Time, Language and Action - A Unified Long-Term Memory Model for Sensory-Motor Chains and Word Schemata

    Action and language are known to be organized as closely related brain subsystems. An Italian CNR project implemented a computational neural model in which the ability to form chains of goal-directed actions and chains of linguistic units relies on a unified memory architecture obeying the same organizing principles.

    Time course of the activity (rasters and histograms) of 4 neurons recorded in IPL.

    Each one codes a specific motor act, but is active only when the monkey executes the "grasping to eat" sequence. Both rasters and histograms are aligned with the moment in which the monkey touches the object. Beneath the histograms a schematic representation of the corresponding neuronal chain is shown.