
    A dynamic neural field approach to natural and efficient human-robot collaboration

    A major challenge in modern robotics is the design of autonomous robots that are able to cooperate with people in their daily tasks in a human-like way. We address the challenge of natural human-robot interactions by using the theoretical framework of dynamic neural fields (DNFs) to develop processing architectures that are based on neuro-cognitive mechanisms supporting human joint action. By explaining the emergence of self-stabilized activity in neuronal populations, dynamic field theory provides a systematic way to endow a robot with crucial cognitive functions such as working memory, prediction and decision making. The DNF architecture for joint action is organized as a large-scale network of reciprocally connected neuronal populations that encode in their firing patterns specific motor behaviors, action goals, contextual cues and shared task knowledge. Ultimately, it implements a context-dependent mapping from observed actions of the human onto adequate complementary behaviors that takes into account the inferred goal of the co-actor. We present results of flexible and fluent human-robot cooperation in a task in which the team has to assemble a toy object from its components. The present research was conducted in the context of the fp6-IST2 EU-IP Project JAST (proj. nr. 003747) and partly financed by the FCT grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001. We would like to thank Luis Louro, Emanuel Sousa, Flora Ferreira, Eliana Costa e Silva, Rui Silva and Toni Machado for their assistance during the robotic experiment.
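    The abstract does not spell out the underlying equations, but the self-stabilized activity it refers to is classically described by the Amari field equation. Below is a minimal, illustrative sketch of a one-dimensional dynamic neural field; the kernel shape, gains and input used here are assumptions for demonstration, not parameters from the JAST architecture.

```python
# Minimal 1-D dynamic neural field (Amari equation) sketch -- an illustration
# of the self-stabilized activity the abstract refers to, not the project
# code. All parameter values below are assumptions.
import numpy as np

def simulate_dnf(steps=400, n=181, dt=1.0, tau=10.0, h=-2.0):
    """tau * du/dt = -u + integral w(x - x') f(u(x')) dx' + S(x, t) + h"""
    x = np.linspace(-90, 90, n)        # field dimension, e.g. reach direction in degrees
    u = np.full(n, float(h))           # activation starts at the resting level h
    dx = x[:, None] - x[None, :]       # pairwise distances for the interaction kernel
    # Mexican-hat kernel: local excitation, broader surround inhibition
    w = 4.0 * np.exp(-dx**2 / (2 * 10.0**2)) - 1.5 * np.exp(-dx**2 / (2 * 30.0**2))
    S = 3.0 * np.exp(-(x - 20.0)**2 / (2 * 8.0**2))   # localized input, e.g. an observed cue
    spacing = x[1] - x[0]
    for t in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))            # sigmoid firing-rate function
        stim = S if t < steps // 2 else 0.0           # input is switched off halfway through
        u += (dt / tau) * (-u + (w @ f) * spacing + stim + h)
    return x, u

x, u = simulate_dnf()
# With strong enough lateral excitation, the activation bump at x ~ 20 outlasts
# the input -- the self-stabilized "working memory" behavior mentioned above.
print("peak at x =", x[np.argmax(u)], "with activation", round(float(u.max()), 2))
```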

    Supplementary Material for: Functional Correlates of Increasing Gestural Articulatory Fluency Using a Miniature Second-Language Approach

    Objectives: Gesture-based second languages have become an important tool in the rehabilitation of language-impaired subpopulations. Acquiring the ability to use manual gestures as a means to construct meaningful utterances places unique demands on the brain. This study identified changes in the blood oxygen level-dependent (BOLD) signal associated with the development of gestural fluency using a miniature second-language-based approach. Participants and Methods: Twelve healthy right-handed adults (19-31 years) were trained to produce sequences of meaningful gestures over a period of 2 weeks. Functional magnetic resonance imaging was used to identify brain regions involved in actual and imagined production of meaningful sentences both before (nonfluent production) and after (fluent production) practice. Results: Brain areas showing learning-dependent increases in activity associated with the development of fluency included sites associated with language articulation, while learning-related decreases in the BOLD signal were observed in cortical networks associated with motor imagery and native language processing. Conclusion: These findings provide novel insights regarding the neural basis of fluency that could inform the design of interventions for treating speech disorders characterized by the loss of fluency.

    Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca's area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT activity increased between Session 1 and Session 2, then left PT activity decreased from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca's area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similarly to first languages even when learned after the 'critical period'. The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance.

    A Distributed Left Hemisphere Network Active During Planning of Everyday Tool Use Skills

    Determining the relationship between mechanisms involved in action planning and/or execution is critical to understanding the neural bases of skilled behaviors, including tool use. Here we report findings from two fMRI studies of healthy, right-handed adults in which an event-related design was used to distinguish regions involved in planning (i.e. identifying, retrieving and preparing actions associated with a familiar tool's uses) versus executing tool use gestures with the dominant right (experiment 1) and non-dominant left (experiment 2) hands. For either limb, planning tool use actions activates a distributed network in the left cerebral hemisphere consisting of: (i) posterior superior temporal sulcus, along with proximal regions of the middle and superior temporal gyri; (ii) inferior frontal and ventral premotor cortices; (iii) two distinct parietal areas, one located in the anterior supramarginal gyrus (SMG) and another in posterior SMG and angular gyrus; and (iv) dorsolateral prefrontal cortex (DLPFC). With the exception of left DLPFC, adjacent and partially overlapping sub-regions of left parietal, frontal and temporal cortex are also engaged during action execution. We suggest that this left-lateralized network constitutes a neural substrate for the interaction of semantic and motoric representations upon which meaningful skills depend.

    Anatomical substrates of cooperative joint-action in a continuous motor task: Virtual lifting and balancing

    An emerging branch of social cognitive neuroscience attempts to unravel the critical cognitive mechanisms that enable humans to engage in joint action. In the current experiment, differences in brain activity between participants engaging in solitary action and joint action were identified using whole-brain fMRI while participants performed a virtual bar-balancing task either alone (S) or with the help of a partner in each of two separate joint-action conditions (isomorphic [J(i)] and non-isomorphic [J(n)]). Compared to performing the task alone, the BOLD signal was stronger in both joint-action conditions at specific sites in the human mirror system (MNS). This activation pattern may reflect the demand on participants to simulate the actions of others, integrate their own actions with those of their partners, and compute appropriate responses. Increasing inter-dependence (complementarity) of movements generated by cooperating individuals (J(n)>J(i)>S) was found to correlate with the BOLD signal in the right anterior node of the MNS (pars opercularis) and the area around the right temporoparietal junction (TPJ). These data are relevant to current debates concerning the role of right IFG in complementary action, as well as evolving theories of joint action.
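    The abstract does not describe the task implementation; the toy sketch below, with assumed first-order dynamics, controller gains and noise levels, is meant only to illustrate why the joint conditions impose an extra coordination demand relative to solo performance.

```python
# Toy sketch of a virtual bar-balancing paradigm. The original task
# parameters are not given in the abstract; the dynamics, gains and noise
# levels below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def balance_trial(joint=False, steps=500, dt=0.02, noise=0.05):
    """Lift a bar to a target height while keeping its tilt near zero.
    Solo: one agent drives both ends with a single command. Joint: each
    agent drives one end with its own (noisy) proportional controller."""
    left = right = 0.0                # heights of the two bar ends
    target = 1.0
    tilt_error = 0.0
    for _ in range(steps):
        if joint:
            # two independent controllers -> independent noise sources
            u_l = 2.0 * (target - left) + rng.normal(0, noise)
            u_r = 2.0 * (target - right) + rng.normal(0, noise)
        else:
            # one controller, one shared command for both ends
            u = 2.0 * (target - 0.5 * (left + right)) + rng.normal(0, noise)
            u_l = u_r = u
        left += u_l * dt
        right += u_r * dt
        tilt_error += abs(left - right) * dt
    return tilt_error

print("solo  tilt error:", round(balance_trial(joint=False), 4))
print("joint tilt error:", round(balance_trial(joint=True), 4))
# Independent noise in the two controllers makes the ends drift apart, so
# the joint condition accumulates tilt error the solo condition does not --
# a simplified stand-in for the coordination demand the J > S contrast isolates.
```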

    The role of inferior frontal and parietal areas in differentiating meaningful and meaningless object-directed actions

    Over the past two decades, single-cell recordings in primates and neuroimaging experiments in humans have uncovered the key properties of visuo-motor mirror neurons located in monkey premotor and parietal cortices, as well as homologous areas in the human inferior frontal and inferior parietal cortices which presumably house neurons with similar response properties. One of the most interesting claims regarding the human mirror neuron system (MNS) is that its activity reflects high-level action understanding. If this were the case, one would expect signal in the MNS to differentiate between meaningful and meaningless actions. In the current experiment we tested this prediction using a novel paradigm. Functional magnetic resonance images were collected while participants viewed (i) short films of object-directed actions (ODAs) which were either semantically meaningful (e.g. a hand pressing a stapler) or semantically meaningless (e.g. a foot pressing a stapler), (ii) short films of pantomimed actions and (iii) static pictures of objects. Consistent with the notion that the MNS represents high-level action understanding, meaningful and meaningless actions elicited BOLD signal differences at bilateral sites in the supramarginal gyrus (SMG) of the inferior parietal lobule (IPL), where we observed a double dissociation between BOLD response and the meaningfulness of actions. Comparison of superadditive responses in the inferior frontal gyrus (IFG) and IPL (supramarginal) regions revealed differential contributions to action understanding. These data further specify the role of specific components of the MNS in understanding object-directed actions.

    Internal model deficits impair joint action in children and adolescents with autism spectrum disorders

    Qualitative differences in social interaction and communication are diagnostic hallmarks of autism spectrum disorders (ASD). The present study investigated the hypothesis that impaired social interaction in ASD reflects a deficit in internally modelling the behavior of a co-actor. Children and adolescents with ASD and matched controls performed a computerized bar-balancing task in a solo condition (S) and together with another individual in two joint-action conditions (J2 and J4), in which they used either two or four hands to control the bar lift. Consistent with predictions derived from the 'internal modelling hypothesis', results from the J2 task indicated that ASD dyads were impaired in predicting the occurrence of their partner's response and failed to coordinate their actions in time. Furthermore, results from the J4 task showed that ASD participants used an adaptive strategy to disambiguate their responses from their partner's by regulating opposite sides of the bar during lifting. These findings provide empirical support for theories positing the existence of an internal modelling deficit in ASD. In addition, our findings suggest that the impaired social reciprocal behavior and joint cooperative play exhibited by individuals with ASD may reflect behavioral adaptations to evade conflicting or ambiguous information in social settings.

    What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations

    Goal-directed action selection is the problem of deciding what to do next in order to progress towards goal achievement. This problem is computationally more complex in joint action settings, where two or more agents coordinate their actions in space and time to bring about a common goal: actions performed by one agent influence the action possibilities of the other agents, and ultimately the goal achievement. While humans apparently effortlessly engage in complex joint actions, a number of questions remain to be solved before similar performance can be achieved in artificial agents: How do agents represent and understand actions being performed by others? How does this understanding influence the choice of an agent's own future actions? How is the interaction process biased by prior information about the task? What is the role of more abstract cues such as others' beliefs or intentions?

    In the last few years, researchers in computational neuroscience have begun investigating how control-theoretic models of individual motor control can be extended to explain various complex social phenomena, including action and intention understanding, imitation and joint action. The two cornerstones of control-theoretic models of motor control are the goal-directed nature of action and a widespread use of internal modeling. Indeed, when the control-theoretic view is applied to the realm of social interactions, it is assumed that the inverse and forward internal models used in individual action planning and control are re-enacted in simulation in order to understand others' actions and to infer their intentions. This motor simulation view of social cognition has been adopted to explain a number of advanced mindreading abilities such as action, intention, and belief recognition, often in contrast with more classical cognitive theories - derived from rationality principles and conceptual theories of others' minds - that emphasize the dichotomy between action and perception.

    Here we embrace the idea that implementing mindreading abilities is a necessary step towards a more natural collaboration between humans and robots in joint tasks. To collaborate efficiently, agents need to continuously estimate their teammates' proximal goals and distal intentions in order to choose what to do next. We present a probabilistic hierarchical architecture for joint action which takes inspiration from the idea of motor simulation above. The architecture models the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals, intentions and beliefs) at two deeply intertwined levels: at the lowest level, the same circuitry used to execute my own actions is re-enacted in simulation to infer and predict (proximal) actions performed by my interaction partner, while the highest level encodes more abstract task representations which govern each agent's observable behavior. Here we assume that the decision of what to do next can be taken by knowing 1) what the current task is and 2) what my teammate is currently doing. While these could be inferred via a costly (and inaccurate) process of inverting the generative model above given the observed data, we show how our organization facilitates such an inferential process by allowing agents to share a subset of hidden variables, alleviating the need for complex inferential processes such as explicit task allocation or sophisticated communication strategies.
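    As an illustration of the kind of inference the architecture performs, the sketch below inverts a toy generative model to estimate a partner's hidden goal from observed movements and then picks a complementary action. The goal set, observation probabilities and action mapping are invented for the example; the authors' actual architecture is hierarchical and probabilistic in ways this sketch does not capture.

```python
# Toy goal inference by inverting a simple generative model, in the spirit
# of the architecture described above (not the authors' implementation;
# goals, observation model and probabilities are made-up assumptions).
import numpy as np

GOALS = ["reach-left", "reach-right", "hold"]     # hidden partner intentions
MOVES = ["move-left", "move-right", "stay"]       # observable movements

# P(observed movement | partner's goal): each goal mostly produces the
# matching movement, with some observation noise.
P_MOVE_GIVEN_GOAL = np.array([
    [0.80, 0.10, 0.10],   # reach-left
    [0.10, 0.80, 0.10],   # reach-right
    [0.15, 0.15, 0.70],   # hold
])

def update_belief(belief, observed_move):
    """One step of Bayesian filtering: posterior ~ likelihood * prior."""
    likelihood = P_MOVE_GIVEN_GOAL[:, MOVES.index(observed_move)]
    posterior = likelihood * belief
    return posterior / posterior.sum()

def choose_complementary_action(belief):
    """Pick own action given the inferred partner goal: if the partner is
    covering one side, take the other (a stand-in for shared task knowledge)."""
    goal = GOALS[int(np.argmax(belief))]
    return {"reach-left": "reach-right",
            "reach-right": "reach-left",
            "hold": "hold"}[goal]

belief = np.ones(len(GOALS)) / len(GOALS)          # uniform prior over goals
for move in ["move-left", "move-left", "stay"]:    # stream of observed movements
    belief = update_belief(belief, move)
    print(dict(zip(GOALS, belief.round(3))), "->", choose_complementary_action(belief))
```

    Sharing the task variable between agents, as the abstract proposes, would let each agent condition this filter on the same task representation instead of re-inferring it from scratch at every step.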