
    Explaining the neural activity distribution associated with discrete movement sequences: Evidence for parallel functional systems

    To explore the effects of practice, we scanned participants with fMRI while they performed unfamiliar and familiar four-key sequences, and compared the associated activity with that of simple control sequences. On the basis of a recent cognitive model of sequential motor behavior (C-SMB), we propose that the observed neural activity is associated with three functional networks that can operate in parallel and that allow (a) responding to stimuli in a reaction mode, (b) sequence execution using spatial sequence representations in a central-symbolic mode, and (c) sequence execution using motor chunk representations in a chunking mode. Using this model and findings in the literature, we predicted which neural areas would be active during execution of the unfamiliar and familiar keying sequences. The observed neural activity was largely in line with our predictions and allowed us to attribute functions to the active brain areas that fit the three functional systems above. The results corroborate C-SMB's assumption that at advanced skill levels the systems executing motor chunks and translating key-specific stimuli race to trigger individual responses. They further support recent behavioral indications that spatial sequence representations continue to be used.
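    The race assumption in C-SMB lends itself to a simple formalization. Below is a minimal, hypothetical sketch (not the authors' implementation) of a horse-race between a chunk-execution process and a stimulus-translation (reaction-mode) process, in which whichever process finishes first triggers the key press; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(chunk_mean=250.0, react_mean=350.0, sd=60.0):
    """One simulated key press: the chunking and reaction-mode processes
    run in parallel, and the faster one triggers the response.
    Finishing times (ms) are drawn from normal distributions with
    illustrative parameters; negative draws are clipped to 1 ms."""
    chunk_t = max(rng.normal(chunk_mean, sd), 1.0)
    react_t = max(rng.normal(react_mean, sd), 1.0)
    winner = "chunking" if chunk_t < react_t else "reaction"
    return min(chunk_t, react_t), winner

times, winners = zip(*(race_trial() for _ in range(10_000)))
print(f"mean RT: {np.mean(times):.1f} ms")
print(f"chunking wins: {winners.count('chunking') / len(winners):.1%}")
```

    Because the response time is the minimum of two finishing times, such a race predicts responses that are on average faster than either process alone, which is the behavioral signature of parallel triggering systems.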

    I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities and should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report results from a human–human cooperation experiment demonstrating that an agent's view of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues in a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of a robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long-term objectives of the work are thus to identify social cooperation cues and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues through statistical measures of human action times in a cooperative task: gaze significantly facilitates cooperation as measured by human response times.

    Vestibular function in the temporal and parietal cortex: distinct velocity and inertial processing pathways

    A number of behavioural and neuroimaging studies have reported converging data in favour of a cortical network for vestibular function, distributed between the temporo-parietal cortex and the prefrontal cortex in the primate. In this review, we focus on the role of the cerebral cortex in visuo-vestibular integration, including the motion-sensitive temporo-occipital areas, i.e., the middle superior temporal area (MST), and the parietal cortex. Although these two neighbouring cortical regions both receive combined vestibular and visual information, they have distinct implications for vestibular function. In sum, this review of the literature leads to the idea of two separate cortical vestibular sub-systems: (1) a velocity pathway including MST and direct descending pathways to the vestibular nuclei; as it receives well-defined visual and vestibular velocity signals, this pathway is likely involved in heading perception and rapid top-down regulation of eye/head coordination; and (2) an inertial processing pathway involving the parietal cortex in connection with the subcortical vestibular nuclei complex responsible for velocity storage integration. This vestibular cortical pathway would be implicated in high-order multimodal integration and cognitive functions, including world-space and self-referential processing.

    Effects of Connectivity on Narrative Temporal Processing in Structured Reservoir Computing

    Computational models of language are having an increasing impact on understanding the neural bases of language processing in humans. A recent model of cortical dynamics based on reservoir computing was able to account for temporal aspects of human narrative processing as revealed by fMRI. In this context, the current research introduces a form of structured reservoir computing, where network dynamics are further constrained by the connectivity architecture, in order to begin to explain large-scale hierarchical network properties of human cortical activity during narrative comprehension. Cortical processing takes place at different time scales depending on the position in a "hierarchy" from posterior sensory input areas to higher-level associative frontal cortical areas. This phenomenon is likely related to the cortical connectivity architecture. Recent studies have identified heterogeneity in this posterior-anterior hierarchy, with certain frontal associative areas displaying a faster narrative integration response than much more posterior areas. We hypothesize that these discontinuities are due to white matter connectivity that creates shortcuts from fast sensory areas to distant frontal areas. To test this hypothesis, we analysed the white matter connectivity of these areas and found clear connectivity patterns in accord with our hypotheses. Based on these observations, we performed simulations using reservoir networks with connectivity patterns structured by an exponential distance rule, yielding the sensory-associative hierarchy. We then introduced connectivity shortcuts corresponding to those observed in human anatomy, resulting in frontal areas with unusually fast narrative processing. Using structured reservoir computing, we confirmed the hypothesis that topographic position in a cortical hierarchy can be dominated by long-distance connections that bring frontal areas closer to the sensory periphery.
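    The connectivity manipulation described in this abstract can be sketched compactly. The following is a minimal, hypothetical illustration (not the authors' code) of a reservoir whose recurrent weights follow an exponential distance rule along a 1-D posterior-to-anterior axis, with a few long-range shortcut projections from sensory to frontal units added on top; the length scale, leak rate, and unit counts are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200                                  # reservoir units on a 1-D cortical axis
pos = np.linspace(0.0, 1.0, N)           # 0 = posterior/sensory, 1 = frontal

# Exponential distance rule: connection probability decays with distance.
dist = np.abs(pos[:, None] - pos[None, :])
mask = rng.random((N, N)) < np.exp(-dist / 0.05)   # 0.05 = assumed length scale
W = np.where(mask, rng.normal(0.0, 1.0, (N, N)), 0.0)

# Shortcuts: direct projections from the sensory end to frontal units,
# mimicking the long-distance white-matter connections discussed above.
sensory = rng.choice(N // 10, size=20)               # units near pos = 0
frontal = rng.choice(np.arange(9 * N // 10, N), 20)  # units near pos = 1
W[frontal, sensory] = rng.normal(0.0, 1.0, 20)

# Rescale to a target spectral radius so the dynamics remain stable.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def step(x, u, W_in, leak=0.3):
    """One leaky-integrator reservoir update."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
```

    In such a network, frontal units that receive shortcut input integrate their inputs on a faster effective time scale than their position in the distance-rule hierarchy alone would predict, which is the effect the abstract reports.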

    Grammatical verb aspect and event roles in sentence processing

    Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event and focusing on all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images, as well as speed and accuracy on sensibility judgments, were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences, aspect-congruent trials presented an image of the corkscrew closed (no longer in use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in hand) or replaced the bottle image with a half-opened bottle. In this case, the participant would still respond "yes", but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation but also the focus on specific roles within the event. More generally, the visual-match effects found during online sentence-picture processing are consistent with theories of perceptual simulation.
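    The reported three-way interaction can be tested in a standard mixed-effects framework. The sketch below is hypothetical (the file and column names are assumptions, not from the paper) and shows one conventional way to model picture-processing times with by-subject random intercepts.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per picture-processing time.
# Assumed columns: rt (ms), aspect (perfect/imperfective),
# role (object/instrument), match (congruent/incongruent), subject.
df = pd.read_csv("rebus_rts.csv")

# Mixed model with the full Aspect x Role x Match interaction and
# by-subject random intercepts (random slopes omitted for brevity).
model = smf.mixedlm("rt ~ aspect * role * match", df, groups=df["subject"])
print(model.fit().summary())
```

    A reliable three-way interaction term here would correspond to the pattern described above: the size of the match advantage depends jointly on verb aspect and on whether the picture depicts the object or the instrument.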

    Means (and standard deviations) for the response times to object and instrument pictures by verb aspect.


    Means (and standard deviations) for the response times to instrument and object pictures by verb aspect.
