
    Gaze following is modulated by expectations regarding others’ action goals

    Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information for inferring others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about the successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the observed actor's gaze behavior was in line with participants' action-related expectations (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object than at an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by the expectations that humans hold regarding the successive steps of an action performed by an observed agent.
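    A minimal sketch of how the gaze-cueing effect reported above could be computed from trial-level reaction times, in Python; the file name and column names (action_congruency, cue_validity, rt) are hypothetical placeholders, not the authors' materials:

        # Gaze-cueing effect = mean RT on invalid trials (target away from the
        # gazed-at location) minus mean RT on valid trials (target at the
        # gazed-at location), split by action congruency of the actor's gaze.
        import pandas as pd

        trials = pd.read_csv("gaze_trials.csv")  # hypothetical trial-level data

        for congruency, group in trials.groupby("action_congruency"):
            rt_valid = group.loc[group["cue_validity"] == "valid", "rt"].mean()
            rt_invalid = group.loc[group["cue_validity"] == "invalid", "rt"].mean()
            print(f"{congruency}: gaze-cueing effect = {rt_invalid - rt_valid:.1f} ms")

    A positive effect indicates faster responses at the gazed-at location; the modulation described above would appear as a larger effect on action-congruent than action-incongruent trials.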

    Orientation Sensitivity at Different Stages of Object Processing: Evidence from Repetition Priming and Naming

    An ongoing debate in the object recognition literature centers on whether the shape representations used in recognition are coded in an orientation-dependent or orientation-invariant manner. In this study, we asked whether the nature of the object representation (orientation-dependent vs. orientation-invariant) depends on the information-processing stages tapped by the task.

    Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces

    The N2pc is a lateralised Event-Related Potential (ERP) that signals a shift of attention towards the location of a potential object of interest. We propose a single-trial target-localisation collaborative Brain-Computer Interface (cBCI) that exploits this ERP to automatically approximate the horizontal position of targets in aerial images. Images were presented by means of the rapid serial visual presentation technique at rates of 5, 6 and 10 Hz. We created three different cBCIs and tested a participant selection method in which groups are formed according to the similarity of participants' performance. The N2pc elicited in our experiments contains information about the position of the target along the horizontal axis. Moreover, combining information from multiple participants provides absolute median improvements in the area under the receiver operating characteristic curve of up to 21% (for groups of size 3) with respect to single-user BCIs. These improvements are larger when groups are formed by participants with similar individual performance, and much of this effect can be explained using simple theoretical models. Our results suggest that BCIs for automated triaging can be improved by integrating two classification systems: one devoted to target detection and another to the detection of attentional shifts associated with lateral targets.
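    As an illustration of the score-fusion idea behind the cBCI (a sketch on simulated data, not the authors' pipeline), averaging single-trial classifier scores across participants should raise the area under the ROC curve relative to single users:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        labels = rng.integers(0, 2, 200)  # 1 = target in left hemifield, 0 = right

        # Each participant's classifier yields weak, noisy single-trial scores.
        group = [0.5 * labels + rng.normal(size=labels.size) for _ in range(3)]

        single_aucs = [roc_auc_score(labels, s) for s in group]
        fused = np.mean(group, axis=0)  # cBCI-style fusion: average the scores
        print(f"mean single-user AUC: {np.mean(single_aucs):.3f}")
        print(f"fused group-of-3 AUC: {roc_auc_score(labels, fused):.3f}")

    Because the noise terms are independent across participants, averaging reduces their variance while preserving the label-related signal, consistent with the simple theoretical models mentioned above.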

    The Spatial and Temporal Construction of Confidence in the Visual Scene

    Human subjects can report many items from a cluttered visual field a few hundred milliseconds after stimulus presentation. This memory decays rapidly, and after a second only 3 or 4 items can be stored in working memory. Here we compared the dynamics of objective performance with a measure of subjective report and observed that: (1) objective performance beyond explicit subjective report (blindsight) was significantly more pronounced within a short temporal interval and within specific locations of the visual field, which were robust across sessions; (2) high-confidence errors (false beliefs) were largely confined to a small spatial window neighboring the cue, and the size of this window did not change over time; and (3) subjective confidence showed a moderate but consistent decrease over time, independent of all other experimental factors. Our study allowed us to assess quantitatively the temporal and spatial access to an objective response and to subjective reports.
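    The two key quantities above can be illustrated with a short Python sketch (column names such as confidence, correct and dist_from_cue are assumptions, not the authors' data format):

        import pandas as pd

        df = pd.read_csv("report_trials.csv")  # hypothetical: one row per trial

        # Blindsight-like performance: objective accuracy on trials given the
        # lowest subjective confidence rating (above chance indicates objective
        # performance beyond explicit subjective report).
        unseen = df[df["confidence"] == df["confidence"].min()]
        print(f"accuracy on lowest-confidence trials: {unseen['correct'].mean():.2f}")

        # High-confidence errors ("false beliefs") as a function of distance
        # from the cued location, to probe the spatial window described above.
        false_beliefs = df[(df["confidence"] == df["confidence"].max())
                           & (df["correct"] == 0)]
        print(false_beliefs.groupby("dist_from_cue").size())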

    The relation of object naming and other visual speech production tasks: A large scale voxel-based morphometric study.

    We report a lesion–symptom mapping analysis of visual speech production deficits in a large group (n = 280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on the scores from all these tests and revealed a ‘shared’ component that loaded across all the visual speech production tasks and a ‘unique’ component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions were linked to both poor object naming and impairment in general visual speech production. The unique naming component, on the other hand, was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with models proposing that object naming relies on a left-lateralised, language-dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can therefore reflect both the increased demands specific to the task and more general difficulties in language processing.
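    To make the analysis concrete, here is a minimal, self-contained sketch of the kind of principal component analysis described above, run on simulated scores (the generative structure is an assumption for illustration, not the authors' data):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        shared = rng.normal(size=(280, 1))       # common language factor
        naming_only = rng.normal(size=(280, 1))  # naming-specific factor

        # Columns: object naming, then sentence production, sentence reading
        # and nonword reading (the latter three load only on the shared factor).
        scores = np.hstack([shared + naming_only,
                            shared + 0.3 * rng.normal(size=(280, 3))])

        pca = PCA(n_components=2)
        pca.fit(StandardScaler().fit_transform(scores))
        # First component loads on all four tasks ('shared'); the second
        # separates object naming from the other three tasks ('unique').
        print(np.round(pca.components_, 2))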

    The neural correlates of social attention: automatic orienting to social and nonsocial cues

    Previous evidence suggests that directional social cues (e.g., eye gaze) cause automatic shifts in attention toward the gazed-at direction. It has been proposed that automatic attentional orienting driven by social cues (social orienting) involves a different neural network from automatic orienting driven by nonsocial cues. However, previous neuroimaging studies on social orienting have only compared gaze cues to symbolic cues, which typically engage top-down mechanisms. Therefore, we directly compared the neural activity involved in social orienting to that involved in purely automatic nonsocial orienting. Twenty participants performed a spatial cueing task consisting of social (gaze) cues and automatic nonsocial (peripheral square) cues presented at short and long stimulus (cue-to-target) onset asynchronies (SOAs) while undergoing fMRI. Behaviorally, a facilitation effect was found for both cue types at the short SOA, while an inhibitory effect (inhibition of return, IOR) was found only for nonsocial cues at the long SOA. Imaging results demonstrated that social and nonsocial cues recruited a largely overlapping fronto-parietal network. In addition, social cueing evoked greater activity in occipito-temporal regions at both SOAs, while nonsocial cueing recruited greater subcortical activity, but only at the long SOA (when IOR was found). A control experiment including central arrow cues confirmed that the occipito-temporal activity was at least in part due to the social nature of the cue and not simply to the location of presentation (central vs. peripheral). These results suggest an evolutionary trajectory for automatic orienting, from predominantly subcortical mechanisms for nonsocial orienting to predominantly cortical mechanisms for social orienting.
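    The behavioural pattern described above (facilitation at the short SOA, IOR at the long SOA for nonsocial cues) reduces to a simple contrast per cue type and SOA; a hedged Python sketch with hypothetical file and column names:

        import pandas as pd

        df = pd.read_csv("cueing_task.csv")  # hypothetical trial-level data
        effects = (df.pivot_table(index=["cue_type", "soa"], columns="validity",
                                  values="rt", aggfunc="mean")
                     .assign(cueing_effect=lambda t: t["invalid"] - t["valid"]))
        # Positive values indicate facilitation at the cued location; a negative
        # value at the long SOA indicates inhibition of return (IOR).
        print(effects["cueing_effect"])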

    Bringing the real world into the fMRI scanner: Repetition effects for pictures versus real objects

    Our understanding of the neural underpinnings of perception is largely built upon studies employing two-dimensional (2D) planar images. Here we used slow event-related functional imaging in humans to examine whether neural populations show the characteristic repetition-related change in haemodynamic response for real-world three-dimensional (3D) objects, an effect commonly observed with 2D images. As expected, trials involving 2D pictures of objects produced robust repetition effects within classic object-selective cortical regions along the ventral and dorsal visual processing streams. Surprisingly, however, repetition effects were weak, if not absent, on trials involving the 3D objects. These results suggest that the neural mechanisms involved in processing real objects may be distinct from those engaged when we encounter a 2D representation of the same items. These preliminary findings point to the need for further research with ecologically valid stimuli and other imaging designs to broaden our understanding of the neural mechanisms underlying human vision.
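    A repetition effect of the kind measured above can be summarised as the drop in mean response from initial to repeated presentations within a region of interest; a brief sketch under an assumed data layout (not the authors' pipeline):

        import pandas as pd

        # Hypothetical columns: stimulus_format ("2D picture" / "3D object"),
        # presentation ("initial" / "repeated"), beta (ROI response estimate).
        betas = pd.read_csv("roi_betas.csv")

        mean_beta = betas.pivot_table(index="stimulus_format",
                                      columns="presentation",
                                      values="beta", aggfunc="mean")
        repetition_effect = mean_beta["initial"] - mean_beta["repeated"]
        # Robust suppression is expected for 2D pictures; weak or absent
        # suppression for real 3D objects, per the results described above.
        print(repetition_effect)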