
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
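    The reported statistic can be sanity-checked without a statistics package: with one numerator degree of freedom, an F value is the square of a t value, and the t distribution with four degrees of freedom has a closed-form CDF. A minimal sketch (the constants below are specific to df = 4; the stimulus details play no role here):

    ```python
    import math

    def t4_sf(t):
        """Survival function P(T > t) for a t-distribution with 4 df.
        Closed form: sf(t) = 1/2 - (3/4)*(s - s**3/3), where s = t / sqrt(t**2 + 4)."""
        s = t / math.sqrt(t * t + 4)
        return 0.5 - 0.75 * (s - s ** 3 / 3)

    def f_1_4_pvalue(f_stat):
        """p-value for an F(1, 4) statistic: with 1 numerator df, F = t**2,
        so P(F > f) equals the two-tailed t probability at sqrt(f)."""
        return 2 * t4_sf(math.sqrt(f_stat))

    p = f_1_4_pvalue(2.565)
    print(round(p, 3))  # ≈ 0.185, matching the reported p = 0.185
    ```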

    Facial motion perception in autism spectrum disorder and neurotypical controls

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University London. Facial motion provides an abundance of information necessary for mediating social communication. Emotional expressions, head rotations and eye-gaze patterns allow us to extract categorical and qualitative information from others (Blake & Shiffrar, 2007). Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterised by a severe impairment in social cognition. One of the causes may be related to a fundamental deficit in perceiving human movement (Herrington et al., 2007). This hypothesis was investigated more closely within the current thesis. In neurotypical controls, the visual processing of facial motion was analysed via EEG alpha waves. Participants were tested on their ability to discriminate between successive animations (exhibiting rigid and non-rigid motion). The appearance of the stimuli remained constant over trials, meaning decisions were based solely on differential movement patterns. The parieto-occipital region was specifically selective to upright facial motion, while the occipital cortex responded similarly to natural and manipulated faces. Over both regions, a distinct pattern of activity in response to upright faces was characterised by a transient decrease and subsequent increase in neural processing (Girges et al., 2014). These results were further supported by an fMRI study which showed sensitivity of the superior temporal sulcus (STS) to perceived facial movements relative to inanimate and animate stimuli. The ability to process information from dynamic faces was assessed in ASD. Participants were asked to recognise different sequences, unfamiliar identities and genders from facial motion captures. Stimuli were presented upright and inverted in order to assess configural processing.
Relative to the controls, participants with ASD were significantly impaired on all three tasks and failed to show an inversion effect (O'Brien et al., 2014). Functional neuroimaging revealed atypical activity in the visual cortex, STS and fronto-parietal regions thought to contain mirror neurons in participants with ASD. These results point to a deficit in the visual processing of facial motion, which in turn may partly cause the social communicative impairments seen in ASD.

    Brain Areas Active during Visual Perception of Biological Motion

    Theories of vision posit that form and motion are represented by neural mechanisms segregated into functionally and anatomically distinct pathways. Using point-light animations of biological motion, we examine the extent to which form and motion pathways are mutually involved in perceiving figures depicted by the spatio-temporal integration of local motion components. Previous work discloses that viewing biological motion selectively activates a region on the posterior superior temporal sulcus (STSp). Here we report that the occipital and fusiform face areas (OFA and FFA) also contain neural signals capable of differentiating biological from nonbiological motion. The extrastriate body area (EBA) and lateral occipital complex (LOC), although involved in the perception of human form, do not contain neural signals selective for biological motion. Our results suggest that a network of distributed neural areas in the form and motion pathways underlies the perception of biological motion.

    Event-related alpha suppression in response to facial motion

    This article has been made available through the Brunel Open Access Publishing Fund. While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in alpha power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, so participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting that early visual processing remains unaffected by these manipulations. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so at the shortest latency. Increased activity within this region may reflect greater attentional reorienting to natural facial motion, but also involvement of areas associated with the visual control of body effectors. © 2014 Girges et al.
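    Alpha suppression of the kind described here is typically quantified as a drop in band-limited power relative to a pre-stimulus baseline. A minimal, self-contained sketch with a synthetic one-channel signal: 10 Hz stands in for the alpha centre frequency, a Goertzel filter stands in for a full time-frequency analysis, and the sampling rate and amplitudes are purely illustrative:

    ```python
    import math

    FS = 250        # sampling rate in Hz (illustrative)
    ALPHA_HZ = 10   # assumed centre of the alpha band

    def goertzel_power(samples, freq, fs):
        """Squared DFT magnitude at one frequency (Goertzel algorithm)."""
        n = len(samples)
        k = round(freq * n / fs)          # nearest DFT bin
        coeff = 2 * math.cos(2 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    # Synthetic trial: 1 s of baseline alpha, then 1 s of attenuated alpha.
    pre  = [1.0 * math.sin(2 * math.pi * ALPHA_HZ * t / FS) for t in range(FS)]
    post = [0.3 * math.sin(2 * math.pi * ALPHA_HZ * t / FS) for t in range(FS)]

    p_pre = goertzel_power(pre, ALPHA_HZ, FS)
    p_post = goertzel_power(post, ALPHA_HZ, FS)
    suppression_db = 10 * math.log10(p_post / p_pre)  # negative = suppression
    ```

    With a 0.3× amplitude drop, post-stimulus alpha power falls to 9% of baseline, i.e. roughly −10.5 dB; real EEG analyses would of course average over trials and use a proper band-pass rather than a single bin.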

    Using action understanding to understand the left inferior parietal cortex in the human brain

    Published in final edited form as: Brain Res. 2014 September 25; 1582: 64–76. doi:10.1016/j.brainres.2014.07.035. Humans have a sophisticated knowledge of the actions that can be performed with objects. In an fMRI study we tried to establish whether this depends on areas that are homologous with the inferior parietal cortex (area PFG) in macaque monkeys. Cells have been described in area PFG that discharge differentially depending upon whether the observer sees an object being brought to the mouth or put in a container. In our study the observers saw videos in which the use of different objects was demonstrated in pantomime; after viewing the videos, the subjects had to pick the object that was appropriate to the pantomime. We found a cluster of activated voxels in parietal areas PFop and PFt, and this cluster was greater in the left hemisphere than in the right. We suggest a mechanism that could account for this asymmetry, relate our results to handedness, and suggest that they shed light on the human syndrome of apraxia. Finally, we suggest that during the evolution of the hominids, this same pantomime mechanism could have been used to 'name' or request objects. We thank Steve Wise for very detailed comments on a draft of this paper. We thank Rogier Mars for help with identifying the areas that were activated in parietal cortex and for comments on a draft of this paper. Finally, we thank Michael Nahhas for help with the imaging figures. This work was supported in part by NIH grant RO1NS064100 to LMV.

    The role of facial movements in emotion recognition

    Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into how emotions are perceived in the face, they necessarily leave out any role of dynamic information. In this Review, we synthesize evidence from vision science, affective science and neuroscience to ask when, how and why dynamic information contributes to emotion recognition, beyond the information conveyed in static images. Dynamic displays offer distinctive temporal information such as the direction, quality and speed of movement, which recruit higher-level cognitive processes and support social and emotional inferences that enhance judgements of facial affect. The positive influence of dynamic information on emotion recognition is most evident in suboptimal conditions when observers are impaired and/or facial expressions are degraded or subtle. Dynamic displays further recruit early attentional and motivational resources in the perceiver, facilitating the prompt detection and prediction of others’ emotional states, with benefits for social interaction. Finally, because emotions can be expressed in various modalities, we examine the multimodal integration of dynamic and static cues across different channels, and conclude with suggestions for future research.

    Contorted and ordinary body postures in the human brain

    Social interaction and comprehension of non-verbal behaviour require a representation of people’s bodies. Research into the neural underpinnings of body representation implicates several brain regions including the extrastriate and fusiform body areas (EBA and FBA), superior temporal sulcus (STS), inferior frontal gyrus (IFG) and inferior parietal lobule (IPL). The different roles played by these regions in parsing familiar and unfamiliar body postures remain unclear. We examined the responses of this body observation network to static images of ordinary and contorted postures by using a repetition suppression design in functional neuroimaging. Participants were scanned whilst observing static images of a contortionist or a group of objects in either ordinary or unusual configurations, presented from different viewpoints. Greater activity emerged in EBA and FBA when participants viewed contorted compared to ordinary body postures. Repeated presentation of the same posture from different viewpoints led to suppressed responses in the fusiform gyrus as well as three regions that are characteristically activated by observing moving bodies, namely STS, IFG and IPL. These four regions did not distinguish the image viewpoint or the plausibility of the posture. Together, these data define a broad cortical network for processing static body postures, including regions classically associated with action observation.

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.