114 research outputs found

    Event-related potentials reveal the development of stable face representations from natural variability

    Natural variability between instances of unfamiliar faces can make it difficult to reconcile two images as the same person. Yet for familiar faces, effortless recognition occurs even with considerable variability between images. To explore how stable face representations develop, we employed incidental learning in the form of a face sorting task. In each trial, multiple images of two facial identities were sorted into two corresponding piles. Following the sort, participants showed evidence of having learnt the faces, performing more accurately on a matching task with seen than unseen identities. Furthermore, ventral temporal event-related potentials were more negative in the N250 time range for previously-seen than previously-unseen identities. These effects appear to demonstrate some degree of abstraction, rather than simple picture learning, as the neurophysiological and behavioural effects were observed with novel images of the previously-seen identities. The results provide evidence of the development of facial representations, providing a window onto natural mechanisms of face learning.

    The psychometric properties of the compassionate love scale and the validation of the English and German 7-item compassion for others scale (COS-7)

    An increasing body of scientific research on the nature, correlates, and effects of compassion has accrued over recent years. Expert agreement has not yet been reached on the conceptualisation of compassion for others, and existing self-report measures of compassion for others have often lacked psychometric quality and content validity. Recent publications of longer compassion measures represent significant strides towards ameliorating these issues. However, there is a need for psychometrically sound short scales for measuring compassion in time-constrained research settings. To meet this need, one can assess the psychometric qualities of existing scales in order to develop robust short adaptations of such scales. Study 1 (N = 501) empirically assessed the psychometric properties of the widely cited Compassionate Love Scale (CLS) to validate a new short scale of compassion for others (strangers) comprising items from the CLS – the 7-item Compassion for Others Scale (COS-7). Study 2 (N = 332) addressed the absence of a German measure of compassion for others by validating a German version of the COS-7. The CLS did not display adequate model fit. Both the English and German versions of the COS-7 demonstrated adequate model fit, factor loadings, internal consistency, interpretability, convergent/divergent validity, and no floor/ceiling effects. Findings provide support for the English and German versions of the COS-7 as adequate short scales for measuring compassion for others. The German COS-7 is the first German measure of compassion for others published to date.
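    One of the psychometric checks mentioned above, internal consistency, is conventionally reported as Cronbach's alpha. A minimal sketch of that computation follows; the score matrix is hypothetical illustration data, not data from the study:

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
        total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical example: 5 respondents rating 3 scale items (1-5 Likert)
    scores = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 5, 5],
        [3, 4, 3],
        [1, 2, 2],
    ])
    alpha = cronbach_alpha(scores)
    ```

    Values close to 1 indicate that the items covary strongly, which is the sense in which a short scale like the COS-7 can still be "internally consistent" despite having few items.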

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.

    The Role of Gamma-Band Activity in the Representation of Faces: Reduced Activity in the Fusiform Face Area in Congenital Prosopagnosia

    Congenital prosopagnosia (CP) describes an impairment in face processing that is presumably present from birth. The neuronal correlates of this dysfunction are still under debate. In the current paper, we investigate high-frequency oscillatory activity in response to faces in persons with CP. Such neuronal activity is thought to reflect higher-level representations for faces. Source localization of induced Gamma-Band Responses (iGBR) measured by magnetoencephalography (MEG) was used to establish the origin of oscillatory activity in response to famous and unknown faces which were presented in upright and inverted orientation. Persons with CP were compared to matched controls. Corroborating earlier research, both groups revealed amplified iGBR in response to upright compared to inverted faces, predominantly in a time interval between 170 and 330 ms and in a frequency range from 50-100 Hz. Oscillatory activity upon known faces was smaller in comparison to unknown faces, suggesting a "sharpening" effect reflecting more efficient processing of familiar stimuli. These effects were seen in a wide cortical network encompassing temporal and parietal areas involved in the disambiguation of homogeneous stimuli such as faces, and in the retrieval of semantic information. Importantly, participants with CP displayed a strongly reduced iGBR in the left fusiform area compared to control participants. In sum, these data stress the crucial role of oscillatory activity for face representation and demonstrate the involvement of a distributed occipito-temporo-parietal network in generating iGBR. This study also provides the first evidence that persons suffering from an agnosia actually display reduced gamma-band activity. Finally, the results argue strongly against the view that oscillatory activity is a mere epiphenomenon brought forth by rapid eye movements (microsaccades).

    The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: an ERP study

    The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring processes and in communication. However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing. This work was supported by Grant Numbers IF/00334/2012, PTDC/PSI-PCL/116626/2010, and PTDC/MHN-PCN/3606/2012, funded by the Fundação para a Ciência e a Tecnologia (FCT, Portugal) and the Fundo Europeu de Desenvolvimento Regional through the European programs Quadro de Referência Estratégico Nacional and Programa Operacional Factores de Competitividade, awarded to A.P.P., and by FCT Doctoral Grant Number SFRH/BD/77681/2011, awarded to T.C.

    Early Left-Hemispheric Dysfunction of Face Processing in Congenital Prosopagnosia: An MEG Study

    Electrophysiological research has demonstrated the relevance to face processing of a negative deflection peaking around 170 ms, labelled accordingly as N170 in the electroencephalogram (EEG) and M170 in magnetoencephalography (MEG). The M170 was shown to be sensitive to the inversion of faces and to familiarity, two factors that are assumed to be crucial for congenital prosopagnosia. In order to locate the cognitive dysfunction and its neural correlates, we investigated the time course of neural activity in response to these manipulations. Seven individuals with congenital prosopagnosia and seven matched controls participated in the experiment. To explore brain activity with high accuracy in time, we recorded evoked magnetic fields (275-channel whole-head MEG) while participants were looking at faces differing in familiarity (famous vs. unknown) and orientation (upright vs. inverted). The underlying neural sources were estimated by means of the least-squares minimum-norm-estimation (L2-MNE) approach. The behavioural data corroborate earlier findings on impaired configural processing in congenital prosopagnosia. For the M170, the overall results replicated earlier findings, with larger occipito-temporal brain responses to inverted than upright faces, and more right- than left-hemispheric activity. Compared to controls, participants with congenital prosopagnosia displayed a general decrease in brain activity, primarily over left occipito-temporal areas. This attenuation did not interact with familiarity or orientation. The study substantiates the finding of an early involvement of the left hemisphere in symptoms of prosopagnosia. This might be related to an efficient and overused featural processing strategy which serves as compensation for impaired configural processing.

    The Glasgow Voice Memory Test: Assessing the ability to memorize and recognize unfamiliar voices

    One thousand one hundred and twenty subjects, as well as a developmental phonagnosic subject (KH) along with age-matched controls, performed the Glasgow Voice Memory Test, which assesses the ability to encode and immediately recognize, through an old/new judgment, both unfamiliar voices (delivered as vowels, making language requirements minimal) and bell sounds. The inclusion of non-vocal stimuli allows the detection of significant dissociations between the two categories (vocal vs. non-vocal stimuli). The distributions of accuracy and sensitivity scores (d’) reflected a wide range of individual differences in voice recognition performance in the population. As expected, KH showed a dissociation between the recognition of voices and bell sounds, her performance being significantly poorer than matched controls for voices but not for bells. By providing normative data from a large sample and by testing a developmental phonagnosic subject, we demonstrated that the Glasgow Voice Memory Test, available online and accessible from all over the world, can be a valid screening tool (~5 min) for a preliminary detection of potential cases of phonagnosia and of “super recognizers” for voices.
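    The sensitivity scores (d’) reported above come from signal detection theory, applied to the old/new judgments. A minimal sketch of the standard computation follows; the trial counts are hypothetical, and the log-linear correction for extreme rates is a common convention, not necessarily the one used in the study:

    ```python
    from statistics import NormalDist

    def d_prime(hits: int, misses: int,
                false_alarms: int, correct_rejections: int) -> float:
        """Sensitivity index d' for an old/new recognition task.

        A log-linear correction (+0.5 to counts, +1 to totals) avoids
        infinite z-scores when a rate would be exactly 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Hypothetical example: 20 'old' and 20 'new' trials
    dp = d_prime(hits=16, misses=4, false_alarms=5, correct_rejections=15)
    ```

    Higher d’ means better discrimination of old from new voices independently of response bias, which is why the test reports d’ distributions rather than raw accuracy alone.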

    Visual adaptation enhances action sound discrimination

    Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action ‘matched’ the auditory action. In addition, prior adaptation to a visual, auditory or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action-sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by post-perceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

    Electrophysiological evidence for an early processing of human voices

    Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150 ms to 200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. Results: ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories - voices, bird songs and environmental sounds - whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. Conclusion: Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.