    Dominance attributions following damage to the ventromedial prefrontal cortex

    Damage to the human ventromedial prefrontal cortex (VM) can result in dramatic and maladaptive changes in social behavior despite preservation of most other cognitive abilities. One important aspect of social cognition is the ability to detect social dominance, a process of attributing another person's relative standing in the social world from particular social signals. To test the role of the VM in making attributions of social dominance, we designed two experiments: one requiring dominance judgments from static pictures of faces, the second requiring dominance judgments from film clips. We tested three demographically matched groups of subjects: subjects with focal lesions in the VM (n=15), brain-damaged comparison subjects with lesions excluding the VM (n=11), and a reference group of normal individuals with no history of neurological disease (n=32). Contrary to our expectation, we found that subjects with VM lesions gave dominance judgments on both tasks that did not differ significantly from those given by the other groups. Despite their grossly normal performance, however, subjects with VM lesions showed more subtle impairments specifically when judging static faces: they were less discriminative in their dominance judgments, and did not appear to make normal use of the gender and age of the faces in forming their judgments. The findings suggest that, in the laboratory tasks we used, damage to the VM does not necessarily impair judgments of social dominance, although it appears to result in alterations in strategy that might translate into behavioral impairments in real life.

    Crowdsourcing Formulaic Phrases: towards a new type of spoken corpus

    Corpora have revolutionised the way we describe and analyse language in use. The sheer scale of collections of texts, along with the appropriate software for structuring and analysing this data, has led to a fuller understanding of the characteristics of language use in context. However, the development of corpora has been unbalanced. The assembly of collections of written texts is relatively straightforward, and as a result, the field has a number of very large corpora which focus mainly on written texts, although often with some spoken elements included, e.g. the COCA (520m words), GloWbE (1.9b), and the enTenTen (19b). In addition, a number of corpora now include samples of language used in social media and other web contexts alongside more traditional written and (transcribed) spoken language samples, e.g. the Open American National Corpus (planned corpus size 100m words, mirroring the British National Corpus). Conversely, the development of spoken corpora has lagged behind, mainly due to the time-consuming nature of recording and transcribing spoken content. Most of the spoken corpora that exist consist of material that is easily gathered by automated collection software, such as radio talk show and television news transcripts and other entertainment programming (e.g. the spoken elements of COCA). The nature of this spoken discourse is described as unscripted; however, it is certainly constrained, e.g. talk show radio has certain expectations about how the host will moderate the discussion. While the scripted/constrained oral content in these spoken corpora has proved informative in terms of the nature of spoken discourse (see Adolphs and Carter, 2013; Raso and Mello, 2014; Aijmer, 2002 and Carter and McCarthy, 1999 for notable studies), it is no substitute for spontaneous, unscripted oral discourse. Furthermore, even the automated collection of scripted/constrained spoken discourse has not yet enabled the development of large spoken corpora of a size comparable to the largest written corpora (e.g. the spoken component of the 100m British National Corpus is only 10m words, with a further 10m words added in the new spoken BNC2014). The 10m word subcorpus of the BNC contains 4m words of spontaneous speech, and is controlled for a number of sociolinguistic and contextual variables. There are a number of smaller spoken corpora available, e.g. the Michigan Corpus of Academic Spoken English (MICASE), which at just under 2m words is both modest in size and quite specialised in content. This trend is reflected in other corpora of spoken discourse. Spontaneous spoken discourse forms a large part of everyday language use, and the development of larger and more representative corpora of spontaneous oral language is therefore desirable to inform linguistic description. The main constraint on this ambition has always been the time-consuming nature and financial cost of compiling such corpora. Spoken corpora provide a unique resource for the exploration of how people interact in real-life communicative contexts. Depending on how spoken corpora are annotated (as discussed below), they present opportunities for examining patterns in, for example, spoken lexis and grammar, pragmatics, dialect and language variation. Spoken corpora are now used in a variety of different fields, from translation to reference and grammar works to studies of language change.
The need for spontaneous unscripted corpora seems uncontroversial; however, compiling such corpora in the traditional way remains a formidable task. Advances have been made in other areas by utilizing the power of people volunteering information about what they think and do. This approach is often referred to as crowdsourcing, and it holds the promise both of overcoming some of the difficulties outlined above and of adding useful aspects to corpus compilation which traditional methods cannot offer. This paper thus explores a new approach to collecting samples of naturally occurring spoken language, which may allow researchers to take advantage of the burgeoning area of information crowdsourcing. Instead of relying on the typical recording and transcribing of spoken discourse, crowdsourcing may allow the collection of real-time data ‘in the wild’ by having participants report the language they hear around them. Specifically, we aim to investigate the level of precision and recall of the ‘crowd’ when it comes to reporting language they have heard in certain real-life contexts, alongside the use of a crowdsourcing toolkit to facilitate this task. This method of ‘reporting’ usage does, of course, come with its own issues, many of which have been highlighted in the literature on Discourse Completion Tasks (Schauer and Adolphs, 2006), and it can merely be regarded as a proxy for usage. Investigating user memory in this context can therefore only be regarded as a first step in assessing the overall viability of the proposed approach to collecting language samples. As a focusing device for the selection of reported language samples, we draw on the use of formulaic phrases, an area that has received considerable attention from different areas of applied linguistics.
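The precision and recall framing mentioned above can be illustrated with a short sketch. The following Python example is a hypothetical illustration, not the study's toolkit: the phrase sets and the function name are invented for demonstration, and crowd reports are simply compared against a reference list of phrases actually used in the recorded context.

```python
# Illustrative sketch only: precision and recall of crowd-reported formulaic
# phrases against a reference list of phrases actually used in the context.
# The phrase sets below are hypothetical, not data from the study.

def precision_recall(reported, reference):
    """Return (precision, recall) for a set of reported phrases."""
    reported, reference = set(reported), set(reference)
    hits = reported & reference  # phrases correctly reported
    precision = len(hits) / len(reported) if reported else 0.0
    recall = len(hits) / len(reference) if reference else 0.0
    return precision, recall

# Phrases actually heard in the sampled context (reference transcript).
reference_phrases = {"you know", "at the end of the day", "sort of", "to be honest"}

# Phrases a crowd participant later reported hearing.
reported_phrases = {"you know", "sort of", "kind of"}

p, r = precision_recall(reported_phrases, reference_phrases)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

In this toy case, two of the three reported phrases appear in the reference list (precision 0.67), and two of the four reference phrases were recalled (recall 0.50).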

    The Human Amygdala and the Induction and Experience of Fear

    Although clinical observations suggest that humans with amygdala damage have abnormal fear reactions and a reduced experience of fear [1-3], these impressions have not been systematically investigated. To address this gap, we conducted a new study in a rare human patient, SM, who has focal bilateral amygdala lesions [4]. To provoke fear in SM, we exposed her to live snakes and spiders, took her on a tour of a haunted house, and showed her emotionally evocative films. On no occasion did SM exhibit fear, and she never endorsed feeling more than minimal levels of fear. Likewise, across a large battery of self-report questionnaires, 3 months of real-life experience sampling, and a life history replete with traumatic events, SM repeatedly demonstrated an absence of overt fear manifestations and an overall impoverished experience of fear. Despite her lack of fear, SM is able to exhibit other basic emotions and experience the respective feelings. The findings support the conclusion that the human amygdala plays a pivotal role in triggering a state of fear and that the absence of such a state precludes the experience of fear itself

    Corpus-assisted literary evaluation

    Fleur Adcock’s poem, Street Song, is evaluated by the stylistician, Roger Fowler, as ‘dynamic and disturbing’. I agree with his literary evaluation. These unsettling effects take place in initial response to the poem, effects which attract me into the work. In other words, they are experienced before proper reflection on and analysis of the poem and individual interpretation of it. Implicit within Fowler’s evaluation is that this is likely to apply to readers generally. The purpose of this article is to show how empirical corpus evidence can usefully provide substantiation of such initial evaluations of literary works, showing whether or not they are likely to be stereotypically experienced by readers. In drawing on both schema theory and corpus analysis to achieve this, the article makes links between cognitive stylistic and corpus stylistic foci.

    Panic Anxiety in Humans with Bilateral Amygdala Lesions: Pharmacological Induction via Cardiorespiratory Interoceptive Pathways

    We previously demonstrated that carbon dioxide inhalation could induce panic anxiety in a group of rare lesion patients with focal bilateral amygdala damage. To further elucidate the amygdala-independent mechanisms leading to aversive emotional experiences, we retested two of these patients (B.G. and A.M.) to examine whether triggering palpitations and dyspnea via stimulation of non-chemosensory interoceptive channels would be sufficient to elicit panic anxiety. Participants rated their affective and sensory experiences following bolus infusions of either isoproterenol, a rapidly acting peripheral β-adrenergic agonist akin to adrenaline, or saline. Infusions were administered during two separate conditions: a panic induction and an assessment of cardiorespiratory interoception. Isoproterenol infusions induced anxiety in both patients, and full-blown panic in one (patient B.G.). Although both patients demonstrated signs of diminished awareness for cardiac sensation, patient A.M., who did not panic, reported a complete lack of awareness for dyspnea, suggestive of impaired respiratory interoception. These findings indicate that the amygdala may play a role in dynamically detecting changes in cardiorespiratory sensation. The induction of panic anxiety provides further evidence that the amygdala is not required for the conscious experience of fear induced via interoceptive sensory channels

    Subthalamic nucleus stimulation affects orbitofrontal cortex in facial emotion recognition: a PET study

    Deep brain stimulation (DBS) of the bilateral subthalamic nucleus (STN) in Parkinson's disease is thought to produce adverse events such as emotional disorders, and in a recent study, we found fear recognition to be impaired as a result. These changes have been attributed to disturbance of the STN's limbic territory and would appear to confirm that the negative emotion recognition network passes through the STN. In addition, it is now widely acknowledged that damage to the orbitofrontal cortex (OFC), especially the right side, can result in impaired recognition of facial emotions (RFE). In this context, we hypothesized that this reduced recognition of fear is correlated with modifications in the cerebral glucose metabolism of the right OFC. The objective of the present study was first, to reinforce our previous results by demonstrating reduced fear recognition in our Parkinson's disease patient group following STN DBS and, second, to correlate these emotional performances with glucose metabolism using 18FDG-PET. The 18FDG-PET and RFE tasks were both performed by a cohort of 13 Parkinson's disease patients 3 months before and 3 months after surgery for STN DBS. As predicted, we observed a significant reduction in fear recognition following surgery and obtained a positive correlation between these neuropsychological results and changes in glucose metabolism, especially in the right OFC. These results confirm the role of the STN as a key basal ganglia structure in limbic circuits

    Integration Between Cerebral Hemispheres Contributes to Defense Mechanisms

    Defense mechanisms are mental functions which facilitate coping when real or imagined events challenge personal wishes, needs, and feelings. Whether defense mechanisms have a specific neural basis is unknown. The present research tested the hypothesis that interhemispheric integration plays a critical role in defense mechanism development, by studying a unique sample of patients born without the corpus callosum (agenesis of the corpus callosum; AgCC). Adults with AgCC (N = 27) and matched healthy volunteers (N = 30) were compared on defense mechanism use across increasing levels of developmental maturity (denial, least; projection, intermediate; identification, most). Narratives generated in response to Thematic Apperception Test images were scored according to the Defense Mechanism Manual. Greater use of denial and less use of identification were found in persons with AgCC compared to the healthy volunteers. This difference emerged after age 18, when full maturation of defenses among healthy individuals was expected. The findings provide clinically important characterization of social and emotional processing in persons with AgCC. More broadly, the results support the hypothesis that functional integration across the hemispheres is important for the development of defense mechanisms.

    The gray matter volume of the amygdala is correlated with the perception of melodic intervals: a voxel-based morphometry study

    Music is not simply a series of organized pitches, rhythms, and timbres; it is capable of evoking emotions. In the present study, voxel-based morphometry (VBM) was employed to explore the neural basis that may link music to emotion. To do this, we identified the neuroanatomical correlates of the ability to extract pitch interval size in a music segment (i.e., interval perception) in a large population of healthy young adults (N = 264). Behaviorally, we found that interval perception was correlated with daily emotional experiences, indicating an intrinsic link between music and emotion. Neurally, and as expected, we found that interval perception was positively correlated with the gray matter volume (GMV) of the bilateral temporal cortex. More importantly, a larger GMV of the bilateral amygdala was associated with better interval perception, suggesting that the amygdala, which is a neural substrate of emotional processing, is also involved in music processing. In sum, our study provides some of the first neuroanatomical evidence of an association between the amygdala and music, which contributes to our understanding of exactly how music evokes emotional responses.
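As a rough illustration of the kind of brain-behaviour association reported here, the sketch below correlates a behavioural interval-perception score with an amygdala grey matter volume estimate using a Pearson correlation. The variable names and synthetic data are assumptions for illustration only; the study itself used whole-brain voxel-based morphometry rather than this simplified region-of-interest approach.

```python
# Minimal region-of-interest sketch, assuming per-participant amygdala grey
# matter volume (GMV) estimates and interval-perception scores have already
# been extracted. All data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 264  # sample size reported in the abstract

amygdala_gmv = rng.normal(size=n)                         # standardized GMV (hypothetical)
interval_score = 0.3 * amygdala_gmv + rng.normal(size=n)  # behavioural score (hypothetical)

r, p = stats.pearsonr(amygdala_gmv, interval_score)
print(f"r = {r:.2f}, p = {p:.3g}")
```

In practice, VBM analyses of this kind regress behaviour on smoothed, modulated grey matter maps voxel by voxel, controlling for covariates such as total intracranial volume, age, and sex.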

    Emotional arousal in agenesis of the corpus callosum

    While the processing of verbal and psychophysiological indices of emotional arousal has been investigated extensively in relation to the left and right cerebral hemispheres, it remains poorly understood how both hemispheres normally function together to generate emotional responses to stimuli. Drawing on a unique sample of nine high-functioning subjects with complete agenesis of the corpus callosum (AgCC), we investigated this issue using standardized emotional visual stimuli. Compared to healthy controls, subjects with AgCC showed a larger variance in their cognitive ratings of valence and arousal, and an insensitivity to the emotion category of the stimuli, especially for negatively-valenced stimuli, and especially for their arousal ratings. Despite their impaired cognitive ratings of arousal, some subjects with AgCC showed large skin-conductance responses, and in general skin-conductance responses discriminated emotion categories and correlated with stimulus arousal ratings. We suggest that largely intact right-hemisphere mechanisms can support psychophysiological emotional responses, but that the lack of communication between the hemispheres, perhaps together with dysfunction of the anterior cingulate cortex, interferes with normal verbal ratings of arousal, a mechanism in line with some models of alexithymia.

    The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information
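To make the spatial-frequency manipulation concrete, the sketch below derives low- (LSF) and high-spatial-frequency (HSF) versions of a broad-spatial-frequency (BSF) image by Gaussian low-pass filtering and subtraction. The input array and the filter width are placeholders, not the stimuli or parameters used in the study.

```python
# Illustrative sketch: splitting a broad-spatial-frequency (BSF) image into
# low- (LSF) and high-spatial-frequency (HSF) components via Gaussian filtering.
# The input array and sigma are placeholders, not the study's stimuli/parameters.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
bsf = ndimage.gaussian_filter(rng.random((256, 256)), 2)  # stand-in for a grayscale face image

sigma = 4.0                                # low-pass width in pixels (assumed)
lsf = ndimage.gaussian_filter(bsf, sigma)  # keeps only low spatial frequencies
hsf = bsf - lsf                            # complementary high-spatial-frequency residual
```

Studies of this kind typically specify cutoffs in cycles per degree of visual angle and equate the filtered images for mean luminance and contrast.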