    Effect of vasopressin 1b receptor blockade on the hypothalamic-pituitary-adrenal response of chronically stressed rats to a heterotypic stressor

    Exposure to chronic restraint (CR) modifies the hypothalamic–pituitary–adrenal (HPA) axis response to subsequent acute stressors, with adaptation of the response to a homotypic stressor and sensitization of the response to a heterotypic stressor. Since vasopressin (AVP) activity has been reported to change during chronic stress, we investigated whether this is an important factor in HPA facilitation. We therefore tested whether vasopressin 1b receptor (AVPR1B) blockade altered the ACTH and corticosterone response to heterotypic stressors following CR stress. Adult male rats were exposed to CR, a single restraint, or were left undisturbed in the home cage. Twenty-four hours after the last restraint, rats were injected with either an AVPR1B antagonist (Org, 30 mg/kg, s.c.) or vehicle (5% mulgofen in saline, 0.2/kg, s.c.) and then exposed to restraint, lipopolysaccharide (LPS), or white noise. CR resulted in adaptation of the ACTH and corticosterone response to restraint, and this effect was not prevented by pretreatment with Org. Although we found no effect of CR on LPS-induced ACTH and corticosterone secretion, both repeated and single episodes of restraint induced sensitization of the ACTH, but not the corticosterone, response to acute noise. Pretreatment with Org reduced the exaggerated ACTH response to noise after both single and repeated exposure to restraint.

    A randomized, double-blind, placebo-controlled, crossover study to assess the immediate effect of sublingual glyceryl trinitrate on the ankle brachial pressure index, claudication, and maximum walking distance of patients with intermittent claudication

    Purpose: The goal of the present study was to assess the immediate effect of sublingual glyceryl trinitrate (GTN) in patients with intermittent claudication. Methods: We conducted a randomized, double-blind, placebo-controlled crossover study. Inclusion criteria consisted of a history of intermittent claudication, a resting ankle brachial pressure index (ABPI) of 1.00 or less, a 20% or greater fall in ABPI after exercise, and a maximum walking distance (MWD) of less than 250 m. Patients already receiving nitrates were excluded. In study 1, patients (n = 25) underwent a standard exercise test after randomization to receive either 800 μg of sublingual GTN or placebo. The postexercise ABPI was recorded. Then the crossover portion of the study was performed. In study 2, patients (n = 22) had their claudication distance and MWD measured. They were then randomized to receive either GTN or placebo spray, and the exercise test was repeated, with the claudication distance and MWD recorded, followed by the crossover portion of the study. Statistical analysis was performed with the Wilcoxon matched pairs signed ranks test and the Mann-Whitney U test. Results: In study 1, the median postexercise ABPIs for placebo and GTN were 0.29 and 0.36 (P = .0001). In study 2, the median claudication distance for both placebo and GTN groups was 70 m (P = .59). The median MWDs for the placebo and GTN groups were 105 and 125 m (P = .0084). Conclusion: GTN can decrease the fall in ABPI after exercise and increase the MWD. (J Vasc Surg 1998;28:895-900.)

    Motor excitability during visual perception of known and unknown spoken languages

    It is possible to comprehend speech and discriminate languages by viewing a speaker’s articulatory movements. Transcranial magnetic stimulation studies have shown that viewing speech enhances excitability in the articulatory motor cortex. Here, we investigated the specificity of this enhanced motor excitability in native and non-native speakers of English. Both groups were able to discriminate between speech movements related to a known (i.e., English) and an unknown (i.e., Hebrew) language. Motor excitability was higher during observation of a known language than of an unknown language or non-speech mouth movements, suggesting that motor resonance is enhanced specifically during observation of mouth movements that convey linguistic information. Surprisingly, however, excitability was equally high during observation of a static face. Moreover, motor excitability did not differ between native and non-native speakers. These findings suggest that the articulatory motor cortex processes several kinds of visual cues during speech communication.

    Effect of the glucocorticoid receptor antagonist Org 34850 on fast and delayed feedback of corticosterone release

    We investigated the effect of the glucocorticoid receptor (GR) antagonist Org 34850 on fast and delayed inhibition of corticosterone secretion in response to the synthetic glucocorticoid methylprednisolone (MPL). Male rats were implanted with a catheter in the right jugular vein, for blood sampling and MPL administration, and with an s.c. cannula for Org 34850 administration. All experiments were conducted at the diurnal hormonal peak in the late afternoon. Rats were connected to an automated sampling system, and blood samples were collected every 5 or 10 min. Org 34850 (10 mg/kg, s.c.) or vehicle (5% mulgofen in saline) was injected at 1630 h; 30 min later, rats received an injection of MPL (500 μg/rat, i.v.) or saline (0.1 ml/rat). We found that acute administration of MPL rapidly decreased basal corticosterone secretion, and this effect was not prevented by acute pretreatment with Org 34850. However, blockade of GR with Org 34850 prevented the delayed inhibition of corticosterone secretion by MPL, measured between 4 and 12 h after MPL administration. Our data suggest an involvement of GR in modulating the delayed, but not the fast, inhibition of basal corticosterone secretion induced by MPL.

    Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success.

    Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. Given the paucity of such relevant findings, we suggest that the best guarantee of a good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration.

    Explicit Processing Demands Reveal Language Modality-Specific Organization of Working Memory

    The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval in DS is not. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.

    How does visual language affect crossmodal plasticity and cochlear implant success?

    Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies, it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that 'cochlear implant sensitive periods' comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence linking the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation.

    The signer and the sign: Cortical correlates of person identity and language processing from point-light displays

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light displays under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, even though such body parts were invisible in the display. Right frontal regions were also recruited, a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of the identify-signer and identify-sign conditions showed that these tasks relied to different extents on posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer > sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole-body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001), it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network.
    In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer, & Giraud, 2005). Inferior temporal regions associated with the visual recognition of a person appear to be required during SL processing, for both carrier and content information. Crown Copyright (C) 2011 Published by Elsevier Ltd. All rights reserved.