
    Preliminary Evidence of Pre-Attentive Distinctions of Frequency-Modulated Tones that Convey Affect

    Recognizing emotion is an evolutionary imperative. An early stage of auditory scene analysis involves the perceptual grouping of acoustic features, which can be based on both temporal coincidence and spectral features such as perceived pitch. Perceived pitch, or fundamental frequency (F0), is an especially salient cue for differentiating affective intent through speech intonation (prosody). We hypothesized that: (1) simple frequency-modulated (FM) tone abstractions, based on the parameters of actual prosodic stimuli, would be reliably classified as representing differing emotional categories; and (2) such differences would yield significant mismatch negativities (MMNs), an index of pre-attentive deviance detection within the auditory environment. We constructed a set of FM tones that approximated the F0 mean and variation of reliably recognized happy and neutral prosodic stimuli. These stimuli were presented to 13 subjects using a passive-listening oddball paradigm. As control conditions, we additionally included stimuli with no frequency modulation and FM tones with identical carrier frequencies but differing modulation depths. Following electrophysiological recording, subjects were asked to identify the sounds they heard as happy, sad, angry, or neutral. We observed that FM tones abstracted from happy and no-expression speech stimuli elicited MMNs. Post hoc behavioral testing revealed that subjects identified the FM tones consistently. Finally, we also observed that FM tones and no-FM tones elicited equivalent MMNs. MMNs to FM tones that differentiate affect suggest that these abstractions may be sufficient to characterize prosodic distinctions, and that such distinctions can be represented in pre-attentive auditory sensory memory.
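    The stimulus construction described above lends itself to a compact signal-processing illustration. The Python sketch below synthesizes a sinusoidally frequency-modulated tone and arranges a passive-oddball sequence of frequent "neutral-like" standards and rare "happy-like" deviants. It is a minimal sketch only: every numeric value (carrier frequency, modulation rate and depth, duration, deviant probability) is an assumed placeholder, not a parameter from the study.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed, not from the study)

def fm_tone(carrier_hz, mod_rate_hz, mod_depth_hz, dur_s, fs=FS):
    """Sinusoidal FM: instantaneous frequency = carrier + depth * sin(2*pi*rate*t).

    The phase is the time integral of the instantaneous frequency, so the
    modulation term integrates to a -(depth / rate) * cos(...) component.
    """
    t = np.arange(int(fs * dur_s)) / fs
    phase = 2 * np.pi * carrier_hz * t \
        - (mod_depth_hz / mod_rate_hz) * np.cos(2 * np.pi * mod_rate_hz * t)
    return np.sin(phase)

# Two abstractions differing in mean F0 and modulation depth (values assumed).
standard = fm_tone(carrier_hz=200, mod_rate_hz=5, mod_depth_hz=10, dur_s=0.3)  # "neutral-like"
deviant  = fm_tone(carrier_hz=300, mod_rate_hz=5, mod_depth_hz=60, dur_s=0.3)  # "happy-like"

# Passive-listening oddball sequence: ~15% deviants interspersed among standards.
rng = np.random.default_rng(seed=1)
sequence = [deviant if rng.random() < 0.15 else standard for _ in range(400)]
```

    An MMN analysis would then average EEG epochs time-locked to standards and deviants separately and examine the deviant-minus-standard difference wave.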

    “It's Not What You Say, But How You Say it”: A Reciprocal Temporo-frontal Network for Affective Prosody

    Humans communicate emotion vocally by modulating acoustic cues such as pitch, intensity, and voice quality. Research has documented how the relative presence or absence of such cues alters the likelihood of perceiving an emotion, but the neural underpinnings of acoustic cue-dependent emotion perception remain obscure. Using functional magnetic resonance imaging in 20 subjects, we examined a reciprocal circuit consisting of superior temporal cortex, amygdala, and inferior frontal gyrus that may underlie affective prosodic comprehension. Results showed that increased saliency of emotion-specific acoustic cues was associated with increased activation in superior temporal cortex [planum temporale (PT), posterior superior temporal gyrus (pSTG), and posterior middle temporal gyrus (pMTG)] and amygdala, whereas decreased saliency of acoustic cues was associated with increased inferior frontal activity and temporo-frontal connectivity. These results suggest that sensory-integrative processing is facilitated when the acoustic signal is rich in affective information, yielding increased activation in temporal cortex and amygdala. Conversely, when the acoustic signal is ambiguous, greater evaluative processes are recruited, increasing activation in inferior frontal gyrus (IFG) and IFG-STG connectivity. Auditory regions may thus integrate acoustic information with amygdala input to form emotion-specific representations, which are evaluated within inferior frontal regions.

    Peripheral-Blood Stem Cells versus Bone Marrow from Unrelated Donors

    BACKGROUND Randomized trials have shown that the transplantation of filgrastim-mobilized peripheral-blood stem cells from HLA-identical siblings accelerates engraftment but increases the risks of acute and chronic graft-versus-host disease (GVHD), as compared with the transplantation of bone marrow. Some studies have also shown that peripheral-blood stem cells are associated with a decreased rate of relapse and improved survival among recipients with high-risk leukemia. METHODS We conducted a phase 3, multicenter, randomized trial of transplantation of peripheral-blood stem cells versus bone marrow from unrelated donors to compare 2-year survival probabilities with the use of an intention-to-treat analysis. Between March 2004 and September 2009, we enrolled 551 patients at 48 centers. Patients were randomly assigned in a 1:1 ratio to peripheral-blood stem-cell or bone marrow transplantation, stratified according to transplantation center and disease risk. The median follow-up of surviving patients was 36 months (interquartile range, 30 to 37). RESULTS The overall survival rate at 2 years in the peripheral-blood group was 51% (95% confidence interval [CI], 45 to 57), as compared with 46% (95% CI, 40 to 52) in the bone marrow group (P=0.29), with an absolute difference of 5 percentage points (95% CI, −3 to 14). The overall incidence of graft failure in the peripheral-blood group was 3% (95% CI, 1 to 5), versus 9% (95% CI, 6 to 13) in the bone marrow group (P=0.002). The incidence of chronic GVHD at 2 years in the peripheral-blood group was 53% (95% CI, 45 to 61), as compared with 41% (95% CI, 34 to 48) in the bone marrow group (P=0.01). There were no significant between-group differences in the incidence of acute GVHD or relapse. CONCLUSIONS We did not detect significant survival differences between peripheral-blood stem-cell and bone marrow transplantation from unrelated donors. Exploratory analyses of secondary end points indicated that peripheral-blood stem cells may reduce the risk of graft failure, whereas bone marrow may reduce the risk of chronic GVHD. (Funded by the National Heart, Lung, and Blood Institute–National Cancer Institute and others; ClinicalTrials.gov number, NCT00075816.)
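    As a sanity check on the reported arithmetic, the short Python sketch below reproduces the 5-percentage-point survival difference and an approximate 95% confidence interval using a naive Wald interval for the difference of two proportions. This is an illustration only: the trial's actual estimates came from time-to-event (survival) analyses, and the per-arm sizes below merely assume the 551 randomized patients split roughly 1:1.

```python
from math import sqrt

# Reported 2-year survival rates; arm sizes are assumed (551 randomized 1:1).
p_pb, n_pb = 0.51, 275  # peripheral-blood stem-cell arm
p_bm, n_bm = 0.46, 276  # bone marrow arm

diff = p_pb - p_bm  # 0.05 -> the reported 5-percentage-point difference
# Wald standard error for the difference of two independent proportions.
se = sqrt(p_pb * (1 - p_pb) / n_pb + p_bm * (1 - p_bm) / n_bm)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference = {diff:.0%}, 95% CI ({lo:+.0%}, {hi:+.0%})")
# -> difference = 5%, 95% CI (-3%, +13%): close to the reported (-3 to 14),
#    with the residual gap attributable to the trial's survival-analysis methods.
```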

    Are basic auditory processes involved in source-monitoring deficits in patients with schizophrenia?

    Patients with schizophrenia (SZ) display deficits in both basic non-verbal auditory processing and source-monitoring of speech. To date, the contributions of basic auditory deficits to higher-order cognitive impairments, such as source-monitoring, and to clinical symptoms have yet to be elucidated. The aim of this study was to investigate the deficits in, and relationships between, basic auditory functions, source-monitoring performance, and clinical symptom severity in SZ. Auditory processing of 4 psychoacoustic features (pitch, intensity, amplitude, length) and 2 types of source-monitoring (internal monitoring and reality monitoring) were assessed in 29 SZ patients and 29 healthy controls. Clinical symptoms were evaluated in patients with the Positive and Negative Syndrome Scale. Compared with controls, SZ patients showed significant reductions in both global basic auditory processing (p < .0005, d = 1.16) and source-monitoring (p < .0005, d = 1.24) abilities. The two deficits were significantly correlated in patients and across groups (all p < .05). Pitch-processing skills were negatively correlated with positive symptom severity (r = -0.4, p < .05). A stepwise regression analysis showed that pitch discrimination was a significant predictor of source-monitoring performance. These results suggest that cognitive mechanisms associated with the discrimination of basic auditory features are most compromised in patients with source-monitoring disability. Basic auditory processing may index pathophysiological processes that are critical for optimal source-monitoring in schizophrenia and that are involved in positive symptoms.
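    To make the shape of the correlational analysis concrete, the Python sketch below mirrors it on synthetic placeholder data (29 simulated patients, matching the abstract's sample size; none of the numbers are study data): it correlates a pitch-discrimination score with source-monitoring performance and positive-symptom severity, then fits the single-predictor regression implied by the stepwise result.

```python
# Illustration only: synthetic data standing in for the per-patient measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 29  # patients, matching the sample size in the abstract

# Hypothetical per-patient scores (higher = better discrimination/performance).
pitch = rng.normal(size=n)                                 # pitch-discrimination score
source_mon = 0.6 * pitch + rng.normal(scale=0.8, size=n)   # source-monitoring score
panss_pos = -0.4 * pitch + rng.normal(scale=0.9, size=n)   # positive symptom severity

r_sm, p_sm = stats.pearsonr(pitch, source_mon)  # expect a positive correlation
r_sx, p_sx = stats.pearsonr(pitch, panss_pos)   # expect a negative correlation, cf. r = -0.4
slope, intercept, r, p, se = stats.linregress(pitch, source_mon)
print(f"pitch ~ source-monitoring: r = {r_sm:.2f} (p = {p_sm:.3f})")
print(f"pitch ~ positive symptoms: r = {r_sx:.2f} (p = {p_sx:.3f})")
print(f"regression: source_mon = {slope:.2f} * pitch + {intercept:.2f}")
```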