87 research outputs found
Building Categories to Guide Behavior: How Humans Build and Use Auditory Category Knowledge Throughout the Lifespan
Although categorization has been studied in depth throughout development in the visual domain (e.g., Gelman & Meyer, 2011; Sloutsky, 2010), there is little evidence examining how children and adults categorize everyday auditory objects (e.g., dog barks, trains, song, speech) or how category knowledge affects the way children and adults listen to these sounds during development. In two separate studies, I examined how listeners of all ages differentiated the multidimensional acoustic categories of speech and song, and I determined whether listeners used category knowledge to process the sounds they encounter every day. In Experiment 1, listeners of all ages were able to categorize speech and song, and categorization ability increased with age. Four- and 6-year-olds were more susceptible to the musical acoustic characteristics of ambiguous speech excerpts than 8-year-olds and adults, but all ages relied on F0 stability and average syllable duration to differentiate speech and song. Finally, 4-year-olds who were better at categorizing speech and song also had higher vocabulary scores, providing some of the first evidence that the ability to categorize speech and song may have cascading benefits for language development. Experiment 2 provided the first evidence that listeners of all ages experience change deafness. However, change deafness did not differ with age, even though overall sensitivity for detecting changes increased with age. Children and adults made more errors for within-category changes than for small acoustic changes, suggesting that all ages relied heavily on semantic category knowledge when detecting changes in complex scenes. These studies highlight the different roles that acoustic and semantic factors play when listeners are categorizing sounds compared to when they are using their knowledge to process sounds in complex scenes.
The Role of Music-Specific Representations When Processing Speech: Using a Musical Illusion to Elucidate Domain-Specific and -General Processes
When listening to music and language sounds, it is unclear whether adults recruit domain-specific or domain-general mechanisms to make sense of incoming sounds. Unique acoustic characteristics, such as a greater reliance on rapid temporal transitions in speech relative to song, may introduce misleading interpretations concerning shared and overlapping processes in the brain. By using a stimulus that is ecologically valid and can be perceived as speech or song depending on context, the contributions of low- and high-level mechanisms may be teased apart. The stimuli employed in all experiments are auditory illusions from speech to song reported by Deutsch et al. (2003, 2011) and Tierney et al. (2012). The current experiments found that (1) non-musicians also perceive the speech-to-song illusion and experience a similar disruption of the transformation as a result of pitch transpositions; (2) the contribution of rhythmic regularity to the perceptual transformation from speech to song is unclear across several different examples of the auditory illusion, and clear order effects occur because of the within-subjects design; and (3) when comparing pitch change sensitivity in a speech mode of listening and, after several repetitions, a song mode of listening, only the song mode indicated the recruitment of music-specific representations. Together, these studies indicate the potential for using the auditory illusion from speech to song in future research. The final experiment also tentatively demonstrates a behavioral dissociation between the recruitment of mechanisms unique to musical knowledge and mechanisms unique to the processing of acoustic characteristics predominant in speech or song, because acoustic characteristics were held constant.
Music as a scaffold for listening to speech: Better neural phase-locking to song than speech
Neural activity synchronizes with the rhythmic input of many environmental signals, but the capacity of neural activity to entrain to the slow rhythms of speech is particularly important for successful communication. Compared to speech, song has greater rhythmic regularity, a more stable fundamental frequency, discrete pitch movements, and a metrical structure; these features may provide a temporal framework that helps listeners neurally track information better than the irregular rhythms of speech. The current study used EEG to examine whether entrainment to the syllable rate of linguistic utterances, as indexed by cerebro-acoustic phase coherence, was greater when listeners heard sung than spoken sentences. We assessed listeners' phase-locking in both easy (no time compression) and hard (50% time-compression) utterance conditions. Adults phase-locked equally well to speech and song in the easy listening condition. However, in the time-compressed condition, phase-locking was greater for sung than spoken utterances in the theta band (3.67–5 Hz). Thus, the musical temporal and spectral characteristics of song related to better phase-locking to the slow phrasal and syllable information (4–7 Hz) in the speech stream. These results highlight the possibility of using song as a tool for improving speech processing in individuals with language processing deficits, such as dyslexia.
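Cerebro-acoustic phase coherence quantifies how consistently neural phase tracks the phase of the stimulus envelope within a frequency band. As an illustration only (the study's actual analysis pipeline is not described in this abstract, and the signals below are synthetic), a minimal sketch using a band-pass filter and Hilbert-transform phase estimates:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_coherence(neural, envelope, fs, band=(3.67, 5.0)):
    """Magnitude of the mean phase difference between two signals
    after band-pass filtering: 1.0 = perfect phase-locking, ~0 = none."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ph_neural = np.angle(hilbert(filtfilt(b, a, neural)))
    ph_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    return np.abs(np.mean(np.exp(1j * (ph_neural - ph_env))))

fs = 250                                      # hypothetical sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
stim = np.sin(2 * np.pi * 4.3 * t)            # 4.3 Hz "syllable rhythm" envelope
locked = np.sin(2 * np.pi * 4.3 * t + 0.5)    # response with a constant phase lag
noise = np.random.default_rng(0).standard_normal(t.size)

print(phase_coherence(locked, stim, fs))      # near 1.0
print(phase_coherence(noise, stim, fs))       # much lower
```

A constant phase lag still yields coherence near 1, which is why the measure indexes consistency of tracking rather than zero-lag alignment.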
Familiarity modulates neural tracking of sung and spoken utterances
Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for benefits related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but did not show an effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. When participants' subjective ratings of perceived familiarity during the MEG testing session were used to group stimuli, however, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond simply the acoustic features of music, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
Linking prenatal experience to the emerging musical mind
The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture already begins within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.
Patient-Reported Toxicity and Quality-of-Life Profiles in Patients With Head and Neck Cancer Treated With Definitive Radiation Therapy or Chemoradiation
Purpose: Radiation therapy is an effective but burdensome treatment for head and neck cancer (HNC). We aimed to characterize the severity and time pattern of patient-reported symptoms and quality of life in a large cohort of patients with HNC treated with definitive radiation therapy, with or without systemic treatment. Methods and Materials: A total of 859 patients with HNC treated between 2007 and 2017 prospectively completed the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire-Head and Neck Cancer module (QLQ-HN35) and Core Quality of Life Questionnaire (QLQ-C30) at regular intervals during and after treatment for up to 5 years. Patients were classified into 3 subgroups: early larynx cancer, infrahyoideal cancer, and suprahyoideal cancer. Outcome scales of both questionnaires were quantified per subgroup and time point by means of average scores and the frequency distribution of categorized severity (none, mild, moderate, and severe). Time patterns and symptom severity were characterized. Toxicity profiles were compared using linear mixed model analysis. Additional toxicity profiles based on age, human papillomavirus status, treatment modality, smoking status, tumor site, and treatment period were characterized as well. Results: The study population consisted of 157 patients with early larynx cancer, 304 with infrahyoideal cancer, and 398 with suprahyoideal cancer. The overall questionnaire response rate was 83%. Generally, the EORTC QLQ-HN35 symptoms reported showed a clear time pattern, with increasing scores during treatment followed by a gradual recovery in the first 2 years. Distinct toxicity profiles were seen across subgroups (P < .001), with generally less severe symptom scores in the early larynx subgroup. 
The EORTC QLQ-C30 functioning, quality-of-life, and general symptom scores showed a less evident time pattern and less pronounced differences in mean scores between subgroups, although differences were still significant (P < .001). Differences in mean scores were most pronounced for role functioning, appetite loss, fatigue, and pain. Conclusions: We established patient-reported toxicity and quality-of-life profiles that showed different patterns for 3 subgroups of patients with HNC. These profiles provide detailed information on the severity and persistence of various symptoms as experienced by patients during and after definitive radiation therapy. They can be used to inform treatment of future patients and may serve as a benchmark for future studies.
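The frequency-distribution step described above can be sketched simply. EORTC outcome scales are linearly transformed to a 0–100 range; the cutoffs below (0 = none, ≤33 = mild, ≤66 = moderate, otherwise severe) are illustrative assumptions only, not the thresholds used in the study:

```python
from collections import Counter

def categorize(score):
    """Bin a 0-100 scale score into a severity category.
    Cutoffs are hypothetical, for illustration only."""
    if score == 0:
        return "none"
    if score <= 33:
        return "mild"
    if score <= 66:
        return "moderate"
    return "severe"

symptom_scores = [0, 10, 33, 50, 70, 100]   # hypothetical patient scores
distribution = Counter(categorize(s) for s in symptom_scores)
print(distribution)
```

Summarizing per subgroup and time point then reduces to grouping scores before counting, which is what yields the reported frequency distributions of categorized severity.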
Monitoring the initial pulmonary absorption of two different beclomethasone dipropionate aerosols employing a human lung reperfusion model
BACKGROUND: The pulmonary residence time of inhaled glucocorticoids as well as their rate and extent of absorption into systemic circulation are important facets of their efficacy-safety profile. We evaluated a novel approach to elucidate the pulmonary absorption of an inhaled glucocorticoid. Our objective was to monitor and compare the combined process of drug particle dissolution, pro-drug activation, and time course of initial distribution from human lung tissue into plasma for two different glucocorticoid formulations. METHODS: We chose beclomethasone dipropionate (BDP) delivered by two different commercially available HFA-propelled metered dose inhalers (Sanasthmax(®)/Becloforte™ and Ventolair(®)/Qvar™). Initially we developed a simple dialysis model to assess the transfer of BDP and its active metabolite from human lung homogenate into human plasma. In a novel experimental setting we then administered the aerosols into the bronchus of an extracorporally ventilated and reperfused human lung lobe and monitored the concentrations of BDP and its metabolites in the reperfusion fluid. RESULTS: Unexpectedly, we observed differences between the two aerosol formulations Sanasthmax(®)/Becloforte™ and Ventolair(®)/Qvar™ in both the dialysis and the human reperfusion model. The HFA-BDP formulated as Ventolair(®)/Qvar™ displayed a more rapid release from lung tissue compared to Sanasthmax(®)/Becloforte™. We succeeded in explaining and illustrating the observed differences between the two aerosols by their unique particle topology and divergent dissolution behaviour in human bronchial fluid. CONCLUSION: We conclude that though the ultrafine particles of Ventolair(®)/Qvar™ are beneficial for high lung deposition, they also yield a less desirable, more rapid systemic drug delivery. 
While the differences between Sanasthmax(®)/Becloforte™ and Ventolair(®)/Qvar™ were obvious in both the dialysis and lung perfusion experiments, the latter allowed us to record time courses of pro-drug activation and distribution that were more consistent with results of comparable clinical trials. Thus, the extracorporally reperfused and ventilated human lung is a highly valuable physiological model to explore the lung pharmacokinetics of inhaled drugs.
Myoblast sensitivity and fibroblast insensitivity to osteogenic conversion by BMP-2 correlates with the expression of Bmpr-1a
Background: Osteoblasts are considered to primarily arise from osseous progenitors within the periosteum or bone marrow. We have speculated that cells from local soft tissues may also take on an osteogenic phenotype. Myoblasts are known to adopt a bone gene program upon treatment with the osteogenic bone morphogenetic proteins (BMP-2, -4, -6, -7, -9), but their osteogenic capacity relative to other progenitor types is unclear. We further hypothesized that the sensitivity of cells to BMP-2 would correlate with BMP receptor expression. Methods: We directly compared the BMP-2 sensitivity of myoblastic murine cell lines and primary cells with osteoprogenitors from osseous tissues and fibroblasts. Fibroblasts forced to undergo myogenic conversion by transduction with a MyoD-expressing lentiviral vector (LV-MyoD) were also examined. Outcome measures included alkaline phosphatase expression, matrix mineralization, and expression of osteogenic genes (alkaline phosphatase, osteocalcin, and bone morphogenetic protein receptor-1A) as measured by quantitative PCR. Results: BMP-2 induced a rapid and robust osteogenic response in myoblasts and osteoprogenitors, but not in fibroblasts. Myoblasts and osteoprogenitors grown in osteogenic media rapidly upregulated Bmpr-1a expression. Chronic BMP-2 treatment resulted in peak Bmpr-1a expression at day 6 before declining, suggestive of a negative feedback mechanism. In contrast, fibroblasts expressed low levels of Bmpr-1a that was only weakly upregulated by BMP-2 treatment. Bioinformatics analysis confirmed the presence of myogenic responsive elements in the proximal promoter region of human and murine BMPR-1A/Bmpr-1a. Forced myogenic gene expression in fibroblasts was associated with a significant increase in Bmpr-1a expression and a synergistic increase in the osteogenic response to BMP-2. Conclusion: These data demonstrate the osteogenic sensitivity of muscle progenitors and provide a mechanistic insight into the variable response of different cell lineages to BMP-2.