796 research outputs found

    Melodic contour identification and speech recognition by school-aged children

    Using the Sung Speech Corpus (SSC), a single database that contains musical pitch, timbre variations, and speech information for use in identification tasks, the current study explored the development of normal-hearing children's ability to use pitch and timbre cues. Thirteen normal-hearing children aged 7 to 16 years were recruited and divided into two groups: Younger (7-9) and Older (10-16). Musical experience was also taken into account. Testing used the Angel Sound™ program, adapted from previous studies, most recently Crew, Galvin, and Fu (2015). In quiet, participants identified either the pitch contour or a five-word sentence while the other dimension was manipulated. Sentence recognition was also tested at three signal-to-noise ratios (SNRs: -3, 0, and 3 dB). For sentence recognition in quiet, children with musical training performed better than those without, and a significant interaction between age group and musical experience showed that Younger children benefited more from musical training than Older, musically trained children. A significant effect of pitch contour on sentence recognition in noise showed that, for all children, naturally produced speech stimuli were easier to identify in competing background noise than speech stimuli with an unnatural pitch contour. A significant effect of speech timbre on melodic contour identification (MCI) showed that MCI performance decreased as timbre complexity increased. The study concluded that pitch and timbre cues interfere with each other in child listeners, depending on the listening demands (SNR, task, etc.), and that music training can improve overall speech and music perception.
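
    The noise conditions above are defined by signal-to-noise ratio. As a rough illustration only (not the study's actual stimulus-preparation code), mixing a speech signal with noise at a target SNR can be sketched as follows; the function name and signal arrays are hypothetical:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. Inputs are equal-length mono sample arrays."""
    speech = np.asarray(speech, dtype=float)
    noise = np.asarray(noise, dtype=float)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: SNR_dB = 10*log10(Ps / Pn)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / p_noise)
    return speech + scaled_noise
```

    At -3 dB the noise carries twice the power of the speech; at +3 dB the reverse holds, which is why sentence recognition typically degrades monotonically across the three conditions.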

    Mandarin speech perception in combined electric and acoustic stimulation.

    For deaf individuals with residual low-frequency acoustic hearing, combined use of a cochlear implant (CI) and hearing aid (HA) typically provides better speech understanding than either device alone. Because of coarse spectral resolution, CIs do not provide the fundamental frequency (F0) information that contributes to understanding of tonal languages such as Mandarin Chinese. The HA can provide a good representation of F0 and, depending on the range of aided acoustic hearing, first and second formant (F1 and F2) information. In this study, Mandarin tone, vowel, and consonant recognition in quiet and noise was measured in 12 adult Mandarin-speaking bimodal listeners with the CI alone and with the CI+HA. Tone recognition was significantly better with the CI+HA in noise, but not in quiet. Vowel recognition was significantly better with the CI+HA in quiet, but not in noise. There was no significant difference in consonant recognition between the CI alone and the CI+HA in quiet or in noise. There was a wide range in bimodal benefit, with improvements often greater than 20 percentage points in some tests and conditions. The bimodal benefit was compared to CI subjects' HA-aided pure-tone average (PTA) thresholds between 250 and 2000 Hz; subjects were divided into two groups: "better" PTA (<50 dB HL) or "poorer" PTA (>50 dB HL). The bimodal benefit differed significantly between groups only for consonant recognition. The bimodal benefit for tone recognition in quiet was significantly correlated with CI experience, suggesting that bimodal CI users learn to better combine low-frequency spectro-temporal information from acoustic hearing with temporal envelope information from electric hearing. Given the small number of subjects in this study (n = 12), further research with Chinese bimodal listeners may provide more information regarding the contribution of acoustic and electric hearing to tonal language perception.
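
    The PTA-based grouping described above can be illustrated with a short sketch. The 250-2000 Hz frequency range and the 50 dB HL cutoff come from the study; the function names and example threshold values below are hypothetical:

```python
def pta(thresholds_db_hl):
    """Pure-tone average: mean of the aided thresholds (dB HL)
    across the octave frequencies supplied (here 250-2000 Hz)."""
    return sum(thresholds_db_hl.values()) / len(thresholds_db_hl)

def pta_group(thresholds_db_hl, cutoff_db=50.0):
    """Classify a listener as 'better' (PTA < cutoff) or 'poorer' PTA."""
    return "better" if pta(thresholds_db_hl) < cutoff_db else "poorer"

# Hypothetical HA-aided audiogram for one listener (dB HL per Hz):
aided = {250: 35, 500: 40, 1000: 45, 2000: 55}
```

    For this made-up audiogram the PTA is 43.75 dB HL, placing the listener in the "better" PTA group.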

    Prosodic detail in Neapolitan Italian

    Recent findings on phonetic detail have been taken as supporting exemplar-based approaches to prosody. Through four experiments on both production and perception of both melodic and temporal detail in Neapolitan Italian, we show that prosodic detail is not incompatible with abstractionist approaches either. Specifically, we suggest that the exploration of prosodic detail leads to a refined understanding of the relationships between richly specified, continuously varying phonetic information on one side and coarse, phonologically structured contrasts on the other, thus offering insights on how pragmatic information is conveyed by prosody.

    Sensitivity to musical emotion is influenced by tonal structure in congenital amusia

    Emotional communication in music depends on multiple attributes, including psychoacoustic features and tonal system information, the latter of which is unique to music. The present study investigated whether congenital amusia, a lifelong disorder of musical processing, impacts sensitivity to musical emotion elicited by timbre and tonal system information. Twenty-six amusics and 26 matched controls made tension judgments on Western (familiar) and Indian (unfamiliar) melodies played on piano and sitar. Like controls, amusics used timbre cues to judge musical tension in Western and Indian melodies. While controls assigned significantly lower tension ratings to Western melodies than to Indian melodies, thus showing a tonal familiarity effect on tension ratings, amusics provided comparable tension ratings for Western and Indian melodies on both timbres. Furthermore, amusics rated Western melodies as more tense than controls did, as they relied less on tonality cues in rating tension for Western melodies. The implications of these findings for emotional responses to music are discussed.

    Music and the brain: disorders of musical listening

    The study of the brain bases for normal musical listening has advanced greatly in the last 30 years. The evidence from basic and clinical neuroscience suggests that listening to music involves many cognitive components with distinct brain substrates. Using patient cases reported in the literature, we develop an approach for understanding disordered musical listening that is based on the systematic assessment of the perceptual and cognitive analysis of music and its emotional effect. This approach can be applied to acquired and congenital deficits of musical listening, and to aberrant listening in patients with musical hallucinations. Both the bases for normal musical listening and the clinical assessment of its disorders now have a solid grounding in systems neuroscience.

    From heuristics-based to data-driven audio melody extraction

    The identification of the melody in a music recording is a relatively easy task for humans, but very challenging for computational systems. This task is known as "audio melody extraction", more formally defined as the automatic estimation of the pitch sequence of the melody directly from the audio signal of a polyphonic music recording. This thesis investigates the benefits of exploiting knowledge automatically derived from data for audio melody extraction, combining digital signal processing and machine learning methods. We extend the scope of melody extraction research by working with a varied dataset and multiple definitions of melody. We first present an overview of the state of the art and perform an evaluation focused on a novel symphonic music dataset. We then propose melody extraction methods based on a source-filter model and pitch contour characterisation, and evaluate them on a wide range of music genres. Finally, we explore novel timbre, tonal, and spatial features for contour characterisation, and propose a method for estimating multiple melodic lines. The combination of supervised and unsupervised approaches leads to advancements in melody extraction and shows a promising path for future research and applications.
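
    As a point of reference for the pitch-estimation problem the thesis addresses, a minimal autocorrelation-based F0 estimator for a single monophonic frame can be sketched as follows. This is a generic textbook baseline, not one of the thesis's methods (which target the much harder polyphonic case), and all names here are illustrative:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=800.0):
    """Estimate the fundamental frequency (Hz) of one monophonic frame
    by picking the autocorrelation peak within a plausible lag range."""
    frame = frame - np.mean(frame)
    # Non-negative lags of the autocorrelation function
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)   # shortest period considered
    lag_max = int(sr / fmin)   # longest period considered
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag
```

    Real melody extractors must additionally suppress accompaniment, track contours over time, and decide which of several concurrent pitches is "the melody", which is precisely where the data-driven methods above come in.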