28 research outputs found

    Effects of combination of linguistic and musical pitch experience on subcortical pitch encoding

    Musical experience and linguistic experience have both been shown to facilitate language and music perception. However, the precise nature of the interaction between music and language is still a subject of ongoing research. In this study, using a subcortical electrophysiological measure (the frequency following response), we examine how the combination of linguistic and musical pitch experience affects subcortical encoding of lexical and musical pitch. We compared musicians and non-musicians, all native speakers of a tone language, on subcortical encoding of linguistic and musical pitch. Musicians and non-musicians did not differ in brainstem encoding of lexical tones; however, musicians showed more robust brainstem encoding of musical pitch than non-musicians. These findings suggest that combined musical and linguistic pitch experience affects auditory brainstem encoding of linguistic and musical pitch differentially. The results also allow the speculation that native tone language speakers might use two distinct mechanisms, at least for the subcortical encoding of linguistic and musical pitch.
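
    FFR-based studies of this kind typically quantify pitch-encoding strength from the spectrum of the averaged response. The sketch below is a minimal illustration of that idea, estimating encoding strength as the spectral magnitude at the stimulus fundamental; the simulated response, sampling rate, and F0 are placeholder assumptions rather than parameters of this study.

```python
# Illustrative sketch only: quantify FFR pitch encoding as the spectral
# magnitude near the stimulus fundamental (F0). The simulated response,
# sampling rate, and F0 are placeholder assumptions, not study values.
import numpy as np

def f0_encoding_strength(ffr, fs, f0, bandwidth=10.0):
    """Mean spectral magnitude of `ffr` within `bandwidth` Hz of `f0`."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    band = (freqs >= f0 - bandwidth / 2) & (freqs <= f0 + bandwidth / 2)
    return spectrum[band].mean()

# Simulated averaged FFR: a noisy 120 Hz component standing in for
# phase-locked brainstem activity to a lexical or musical pitch.
fs, f0, dur = 16000, 120.0, 0.25
t = np.arange(int(fs * dur)) / fs
ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + np.random.randn(t.size)
print(f"F0 encoding strength: {f0_encoding_strength(ffr, fs, f0):.2f}")
```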

    Congenital amusics use a secondary pitch mechanism to identify lexical tones

    Amusia is a pitch perception disorder associated with deficits in the processing and production of both musical and lexical tones, deficits which previous reports have suggested may be confined to fine-grained pitch judgements. In the present study, speakers of tone languages, in which lexical tones are used to convey meaning, identified words in chimera stimuli containing conflicting pitch cues in the temporal fine structure and temporal envelope, and which therefore conveyed two distinct utterances. Amusics were more likely than controls to judge the word according to the envelope pitch cues. This demonstrates that the deficit in amusia is not confined to fine-grained pitch judgements alone, and is consistent with there being two distinct pitch mechanisms, with amusics relying atypically on a secondary mechanism based upon envelope cues.
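
    Chimera stimuli of the kind described above are commonly constructed by imposing the temporal envelope of one utterance on the temporal fine structure of another, usually band by band, so that the two cue types convey different words. The single-band sketch below illustrates the general idea using the Hilbert transform; the input signals are toy tones, and the study's actual stimulus construction may differ.

```python
# Minimal single-band chimera sketch: combine the Hilbert envelope of one
# signal with the temporal fine structure of another. Real chimera stimuli
# are usually built per frequency band; the inputs here are toy signals.
import numpy as np
from scipy.signal import hilbert

def make_chimera(env_source, tfs_source):
    """Envelope of `env_source` imposed on the fine structure of `tfs_source`."""
    envelope = np.abs(hilbert(env_source))
    fine_structure = np.cos(np.angle(hilbert(tfs_source)))
    return envelope * fine_structure

fs = 16000
t = np.arange(int(0.3 * fs)) / fs
# Toy "utterances": tones with different amplitude and pitch contours.
word_a = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 150 * t)
word_b = np.sin(2 * np.pi * 220 * t * (1 + 0.5 * t))  # rising pitch
chimera = make_chimera(word_a, word_b)  # envelope cues from A, fine-structure cues from B
```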

    Phase locked neural activity in the human brainstem predicts preference for musical consonance.

    When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, notes were presented in two ways: both notes to both ears, or each note to a different ear. The electrophysiological frequency following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears, the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In the condition in which both notes were presented to both ears, additional low-frequency components, corresponding to difference tones resulting from nonlinear cochlear processing, were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed these additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords. Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase-locked neural firing drives the perception of consonance.
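
    A toy calculation helps show why cochlear difference tones can enhance the harmonicity of consonant but not dissonant chords: for simple frequency ratios, the notes and the difference tone fall on low-numbered harmonics of a plausible common fundamental, whereas for complex ratios the implied fundamental is implausibly low and the harmonic numbers are high. The sketch below illustrates this arithmetic only; it is not the analysis used in the study.

```python
# Toy illustration of chord harmonicity: for a simple-ratio (consonant)
# dyad, the notes and the difference tone f2 - f1 are low-numbered
# harmonics of a plausible common fundamental; for a complex-ratio
# (dissonant) dyad, the implied fundamental is very low and the harmonic
# numbers are high, i.e. a poor fit to a single harmonic series.
from math import gcd

def dyad_harmonicity(name, f1, f2):
    f0 = gcd(int(f1), int(f2))   # fundamental of the common harmonic series
    diff_tone = f2 - f1          # simplest cochlear distortion product
    harmonics = sorted({f1 // f0, f2 // f0, diff_tone // f0})
    print(f"{name}: implied F0 = {f0} Hz, difference tone = {diff_tone} Hz, "
          f"components at harmonics {harmonics} of the implied F0")

dyad_harmonicity("Perfect fifth (3:2)", 220, 330)    # consonant
dyad_harmonicity("Minor second (~16:15)", 220, 235)  # dissonant (approximate ratio)
```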

    Subcortical representation of musical dyads: individual differences and neural generators

    When two notes are played simultaneously they form a musical dyad. The sensation of pleasantness, or "consonance", of a dyad is likely driven by the harmonic relations among the frequency components of the combined spectrum of the two notes. Previous work has demonstrated a relation between individual preference for consonant over dissonant dyads and the strength of neural temporal coding of the harmonicity of consonant relative to dissonant dyads, as measured using the electrophysiological "frequency-following response" (FFR). However, this work also demonstrated that both of these variables correlate strongly with musical experience. The current study was designed to determine whether the relation between consonance preference and neural temporal coding is maintained when controlling for musical experience. The results demonstrate that the strength of neural coding of harmonicity is predictive of individual preference for consonance even for non-musicians. An additional purpose of the current study was to assess the cochlear generation site of the FFR to low-frequency dyads. By comparing the reduction in FFR strength when high-pass masking noise was added to the output of a model of the auditory periphery, the study provides evidence that the FFR to low-frequency dyads results in part from basal cochlear generators.
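
    The dyads in FFR studies like this one are typically pairs of complex tones whose fundamentals stand in a simple (consonant) or complex (dissonant) frequency ratio. The sketch below generates such stimuli under that assumption; the fundamentals, harmonic counts, and duration are placeholders, not the study's parameters.

```python
# Minimal sketch: synthesize consonant and dissonant dyads as pairs of
# complex tones. Fundamentals, harmonic counts, and duration are
# placeholder assumptions, not the stimulus parameters of the study.
import numpy as np

def complex_tone(f0, fs, dur, n_harmonics=6):
    """Sum of equal-amplitude harmonics of f0."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * f0 * h * t) for h in range(1, n_harmonics + 1))

def dyad(f0, ratio, fs=16000, dur=0.5):
    """Two simultaneous complex tones with fundamentals f0 and f0 * ratio."""
    mix = complex_tone(f0, fs, dur) + complex_tone(f0 * ratio, fs, dur)
    return mix / np.max(np.abs(mix))  # normalise to avoid clipping

consonant_dyad = dyad(110.0, 3 / 2)    # perfect fifth, simple ratio
dissonant_dyad = dyad(110.0, 16 / 15)  # minor second, complex ratio
```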

    Making Sense of Sounds: Category names

    Representation of descriptive words for each sound category for each sound type. Category names are indicated by an asterisk. doi: 10.3389/fpsyg.2018.01277

    Losing the music: aging affects the perception and subcortical neural representation of musical harmony

    When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline both in the perceptual distinction between, and in the distinctiveness of the neural representations of, different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords, as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response". The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices, and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding.
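
    The abstract does not spell out how the Neural Consonance Index is computed. As a loose, hypothetical sketch, one could contrast a harmonicity measure of the FFR to consonant chords with the same measure for dissonant chords, for example the proportion of FFR spectral energy falling at harmonics of each chord's implied fundamental; the functions below are illustrative assumptions, not the published definition.

```python
# Hypothetical sketch of a "Neural Consonance Index"-style contrast:
# harmonicity of the FFR to consonant chords minus that for dissonant
# chords. This formula is an illustrative assumption, not the published
# definition of the index.
import numpy as np

def ffr_harmonic_energy_fraction(ffr, fs, f0, n_harmonics=8, bw=10.0):
    """Fraction of FFR spectral energy lying within `bw` Hz of the first
    `n_harmonics` harmonics of the chord's implied fundamental `f0`."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr)))) ** 2
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    at_harmonics = np.zeros_like(freqs, dtype=bool)
    for h in range(1, n_harmonics + 1):
        at_harmonics |= np.abs(freqs - h * f0) <= bw / 2
    return spectrum[at_harmonics].sum() / spectrum.sum()

def neural_consonance_index(ffr_cons, ffr_diss, fs, f0_cons, f0_diss):
    """Positive values indicate more harmonic neural coding for consonant chords."""
    return (ffr_harmonic_energy_fraction(ffr_cons, fs, f0_cons)
            - ffr_harmonic_energy_fraction(ffr_diss, fs, f0_diss))
```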

    Making Sense Of Sounds: Data for the machine learning challenge 2018

    These are the datasets for the Making Sense Of Sounds challenge 2018. The Development download contains the following:

    * Folder 'Development' with 1500 audio files, divided equally into five category subfolders: Music, Human, Nature, Effects, and Urban.
    * File 'Logsheet_Development.csv', a listing of every file, plus its category and event type.

    The Evaluation download contains the following:

    * Folder 'Evaluation' with 500 audio files, not further allocated to any subfolders.
    * File 'Logsheet_Evaluation.csv', a listing of all files in the Evaluation dataset.

    It should be assumed that all files in this challenge are provided under the licence CC-BY-NC 4.0 (Creative Commons, Attribution Noncommercial [1]). This is the most restrictive licence of any file in the dataset, though some were also provided under CC0 [2] and CC-BY [3]. A complete listing of the exact licences and author attributions is contained in 'MSoS_challenge_2018_File_credits_Usage_info_v1-00.zip'.

    V4 is the latest version of this repository, making the file 'Logsheet_EvaluationMaster.csv' available for download. This matches the format of 'Logsheet_Development.csv', which accompanies the Development dataset. Together these two logsheet files make it possible to check both the main category and the sound event type for every file in the challenge dataset.

    See the challenge home page for full details about the dataset and challenge results:
    http://cvssp.org/projects/making_sense_of_sounds/site/challenge

    References:
    1) https://creativecommons.org/licenses/by-nc/4.0/
    2) https://creativecommons.org/publicdomain/zero/1.0/
    3) https://creativecommons.org/licenses/by/4.0/
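
    A minimal sketch of how one might walk the Development set using its logsheet is shown below. The column names "File", "Category", and "Event" and the exact folder layout are assumptions; check the header of 'Logsheet_Development.csv' before relying on them.

```python
# Minimal sketch for iterating the Development set via its logsheet.
# Column names ("File", "Category", "Event") and folder layout are
# assumptions; verify them against the actual CSV header.
import csv
from pathlib import Path

DATA_ROOT = Path("Development")             # five category subfolders live here
LOGSHEET = Path("Logsheet_Development.csv")

with LOGSHEET.open(newline="") as fh:
    rows = list(csv.DictReader(fh))

for row in rows[:5]:
    # Assumed layout: Development/<Category>/<File>
    audio_path = DATA_ROOT / row["Category"] / row["File"]
    print(row["Category"], row.get("Event", ""), audio_path, audio_path.exists())
```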

    Table_10_Sound Categories: Category Formation and Evidence-Based Taxonomies.XLSX

    Five evidence-based taxonomies of everyday sounds frequently reported in the soundscape literature have been generated. An online sorting and category-labeling method that elicits rather than prescribes descriptive words was used. A total of N = 242 participants took part. The main categories of the soundscape taxonomy were people, nature, and manmade, with each dividing into further categories. Sounds within the nature and manmade categories, and two further individual sound sources, dogs and engines, were explored further by repeating the procedure using multiple exemplars. By generating multidimensional spaces containing both sounds and the spontaneously generated descriptive words, the procedure allows for the interpretation of the psychological dimensions along which sounds are organized. This reveals how category formation is based upon different cues – sound source-event identification, subjective states, and explicit assessment of the acoustic signal – in different contexts. At higher levels of the taxonomy, the majority of words described sound source-events. In contrast, when categorizing dog sounds, a greater proportion of the words described subjective states, and the valence and arousal scores of these words correlated with their coordinates along the first two dimensions of the data. This is consistent with valence and arousal judgments being the primary categorization strategy used for dog sounds. In contrast, when categorizing engine sounds, a greater proportion of the words explicitly described the acoustic signal. The coordinates of sounds along the first two dimensions were found to correlate with fluctuation strength and sharpness, consistent with explicit assessment of acoustic signal features underlying category formation for engine sounds. By eliciting descriptive words, the method makes explicit the subjective meaning of these judgments based upon valence and arousal and acoustic properties, and the results demonstrate distinct strategies being spontaneously used to categorize different types of sounds.
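
    The style of analysis described above, deriving a low-dimensional space from sorting data and correlating its dimensions with valence and arousal scores, can be sketched roughly as follows. The dissimilarity matrix and rating vectors here are random placeholders, and the study's actual procedure, which also embeds the descriptive words in the space, is more involved.

```python
# Rough sketch of the analysis style described above: embed items from a
# sorting-derived dissimilarity matrix with MDS, then correlate the first
# two dimensions with valence and arousal scores. All data here are
# random placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_items = 30

# Placeholder dissimilarity matrix (symmetric, zero diagonal), standing in
# for how often two sounds were sorted into different categories.
d = rng.random((n_items, n_items))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

valence = rng.random(n_items)   # placeholder valence score per item
arousal = rng.random(n_items)   # placeholder arousal score per item

r_val, _ = pearsonr(coords[:, 0], valence)
r_aro, _ = pearsonr(coords[:, 1], arousal)
print(f"dim 1 vs valence: r = {r_val:.2f}; dim 2 vs arousal: r = {r_aro:.2f}")
```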

    Table_7_Sound Categories: Category Formation and Evidence-Based Taxonomies.XLSX

    Five evidence-based taxonomies of everyday sounds frequently reported in the soundscape literature have been generated. An online sorting and category-labeling method that elicits rather than prescribes descriptive words was used. A total of N = 242 participants took part. The main categories of the soundscape taxonomy were people, nature, and manmade, with each dividing into further categories. Sounds within the nature and manmade categories, and two further individual sound sources, dogs and engines, were explored further by repeating the procedure using multiple exemplars. By generating multidimensional spaces containing both sounds and the spontaneously generated descriptive words, the procedure allows for the interpretation of the psychological dimensions along which sounds are organized. This reveals how category formation is based upon different cues – sound source-event identification, subjective states, and explicit assessment of the acoustic signal – in different contexts. At higher levels of the taxonomy, the majority of words described sound source-events. In contrast, when categorizing dog sounds, a greater proportion of the words described subjective states, and the valence and arousal scores of these words correlated with their coordinates along the first two dimensions of the data. This is consistent with valence and arousal judgments being the primary categorization strategy used for dog sounds. In contrast, when categorizing engine sounds, a greater proportion of the words explicitly described the acoustic signal. The coordinates of sounds along the first two dimensions were found to correlate with fluctuation strength and sharpness, consistent with explicit assessment of acoustic signal features underlying category formation for engine sounds. By eliciting descriptive words, the method makes explicit the subjective meaning of these judgments based upon valence and arousal and acoustic properties, and the results demonstrate distinct strategies being spontaneously used to categorize different types of sounds.