
    Unsupervised statistical learning underpins computational, behavioural, and neural manifestations of musical expectation

    The ability to anticipate forthcoming events has clear evolutionary advantages, and predictive successes or failures often entail significant psychological and physiological consequences. In music perception, the confirmation and violation of expectations are critical to the communication of emotion and aesthetic effects of a composition. Neuroscientific research on musical expectations has focused on harmony. Although harmony is important in Western tonal styles, other musical traditions, emphasizing pitch and melody, have been rather neglected. In this study, we investigated melodic pitch expectations elicited by ecologically valid musical stimuli by drawing together computational, behavioural, and electrophysiological evidence. Unlike rule-based models, our computational model acquires knowledge through unsupervised statistical learning of sequential structure in music and uses this knowledge to estimate the conditional probability (and information content) of musical notes. Unlike previous behavioural paradigms that interrupt a stimulus, we devised a new paradigm for studying auditory expectation without compromising ecological validity. A strong negative correlation was found between the probability of notes predicted by our model and the subjectively perceived degree of expectedness. Our electrophysiological results showed that low-probability notes, as compared to high-probability notes, elicited a larger (i) negative ERP component at a late time period (400–450 ms), (ii) beta band (14–30 Hz) oscillation over the parietal lobe, and (iii) long-range phase synchronization between multiple brain regions. Altogether, the study demonstrated that statistical learning produces information-theoretic descriptions of musical notes that are proportional to their perceived expectedness and are associated with characteristic patterns of neural activity.
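
    The model described above learns sequential statistics without supervision and converts them into conditional probabilities and information content. As a rough illustration only (not the authors' actual model, which combines multiple viewpoints and context lengths), the Python sketch below trains a first-order pitch-transition model on a hypothetical corpus and computes the information content, IC = -log2 P(note | context), of a continuation; lower-probability notes receive higher IC.

```python
from collections import defaultdict
import math

def train_bigram(melodies):
    """Count pitch-to-pitch transitions from a corpus (unsupervised exposure)."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def note_probability(counts, prev, nxt, alpha=1.0, alphabet_size=128):
    """Conditional probability P(nxt | prev) with add-alpha smoothing over MIDI pitches."""
    total = sum(counts[prev].values()) + alpha * alphabet_size
    return (counts[prev][nxt] + alpha) / total

def information_content(counts, prev, nxt):
    """IC = -log2 P(nxt | prev): the less probable the note, the higher its IC."""
    return -math.log2(note_probability(counts, prev, nxt))

# Toy usage with hypothetical MIDI pitch sequences standing in for a training corpus.
corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [67, 65, 64, 62, 60, 62, 64, 65, 67]]
model = train_bigram(corpus)
print(information_content(model, 64, 65))  # seen continuation: lower IC
print(information_content(model, 64, 70))  # unseen continuation: higher IC
```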

    Predictive uncertainty in auditory sequence processing


    Uncertainty and Surprise Jointly Predict Musical Pleasure and Amygdala, Hippocampus, and Auditory Cortex Activity

    Listening to music often evokes intense emotions [1, 2]. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected [3]. Central to this view is the engagement of the nucleus accumbens, a brain region that processes reward expectations, by pleasurable music and surprising musical events [4, 5, 6, 7, 8]. However, expectancy violations along multiple musical dimensions (e.g., harmony and melody) have failed to implicate the nucleus accumbens [9, 10, 11], and it is unknown how music reward value is assigned [12]. Whether changes in musical expectancy elicit pleasure has thus remained elusive [11]. Here, we demonstrate that pleasure varies nonlinearly as a function of the listener’s uncertainty when anticipating a musical event, and the surprise it evokes when it deviates from expectations. Taking Western tonal harmony as a model of musical syntax, we used a machine-learning model [13] to mathematically quantify the uncertainty and surprise of 80,000 chords in US Billboard pop songs. Behaviorally, we found that chords elicited high pleasure ratings when they deviated substantially from what the listener had expected (low uncertainty, high surprise) or, conversely, when they conformed to expectations in an uninformative context (high uncertainty, low surprise). Neurally, using fMRI, we found that activity in the amygdala, hippocampus, and auditory cortex reflected this interaction, while the nucleus accumbens only reflected uncertainty. These findings challenge current neurocognitive models of music-evoked pleasure and highlight the synergistic interplay between prospective and retrospective states of expectation in the musical experience.
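
    The two quantities at the heart of this study can be illustrated with a short sketch. Assuming access to a model's predictive distribution over a small chord vocabulary (the hypothetical distributions below merely stand in for the trained model applied to the Billboard corpus), uncertainty is the Shannon entropy of that distribution before the chord sounds, and surprise is the information content, -log2 p, of the chord that actually occurs. The two example contexts correspond to the low-uncertainty/high-surprise and high-uncertainty/low-surprise combinations that elicited the highest pleasure ratings.

```python
import numpy as np

def uncertainty(dist):
    """Shannon entropy (bits) of the predictive distribution over possible
    next chords, computed before the chord is heard."""
    p = np.asarray(dist, dtype=float)
    p = p / p.sum()
    logs = np.log2(p, where=p > 0, out=np.zeros_like(p))
    return float(-(p * logs).sum())

def surprise(dist, observed_index):
    """Information content (bits) of the chord that actually occurred."""
    p = np.asarray(dist, dtype=float)
    p = p / p.sum()
    return float(-np.log2(p[observed_index]))

# Hypothetical predictive distributions over a four-chord vocabulary [I, IV, V, vi].
confident = [0.85, 0.05, 0.05, 0.05]      # low-uncertainty context
uninformative = [0.25, 0.25, 0.25, 0.25]  # high-uncertainty context

print(uncertainty(confident), surprise(confident, 3))          # low entropy, high surprise
print(uncertainty(uninformative), surprise(uninformative, 0))  # high entropy, low surprise
```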

    Pupil responses to pitch deviants reflect predictability of melodic sequences

    Humans automatically detect events that, in deviating from their expectations, may signal prediction failure and a need to reorient behaviour. The pupil dilation response (PDR) to violations has been associated with subcortical signals of arousal and prediction resetting. However, it is unclear how the context in which a deviant occurs affects the size of the PDR. Using ecological musical stimuli that we characterised using a computational model, we showed that the PDR to pitch deviants is sensitive to contextual uncertainty (quantified as entropy), whereby the PDR was greater in low- than in high-entropy contexts. The PDR was also positively correlated with the unexpectedness of notes. No effects of music expertise were found, suggesting a ceiling effect due to enculturation. These results show that the same sudden environmental change can lead to differing arousal levels depending on contextual factors, providing evidence for a sensitivity of the PDR to long-term context.
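
    A minimal sketch of the two analyses described, using fabricated per-note values (the entropy, information-content, and pupil measures here are placeholders, not the study's data): a median split compares the PDR between low- and high-entropy contexts, and a Pearson correlation relates the PDR to note unexpectedness.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated per-note measures: contextual entropy, information content (IC),
# and baseline-corrected pupil dilation response (PDR), one value per deviant.
entropy = rng.uniform(1.0, 4.0, size=200)
ic = rng.uniform(2.0, 10.0, size=200)
pdr = 0.05 * ic - 0.03 * entropy + rng.normal(0.0, 0.1, size=200)

# (i) PDR in low- vs. high-entropy contexts (median split).
low = pdr[entropy <= np.median(entropy)]
high = pdr[entropy > np.median(entropy)]
t, p = stats.ttest_ind(low, high)
print(f"low-entropy mean PDR = {low.mean():.3f}, high-entropy = {high.mean():.3f}, p = {p:.3g}")

# (ii) Correlation between PDR and note unexpectedness (IC).
r, p = stats.pearsonr(ic, pdr)
print(f"PDR-IC correlation: r = {r:.2f}, p = {p:.3g}")
```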

    Siu-Lan Tan, Peter Pfordresher, & Rom Harré, Psychology of music: From sound to significance. Hove, UK: Psychology Press, 2010. ISBN: 978-1-84169-868-7 (hardcover)


    Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation

    Engineering and Physical Sciences Research Council (EPSRC) funding via grant EP/M000702/1

    Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird

    Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart’s Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high-level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information-theoretic data streams as predictors, and how the music- or information-theoretic approaches may be applied to electroacoustic music, which is ‘sound-based’ rather than note-centred like Western classical music.
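
    Spectral centroid, the timbral feature examined here, is the amplitude-weighted mean frequency of the short-time spectrum. The sketch below is a generic frame-by-frame implementation with NumPy (the frame length, hop size, and window are assumptions, not the settings used in the study); the resulting time series is the kind of acoustic predictor that would be entered into a vector autoregression alongside listener response series.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of one windowed audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

def centroid_series(signal, sample_rate, frame_len=2048, hop=1024):
    """Frame-by-frame spectral centroid: a time series of the kind that can be
    entered into a (vector) autoregression alongside listener response series."""
    return [spectral_centroid(signal[i:i + frame_len], sample_rate)
            for i in range(0, len(signal) - frame_len, hop)]

# Toy usage: a pure 440 Hz tone should yield centroid values near 440 Hz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(np.mean(centroid_series(tone, sr)))
```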

    Decoding expectation and surprise in dementia: the paradigm of music

    Making predictions about the world and responding appropriately to unexpected events are essential functions of the healthy brain. In neurodegenerative disorders, such as frontotemporal dementia and Alzheimer's disease, impaired processing of 'surprise' may underpin a diverse array of symptoms, particularly abnormalities of social and emotional behaviour, but is challenging to characterize. Here, we addressed this issue using a novel paradigm: music. We studied 62 patients (24 female; aged 53-88) representing major syndromes of frontotemporal dementia (behavioural variant, semantic variant primary progressive aphasia, non-fluent-agrammatic variant primary progressive aphasia) and typical amnestic Alzheimer's disease, in relation to 33 healthy controls (18 female; aged 54-78). Participants heard famous melodies containing no deviants or one of three types of deviant note: acoustic (white-noise burst), syntactic (key-violating pitch change), or semantic (key-preserving pitch change). Using a regression model that took elementary perceptual, executive and musical competence into account, we assessed accuracy in detecting melodic deviants, simultaneously recorded pupillary responses, and related these to deviant surprise value (information-content) and carrier melody predictability (entropy), calculated using an unsupervised machine learning model of music. Neuroanatomical associations of deviant detection accuracy and coupling of detection to deviant surprise value were assessed using voxel-based morphometry of patients' brain MRI. Whereas Alzheimer's disease was associated with normal deviant detection accuracy, behavioural and semantic variant frontotemporal dementia syndromes were associated with strikingly similar profiles of impaired syntactic and semantic deviant detection accuracy and impaired behavioural and autonomic sensitivity to deviant information-content (all P < 0.05). On the other hand, non-fluent-agrammatic primary progressive aphasia was associated with generalized impairment of deviant discriminability (P < 0.05) due to excessive false alarms, despite retained behavioural and autonomic sensitivity to deviant information-content and melody predictability. Across the patient cohort, grey matter correlates of acoustic deviant detection accuracy were identified in precuneus, mid and mesial temporal regions; correlates of syntactic deviant detection accuracy and information-content processing, in inferior frontal and anterior temporal cortices, putamen and nucleus accumbens; and a common correlate of musical salience coding in supplementary motor area (all P < 0.05, corrected for multiple comparisons in pre-specified regions of interest). Our findings suggest that major dementias have distinct profiles of sensory 'surprise' processing, as instantiated in music. Music may be a useful and informative paradigm for probing the predictive decoding of complex sensory environments in neurodegenerative proteinopathies, with implications for understanding and measuring the core pathophysiology of these diseases.
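
    As a structural illustration of the behavioural analysis described (not the authors' actual model specification), the sketch below fits a logistic regression of trial-level deviant detection on deviant information content and melody entropy, adjusting for perceptual, executive, and musical competence, with diagnostic group as a factor. All variable names and the simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Hypothetical trial-level data: one row per deviant presentation.
trials = pd.DataFrame({
    "detected": rng.integers(0, 2, n),            # hit (1) / miss (0)
    "ic": rng.uniform(2, 10, n),                  # deviant surprise (information content)
    "entropy": rng.uniform(1, 4, n),              # carrier melody predictability
    "perceptual": rng.normal(0, 1, n),            # competence covariates controlled for
    "executive": rng.normal(0, 1, n),
    "musical": rng.normal(0, 1, n),
    "group": rng.choice(["control", "bvFTD", "AD"], n),
})

# Logistic regression of detection on surprise and entropy, adjusting for
# perceptual, executive and musical competence, with diagnostic group terms.
model = smf.logit(
    "detected ~ ic * C(group) + entropy + perceptual + executive + musical",
    data=trials,
).fit(disp=0)
print(model.summary())
```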

    Tonality tunes the statistical characteristics in music: Computational approaches on statistical learning

    Statistical learning is a learning mechanism based on transition probabilities in sequences such as music and language. Recent computational and neurophysiological studies suggest that statistical learning contributes to production, action, and musical creativity as well as to prediction and perception. The present study investigated how statistical structure interacts with tonality in music, using statistical models of various orders. To verify this across all 24 major and minor keys, transition probabilities were calculated, based on nth-order Markov models, for the highest-pitch sequences in Bach’s Well-Tempered Clavier, a collection of two series (No. 1 and No. 2) of preludes and fugues in each of the 24 major and minor keys. The transition probabilities of each sequence were compared among tonalities (major and minor), the two series (No. 1 and No. 2), and music types (prelude and fugue). Differences in statistical characteristics between major and minor keys were detected in lower- but not higher-order models. The results also showed that statistical knowledge in music might be modulated by tonality and composition period. Furthermore, principal component analysis detected shared components among related keys, suggesting that tonality modulates the statistical characteristics of music. The present study suggests that there may be at least two types of statistical knowledge in music, dependent on and independent of tonality, respectively.
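
    A rough sketch of the kind of computation described, under simplifying assumptions (pitch classes rather than pitches, and uniform random stand-ins for the Well-Tempered Clavier lines): each piece's highest-pitch sequence is summarised as an nth-order transition-probability vector, and principal component analysis is then applied across pieces to look for components shared by related keys.

```python
from collections import Counter
from itertools import product
import numpy as np
from sklearn.decomposition import PCA

def ngram_distribution(pitches, order=1, alphabet=range(12)):
    """Relative frequencies of (context -> next) transitions of a given order over
    pitch classes, as a fixed-length vector so that pieces can be compared."""
    keys = list(product(alphabet, repeat=order + 1))
    counts = Counter(tuple(pitches[i:i + order + 1]) for i in range(len(pitches) - order))
    total = sum(counts.values()) or 1
    return np.array([counts[k] / total for k in keys])

# Random pitch-class sequences standing in for the 24 highest-pitch lines of one WTC series.
rng = np.random.default_rng(0)
pieces = [rng.integers(0, 12, 200).tolist() for _ in range(24)]

order = 1  # the study compares several orders; lower orders separated major from minor
features = np.vstack([ngram_distribution(p, order) for p in pieces])

# PCA across the per-piece transition distributions, looking for shared components among keys.
components = PCA(n_components=2).fit_transform(features)
print(components.shape)  # (24, 2): one point per piece/key
```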