
    Predictive cognition in dementia: the case of music

    The clinical complexity and pathological diversity of neurodegenerative diseases impose immense challenges for diagnosis and the design of rational interventions. To address these challenges, there is a need to identify new paradigms and biomarkers that capture shared pathophysiological processes and can be applied across a range of diseases. One core paradigm of brain function is predictive coding: the processes by which the brain establishes predictions and uses them to minimise prediction errors, represented as the difference between predictions and actual sensory inputs. The processes by which unexpected events are detected and responded to appropriately are vulnerable in common dementias but difficult to characterise. In my PhD work, I have exploited key properties of music – its universality, ecological relevance and structural regularity – to model and assess predictive cognition in patients representing major syndromes of frontotemporal dementia (FTD) – non-fluent variant primary progressive aphasia (nfvPPA), semantic variant primary progressive aphasia (svPPA) and behavioural variant FTD (bvFTD) – and Alzheimer’s disease (AD), relative to healthy older individuals. In my first experiment, I presented patients with well-known melodies containing no deviants or one of three types of deviant – acoustic (white-noise burst), syntactic (key-violating pitch change) or semantic (key-preserving pitch change). I assessed accuracy in detecting melodic deviants and simultaneously recorded pupillary responses to these deviants. I used voxel-based morphometry to define neuroanatomical substrates for the behavioural and autonomic processing of these different types of deviants, and identified a posterior temporo-parietal network for detection of basic acoustic deviants and a more anterior fronto-temporo-striatal network for detection of syntactic pitch deviants. In my second experiment, I investigated the ability of patients to track the statistical structure of the same musical stimuli, using a computational model of the information dynamics of music to calculate the information content of deviants (unexpectedness) and the entropy of melodies (uncertainty). I related these information-theoretic metrics to deviant-detection performance and to ‘evoked’ and ‘integrative’ pupil reactivity to deviants and melodies, respectively, and found neuroanatomical correlates in bilateral dorsal and ventral striatum, hippocampus, superior temporal gyri, right temporal pole and left inferior frontal gyrus. Together, chapters 3 and 4 generated new hypotheses about the way FTD and AD pathologies disrupt the integration of prediction errors with predictions: a retained ability of AD patients to detect deviants at all levels of the hierarchy, with preserved autonomic sensitivity to the information-theoretic properties of musical stimuli; a generalized impairment of surprise detection and statistical tracking of musical information at both cognitive and autonomic levels in svPPA patients, reflecting diminished precision of predictions; the mirror-image profile in nfvPPA patients, with an abnormally high false-alarm rate and up-regulated pupillary reactivity to deviants, interpreted as over-precise or inflexible predictions accompanied by normal cognitive and autonomic probabilistic tracking of information; and impaired behavioural and autonomic reactivity to unexpected events with retained reactivity to environmental uncertainty in bvFTD patients.
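
The information-theoretic quantities mentioned above can be made concrete with a minimal sketch: information content is the surprisal of the note that actually occurred, and entropy is the uncertainty of the model's prediction before the note arrived. The predictive distribution below is an invented toy example, not output of the thesis's melodic model.

```python
import math

def information_content(p_event):
    """Surprisal of the event that occurred: IC = -log2 P(event | context)."""
    return -math.log2(p_event)

def entropy(distribution):
    """Uncertainty of the prediction before the event: H = -sum_i p_i * log2(p_i)."""
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

# Toy predictive distribution over the next pitch, given the melodic context so far.
next_note_probs = {"G4": 0.6, "E4": 0.25, "C5": 0.1, "F#4": 0.05}

print(entropy(next_note_probs))                      # uncertainty of this melodic context
print(information_content(next_note_probs["G4"]))    # an expected continuation (low IC)
print(information_content(next_note_probs["F#4"]))   # a surprising, key-violating note (high IC)
```
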
Chapters 5 and 6 assessed the status of reward prediction error processing and its updating via actions in bvFTD. I created pleasant and aversive musical stimuli by manipulating chord progressions and used a classic reinforcement-learning paradigm in which participants chose the visual cue with the highest probability of yielding a musical ‘reward’. bvFTD patients showed reduced sensitivity to the consequences of their actions and a lower learning rate in response to aversive stimuli than to rewarding ones. These deficits correlated with neuroanatomical substrates in ventral and dorsal attention networks, dorsal striatum, parahippocampal gyrus and temporo-parietal junction. Deficits were governed by the level of environmental uncertainty: learning dynamics were normal in a structured, binarized environment, but deficits were exacerbated in noisier environments. Impaired choice accuracy in noisy environments correlated with measures of ritualistic and compulsive behavioural change, and abnormally reduced learning dynamics correlated with behavioural changes related to empathy and theory of mind. Together, these experiments represent the most comprehensive attempt to date to define, in predictive coding terms, how neurodegenerative pathologies disrupt the perceptual, behavioural and physiological encoding of unexpected events.
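
Paradigms of this kind are commonly modelled with a delta-rule learner driven by a reward prediction error. The sketch below is an illustrative example of such a model, not the analysis used in the thesis; the separate learning rates for rewarding and aversive outcomes, the softmax temperature and the reward probabilities are all assumed values.

```python
import math
import random

def softmax_choice(values, beta=3.0):
    """Choose an option with probability proportional to exp(beta * value)."""
    weights = [math.exp(beta * v) for v in values]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def run_block(n_trials=60, p_reward=(0.8, 0.2), alpha_reward=0.3, alpha_aversive=0.1):
    """Delta-rule learner with separate learning rates for rewarding and aversive outcomes."""
    values = [0.0, 0.0]                               # learned value of each visual cue
    for _ in range(n_trials):
        choice = softmax_choice(values)
        rewarded = random.random() < p_reward[choice]
        outcome = 1.0 if rewarded else -1.0           # pleasant vs aversive musical outcome
        delta = outcome - values[choice]              # reward prediction error
        alpha = alpha_reward if rewarded else alpha_aversive
        values[choice] += alpha * delta               # update only the chosen cue
    return values

print(run_block())
```
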

    Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval

    Deep cross-modal learning has demonstrated excellent performance in cross-modal multimedia retrieval, where the aim is to learn joint representations across data modalities. However, little research has focused on cross-modal correlation learning that takes into account the temporal structure of modalities such as audio and lyrics. Motivated by the inherently temporal structure of music, we learn deep sequential correlations between audio and lyrics. In this work, we propose a deep cross-modal correlation learning architecture involving two-branch deep neural networks for the audio modality and the text modality (lyrics). Data from the different modalities are projected into a shared canonical space, where inter-modal canonical correlation analysis is used as the objective function to measure the similarity of temporal structures. This is the first study to use deep architectures for learning the temporal correlation between audio and lyrics. A pre-trained Doc2Vec model followed by fully connected layers is used to represent the lyrics. Two significant contributions are made in the audio branch: i) we propose an end-to-end network that learns the cross-modal correlation between audio and lyrics, in which feature extraction and correlation learning are performed simultaneously and a joint representation is learned with temporal structure taken into account; ii) for feature extraction, we represent an audio signal as a short sequence of local summaries (VGG16 features) and apply a recurrent neural network to compute a compact feature that better captures the temporal structure of music audio. Experimental results on retrieving lyrics from audio and audio from lyrics verify the effectiveness of the proposed deep correlation learning architecture for cross-modal music retrieval.
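
A minimal sketch of the two-branch idea, assuming PyTorch: an audio branch that runs a recurrent network over a sequence of frame-level features (standing in for VGG16 activations) and a lyrics branch that maps a Doc2Vec-style document vector through fully connected layers. The layer sizes, feature dimensions and the correlation_loss helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioBranch(nn.Module):
    """Sequence of frame-level audio features -> GRU -> fixed-size embedding."""
    def __init__(self, feat_dim=512, hidden=128, out_dim=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        _, h = self.rnn(x)                 # h: (1, batch, hidden)
        return self.proj(h.squeeze(0))     # (batch, out_dim)

class LyricsBranch(nn.Module):
    """Doc2Vec-style document vector -> fully connected layers -> embedding of the same size."""
    def __init__(self, doc_dim=300, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(doc_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, x):                  # x: (batch, doc_dim)
        return self.net(x)

def correlation_loss(a, b, eps=1e-8):
    """Negative mean per-dimension Pearson correlation across the batch,
    used here as a simplified stand-in for the full CCA objective."""
    a = (a - a.mean(0)) / (a.std(0) + eps)
    b = (b - b.mean(0)) / (b.std(0) + eps)
    return -(a * b).mean()

audio, lyrics = AudioBranch(), LyricsBranch()
x_audio = torch.randn(8, 20, 512)          # 8 clips, 20 frames of 512-d features each
x_lyrics = torch.randn(8, 300)             # matching 300-d lyric vectors
loss = correlation_loss(audio(x_audio), lyrics(x_lyrics))
loss.backward()
```

The paper's actual objective is canonical correlation analysis over the two embeddings; the simplified loss above only conveys the idea of maximising cross-modal correlation in the shared space.
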

    Listening in the mix: lead vocals robustly attract auditory attention in popular music

    Listeners can attend to and track individual instruments or singing voices in complex musical mixtures, even though the acoustical energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by a mixture of instruments such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, detection performance generally depended on the target’s instrument category, and listeners were more accurate when the target was presented before the mixture than after it. Lead vocals were nearly unaffected by this change in presentation order and were detected more accurately than any other instrument, suggesting a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of instrument-specific differences between presentation orders was observed. In Experiment 3, adjusting the sound-level differences between targets reduced the effect of presentation order but did not affect the differences between instruments. While both acoustic manipulations facilitated target detection, vocal signals remained particularly salient, suggesting that the manipulated features do not account for vocal salience. These findings demonstrate that lead vocals serve as robust attractor points of auditory attention regardless of the manipulation of low-level acoustic cues.
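
The selective-attention effect described above is simply the difference in detection accuracy between the cue-before and cue-after conditions, computed per instrument. A toy sketch follows; the trial records and field names are hypothetical.

```python
# Hypothetical trial records: instrument, cue position, and whether the target was detected.
trials = [
    {"instrument": "vocals", "cue": "before", "correct": True},
    {"instrument": "vocals", "cue": "after",  "correct": True},
    {"instrument": "bass",   "cue": "before", "correct": True},
    {"instrument": "bass",   "cue": "after",  "correct": False},
    # ... one record per trial
]

def accuracy(records):
    """Proportion of correct detections in a set of trials."""
    return sum(r["correct"] for r in records) / len(records)

for instrument in {r["instrument"] for r in trials}:
    before = [r for r in trials if r["instrument"] == instrument and r["cue"] == "before"]
    after  = [r for r in trials if r["instrument"] == instrument and r["cue"] == "after"]
    benefit = accuracy(before) - accuracy(after)   # selective-attention benefit for this instrument
    print(instrument, round(benefit, 2))
```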