
    Distributed networks for auditory memory differentially contribute to recall precision

    Re-directing attention to objects in working memory can enhance their representational fidelity. However, how this attentional enhancement of memory representations is implemented across distinct sensory and cognitive-control brain networks remains unspecified. The present fMRI experiment leverages psychophysical modelling and multivariate auditory-pattern decoding as behavioral and neural proxies of mnemonic fidelity. Listeners performed an auditory syllable pitch-discrimination task and received retroactive cues to selectively attend to a to-be-probed syllable in memory. Accompanied by increased neural activation in fronto-parietal and cingulo-opercular networks, valid retro-cues yielded faster and more perceptually sensitive responses in recalling acoustic detail of memorized syllables. Information about the cued auditory object was decodable from hemodynamic response patterns in the superior temporal sulcus (STS), fronto-parietal, and sensorimotor regions. However, among these regions retaining auditory memory objects, neural fidelity in the left STS and its enhancement through attention-to-memory best predicted individuals’ gain in auditory memory recall precision. Our results demonstrate how functionally discrete brain regions differentially contribute to the attentional enhancement of memory representations.
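
    As an illustration of the multivariate decoding step described above, the sketch below runs a cross-validated pattern classifier over trial-wise voxel patterns from a region of interest. The array shapes, the synthetic data, and the choice of a linear support-vector classifier are assumptions made for illustration; they are not the authors' actual analysis pipeline.

        # Minimal sketch of multivariate pattern decoding from fMRI response patterns,
        # assuming trial-wise voxel patterns (n_trials x n_voxels) and cued-object labels.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score, StratifiedKFold

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 120, 500                 # hypothetical dimensions
        X = rng.normal(size=(n_trials, n_voxels))     # stand-in for ROI voxel patterns
        y = rng.integers(0, 2, size=n_trials)         # stand-in for cued-syllable labels

        # Cross-validated classification accuracy within one region of interest;
        # above-chance accuracy is taken as evidence that the region carries
        # information about the cued auditory object.
        clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
        scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
        print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")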

    Using fMRI to investigate speech-stream segregation and auditory attention in healthy adults and patients with memory complaints

    Poor memory for recent conversations is the commonest presenting symptom in patients attending a cognitive neurology clinic. These patients also frequently have greater difficulty following and remembering conversations in the presence of background noise and/or unattended speech. While the ability to participate in and recall conversations depends on several cognitive functions (language processing, attention, episodic and working memory), without the ability to perform auditory scene analysis, and more specifically speech-stream segregation, recall of verbal information will be impaired as a consequence of poor initial registration, over and above impaired encoding and subsequent retrieval. This thesis investigated auditory attention and speech-stream segregation in healthy participants (‘controls’) and patients presenting with ‘poor memory’, particularly a complaint of difficulty remembering recent verbal information. Although this resulted in the recruitment of many patients with possible or probable Alzheimer’s disease, it also included patients with mild cognitive impairment (MCI) of uncertain aetiology and a few with depression. Functional MRI data revealed brain activity involved in attention, working memory and speech-stream segregation as participants attended to a speaker in the absence and presence of background speech. The study on controls demonstrated that the right anterior insula, adjacent frontal operculum, left planum temporale and precuneus were more active when the attended speaker was partially masked by unattended speech. Analyses also revealed a central role for a right-hemisphere system in successful attentive listening, a system that was not modulated by administration of a central cholinesterase inhibitor. This study therefore identified the involvement of non-auditory, higher-order regions in speech-stream segregation, and the demands placed on a right-hemisphere system during attentive listening. Administration of a central cholinesterase inhibitor did not produce any identifiable benefit in the present patient group. However, my research has identified systems that might be therapeutic targets when attempting to modulate auditory attention and speech-stream segregation in patients with neurodegenerative disease.

    Joint Estimation of Perceptual, Cognitive, and Neural Processes

    Humans are remarkable in their ability to perform highly complicated behaviors with ease and little conscious thought. Successful speech comprehension, for example, requires the collaboration of multiple sensory, perceptual, and cognitive processes to focus attention on the speaker, disregard competing cues, correctly process incoming audio stimuli, and attach meaning and context to what is heard. Investigating these phenomena can help unravel crucial aspects of human behavior as well as how the brain works in health and disease. However, traditional methods typically involve isolating individual variables and evaluating their decontextualized contribution to an outcome variable of interest. While rigorous and more straightforward to interpret, these reductionist methods forfeit multidimensional inference and waste data resources by collecting identical data from every participant without considering what is most relevant for any given individual. Methods that can optimize the exact data collected for each participant would be useful for constructing more complex models and for optimizing expensive data collection. Modern tools, such as mobile hardware and large databases, have been implemented to improve upon traditional methods but are still limited in the amount of inference they can provide about an individual. To circumvent these obstacles, a novel machine learning framework capable of quantifying behavioral functions of multiple variables with practical amounts of data has been developed and validated. This framework is capable of linking even loosely related input domains and measuring shared information in one comprehensive assessment. The work described in this thesis first evaluates this framework for active machine learning audiogram (AMLAG) applications. AMLAG customizes the generalized framework to efficiently, accurately, and reliably estimate audiogram functions. Audiograms provide a measure of hearing ability for each ear in the inherently two-dimensional domain of frequency and intensity. Where clinical methods rely on reducing audiogram acquisition to a one-dimensional assessment, AMLAG has been previously verified to provide a continuous, two-dimensional estimate of hearing ability in one ear. Modeling two ears that are physiologically distinct but are defined in the same frequency-intensity input domain, AMLAG was extended to bilateral audiogram acquisition. Left and right ears are traditionally evaluated completely unilaterally. To realize potential gains, AMLAG was generalized from two unilateral tests to a single bilateral test. The active bilateral audiogram allows observations in one ear to simultaneously update the model fit over both ears. This thesis shows that in a cohort of normal-hearing and hearing-impaired listeners, the bilateral audiogram converges to its final estimates significantly faster than sequential active unilateral audiograms. The flexibility of a framework capable of informative individual inference was then evaluated for dynamically masked audiograms. When one ear of an individual can hear significantly better than the other ear, assessing the worse ear with loud probe tones may require delivering masking noise to the better ear in order to prevent the probe tones from inadvertently being heard by the better ear. Current masking protocols are confusing, laborious, and time-consuming.
    Adding a standardized masking protocol to the AMLAG procedure alleviates all of these drawbacks by dynamically adapting the masking to an individual's specific needs. Dynamically masked audiograms are shown to achieve accurate threshold estimates and reduce test time compared to current clinical masking procedures used to evaluate individuals with highly asymmetric hearing, yet can also be used effectively and efficiently for anyone. Finally, the active machine learning framework was evaluated for estimating cognitive and perceptual variables in one joint assessment. Combining a verbal N-back and a speech-in-noise assessment, a joint estimator links two disjoint assessments defined by two unique input domains and, for the first time, offers a direct measurement of the interactions between two of the most predictive measures of cognitive decline. Young and older healthy adults were assessed to investigate age-related adaptations in behavior and the inter-subject variability that is often seen in low-dimensional speech and memory tests. The joint cognitive and perceptual test accurately predicted standalone N-back but not speech-in-noise performance. This first implementation did not reveal significant interactions between speech and memory. However, the joint task framework did provide an estimate of participant performance over the entire two-dimensional domain without any experimenter-observed scoring and may better mirror the challenges of real-world tasks. While significant age-related differences were apparent, substantial within-group variance led to evaluating joint test performance in predicting individual differences in neural activity. Speech-in-noise tests may activate networks of the brain that are not auditory-specific as age and task difficulty increase. Some of these regions are domain-general networks that are also active during verbal working memory tests. Functional brain images were collected during an in-scanner speech-in-noise test for a portion of the joint test participants. Individual brain activity at regions of interest in the frontoparietal, cingulo-opercular, and speech networks was correlated with performance on the joint speech and memory test. No significant correlations were found, but the joint estimation of neural, cognitive, and perceptual behaviors through this framework may be possible with further test adaptations. Generally, the lack of significant findings does not detract from the feasibility and utility of a generalized framework that can accurately model complex cognitive, perceptual, and neural processes in individuals. As demonstrated in this thesis, high-dimensional, individual testing procedures facilitate the direct assessment of complicated human behaviors, empowering equitable, informative, and effective test methods.
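
    As a rough sketch of how an active machine learning audiogram of this kind can work, the example below fits a Gaussian process classifier over a (log-frequency, intensity) grid and repeatedly probes the point where tone detection is most uncertain. The simulated listener, the scikit-learn classifier, the kernel, and the acquisition rule are all illustrative assumptions rather than the published AMLAG implementation.

        # Minimal sketch of an active-learning audiogram loop; the actual AMLAG method
        # may use different response models, kernels, and acquisition criteria.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def simulate_response(freq_log, level_db):
            # Hypothetical listener: detects the tone when its level exceeds a
            # bowl-shaped threshold curve plus a little response noise.
            threshold = 20 + 10 * (freq_log - 3.0) ** 2
            return int(level_db + rng.normal(0, 3) > threshold)

        # Candidate probe grid: log10(frequency in Hz) x intensity (dB HL).
        freqs = np.linspace(np.log10(250), np.log10(8000), 20)
        levels = np.linspace(-10, 90, 25)
        grid = np.array([[f, l] for f in freqs for l in levels])

        # Seed with random probes until both "heard" and "not heard" responses exist.
        X, y = [], []
        while len(set(y)) < 2:
            X, y = [], []
            for idx in rng.choice(len(grid), size=5, replace=False):
                X.append(grid[idx]); y.append(simulate_response(*grid[idx]))

        gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[0.5, 10.0]))

        for _ in range(40):                        # active-learning iterations
            gpc.fit(np.array(X), np.array(y))
            p = gpc.predict_proba(grid)[:, 1]
            nxt = grid[np.argmax(p * (1 - p))]     # probe where detection is most uncertain
            X.append(nxt); y.append(simulate_response(*nxt))

        # The p = 0.5 contour of the final model approximates the audiogram threshold.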

    Hearing in dementia: defining deficits and assessing impact

    The association between hearing impairment and dementia has emerged as a major public health challenge, with significant opportunities for earlier diagnosis, treatment and prevention. However, the nature of this association has not been defined. We hear with our brains, particularly within the complex soundscapes of everyday life: neurodegenerative pathologies target the auditory brain and are therefore predicted to damage hearing function early and profoundly. Here I present evidence for this proposition, based on structural and functional features of auditory brain organisation that confer vulnerability to neurodegeneration, the extensive, reciprocal interplay between ‘peripheral’ and ‘central’ hearing dysfunction, and recently characterised auditory signatures of canonical neurodegenerative dementias (Alzheimer’s disease and frontotemporal dementia). In chapter 3, I examine pure tone audiometric thresholds in AD and FTD syndromes and explore the functional interplay between the auditory brain and auditory periphery by assessing the contribution of auditory cognitive factors to pure tone detection. In chapter 4, I develop this further by examining the processing of degraded speech signals, leveraging the increased importance of top-down integrative and predictive mechanisms in resolving impoverished bottom-up sensory encoding. In chapter 5, I use a more discrete test of phonological processing to focus on a specific brain region that is an early target in logopenic aphasia, exploring the potential of auditory cognitive tests as disease-specific functional biomarkers. Finally, in chapter 6, I use auditory symptom questionnaires to capture real-world hearing in daily life amongst patients with dementia as well as their carers, and measure how this correlates with audiometric performance and degraded speech processing. I call for a clinical assessment of real-world hearing in these diseases that moves beyond pure tone perception to the development of novel auditory ‘cognitive stress tests’ and proximity markers for the early diagnosis of dementia and management strategies that harness retained auditory plasticity.

    Anxiety disorders in youth: the role of neural networks and emotion regulation


    The neural architecture of emotional intelligence.

    Emotional Intelligence (EI) is a nebulous concept that permeates daily interpersonal communication. Despite prolific research into its benefits, subjective measurement of EI is difficult, contributing to an enigmatic definition of its core constructs. However, neuroimaging research probing socioaffective brain mechanisms underlying putative EI constructs can add an objective perspective to existing models, thereby illuminating the nature of EI. Therefore, the primary aim of this dissertation is to identify brain networks underlying EI and examine how EI arises from the brain’s functional and structural neuroarchitecture. EI is first defined according to behavioral data, which suggests EI is made up of two core constructs: Empathy and Emotion Regulation (ER). The interaction of brain networks underlying Empathy and ER is then investigated using a novel neuroimaging analysis method: dynamic functional connectivity (dynFC). The results suggest that efficient communication and (re)configuration between the central executive (CEN), default mode (DMN), and salience (SN) networks underlie both ER and RME task dynamics, and that these temporal patterns relate to trait empathy and ER tendency. Given the demonstrated behavioral and neurobiological relationship between empathy and ER, the second aim is to examine each of these constructs individually through detailed experiments using a variety of neuroimaging methodologies. The dissertation concludes by proposing that EI is an ability arising from effective yet flexible communication between the brain networks underlying Empathy and ER. The dissertation is divided into six chapters. Chapter I describes the foundational concept of EI as originally described by a variety of psychological figures and the lacuna that exists in terms of its neural correlates. Chapter II presents behavioral data suggesting that EI is best predicted by Empathy and ER. Chapter III explores the dynamic relationship between brain networks underlying Empathy and ER, with the aim of elucidating their neurobiological associations, and investigates how such associations may combine to create EI. Chapter IV examines Empathy closely by probing its neurobiological relationship to interoception and anxiety. Chapter V examines ER closely by investigating whether gender plays a role in ER, and its neurobiological relationship to hormones. Chapter VI links the general findings from Chapters III, IV and V, and proposes an integrative neurocognitive EI model. The dissertation concludes by providing clinical and non-clinical applications for the model.
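
    For readers unfamiliar with dynamic functional connectivity, the sketch below illustrates one common variant: sliding-window correlations between network time series. The synthetic data, window length, and use of plain Pearson correlation are illustrative assumptions, not the specific dynFC method used in the dissertation.

        # Minimal sketch of sliding-window dynamic functional connectivity between
        # three networks (e.g., CEN, DMN, SN); data and parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n_timepoints, n_networks = 300, 3                  # hypothetical run length in TRs
        ts = rng.normal(size=(n_timepoints, n_networks))   # stand-in for network time series

        window, step = 30, 5                               # window length and stride (TRs)
        dyn_fc = []
        for start in range(0, n_timepoints - window + 1, step):
            seg = ts[start:start + window]
            corr = np.corrcoef(seg, rowvar=False)          # n_networks x n_networks matrix
            dyn_fc.append(corr[np.triu_indices(n_networks, k=1)])  # pairwise coupling

        dyn_fc = np.array(dyn_fc)                          # windows x network-pairs
        # The variability of each pair's coupling across windows indexes how strongly
        # between-network communication reconfigures over the course of the scan.
        print(dyn_fc.shape, dyn_fc.std(axis=0))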

    Processing of nonverbal vocalisations in dementia

    Nonverbal emotional vocalisations are fundamental communicative signals used to convey a diverse repertoire of social and emotional information. They transcend the boundaries of language and cultural specificity that hamper many neuropsychological tests, making them ideal candidates for understanding impaired socio-emotional signal processing in dementia. Symptoms related to changes in social behaviour and emotional responsiveness are poorly understood yet have a significant impact on patients with dementia and those who care for them. In this thesis, I investigated processing of nonverbal emotional vocalisations in patients with Alzheimer’s disease and frontotemporal dementia (FTD), a disease spectrum encompassing three canonical syndromes characterised by marked socio-emotional and communication difficulties: behavioural variant FTD (bvFTD), semantic variant primary progressive aphasia (svPPA) and nonfluent/agrammatic variant primary progressive aphasia (nfvPPA). I demonstrated distinct profiles of impairment in identifying three salient vocalisations (laughter, crying and screaming) and the emotions they convey. All three FTD syndromes showed impairments, with the most marked deficits of emotion categorisation seen in the bvFTD group. Voxel-based morphometry was used to define critical brain substrates for processing vocalisations, identifying correlates of vocal sound processing in auditory perceptual regions (superior temporal sulcus and posterior insula) and of emotion identification in limbic and medial frontal regions. The second half of this thesis focused on the more fine-grained distinction of laughter subtypes. I studied cognitive (labelling), affective (valence) and autonomic (pupillometric) processing of laughter subtypes representing dimensions of valence (mirthful versus hostile) and arousal (spontaneous versus posed). Again, the FTD groups showed the greatest impairment, with profiles suggestive of primary perceptual deficits in nfvPPA, cognitive overgeneralisation in svPPA, and disordered reward and hedonic valuation in bvFTD. Neuroanatomical correlates of explicit laughter identification included inferior frontal and cingulo-insular cortices, whilst implicit processing (indexed as autonomic arousal) was particularly impaired in those conditions associated with insular compromise (nfvPPA and bvFTD). These findings demonstrate the potential of nonverbal emotional vocalisations as a probe of the neural mechanisms underpinning socio-emotional dysfunction in neurodegenerative diseases.