3,791 research outputs found

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of the central mechanisms underpinning tinnitus, and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive, and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for tinnitus classification. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution of the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls, and low statistical power.

    AUTOMATED ARTIFACT REMOVAL AND DETECTION OF MILD COGNITIVE IMPAIRMENT FROM SINGLE CHANNEL ELECTROENCEPHALOGRAPHY SIGNALS FOR REAL-TIME IMPLEMENTATIONS ON WEARABLES

    Electroencephalography (EEG) is a technique for recording the asynchronous activation of neuronal firing inside the brain with non-invasive scalp electrodes. The EEG signal is widely studied to evaluate cognitive state and to detect brain disorders such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for the early detection of mild cognitive impairment (MCI). MCI is a preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI, all using data obtained from a single-channel EEG electrode.

    Artifacts such as eye-blink activity can corrupt EEG signals. We investigate effective, unsupervised removal of ocular artifacts (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition was systematically evaluated for the effectiveness of OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) were studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by correlation coefficient (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. It is demonstrated that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications.

    For MCI detection from the cleaned EEG data, we collected scalp EEG while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the event-related potential (ERP) of the collected EEG signals, which included time- and spectral-domain characteristics of the response. The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. Robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F-score 85%) were obtained using the support vector machine (SVM) method with a radial basis function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for early detection of MCI.

    We also developed a single-channel EEG-based MCI severity monitoring algorithm that estimates Montreal Cognitive Assessment (MoCA) scores from the features extracted from EEG. We performed multi-trial and single-trial analyses to develop the severity monitoring algorithm. We studied multivariate regression (MR), ensemble regression (ER), support vector regression (SVR), and ridge regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained with ER. In the single-trial analysis, we constructed a time-frequency image from each trial and fed it to a convolutional neural network (CNN). Performance of the regression models was evaluated by RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method.
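
    To make the pipeline concrete, the sketch below illustrates the two core steps in Python under stated assumptions: stationary-wavelet-transform thresholding for ocular-artifact suppression (using PyWavelets) and an RBF-kernel SVM evaluated with leave-one-out cross-validation (using scikit-learn). The wavelet level, threshold rule, placeholder data, and SVM hyperparameters are illustrative choices, not the dissertation's exact settings.

```python
# Hedged sketch, not the dissertation's exact pipeline: SWT-based ocular-artifact
# suppression on single-channel EEG, followed by an RBF-kernel SVM evaluated with
# leave-one-out cross-validation. Wavelet level, threshold rule, placeholder data,
# and hyperparameters are illustrative assumptions.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def remove_ocular_artifacts(eeg, wavelet="coif3", level=6):
    """Suppress large, slow ocular deflections by soft-thresholding SWT detail coefficients."""
    n = len(eeg)
    pad = (-n) % (2 ** level)                     # SWT needs length divisible by 2**level
    x = np.pad(eeg, (0, pad), mode="edge")
    coeffs = pywt.swt(x, wavelet, level=level)    # list of (approx, detail) coefficient pairs
    cleaned = [(cA, pywt.threshold(cD, 1.5 * np.std(cD), mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(cleaned, wavelet)[:n]        # reconstruct and trim the padding

# Classification stage: `features` stands in for the top-ranked ERP features
# (rows = subjects, columns = features); `labels` marks MCI vs. control subjects.
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 25))              # placeholder feature matrix
labels = np.array([0] * 10 + [1] * 10)            # placeholder group labels
clf = SVC(kernel="rbf", gamma=0.01, C=100)        # RBF-kernel SVM; parameters assumed
accuracy = cross_val_score(clf, features, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.3f}")
```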

    Electroencephalography (EEG) and Unconsciousness

    Computational modelling of neural mechanisms underlying natural speech perception

    Humans are highly skilled at the analysis of complex auditory scenes. In particular, the human auditory system is characterized by incredible robustness to noise and can nearly effortlessly isolate the voice of a specific talker from even the busiest of mixtures. However, the neural mechanisms underlying these remarkable properties remain poorly understood, mainly owing to the inherent complexity of speech signals and the multi-stage, intricate processing performed in the human auditory system. Understanding these neural mechanisms underlying speech perception is of interest for clinical practice, brain-computer interfacing, and automatic speech processing systems. In this thesis, we developed computational models characterizing neural speech processing across different stages of the human auditory pathways. In particular, we studied the active role of slow cortical oscillations in speech-in-noise comprehension through a spiking neural network model for encoding spoken sentences. The neural dynamics of the model during noisy speech encoding reflected the speech comprehension of young, normal-hearing adults. The proposed theoretical model was validated by predicting the effects of non-invasive brain stimulation on speech comprehension in an experimental study involving a cohort of volunteers. Moreover, we developed a modelling framework for detecting the early, high-frequency neural response to uninterrupted speech in non-invasive neural recordings. We applied the method to investigate top-down modulation of this response by the listener's selective attention and by the linguistic properties of different words in a spoken narrative. We found that in both cases the detected responses, of predominantly subcortical origin, were significantly modulated, which supports the functional role of feedback between higher and lower stages of the auditory pathways in speech perception. The proposed computational models shed light on some of the poorly understood neural mechanisms underlying speech perception, and the developed methods can be readily employed in future studies involving a range of experimental paradigms beyond those considered in this thesis.
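
    The abstract does not spell out the detection method, but one common approach for early, high-frequency responses to continuous speech is to cross-correlate the EEG with a speech-derived regressor over short neural lags. The minimal sketch below assumes that approach; the half-wave-rectified speech regressor and the 20 ms lag window are illustrative choices, not necessarily those used in the thesis.

```python
# Hedged sketch: detect an early, high-frequency response to continuous speech by
# cross-correlating EEG with a speech-derived regressor over short neural lags.
# The rectified regressor and the lag window are illustrative assumptions.
import numpy as np

def detect_early_response(eeg, speech, fs, max_lag_ms=20):
    """Return the peak lag (ms) and the lag profile of the EEG-speech cross-correlation."""
    n = min(len(eeg), len(speech))
    regressor = np.maximum(speech[:n], 0.0)                   # crude high-frequency feature
    regressor = (regressor - regressor.mean()) / regressor.std()
    eeg_z = (eeg[:n] - eeg[:n].mean()) / eeg[:n].std()
    lags = np.arange(int(max_lag_ms * fs / 1000) + 1)
    xcorr = np.array([np.mean(eeg_z[lag:] * regressor[:n - lag]) for lag in lags])
    peak_lag = lags[np.argmax(np.abs(xcorr))]
    return 1000.0 * peak_lag / fs, xcorr

# Usage (assumed variables): peak_ms, profile = detect_early_response(eeg, speech, fs=4096)
```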

    Neural Activity Patterns in Response to Interspecific and Intraspecific Variation in Mating Calls in the Túngara Frog

    During mate choice, individuals must classify potential mates according to species identity and relative attractiveness. In many species, females do so by evaluating variation in the signals produced by males. Male túngara frogs (Physalaemus pustulosus) can produce single-note calls (whines) and multi-note calls (whine-chucks). While the whine alone is sufficient for species recognition, females greatly prefer the whine-chuck when given a choice. To better understand how the brain responds to variation in male mating signals, we mapped neural activity patterns evoked by interspecific and intraspecific variation in mating calls in túngara frogs by measuring expression of egr-1. We predicted that egr-1 responses to conspecific calls would identify brain regions that are potentially important for species recognition and that at least some of those brain regions would vary in their egr-1 responses to mating calls that vary in attractiveness. We measured egr-1 in the auditory brainstem and its forebrain targets and found that conspecific whine-chucks elicited greater egr-1 expression than heterospecific whines in all but three regions. We found no evidence that preferred whine-chuck calls elicited greater egr-1 expression than conspecific whines in any of eleven brain regions examined, in contrast to predictions that mating preferences in túngara frogs emerge from greater responses in the auditory system. Although selectivity for species-specific signals is apparent throughout the túngara frog brain, further studies are necessary to elucidate how neural activity patterns vary with the attractiveness of conspecific mating calls.

    Microelectronic circuits for noninvasive ear type assistive devices

    An ear-type system and its circuit realization, with application as a new class of assistive devices, are investigated. Auditory brainstem responses obtained from clinical hearing measurements are used to develop ear-type systems that mimic the physical and behavioral characteristics of the individual auditory system. Where effects of hearing loss or disorder can be detected in the measured responses, normal and impaired characteristics of the human auditory system can be differentiated, and a new noninvasive way of correcting these undesired effects is proposed. The ear-type model of the auditory brainstem response is developed using an adaptation of a nonlinear neural network architecture, and the correction system is realized using the derived inverse of that network. Microelectronic circuits for these systems are designed and simulated, demonstrating the feasibility of a hearing-aid-type device that could help hearing-impaired patients in an alternative, noninvasive way.
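
    As a rough illustration of the modelling-and-inversion idea, the sketch below fits a small feedforward network that maps stimulus features onto measured auditory brainstem response features, then numerically searches for an input that would produce a desired (normal-hearing) response. The architecture, feature choices, placeholder data, and optimization-based inversion are assumptions; the paper's actual derived-inverse network is not reproduced here.

```python
# Hedged sketch: fit a small feedforward network mapping stimulus features to
# measured auditory brainstem response (ABR) features, then numerically search for
# the input that would yield a desired (normal-hearing) response. Architecture,
# features, placeholder data, and the optimization-based inversion are assumptions.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # placeholder stimulus descriptors (level, spectral shape, ...)
y = rng.normal(size=(200, 3))     # placeholder ABR features (wave latencies / amplitudes)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

def invert(target_response, x0):
    """Find a stimulus whose predicted ABR best matches the target response."""
    cost = lambda x: float(np.sum((model.predict(x.reshape(1, -1))[0] - target_response) ** 2))
    return minimize(cost, x0, method="Nelder-Mead").x

corrected_input = invert(target_response=np.zeros(3), x0=np.zeros(4))
print("candidate corrected stimulus:", corrected_input)
```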

    Sound processing in the mouse auditory cortex: organization, modulation, and transformation

    The auditory system begins with the cochlea, a frequency analyzer and signal amplifier with exquisite precision. As neural information travels towards higher brain regions, the encoding becomes less faithful to the sound waveform itself and more influenced by non-sensory factors such as top-down attentional modulation, local feedback modulation, and long-term changes caused by experience. At the level of auditory cortex (ACtx), such influences are evident at multiple scales, from single neurons to cortical columns to topographic maps, and are known to be linked with critical processes such as auditory perception, learning, and memory. How the ACtx integrates a wealth of diverse inputs while supporting adaptive and reliable sound representations is an important unsolved question in auditory neuroscience. This dissertation tackles this question using the mouse as an animal model. We begin by describing a detailed functional map of receptive fields within the mouse ACtx. Focusing on frequency tuning properties, we demonstrated a robust tonotopic organization in the core ACtx fields (A1 and AAF) across cortical layers, neural signal types, and anesthetic states, confirming the columnar organization of basic sound processing in ACtx. We then studied the bottom-up input to ACtx columns by optogenetically activating the inferior colliculus (IC), and observed feedforward neuronal activity in the frequency-matched column, which also induced clear auditory percepts in behaving mice. Next, we used optogenetics to study layer 6 corticothalamic neurons (L6CT) that project heavily to the thalamus and upper layers of ACtx. We found that L6CT activation biases sound perception towards either enhanced detection or discrimination depending on its relative timing with respect to the sound, a process that may support dynamic filtering of auditory information. Finally, we optogenetically isolated cholinergic neurons in the basal forebrain (BF) that project to ACtx and studied their involvement in columnar ACtx plasticity during associative learning. In contrast to the previous notion that the BF merely encodes reward and punishment, we observed clear auditory responses from the cholinergic neurons, which exhibited rapid learning-induced plasticity, suggesting that the BF may provide a key instructive signal to drive adaptive plasticity in ACtx.
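
    As a minimal illustration of how a tonotopic map can be quantified, the sketch below estimates each recording site's best frequency from tone-evoked tuning curves and tests for a monotonic best-frequency gradient along one cortical axis. The data shapes, frequency range, and Spearman-correlation test are illustrative assumptions, not the dissertation's analysis pipeline.

```python
# Hedged sketch: estimate each recording site's best frequency (BF) from tone-evoked
# tuning curves and test for a monotonic BF gradient along one cortical axis.
# Placeholder data, the frequency range, and the Spearman test are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
freqs_khz = np.logspace(np.log10(4), np.log10(64), 17)       # test-tone frequencies (kHz)
responses = rng.random((50, len(freqs_khz)))                 # site x frequency response matrix
positions_um = np.sort(rng.random(50)) * 1000.0              # site positions along one axis (µm)

best_freq = freqs_khz[np.argmax(responses, axis=1)]          # BF = frequency with peak response
rho, p = stats.spearmanr(positions_um, np.log2(best_freq))   # monotonic BF progression?
print(f"tonotopy check: Spearman rho = {rho:.2f}, p = {p:.3f}")
```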