1,356 research outputs found

    Real-time detection of auditory steady-state brainstem potentials evoked by auditory stimuli

    The auditory steady-state response (ASSR) is advantageous over other hearing assessment techniques because it provides objective and frequency-specific information. The objectives of this work are to reduce the lengthy test duration and to improve the signal detection rate and the robustness of detection against background noise and unwanted artefacts. Two prominent state-estimation techniques, the Luenberger observer and the Kalman filter, have been used in the development of the autonomous ASSR detection scheme. Both techniques are real-time implementable; the challenges in applying them are the very poor SNR of ASSRs (which can be as low as −30 dB) and the unknown statistics of the noise. A dual-channel architecture is proposed: one channel estimates the sinusoid and the other estimates the background noise. Simulation and experimental studies were conducted to evaluate the performance of the developed ASSR detection scheme and to compare the new method with conventional techniques. In general, both state-estimation techniques within the detection scheme produced results comparable to the conventional techniques, but achieved significant reductions in measurement time in some cases. A guide is given for determining the observer gains, while an adaptive algorithm adjusts the gains in the Kalman filters. To enhance the robustness of the ASSR detection scheme with adaptive Kalman filters against possible artefacts (outliers), a multisensory data-fusion approach combines the standard mean operation and a median operation in the ASSR detection algorithm. In addition, a self-tuned, statistics-based threshold obtained by regression is applied in the autonomous ASSR detection scheme. The scheme with adaptive Kalman filters is capable of estimating the variances of the system and background noise to improve the ASSR detection rate.
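    The state-estimation idea described above can be sketched as a two-state Kalman filter whose state model is a rotation at the stimulus frequency, so the filter acts as a narrowband tracker of the ASSR sinusoid in noise. This is a minimal illustrative sketch, not the authors' implementation; the noise variances `q` and `r` and the test signal are assumptions.

```python
import numpy as np

def kalman_sinusoid(y, f, fs, q=1e-4, r=9.0):
    """Estimate a sinusoid at known frequency f (Hz) buried in noise,
    using a 2-state Kalman filter with a rotation-matrix state model.
    Illustrative sketch; q and r are assumed noise variances."""
    w = 2.0 * np.pi * f / fs
    A = np.array([[np.cos(w), -np.sin(w)],
                  [np.sin(w),  np.cos(w)]])   # state transition: rotate by w each sample
    C = np.array([[1.0, 0.0]])                # we observe the first state component
    Q = q * np.eye(2)                         # process-noise covariance
    x, P = np.zeros(2), np.eye(2)
    est = np.empty(len(y))
    for k, yk in enumerate(y):
        x = A @ x                             # predict state
        P = A @ P @ A.T + Q                   # predict covariance
        S = (C @ P @ C.T).item() + r          # innovation variance
        K = (P @ C.T).ravel() / S             # Kalman gain
        x = x + K * (yk - x[0])               # measurement update
        P = P - np.outer(K, C @ P)
        est[k] = x[0]
    return est

# Demo: a 40 Hz sinusoid at roughly -13 dB SNR (the ASSR case is far worse)
fs, f = 1000.0, 40.0
t = np.arange(0, 2.0, 1 / fs)
clean = np.sin(2 * np.pi * f * t)
rng = np.random.default_rng(0)
noisy = clean + 3.0 * rng.standard_normal(t.size)
est = kalman_sinusoid(noisy, f, fs)
```

    Because the state model rotates at exactly the stimulus frequency, the filter behaves like a very narrow bandpass around that frequency, which is what makes it usable at poor SNR; the dual-channel idea in the abstract adds a second estimator for the background noise alongside this one.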

    Single-Trial Sparse Representation-Based Approach for VEP Extraction


    FUNCTIONAL NETWORK CONNECTIVITY IN HUMAN BRAIN AND ITS APPLICATIONS IN AUTOMATIC DIAGNOSIS OF BRAIN DISORDERS

    The human brain is one of the most complex systems known to mankind. Over the past 3500 years, mankind has constantly investigated this remarkable system in order to understand its structure and function. The emergence of neuroimaging techniques such as functional magnetic resonance imaging (fMRI) has opened a non-invasive, in-vivo window into brain function. Moreover, fMRI has made it possible to study brain disorders such as schizophrenia from an angle previously unavailable to researchers. Human brain function can be divided into two categories: functional segregation and functional integration. It is well understood that each region in the brain is specialized for certain cognitive or motor tasks. The information processed in these specialized regions at different temporal and spatial scales must be integrated in order to form a unified cognition or behavior. One way to assess functional integration is to measure functional connectivity (FC) among specialized regions in the brain. Recently, there has been growing interest in studying the FC among brain functional networks. This type of connectivity, which can be considered a higher level of FC, is termed functional network connectivity (FNC) and measures the statistical dependencies among brain functional networks; each functional network may comprise multiple remote brain regions. Four studies related to FNC are presented in this work. First, FNC is compared between the resting state and an auditory oddball (AOD) task; most previous FNC studies have focused on either resting-state or task-based data but have not directly compared the two. Second, we propose an automatic diagnosis framework based on resting-state FNC features for mental disorders such as schizophrenia. Third, we investigate the proper preprocessing of fMRI time series for FNC studies; specifically, the impact of autocorrelated time series on FNC is comprehensively assessed in theory, in simulation, and in real fMRI data.
    Finally, the notion of autoconnectivity is proposed as a new perspective on human brain function. It is shown that autoconnectivity is cognitive-state and mental-state dependent, and we discuss how this source of information, previously believed to originate from physical and physiological noise, can be used to discriminate schizophrenia patients with high accuracy.
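    The core FNC measure described above, statistical dependence among network time courses, reduces in its simplest form to a pairwise correlation matrix. A minimal sketch on synthetic data follows; the series and their coupling structure are invented for illustration and stand in for fMRI network time courses.

```python
import numpy as np

# Minimal sketch: FNC as the pairwise Pearson correlation among
# network time courses. Synthetic data; the coupling is an assumption.
rng = np.random.default_rng(1)
n_time, n_networks = 200, 4
shared = rng.standard_normal(n_time)            # common driver signal
ts = rng.standard_normal((n_time, n_networks))  # independent baseline series
ts[:, 0] += 2.0 * shared                        # couple networks 0 and 1
ts[:, 1] += 2.0 * shared

fnc = np.corrcoef(ts, rowvar=False)             # n_networks x n_networks FNC matrix
```

    As the abstract notes, autocorrelation in the time series biases correlation estimates like this one, which is why the preprocessing study assesses its impact and why prewhitening is a relevant step before computing FNC.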

    Neural Mechanisms of Sensory Integration: Frequency Domain Analysis of Spike and Field Potential Activity During Arm Position Maintenance with and Without Visual Feedback

    abstract: Understanding where our bodies are in space is imperative for motor control, particularly for actions such as goal-directed reaching. Multisensory integration is crucial for reducing uncertainty in arm-position estimates. This dissertation examines time- and frequency-domain correlates of visual-proprioceptive integration during an arm-position maintenance task. Neural recordings were obtained from two cortical areas, the superior parietal lobule (SPL) and the inferior parietal lobule (IPL), as non-human primates performed a center-out reaching task in a virtual reality environment. Following a reach, animals maintained the end-point position of their arm under unimodal (proprioception only) and bimodal (proprioception and vision) conditions. In both areas, time-domain and multitaper spectral analysis methods were used to quantify changes in spiking, local field potential (LFP), and spike-field coherence during arm-position maintenance, and individual neurons were classified based on the spectrum of their spiking patterns. A large proportion of cells in the SPL exhibited sensory-condition-specific oscillatory spiking in the beta (13-30 Hz) frequency band, while cells in the IPL typically showed a more diverse mix of oscillatory and refractory spiking patterns in response to changing sensory conditions. Contrary to the assumptions made in many modelling studies, no cells in the SPL or IPL exhibited Poisson spiking statistics. Evoked LFPs in both areas showed greater effects of target location than of visual condition, though evoked responses in the preferred reach direction were generally suppressed in the bimodal condition relative to the unimodal condition. Significant effects of target location on evoked responses were also observed during the movement period of the task.
    In the frequency domain, LFP power in both cortical areas was enhanced in the beta band during the position-estimation epoch of the task, indicating that LFP beta oscillations may be important for maintaining the ongoing state. This was particularly evident at the population level, with a clear increase in alpha and beta power. Differences in spectral power between conditions also became apparent at the population level, with power during bimodal trials suppressed relative to unimodal trials. The spike-field coherence showed confounding results in both the SPL and IPL, with no clear correlation between the incidence of beta oscillations and significant beta coherence.
    Doctoral Dissertation, Biomedical Engineering, 201
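    The multitaper spectral estimate used for the LFP analysis above can be sketched in a few lines: the signal is multiplied by several orthogonal DPSS (Slepian) tapers and the resulting periodograms are averaged, trading frequency resolution for variance reduction. The 20 Hz test oscillation and the window parameters below are illustrative, not those of the study.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=3.0, K=5):
    """Multitaper PSD estimate: average the periodograms of K
    DPSS-tapered copies of the signal. Illustrative sketch."""
    n = len(x)
    tapers = dpss(n, NW, K)                          # K x n Slepian tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                  # average across tapers
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return freqs, psd

# Synthetic "LFP": a 20 Hz (beta-band) oscillation in white noise
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
lfp = np.sin(2 * np.pi * 20.0 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = multitaper_psd(lfp, fs)
beta = (freqs >= 13) & (freqs <= 30)                 # beta band, 13-30 Hz
```

    With `NW = 3` over a 2 s window, the half-bandwidth is 1.5 Hz, so the 20 Hz peak is smeared over roughly 17-23 Hz; the beta-band mask is the kind of summary used to quantify the condition-dependent power changes described above.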

    TEMPORAL CODING OF SPEECH IN HUMAN AUDITORY CORTEX

    Human listeners can reliably recognize speech in complex listening environments. The underlying neural mechanisms, however, remain unclear and cannot yet be emulated by any artificial system. In this dissertation, we study how speech is represented in the human auditory cortex and how the neural representation contributes to reliable speech recognition. Cortical activity from normal hearing human subjects is noninvasively recorded using magnetoencephalography, during natural speech listening. It is first demonstrated that neural activity from auditory cortex is precisely synchronized to the slow temporal modulations of speech, when the speech signal is presented in a quiet listening environment. How this neural representation is affected by acoustic interference is then investigated. Acoustic interference degrades speech perception via two mechanisms, informational masking and energetic masking, which are addressed respectively by using a competing speech stream and a stationary noise as the interfering sound. When two speech streams are presented simultaneously, cortical activity is predominantly synchronized to the speech stream the listener attends to, even if the unattended, competing speech stream is 8 dB more intense. When speech is presented together with spectrally matched stationary noise, cortical activity remains precisely synchronized to the temporal modulations of speech until the noise is 9 dB more intense. Critically, the accuracy of neural synchronization to speech predicts how well individual listeners can understand speech in noise. Further analysis reveals that two neural sources contribute to speech-synchronized cortical activity, one with a shorter response latency of about 50 ms and the other with a longer response latency of about 100 ms.
    The longer-latency component, but not the shorter-latency component, shows selectivity to the attended speech and invariance to background noise, indicating a transition from encoding the acoustic scene to encoding the behaviorally important auditory object, in auditory cortex. Taken together, we have demonstrated that during natural speech comprehension, neural activity in the human auditory cortex is precisely synchronized to the slow temporal modulations of speech. This neural synchronization is robust to acoustic interference, whether speech or noise, and therefore provides a strong candidate for the neural basis of acoustic background invariant speech recognition.
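    The synchronization measure described above, cortical activity tracking the slow temporal envelope of speech at a characteristic latency, can be sketched with a lagged correlation between the stimulus envelope and the response. The signals and the 100 ms latency below are simulated stand-ins, not MEG data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                   # envelope-rate sampling, Hz
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(3)

# Stand-in for a slow speech envelope: Hilbert magnitude of smoothed noise
smooth = np.convolve(rng.standard_normal(t.size), np.ones(20) / 20, mode="same")
env = np.abs(hilbert(smooth))

# Simulated cortical response: the envelope delayed by 100 ms, plus noise
lag_true = int(0.1 * fs)
neural = np.roll(env, lag_true) + 0.15 * rng.standard_normal(t.size)

# Lagged correlation between stimulus envelope and response
lags = np.arange(0, int(0.3 * fs))           # test latencies from 0 to 300 ms
cc = np.array([np.corrcoef(env[:env.size - L], neural[L:])[0, 1] for L in lags])
latency = lags[int(np.argmax(cc))] / fs      # peak correlation near 0.1 s
```

    The lag of the correlation peak recovers the response latency, which is the logic behind separating the roughly 50 ms and 100 ms components; the dissertation's actual analysis fits the MEG response to the speech envelope rather than using this simple lagged correlation.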

    Measuring latency variations in evoked potential components using a simple auto-correlation technique

    Interpretation of averaged evoked potentials is difficult when the time relationship between stimulus and response is not constant. Later components are more prone to latency jitter, making them insufficiently reliable for routine clinical use even though they could contribute to greater understanding of the functioning of polysynaptic components of the afferent nervous system. This study is aimed at providing a simple but effective method of identifying and quantifying latency jitter in averaged evoked potentials. Autocorrelation techniques were applied within defined time windows on simulated jittered signals embedded within the noise component of recorded evoked potentials and on real examples of somatosensory evoked potentials. We demonstrated that the technique accurately identifies the distribution and maximum levels of jitter of the simulated components and clearly identifies the jitter properties of real evoked potential recording components. This method is designed to complement the conventional analytical methods used in neurophysiological practice to provide valuable additional information about the distribution of latency jitter within an averaged evoked potential. It will be useful for the assessment of the reliability of averaged components and will aid the interpretation of longer-latency, polysynaptic components such as those found in nociceptive evoked potentials.
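    The effect the method exploits can be illustrated briefly: trial-to-trial latency jitter broadens the averaged component, and that broadening appears as a wider autocorrelation peak within the analysis window. The simulation below is a sketch with invented waveform parameters, not the study's algorithm.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(4)

def averaged_ep(jitter_sd, n_trials=200, latency=0.25, width=0.01):
    """Average of simulated single trials: a Gaussian component whose
    latency jitters from trial to trial, plus background noise."""
    trials = []
    for _ in range(n_trials):
        lat = latency + jitter_sd * rng.standard_normal()
        trials.append(np.exp(-0.5 * ((t - lat) / width) ** 2)
                      + 0.5 * rng.standard_normal(t.size))
    return np.mean(trials, axis=0)

def acorr_halfwidth(x):
    """Lag (in samples) where the normalized autocorrelation
    of the windowed signal first drops below 0.5."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]  # lags 0..n-1
    ac = ac / ac[0]
    return int(np.argmax(ac < 0.5))

w_nojit = acorr_halfwidth(averaged_ep(0.0))    # no latency jitter
w_jit = acorr_halfwidth(averaged_ep(0.02))     # 20 ms latency jitter
```

    Comparing the autocorrelation width of a measured component against that expected for a jitter-free component of the same shape is the kind of quantification the abstract describes, applied within a defined time window around each component.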

    Processing neuroelectric data

    "July 7, 1959"--Cover.Includes bibliographies.Army Signal Corps Contract DA36-039-sc-78108. Dept. of the Army Task 3-99-06-108 and Project 3-99-00-100.by Communications Biophysics Group of Research Laboratory of Electronics and William M. Siebert