Frequency Recognition in SSVEP-based BCI using Multiset Canonical Correlation Analysis
Canonical correlation analysis (CCA) has been one of the most popular methods
for frequency recognition in steady-state visual evoked potential (SSVEP)-based
brain-computer interfaces (BCIs). Despite its efficiency, a potential problem
is that using pre-constructed sine-cosine waves as the required reference
signals in the CCA method often does not result in the optimal recognition
accuracy, since such artificial waves lack features of the real EEG data. To address this
problem, this study proposes a novel method based on multiset canonical
correlation analysis (MsetCCA) to optimize the reference signals used in the
CCA method for SSVEP frequency recognition. The MsetCCA method learns multiple
linear transforms that implement joint spatial filtering to maximize the
overall correlation among canonical variates, and hence extracts SSVEP common
features from multiple sets of EEG data recorded at the same stimulus
frequency. The optimized reference signals are formed by combining the
common features and are derived entirely from training data. An experimental study with
EEG data from ten healthy subjects demonstrates that the MsetCCA method
improves the recognition accuracy of SSVEP frequency in comparison with the CCA
method and two other competing methods (multiway CCA (MwayCCA) and phase
constrained CCA (PCCA)), especially for a small number of channels and a short
time window length. The superiority indicates that the proposed MsetCCA method
is a promising new candidate for frequency recognition in SSVEP-based BCIs.
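The baseline CCA method that MsetCCA improves upon can be sketched as follows. This is a minimal, illustrative implementation (not the authors' code): it builds sine-cosine reference signals at each candidate stimulus frequency and picks the frequency whose references have the largest canonical correlation with the multichannel EEG. The function names, the number of harmonics, and the QR/SVD route to the canonical correlation are my own choices.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)            # orthonormal basis for each set
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]                        # top singular value = top canonical corr

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Pre-constructed sine-cosine reference signals for one stimulus frequency."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def recognize_frequency(eeg, fs, candidate_freqs):
    """eeg: (n_samples, n_channels). Returns the candidate frequency whose
    reference set correlates most strongly with the recorded EEG."""
    n = eeg.shape[0]
    scores = [cca_max_corr(eeg, ssvep_reference(f, fs, n)) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

MsetCCA replaces the artificial `ssvep_reference` signals with references learned jointly from multiple training trials, which is exactly the gap the abstract identifies.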
Identification of audio evoked response potentials in ambulatory EEG data
Electroencephalography (EEG) is commonly used for observing brain function over a period of time. It employs a set of non-invasive electrodes on the scalp to measure the electrical activity of the brain. EEG is mainly used by researchers and clinicians to study the brain's responses to a specific stimulus - the event-related potentials (ERPs). Different types of undesirable signals, known as artefacts, contaminate the EEG signal. EEG and ERP signals are very small (on the order of microvolts); they are often obscured by artefacts with much larger amplitudes, on the order of millivolts. This greatly increases the difficulty of interpreting EEG and ERP signals. Typically, ERPs are observed by averaging EEG measurements made over many repetitions of the stimulus. The average may require many tens of repetitions before the ERP signal can be observed with any confidence, which greatly limits the study and use of ERPs. This project explores more sophisticated methods of ERP estimation from measured EEGs. An Optimal Weighted Mean (OWM) method is developed that forms a weighted average to maximise the signal-to-noise ratio in the mean. This is developed further into a Bayesian Optimal Combining (BOC) method, in which the information in repeated ERP measurements is combined to provide a sequence of ERP estimates with monotonically decreasing uncertainty. A Principal Component Analysis (PCA) is performed to identify the basis of signals that explains the greatest amount of ERP variation. Projecting measured EEG signals onto this basis greatly reduces the noise in measured ERPs. The PCA filtering can be followed by OWM or BOC. Finally, cross-channel information can be used: the ERP signal is measured on many electrodes simultaneously, and an improved estimate can be formed by combining electrode measurements.
A MAP estimate, phrased in terms of Kalman filtering, is developed using all electrode measurements. The methods developed in this project have been evaluated using both synthetic and measured EEG data; a synthetic, multi-channel ERP simulator was developed specifically for this project. Numerical experiments on synthetic ERP data showed that Bayesian Optimal Combining of trial data, filtered using a combination of PCA projection and Kalman filtering, yielded the best estimates of the underlying ERP signal. This method has been applied to subsets of real ambulatory electroencephalography (AEEG) data, recorded while participants performed a range of activities in different environments. From this analysis, the number of trials that need to be collected to observe the P300 amplitude and delay has been calculated for a range of scenarios.
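The idea of a weighted average that maximises signal-to-noise ratio can be illustrated with inverse-variance weighting of trials. This is a hedged sketch of one plausible such scheme, not the thesis's actual OWM derivation; the per-trial noise estimate (residual power against the plain mean) is an assumption of this sketch.

```python
import numpy as np

def weighted_mean_erp(trials):
    """Combine repeated ERP trials, shaped (n_trials, n_samples), with weights
    inversely proportional to each trial's estimated noise variance, so that
    noisy trials contribute less than they would in a plain average."""
    plain = trials.mean(axis=0)
    # Estimate each trial's noise power from its residual against the plain mean
    noise_var = ((trials - plain) ** 2).mean(axis=1)
    w = 1.0 / np.maximum(noise_var, 1e-12)
    return (w / w.sum()) @ trials
```

When one trial is badly contaminated by a movement artefact, the plain average inherits that contamination, while the weighted mean largely suppresses it.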
Assessing the quality of steady-state visual-evoked potentials for moving humans using a mobile electroencephalogram headset.
Recent advances in mobile electroencephalogram (EEG) systems, featuring non-prep dry electrodes and wireless telemetry, have enabled and promoted the applications of mobile brain-computer interfaces (BCIs) in our daily life. Since the brain may behave differently while people are actively situated in ecologically-valid environments versus highly-controlled laboratory environments, it remains unclear how well the current laboratory-oriented BCI demonstrations can be translated into operational BCIs for users with naturalistic movements. Understanding inherent links between natural human behaviors and brain activities is the key to ensuring the applicability and stability of mobile BCIs. This study aims to assess the quality of steady-state visual-evoked potentials (SSVEPs), one of the most promising signal types for functioning BCI systems, recorded using a mobile EEG system under challenging recording conditions, e.g., walking. To systematically explore the effects of walking locomotion on the SSVEPs, this study instructed subjects to stand or walk on a treadmill running at speeds of 1, 2, and 3 miles per hour (MPH) while concurrently perceiving visual flickers (11 and 12 Hz). Empirical results of this study showed that the SSVEP amplitude tended to deteriorate when subjects switched from standing to walking. Such SSVEP suppression could be attributed to the walking locomotion, leading to distinctly deteriorated SSVEP detectability from standing (84.87 ± 13.55%) to walking (1 MPH: 83.03 ± 13.24%, 2 MPH: 79.47 ± 13.53%, and 3 MPH: 75.26 ± 17.89%). These findings not only demonstrated the applicability and limitations of SSVEPs recorded from freely behaving humans in realistic environments, but also provided useful methods and techniques for boosting the translation of BCI technology from laboratory demonstrations to practical applications.
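SSVEP detectability of the kind quantified in this study is often assessed from the spectral signal-to-noise ratio at the flicker frequency. The following is a minimal single-channel sketch, not the paper's actual pipeline; the Hann window and the neighbouring-bin noise estimate are assumptions of this sketch.

```python
import numpy as np

def ssvep_snr(x, fs, f_target, n_neighbours=5):
    """Power at f_target divided by the mean power of neighbouring frequency
    bins - a simple detectability score for one EEG channel."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_target)))      # bin nearest the flicker
    lo, hi = max(k - n_neighbours, 1), k + 1 + n_neighbours
    noise = np.concatenate([spec[lo:k], spec[k + 1:hi]]).mean()
    return spec[k] / noise
```

Comparing such SNR scores across standing and walking conditions is one simple way to reproduce the kind of detectability degradation the abstract reports.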
Acquisition of subcortical auditory potentials with around-the-Ear cEEGrid technology in normal and hearing impaired listeners
Even though the principles of recording brain electrical activity remain unchanged since their discovery, their acquisition has seen major improvements. The cEEGrid, a recently developed flex-printed multi-channel sensor array, can be placed around the ear and successfully record well-known cortical electrophysiological potentials such as late auditory evoked potentials (AEPs) or the P300. Due to its fast and easy application as well as its long-lasting signal recording window, the cEEGrid technology offers great potential as a flexible and 'wearable' solution for the acquisition of neural correlates of hearing. Early potentials of auditory processing such as the auditory brainstem response (ABR) are already used in clinical assessment of sensorineural hearing disorders, and envelope following responses (EFRs) have shown promising results in the diagnosis of suprathreshold hearing deficits. This study evaluates the suitability of the cEEGrid electrode configuration to capture these AEPs. cEEGrid potentials were recorded and compared to cap-EEG potentials for young normal-hearing listeners and older listeners with high-frequency sloping audiograms to assess whether the recordings are adequately sensitive for hearing diagnostics. ABRs were elicited by presenting clicks (70- and 100-dB peSPL), and stimulation for the EFRs consisted of 120 Hz amplitude-modulated white noise carriers presented at 70-dB SPL. Data from nine bipolar cEEGrid channels and one classical cap-EEG montage (earlobes to vertex) were analysed and outcome measures were compared. Results show that the cEEGrid is able to record ABRs and EFRs with shapes comparable to those recorded using a conventional cap-EEG recording montage and the same amplifier. Signal strength is lower but can still produce responses above the individual neural electrophysiological noise floor. This study shows that the application of the cEEGrid can be extended to the acquisition of early auditory evoked potentials.
Neural population coding: combining insights from microscopic and mass signals
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing, we need to combine knowledge from both levels of investigation. We review recent progress in how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
Individual differences in supra-threshold auditory perception - mechanisms and objective correlates
Thesis (Ph.D.)--Boston University
To extract content and meaning from a single source of sound in a quiet background, the auditory system can use a small subset of a very redundant set of spectral and temporal features. In stark contrast, communication in a complex, crowded scene places enormous demands on the auditory system. Spectrotemporal overlap between sounds reduces modulations in the signals at the ears and causes masking, with problems exacerbated by reverberation. Consistent with this idea, many patients seek audiological treatment precisely because they notice difficulties in environments requiring auditory selective attention. In the laboratory, even listeners with normal hearing thresholds exhibit vast differences in the ability to selectively attend to a target. Understanding the mechanisms causing these supra-threshold differences, the focus of this thesis, may enable research that leads to advances in treating communication disorders that affect an estimated one in five Americans.
Converging evidence from human and animal studies points to one potential source of these individual differences: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Electrophysiological measures of sound encoding by the auditory brainstem in humans and animals support the idea that the temporal precision of the early auditory neural representation can be poor even when hearing thresholds are normal. Concomitantly, animal studies show that noise exposure and early aging can cause a loss (cochlear neuropathy) of a large percentage of the afferent population of auditory nerve fibers innervating the cochlear hair cells without any significant change in measured audiograms.
Using behavioral, otoacoustic and electrophysiological measures in conjunction with computational models of sound processing by the auditory periphery and brainstem, a detailed examination of temporal coding of supra-threshold sound is carried out, focusing on characterizing and understanding individual differences in listeners with normal hearing thresholds and normal cochlear mechanical function. Results support the hypothesis that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests as deficits both behaviorally and in subcortical electrophysiological measures in humans. Based on these results, electrophysiological measures are developed that may yield sensitive, fast, objective measures of supra-threshold coding deficits that arise as a result of cochlear neuropathy.
Data-driven multivariate and multiscale methods for brain computer interface
This thesis focuses on the development of data-driven multivariate and multiscale methods
for brain computer interface (BCI) systems. The electroencephalogram (EEG), the
most convenient means to measure neurophysiological activity due to its noninvasive nature,
is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its
multichannel recording nature require a new set of data-driven multivariate techniques to
estimate features more accurately for enhanced BCI operation. Also, a long-term goal
is to enable an alternative EEG recording strategy for achieving long-term and portable
monitoring.
Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully
data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary
EEG signal into a set of components which are highly localised in time and frequency. It
is shown that the complex and multivariate extensions of EMD, which can exploit common
oscillatory modes within multivariate (multichannel) data, can be used to accurately
estimate and compare the amplitude and phase information among multiple sources, a
key step for feature extraction in BCI systems. A complex extension of local mean decomposition
is also introduced, and its operation is illustrated on two-channel neuronal
spike streams. Common spatial pattern (CSP), a standard feature extraction technique
for BCI applications, is also extended to the complex domain using augmented complex
statistics. Depending on the circularity/noncircularity of a complex signal, one of the
complex CSP algorithms can be chosen to produce the best classification performance
between two different EEG classes.
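The real-valued CSP baseline mentioned here can be sketched with a standard whitening-plus-eigendecomposition construction. This is a generic textbook CSP, not the thesis's complex-domain extension; the function names and the trace normalisation of the covariances are illustrative choices.

```python
import numpy as np

def csp_filters(class1, class2, n_pairs=1):
    """Compute CSP spatial filters from two lists of trials, each trial shaped
    (n_channels, n_samples). Returns 2*n_pairs filter rows: the first n_pairs
    maximise variance for class 2, the last n_pairs for class 1."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    c1, c2 = mean_cov(class1), mean_cov(class2)
    # Whiten the composite covariance c1 + c2
    d, u = np.linalg.eigh(c1 + c2)
    p = u @ np.diag(d ** -0.5) @ u.T
    # Eigenvectors of the whitened class-1 covariance give the filters;
    # eigenvalues near 1 favour class 1, near 0 favour class 2
    lam, b = np.linalg.eigh(p @ c1 @ p.T)
    w = b.T @ p
    return np.vstack([w[:n_pairs], w[-n_pairs:]])
```

Log-variances of the spatially filtered trials then serve as the classification features, which is the role CSP plays in the two-class EEG setting described above.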
Using these complex and multivariate algorithms, two cognitive brain studies are
investigated for more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user
attention to a sound source among a mixture of sound stimuli, which is aimed at improving
the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments
elicited by taste and taste recall are examined to determine the pleasure or displeasure
evoked by a food, for the implementation of affective computing. The separation between the two
emotional responses is examined using real and complex-valued common spatial pattern
methods.
Finally, we introduce a novel approach to brain monitoring based on EEG recordings
from within the ear canal, embedded on a custom made hearing aid earplug. The new
platform promises the possibility of both short- and long-term continuous use for standard
brain monitoring and interfacing applications.