Meta learning of bounds on the Bayes classifier error
Meta learning uses information from base learners (e.g. classifiers or
estimators) as well as information about the learning problem to improve upon
the performance of a single base learner. For example, the Bayes error rate of
a given feature space, if known, can be used to aid in choosing a classifier,
as well as in feature selection and model selection for the base classifiers
and the meta classifier. Recent work in the field of f-divergence functional
estimation has led to the development of simple and rapidly converging
estimators that can be used to estimate various bounds on the Bayes error. We
estimate multiple bounds on the Bayes error using an estimator that applies
meta learning to slowly converging plug-in estimators to obtain the parametric
convergence rate. We compare the estimated bounds empirically on simulated data
and then estimate the tighter bounds on features extracted from an image patch
analysis of sunspot continuum and magnetogram images.
Comment: 6 pages, 3 figures, to appear in proceedings of 2015 IEEE Signal Processing and SP Education Workshop
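To make concrete how a divergence value is converted into bounds on the Bayes error, the sketch below computes the classical Bhattacharyya bounds for two equal-variance Gaussians, where the Bhattacharyya coefficient has a closed form. This is a textbook illustration of the kind of bound being estimated, not the paper's ensemble estimator, and the function names are ours:

```python
import math

def bhattacharyya_gauss(mu1, mu2, sigma):
    """Closed-form Bhattacharyya coefficient for two equal-variance Gaussians."""
    db = (mu1 - mu2) ** 2 / (8 * sigma ** 2)   # Bhattacharyya distance
    return math.exp(-db)                        # coefficient rho = exp(-DB)

def bayes_error_bounds(rho):
    """Bhattacharyya bounds on the Bayes error for equal priors:
    (1 - sqrt(1 - rho^2)) / 2  <=  Pe  <=  rho / 2."""
    lower = 0.5 * (1.0 - math.sqrt(1.0 - rho ** 2))
    upper = 0.5 * rho
    return lower, upper
```

For N(0, 1) vs N(2, 1) the true Bayes error is Phi(-1) ≈ 0.159, which falls inside the computed interval (≈ 0.103, ≈ 0.303); a divergence estimator plugged into such bounds yields the estimated Bayes-error brackets compared in the paper.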
An SSVEP Brain-Computer Interface: A Machine Learning Approach
A Brain-Computer Interface (BCI) provides a bidirectional communication path for a human to control an external device using brain signals. Among the neurophysiological features used in BCI systems, steady-state visually evoked potentials (SSVEPs), the natural responses to visual stimulation at specific frequencies, have increasingly drawn attention because of their high temporal resolution and minimal user training, two important criteria in evaluating a BCI system. The performance of a BCI can be improved by a properly selected neurophysiological signal or by the introduction of machine learning techniques. With the help of machine learning methods, a BCI system can adapt to the user automatically.
In this work, a machine learning approach is introduced to the design of an SSVEP based BCI. The following open problems have been explored:
1. Finding a waveform with a high success rate of eliciting SSVEP. SSVEPs are evoked potentials and therefore require external stimulation. By comparing square-wave, triangle-wave and sine-wave light signals and their corresponding SSVEPs, it was observed that square waves with a 50% duty cycle have a significantly higher success rate of eliciting SSVEPs than either sine or triangle stimuli.
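The three candidate flicker waveforms can be sketched as samplers of normalized luminance in [0, 1]; this is our own illustrative parameterization, not the stimulus generator used in the experiments:

```python
import math

def stimulus(waveform, freq_hz, t):
    """Luminance of one candidate flicker waveform at time t (seconds)."""
    phase = (freq_hz * t) % 1.0                 # position within the period
    if waveform == "square":                    # 50% duty cycle: on for half the period
        return 1.0 if phase < 0.5 else 0.0
    if waveform == "triangle":                  # linear ramp up, then down
        return 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)
    if waveform == "sine":                      # smooth sinusoidal luminance
        return 0.5 * (1.0 + math.sin(2.0 * math.pi * freq_hz * t))
    raise ValueError(f"unknown waveform: {waveform!r}")
```

The square wave's abrupt on/off transitions concentrate more energy at the fundamental and its odd harmonics than the smoother waveforms, which is one plausible reason for its higher elicitation rate.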
2. The resolution of dual stimuli that elicit consistent SSVEPs. Previous studies show that the frequency bandwidth of an SSVEP stimulus is limited, which constrains the performance of the whole system. A dual stimulus, the overlay of two distinct single-frequency stimuli, can potentially expand the number of valid SSVEP stimuli. However, the improvement depends on the resolution of the dual stimuli. Our experimental results show that 4 Hz is the minimum difference between the two frequencies in a dual stimulus that elicits consistent SSVEPs.
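A dual stimulus and the 4 Hz resolution constraint can be sketched as follows; the sinusoidal overlay and the helper names are our assumptions for illustration:

```python
import math

def dual_stimulus(f1_hz, f2_hz, t):
    """Overlay of two single-frequency flickers, normalized back to [0, 1]."""
    s1 = 0.5 * (1.0 + math.sin(2.0 * math.pi * f1_hz * t))
    s2 = 0.5 * (1.0 + math.sin(2.0 * math.pi * f2_hz * t))
    return 0.5 * (s1 + s2)

def valid_pairs(freqs, min_sep_hz=4.0):
    """Enumerate dual-stimulus frequency pairs respecting the observed 4 Hz resolution."""
    return [(f1, f2) for i, f1 in enumerate(freqs)
            for f2 in freqs[i + 1:] if abs(f2 - f1) >= min_sep_hz]
```

Filtering candidate pairs this way shows how the 4 Hz floor limits how many extra targets dual stimuli can add within a fixed usable band.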
3. Stimuli and color-space decomposition. It is known in the literature that although low-frequency stimuli (<30 Hz) elicit strong SSVEPs, they may cause dizziness. In this work, we explored the design of a visually friendly stimulus from the perspective of color-space decomposition. In particular, a stimulus was designed with a fixed luminance component and variations in the other two dimensions of the HSL (Hue, Saturation, Luminance) color space. Our results show that the change of color alone evokes SSVEPs, and that the frequencies embedded in the stimuli affect the harmonics. Subjects also reported that a fixed luminance eases the dizziness caused by low-frequency flashing objects.
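A fixed-luminance color stimulus can be sketched with the standard library's HLS conversion: luminance is held constant while hue oscillates at the target frequency. The oscillation shape and function name are our assumptions, not the exact stimulus from the study:

```python
import colorsys
import math

def hue_flicker_color(freq_hz, t, luminance=0.5, saturation=1.0):
    """RGB color of a fixed-luminance stimulus whose hue oscillates at freq_hz.

    Only hue varies over time; luminance (and here saturation) stay constant,
    so the flicker is carried entirely by color change."""
    hue = 0.5 * (1.0 + math.sin(2.0 * math.pi * freq_hz * t))  # hue in [0, 1]
    # Note: colorsys takes HLS order (hue, luminance, saturation).
    return colorsys.hls_to_rgb(hue, luminance, saturation)
```

Because the L component never changes, the perceived brightness is steady even while the displayed color sweeps through the hue circle at the stimulation frequency.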
4. Machine learning techniques applied to make a BCI adaptive to individuals. An SSVEP-based BCI brings new requirements to machine learning. Because of the non-stationarity of the brain signal, a classifier should adapt in real time to the time-varying statistical characteristics of a single user's brain waves. In this work, the potential function classifier is proposed to address this requirement; it achieves 38.2 bits/min on offline EEG data.
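The classical potential function method is an online, mistake-driven kernel classifier, which makes it a natural fit for the real-time adaptation requirement. The sketch below follows the textbook Aizerman-style formulation with a Gaussian potential; the dissertation's exact variant and parameters may differ:

```python
import math

class PotentialFunctionClassifier:
    """Online kernel (potential function) classifier: stores samples on which it
    errs and classifies by the sign of a weighted sum of potentials."""

    def __init__(self, gamma=1.0):
        self.gamma = gamma     # width of the Gaussian potential
        self.centers = []      # samples that triggered an update
        self.alphas = []       # their labels in {-1, +1}

    def _kernel(self, x, z):
        d2 = sum((a - b) ** 2 for a, b in zip(x, z))
        return math.exp(-self.gamma * d2)

    def decision(self, x):
        return sum(a * self._kernel(x, c)
                   for a, c in zip(self.alphas, self.centers))

    def partial_fit(self, x, y):
        """Update only on mistakes, so the model tracks the incoming stream."""
        if y * self.decision(x) <= 0:
            self.centers.append(x)
            self.alphas.append(y)

    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1
```

Because each update is a single kernel-expansion append, the classifier can keep adapting as new EEG epochs arrive, which is the property the abstract emphasizes.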
Graph-based Estimation of Information Divergence Functions
abstract: Information divergence functions, such as the Kullback-Leibler divergence or the Hellinger distance, play a critical role in statistical signal processing and information theory; however, estimating them can be challenging. Most often, parametric assumptions are made about the two distributions in order to estimate the divergence of interest. In cases where no parametric model fits the data, non-parametric density estimation is used. In statistical signal processing applications, Gaussianity is usually assumed, since closed-form expressions for common divergence measures have been derived for this family of distributions. Parametric assumptions are preferable when it is known that the data follow the model; however, this is rarely the case in real-world scenarios. Non-parametric density estimators, in turn, are characterized by a very large number of parameters that have to be tuned with costly cross-validation. In this dissertation we focus on a specific family of non-parametric estimators, called direct estimators, that bypass density estimation completely and directly estimate the quantity of interest from the data. We introduce a new divergence measure, the -divergence, that can be estimated directly from samples without parametric assumptions on the distributions. We show that the -divergence bounds the binary, cross-domain, and multi-class Bayes error rates and, in certain cases, provides provably tighter bounds than the Hellinger divergence. In addition, we propose a new methodology that allows the experimenter to construct direct estimators for existing divergence measures or to construct new divergence measures with custom properties tailored to the application. To examine the practical efficacy of these new methods, we evaluate them in a statistical learning framework on a series of real-world data science problems involving speech-based monitoring of neuro-motor disorders.
Doctoral Dissertation, Electrical Engineering, 201
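Graph-based direct estimators of this kind typically build a minimal spanning tree over the pooled samples and count edges joining points from different samples (the Friedman-Rafsky statistic). The sketch below uses that cross-edge count with the normalization 1 - R(m+n)/(2mn); the dissertation's exact estimator and normalization may differ:

```python
import math
import random

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns index pairs."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known connection to the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v], parent[v] = d, u
    return edges

def fr_divergence(x, y):
    """Direct divergence estimate from cross-sample MST edges: no density
    estimation, just a graph built on the raw samples."""
    pts = list(x) + list(y)
    labels = [0] * len(x) + [1] * len(y)
    r = sum(labels[a] != labels[b] for a, b in mst_edges(pts))  # cross edges
    m, n = len(x), len(y)
    return max(0.0, 1.0 - r * (m + n) / (2.0 * m * n))
```

When the two samples come from the same distribution, roughly 2mn/(m+n) MST edges cross between them and the estimate is near 0; well-separated samples share almost no edges and the estimate approaches 1.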
Algorithms for Multiclass Classification and Regularized Regression
Multiclass classification and regularized regression problems are very common in modern statistical and machine learning applications. On the one hand, multiclass classification problems require the prediction of class labels: given observations of objects that belong to certain classes, can we predict to which class a new object belongs? On the other hand, the reg