
    Novel Approaches in Classification Error Estimation, Predicting Generalization in Deep Learning, and Hybrid Compartmental Models

    In data-poor environments, it may not be possible to set aside a large enough test data set to produce accurate test-set error estimates. On the other hand, in modern classification applications where training is time and resource intensive, as when training deep neural networks, classification error estimators based on resampling, such as cross-validation and the bootstrap, are too computationally expensive, since they require training tens or hundreds of classifiers on resampled versions of the training data. The alternative in this case is to train and test on the same data, without resampling, i.e., to use resubstitution-like error estimators. Here, a family of generalized resubstitution classifier error estimators is proposed and its performance in various scenarios is investigated. This family of error estimators is based on empirical measures. The plain resubstitution error estimator corresponds to choosing the standard empirical measure that puts equal probability mass on each training point. Other choices of empirical measure lead to bolstered resubstitution, posterior-probability, and Bayesian error estimators, as well as the newly proposed bolstered posterior-probability error estimators. Empirical results of this dissertation suggest that the generalized resubstitution error estimators are particularly useful for small sample sizes across various classification rules. In particular, bolstering led to a remarkable improvement in error estimation in the majority of experiments on traditional classifiers as well as modern deep neural networks. Bolstering is a type of data augmentation that systematically generates meaningful samples, primarily through data-driven bolstering parameters. The bolstering parameter for low- to average-dimensional data was defined based on the Euclidean distance between samples in each class. However, the Euclidean distance between images is neither straightforward nor semantically meaningful.
Hence, for experiments with image data, the parameters of data augmentation were selected in a different fashion. I introduce three approaches to image augmentation, among which weighted augmented data combined with the posterior probability was most effective in predicting the generalization gap in deep learning. For the study of protein turnover, I propose hybrid compartmental models (HCM), which are useful for multi-substrate experiments. Unlike conventional compartmental models, HCM starts with a partially specified structure for tracer models, estimates the tracer parameters given the data, and finally determines the details of the model's structure by choosing the most physiologically meaningful tracee model among the resulting alternative tracee models. The parameters in the alternative tracee models are computed by simple mathematical operations on tracer parameters. The proposed HCM was employed to estimate the kinetics of phenylalanine and tyrosine using tracer-tracee-ratio (TTR) data. Results show that the HCM tracer model was able to fit the TTR-time data points, and the best tracee model was selected by comparing the alternative tracee models' parameters with those reported in the literature.
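To make the bolstering idea concrete, the following is a minimal sketch of a Monte Carlo bolstered resubstitution estimator. The function name `bolstered_resubstitution`, the spherical Gaussian kernel, and the mean nearest-neighbour bandwidth rule are illustrative assumptions; the dissertation's actual estimators and data-driven bolstering parameters may differ.

```python
import numpy as np

def bolstered_resubstitution(clf_predict, X, y, n_mc=100, seed=0):
    """Monte Carlo estimate of the bolstered resubstitution error.

    A spherical Gaussian "bolstering kernel" is centred at each training
    point; its standard deviation is a simple data-driven choice (the mean
    nearest-neighbour distance within each class), one of several options
    discussed in the error-estimation literature.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # data-driven bolstering parameter, one value per class
    sigma = np.empty(n)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Xc = X[idx]
        dists = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)       # ignore self-distances
        sigma[idx] = dists.min(axis=1).mean() # mean nearest-neighbour distance
    # average misclassification mass under the bolstering kernels
    errors = 0.0
    for i in range(n):
        samples = X[i] + sigma[i] * rng.standard_normal((n_mc, d))
        preds = clf_predict(samples)
        errors += np.mean(preds != y[i])
    return errors / n
```

Plain resubstitution is recovered as the limiting case where the kernel width shrinks to zero, so only the training points themselves are evaluated.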

    Robust Understanding of Motor Imagery EEG Pattern in Voice-Controlled Prosthetic Arm Design

    Poster abstract. Introduction: Understanding the neural mechanisms of communication between humans and machines has become an increasingly interesting research issue in the last few decades. One of the most motivating purposes is to help people with motor disabilities. This drives researchers to work on brain-computer interface (BCI) systems, which in turn need fast and accurate algorithms to decode the commands in brain, or electroencephalogram (EEG), signals. EEG signals are very noisy and contain several types of artifacts, so it is important to use efficient methods to train the BCI system. Aims and Goals: The goal of this project is to train an intelligent system based on the information in sample EEG data. This system will predict a person's intention in future experiments with new EEG data. Ultimately, this project can be used to control a moving object such as a robot, a wheelchair, or many other devices. Data Acquisition and Methods: In this project, we work with EEG signals taken from 20 subjects thinking about the English vowels /a/, /e/, /i/, /o/, and /u/. This means we can define only 5 clusters, each containing all signals with similar features. We use part of the signals for training and the rest for testing. In the training stage, we first preprocess the data and then categorize it into 5 clusters. Robust Principal Component Analysis (PCA) helps us analyze the data to extract features. Afterwards, based on the principal component features of the signals, we employ a Hidden Markov Model (HMM) classifier to send similar signals to the same cluster. As EEG data is a randomly varying signal, we use a hybrid HMM classifier for classification of EEG patterns. Our initial results are promising for robust understanding of auditory commands explored from EEG pattern analysis.
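The PCA feature-extraction step described above can be sketched with NumPy alone. This is a minimal illustration on flattened EEG epochs, not the abstract's Robust PCA variant; the function name `pca_features` and the epoch layout are assumptions, and the subsequent HMM clustering stage is omitted here.

```python
import numpy as np

def pca_features(signals, n_components=3):
    """Project signals onto their top principal components.

    signals: array of shape (n_trials, n_samples), one flattened EEG
    epoch per row. Returns an (n_trials, n_components) feature matrix.
    """
    centered = signals - signals.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions,
    # sorted by decreasing explained variance
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T
```

The resulting low-dimensional features would then be fed to the HMM classifier, which assigns each trial to one of the five vowel clusters.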

    Silent speech recognition in EEG-based Brain Computer Interface

    A Brain Computer Interface (BCI) is a hardware and software system that establishes direct communication between the brain and the environment. In a BCI system, brain messages pass through wires and external computers instead of the normal pathway of nerves and muscles. The general workflow in all BCIs is to measure brain activity, process it, and then convert it into an output readable by a computer. The measurement of electrical activity in different parts of the brain is called electroencephalography (EEG). There are many sensor technologies with different numbers of electrodes to record brain activity along the scalp. Each of these electrodes captures a weighted sum of the activities of all neurons in the area around that electrode. To establish a BCI system, one needs to place a set of electrodes on the scalp and a tool to send the signals to a computer for training a system that can find the important information, extract it from the raw signal, and use it to recognize the user's intention. Finally, a control signal should be generated based on the application. This thesis describes the step-by-step training and testing of a BCI system that can be used by a person who has lost the ability to speak through an accident or surgery but still has healthy brain tissue. The goal is to establish an algorithm that recognizes different vowels from EEG signals. It uses a bandpass filter to remove the signals' noise and artifacts, the periodogram for feature extraction, and a Support Vector Machine (SVM) for classification.
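The bandpass-filter, periodogram, and SVM pipeline can be sketched as follows, assuming SciPy and scikit-learn are available. The function name `eeg_vowel_pipeline`, the sampling rate, the 8–30 Hz band, and the SVM settings are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, periodogram
from sklearn.svm import SVC

def eeg_vowel_pipeline(epochs, labels, fs=250.0, band=(8.0, 30.0)):
    """Bandpass-filter each epoch, extract periodogram features, fit an SVM.

    epochs: array of shape (n_trials, n_samples), one EEG epoch per row.
    labels: array of shape (n_trials,) with the vowel class of each epoch.
    """
    # 4th-order Butterworth bandpass with normalized cutoff frequencies
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)      # zero-phase filtering
    _, psd = periodogram(filtered, fs=fs, axis=1)  # spectral features
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(psd, labels)
    return clf, psd
```

In practice the data would be split into training and test epochs, with the classifier fitted on the training periodograms only and evaluated on held-out trials.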