11 research outputs found

    Dimensionality reduction and classification of time embedded EEG signals

    Department Head: L. Darrell Whitley. 2007 Summer. Includes bibliographical references (pages 49-51). An electroencephalogram (EEG) measures the electrical activity of the brain via electrodes placed on the scalp. EEG signals give the micro-voltage differences between different parts of the brain in a non-invasive manner. Brain activity measured in this way is currently being analyzed for possible diagnosis of physiological and psychiatric diseases, and such signals have also found their way into cognitive research. At Colorado State University we are investigating the use of EEG as computer input. In this particular research our goal is to classify two mental tasks. A subject is asked to think about a mental task while the EEG signals are measured using six electrodes on the scalp. To differentiate between the two tasks, the EEG signals produced by each task need to be classified. We hypothesize that a bottleneck neural network would help us classify EEG data better than classification techniques such as Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), and Support Vector Machines (SVM). A five-layer bottleneck neural network is trained using a fast-convergence algorithm (a variation of the Levenberg-Marquardt algorithm) and Scaled Conjugate Gradient (SCG). Classification is compared between the neural network, LDA, QDA, and SVM for both raw EEG data and the bottleneck-layer output. Results indicate that QDA and SVM classify raw EEG data better without a bottleneck network; QDA and SVM achieved higher classification accuracy than the neural network with a bottleneck layer in all of our experiments. The neural network achieved its best classification accuracy of 92% of test samples correctly classified, whereas QDA achieved 100% accuracy in classifying the test data.
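
    As a rough illustration of the comparison described above, the sketch below fits LDA, QDA, and an SVM with scikit-learn on synthetic stand-in data; the feature dimensions, sample counts, and parameters are placeholders for illustration, not the thesis's actual time-embedded EEG features or settings.

        # Minimal sketch (not the thesis code): comparing LDA, QDA, and an SVM
        # on stand-in "time-embedded" feature vectors for two classes.
        import numpy as np
        from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                   QuadraticDiscriminantAnalysis)
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Placeholder for time-embedded EEG windows: 600 samples, 60 features,
        # two mental-task classes with some separation added.
        X = rng.normal(size=(600, 60))
        y = rng.integers(0, 2, size=600)
        X[y == 1] += 0.5

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                          ("QDA", QuadraticDiscriminantAnalysis()),
                          ("SVM", SVC(kernel="rbf"))]:
            clf.fit(X_tr, y_tr)
            print(name, "test accuracy:", clf.score(X_te, y_te))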

    The challenge of face recognition from digital point-and-shoot cameras

    Inexpensive “point-and-shoot” camera technology has combined with social network technology to give the general population a motivation to use face recognition technology. Users expect a lot; they want to snap pictures, shoot videos, upload, and have their friends, family and acquaintances more-or-less automatically recognized. Despite the apparent simplicity of the problem, face recognition in this context is hard. Roughly speaking, failure rates in the 4 to 8 out of 10 range are common. In contrast, error rates drop to roughly 1 in 1,000 for well controlled imagery. To spur advancement in face and person recognition this paper introduces the Point-and-Shoot Face Recognition Challenge (PaSC). The challenge includes 9,376 still images of 293 people balanced with respect to distance to the camera, alternative sensors, frontal versus not-frontal views, and varying location. There are also 2,802 videos for 265 people: a subset of the 293. Verification results are presented for public baseline algorithms and a commercial algorithm for three cases: comparing still images to still images, videos to videos, and still images to videos.
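
    Verification results of the kind reported above are typically summarized as a verification (true-accept) rate at a fixed false-accept rate. The sketch below shows one plausible way to compute such a number from raw comparison scores; the function name, inputs, and synthetic scores are assumptions for illustration, not the PaSC baseline code.

        # Minimal sketch: verification rate (TAR) at a fixed false-accept rate,
        # computed from a pool of genuine and impostor comparison scores.
        import numpy as np

        def verification_rate(scores, same_identity, far=0.001):
            """TAR at the threshold that yields roughly the requested FAR.

            scores        : 1-D array of comparison scores (higher = more similar)
            same_identity : boolean array, True where the pair is a genuine match
            """
            impostor = np.sort(scores[~same_identity])[::-1]
            # Threshold chosen so roughly `far` of impostor pairs are accepted.
            k = max(int(far * impostor.size), 1)
            threshold = impostor[k - 1]
            genuine = scores[same_identity]
            return np.mean(genuine >= threshold)

        rng = np.random.default_rng(1)
        scores = np.concatenate([rng.normal(1.0, 1.0, 500),    # genuine pairs
                                 rng.normal(0.0, 1.0, 5000)])  # impostor pairs
        labels = np.concatenate([np.ones(500, bool), np.zeros(5000, bool)])
        print("TAR @ FAR=0.1%:", verification_rate(scores, labels))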

    Face detection using correlation filters

    2013 Fall. Includes bibliographical references. Cameras are ubiquitous and available all around us. As a result, images and videos are posted online in huge numbers. These images often need to be stored and analyzed, which requires various computer vision applications, including the detection of human faces in images and videos. The emphasis on face detection is evident from the applications found in everyday point-and-shoot cameras for better focus, on social networking sites for tagging friends and family, and in security situations that subsequently require face recognition or verification. This thesis focuses on detecting human faces in still images and video frames using correlation filters. These correlation filters are trained using a recent technique called Minimum Output Sum of Squared Error (MOSSE) developed by Bolme et al. Since a correlation filter identifies only a peak location, it localizes only a single target point. In this thesis, I develop techniques to use this localization to detect human faces of different scales and poses under uncontrolled background, location, and lighting conditions. The goal of this research is to extend correlation filters to face detection and to identify the scenarios where their potential is greatest. The specific contributions of this work are the development of a novel face detector using correlation filters and the identification of the strengths and weaknesses of this approach. The approach is applied to an easy dataset and a hard dataset to emphasize the efficacy of correlation filters for face detection. The technique shows 95.6% accuracy in finding the exact location of faces in images with controlled background and lighting. Although the results on the hard dataset were not better than those of the OpenCV Viola and Jones face detector, the correlation-based detector performed much better, with an 81.5% detection rate compared to 69.43% for the Viola and Jones detector, when tested on a customized dataset controlled for location change between training and test sets. This result signifies the strength of a correlation-based face detector in a specific scenario with a uniform setting, such as a building entrance or an airport security gate.
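
    For context, the MOSSE training rule referenced above has a simple closed form in the Fourier domain: the filter is the sum of desired-output spectra times conjugated input spectra, divided by the summed input power spectra. The sketch below illustrates this on synthetic patches; the Gaussian target, regularization constant, and data are assumptions, and the usual preprocessing (log transform, cosine window) is omitted, so this is not the thesis implementation.

        # Minimal sketch of MOSSE filter training and correlation on grayscale
        # patches of a fixed size (Bolme et al.'s closed-form solution).
        import numpy as np

        def train_mosse(patches, sigma=2.0, eps=1e-5):
            """Return the filter H* (conjugate form, in the Fourier domain)."""
            h, w = patches[0].shape
            # Desired output: a Gaussian peak centred on the target.
            ys, xs = np.mgrid[0:h, 0:w]
            g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
            G = np.fft.fft2(g)

            num = np.zeros((h, w), dtype=complex)
            den = np.zeros((h, w), dtype=complex)
            for p in patches:
                F = np.fft.fft2(p)
                num += G * np.conj(F)          # sum of G element-wise F*
                den += F * np.conj(F) + eps    # sum of F element-wise F*, regularised
            return num / den                   # H*

        def correlate(H_conj, patch):
            """Correlation response map; its peak localises the target."""
            F = np.fft.fft2(patch)
            return np.real(np.fft.ifft2(F * H_conj))

        # Usage: the peak of the response map gives the detected position.
        rng = np.random.default_rng(0)
        patches = [rng.random((64, 64)) for _ in range(8)]
        H_conj = train_mosse(patches)
        response = correlate(H_conj, patches[0])
        print("peak at", np.unravel_index(np.argmax(response), response.shape))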

    Dimensionality Reduction Using Neural Networks

    Dimensionality reduction is a method of obtaining the information in a high-dimensional feature space using fewer intrinsic dimensions. Reducing the dimensionality of high-dimensional data is useful for better classification, regression, presentation, and visualization of data. By representing data in the lower-dimensional space, most of the time we do not lose much information that matters.
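
    As an illustration of the idea, the sketch below trains a small bottleneck autoencoder whose narrow middle layer provides the low-dimensional representation; the PyTorch framework, layer sizes, and synthetic data are assumptions chosen for the example, not details taken from this work.

        # Minimal sketch: a bottleneck (autoencoder) network whose middle layer
        # gives a 3-dimensional code for 60-dimensional input data.
        import torch
        import torch.nn as nn

        class BottleneckAE(nn.Module):
            def __init__(self, n_in=60, n_hidden=20, n_bottleneck=3):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(),
                                             nn.Linear(n_hidden, n_bottleneck))
                self.decoder = nn.Sequential(nn.Linear(n_bottleneck, n_hidden), nn.Tanh(),
                                             nn.Linear(n_hidden, n_in))

            def forward(self, x):
                z = self.encoder(x)           # low-dimensional code
                return self.decoder(z), z

        X = torch.randn(500, 60)              # placeholder high-dimensional data
        model = BottleneckAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        for epoch in range(200):
            opt.zero_grad()
            recon, _ = model(X)
            loss = loss_fn(recon, X)          # train to reconstruct the input
            loss.backward()
            opt.step()

        _, codes = model(X)                   # codes: 3-D representation of X
        print(codes.shape)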

    Resource Allocation in a Client/Server System for Massive Multi-Player On-line Games

    The creation of a Massive Multi-Player On-line Game (MMOG) has significant costs, such as maintenance of server rooms, server administration, and customer service. The capacity of servers in a client/server MMOG is hard to scale and cannot adjust quickly to peaks in demand while maintaining the required response time. To handle these peaks in demand, we propose to employ users’ computers as secondary servers. The introduction of users’ computers as secondary servers allows the performance of the MMOG to support an increase in users. Here, we consider two cases. First, to minimize the response times from the server, we develop and implement five static heuristics for a secondary server scheme that reduces the time taken to compute the state of the MMOG. Second, for our study on fairness, the goal of the heuristics is to provide a “fair” environment for all the users (in terms of similar response times) and to be “robust” against the uncertainty in the number of new players that may join a given system configuration. The number of heterogeneous secondary servers, the conversion of a player to a secondary server, and the assignment of players to secondary servers are determined by the heuristics implemented in this study.
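
    The abstract does not spell out the five heuristics, so the sketch below shows only one plausible greedy rule of the general kind described: assign each player to the secondary server with the lowest estimated response time. The capacity model, cost values, and function name are illustrative assumptions, not the paper's heuristics.

        # Minimal sketch: greedy assignment of players to heterogeneous
        # secondary servers, always picking the least-loaded server.
        import heapq

        def assign_players(num_players, server_capacities, cost_per_player=1.0):
            """Return a server index for each player, balancing estimated load."""
            # Heap of (estimated_response_time, server_index).
            heap = [(0.0, s) for s in range(len(server_capacities))]
            heapq.heapify(heap)
            assignment = []
            for _ in range(num_players):
                load, s = heapq.heappop(heap)
                assignment.append(s)
                # Response time grows with load and shrinks with server capacity.
                heapq.heappush(heap, (load + cost_per_player / server_capacities[s], s))
            return assignment

        # Three heterogeneous secondary servers with different capacities.
        print(assign_players(10, server_capacities=[2.0, 1.0, 0.5]))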