
    A selective learning method to improve the generalization of multilayer feedforward neural networks.

    Multilayer feedforward neural networks trained with the backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data: some of the training patterns can be redundant or irrelevant. It has been shown that careful dynamic selection of training patterns can yield better generalization performance. Nevertheless, selection is usually carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. The method follows a lazy learning strategy, in the sense that it builds approximations centered on the novel sample. It has been applied to three domains: two artificial approximation problems and a real time-series prediction problem. Results have been compared to standard backpropagation using the complete training data set, and the new method shows better generalization abilities.
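
    The query-centred selection idea can be illustrated with a minimal sketch: for each novel sample, only the nearest training patterns are selected and combined. This is not the paper's exact algorithm; the function name `lazy_predict` and the inverse-distance weighting are assumptions of this illustration.

```python
import math

def lazy_predict(X_train, y_train, x_new, k=3):
    """Predict y for x_new using only the k training patterns
    closest to it: a lazy, query-centred selection strategy."""
    dists = [math.dist(x, x_new) for x in X_train]
    nearest = sorted(range(len(X_train)), key=lambda i: dists[i])[:k]
    # Inverse-distance weighting over the selected neighbourhood
    weights = [1.0 / (dists[i] + 1e-9) for i in nearest]
    total = sum(weights)
    return sum(w * y_train[i] for w, i in zip(weights, nearest)) / total
```

    Only the k selected patterns influence the prediction, so redundant or irrelevant patterns far from the query are ignored by construction.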

    A first attempt at constructing genetic programming expressions for EEG classification

    Proceedings of: 15th International Conference on Artificial Neural Networks (ICANN 2005), Poland, 11-15 September 2005. In BCI (Brain-Computer Interface) research, the classification of EEG signals is a domain where raw data must undergo preprocessing so that the right attributes for classification are obtained. Several transformation techniques have been used for this purpose: Principal Component Analysis, the Adaptive Autoregressive Model, FFT, Wavelet Transforms, etc. However, it would be useful to automatically build significant attributes appropriate for each particular problem. In this paper, we use Genetic Programming to evolve projections that translate EEG data into a new vector space (the coordinates of this space being the new attributes), where the projected data can be more easily classified. Although our method is applied here in a straightforward way to check for feasibility, it has achieved reasonable classification results, comparable to those obtained by other state-of-the-art algorithms. In the future, we expect that by carefully choosing primitive functions, Genetic Programming will be able to produce original results that cannot be matched by other machine learning classification algorithms.
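
    The core idea, evolving projections over a primitive set and scoring them by how well they separate the classes, can be sketched as follows. The primitive set, the single-output projection, and the class-mean separability fitness are simplifying assumptions of this sketch, not the paper's actual GP configuration.

```python
import random

# Hypothetical primitive set; a real GP system would evolve whole
# expression trees over primitives like these.
PRIMITIVES = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def random_projection(dim):
    """Build one random projection: combine two raw attributes
    with a randomly chosen primitive."""
    f = random.choice(PRIMITIVES)
    i, j = random.randrange(dim), random.randrange(dim)
    return lambda x: f(x[i], x[j])

def separability(proj, X, y):
    """Fitness: distance between the two class means in the
    projected (one-dimensional) space."""
    groups = {}
    for x, label in zip(X, y):
        groups.setdefault(label, []).append(proj(x))
    means = [sum(v) / len(v) for v in groups.values()]
    return abs(means[0] - means[1])
```

    A GP loop would repeatedly generate, recombine, and select projections by this fitness; the projected coordinates then become the new attributes fed to a classifier.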

    Optimizing linear and quadratic data transformations for classification tasks

    Proceedings of: Ninth International Conference on Intelligent Systems Design and Applications (ISDA '09), Nov. 30-Dec. 2, 2009. Many classification algorithms use the concept of distance or similarity between patterns. Previous work has shown that it is advantageous to optimize general Euclidean distances (GED). In this paper, we optimize data transformations, which is equivalent to searching for GEDs but can be applied to any learning algorithm, even one that does not use distances explicitly. Two optimization techniques have been used: a simple Local Search (LS) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an advanced evolutionary method for optimization in difficult continuous domains. Both diagonal and complete matrices have been considered, and the method has also been extended to a quadratic non-linear transformation. Results show that, in general, the transformation methods described here either outperform or match the classifier working on the original data. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
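
    The Local Search variant over a diagonal transformation can be sketched as follows, using leave-one-out 1-NN accuracy as the objective. This is a minimal illustration: the paper also uses CMA-ES, full matrices, and a quadratic transformation, none of which appear here, and the hill-climbing details are assumptions of the sketch.

```python
import math, random

def loo_accuracy(X, y, scales):
    """Leave-one-out 1-NN accuracy after scaling each attribute,
    i.e. after a diagonal linear transformation of the data."""
    Xs = [[s * v for s, v in zip(scales, x)] for x in X]
    correct = 0
    for i, xi in enumerate(Xs):
        j = min((k for k in range(len(Xs)) if k != i),
                key=lambda k: math.dist(Xs[k], xi))
        correct += y[j] == y[i]
    return correct / len(X)

def local_search(X, y, steps=200, seed=0):
    """Simple hill climbing over the diagonal scales: perturb,
    keep the candidate if accuracy does not decrease."""
    rng = random.Random(seed)
    scales = [1.0] * len(X[0])
    best = loo_accuracy(X, y, scales)
    for _ in range(steps):
        cand = [max(1e-3, s + rng.gauss(0, 0.3)) for s in scales]
        acc = loo_accuracy(X, y, cand)
        if acc >= best:
            best, scales = acc, cand
    return scales, best
```

    On data where one attribute is informative and another is noise, shrinking the noisy attribute's scale is enough to repair the nearest-neighbour structure, which is exactly the effect an optimized GED achieves.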

    Using a Mahalanobis-like distance to train Radial Basis Neural Networks

    Proceedings of: International Work-Conference on Artificial Neural Networks (IWANN 2005). Radial Basis Neural Networks (RBNN) can approximate any regular function and have a faster training phase than other similar neural networks. However, the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. Therefore, the activation function is symmetrical and all attributes are considered equally relevant. This could be solved by altering the metric used in the activation function (i.e. using non-symmetrical metrics). The Mahalanobis distance is one such metric: it takes into account the variability of the attributes and their correlations. However, this distance is computed directly from the variance-covariance matrix and does not consider the accuracy of the learning algorithm. In this paper, we propose to use a generalized Euclidean metric, following the Mahalanobis structure, but evolved by a Genetic Algorithm (GA). This GA searches for the distance matrix that minimizes the error produced by a fixed RBNN. Our approach has been tested on two domains, and positive results have been observed in both cases.
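
    The generalized metric has the Mahalanobis form d(x, c) = sqrt((x - c)^T M (x - c)), where the matrix M is what the GA evolves instead of taking it from the covariance matrix. A minimal sketch of the distance and the resulting neuron activation (the Gaussian form and the `sigma` parameter are assumptions of this sketch):

```python
import math

def generalized_distance(x, c, M):
    """Generalized Euclidean distance d(x, c) = sqrt((x-c)^T M (x-c)).
    M = identity recovers the plain Euclidean distance; M = inverse
    covariance matrix gives the Mahalanobis distance."""
    diff = [a - b for a, b in zip(x, c)]
    q = sum(diff[i] * M[i][j] * diff[j]
            for i in range(len(diff)) for j in range(len(diff)))
    return math.sqrt(q)

def rbf_activation(x, center, M, sigma=1.0):
    """Activation of one RBNN neuron under the generalized metric:
    a Gaussian of the generalized distance to the neuron center."""
    d = generalized_distance(x, center, M)
    return math.exp(-d * d / (2 * sigma ** 2))
```

    Scaling a diagonal entry of M stretches the corresponding attribute's contribution, so the GA can make some attributes more relevant than others while keeping the RBNN architecture fixed.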

    Evolving generalized euclidean distances for training RBNN

    In Radial Basis Neural Networks (RBNN), the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. Such a symmetrical activation assumes that all attributes are equally relevant, which might not be true. Non-symmetrical distances such as the Mahalanobis distance can be used instead. However, that distance is computed directly from the data covariance matrix, so the accuracy of the learning algorithm is not taken into account. In this paper, we propose to use a Genetic Algorithm to search for a generalized Euclidean distance matrix that minimizes the error produced by an RBNN.

    Evolving linear transformations with a rotation-angles/scaling representation

    Similarity between patterns is commonly used in many distance-based classification algorithms such as KNN or RBF. Generalized Euclidean Distances (GED) can be optimized in order to improve the classification success rate of distance-based algorithms. This idea can be extended to any classification algorithm, because it can be shown that a GED is equivalent to a linear transformation of the dataset. In this paper, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is applied to the optimization of linear transformations represented as matrices. The method has been tested on several domains, and results show that the classification success rate can be improved for some of them. However, in some domains, diagonal matrices achieve higher accuracies than full square ones. To solve this problem, in the second part of the paper we propose to represent linear transformations by means of rotation angles and scaling factors, based on the Singular Value Decomposition (SVD) theorem. This new representation solves the problems found in the first part. This article has been financed by the Spanish research project MSTAR::UC3M (MCINN, Ref: TIN2008-06491-C04-03) and by project A3::UAM (Ref: TIN2007-66862-C02-02).
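
    The rotation-angles/scaling representation rests on the SVD factorisation A = U diag(s) V^T, where U and V are rotations. A minimal two-dimensional sketch of rebuilding a full transformation matrix from one rotation angle per factor plus the scaling factors (the 2x2 restriction and function names are assumptions of this sketch; in higher dimensions each rotation is itself a product of planar rotations):

```python
import math

def rotation(theta):
    """2x2 rotation matrix for angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transform_from_angles(theta_u, scales, theta_v):
    """Build A = U(theta_u) * diag(scales) * V(theta_v)^T, the SVD-based
    rotation-angles/scaling representation of a linear transformation."""
    U = rotation(theta_u)
    S = [[scales[0], 0.0], [0.0, scales[1]]]
    Vt = rotation(-theta_v)  # transpose of a rotation = rotation by -theta
    return matmul(matmul(U, S), Vt)
```

    With both angles fixed at zero this degenerates to a diagonal matrix, so the representation contains the diagonal case as a special point of the search space, which is one way to see why it can avoid the diagonal-vs-full dilemma.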

    Applying evolution strategies to preprocessing EEG signals for brain–computer interfaces

    An appropriate preprocessing of EEG signals is crucial to obtain high classification accuracy in Brain-Computer Interfaces (BCI). The raw EEG data are continuous signals in the time domain that can be transformed by means of filters. Among them, spatial filters and the selection of the most appropriate frequency bands are known to improve classification accuracy. However, because of the high variability among users, the filters must be properly adjusted to every user's data before competitive results can be obtained. In this paper, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for automatically tuning the filters. Spatial and frequency-selection filters are evolved to minimize both the classification error and the number of frequency bands used. This evolutionary approach to filter optimization has been tested on data from different users of the BCI-III competition. The evolved filters provide higher accuracy than the approaches used in the competition, and results are consistent across different runs of CMA-ES. This work has been funded by the Spanish Ministry of Science under contracts TIN2008-06491-C04-03 (MSTAR project) and TIN2011-28336 (MOVES project).
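
    What a frequency-selection filter computes can be sketched as follows: transform a channel to the frequency domain and keep only the power inside a chosen band. The band edges `lo`/`hi` are exactly the kind of per-user parameters an evolution strategy could tune; the naive DFT (instead of an FFT) and the function names are assumptions of this sketch.

```python
import cmath, math

def dft(signal):
    """Naive discrete Fourier transform (an FFT would be used in
    practice); returns the complex spectrum."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def band_power(signal, fs, lo, hi):
    """Power of one EEG channel restricted to the band [lo, hi] Hz,
    sampled at fs Hz. A frequency-selection filter keeps only such
    bands before the features reach the classifier."""
    spec = dft(signal)
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if lo <= f <= hi:
            power += abs(spec[k]) ** 2
    return power / n
```

    A full preprocessing pipeline would also apply spatial filters (weighted combinations of channels) before the band power is computed, and CMA-ES would evolve the channel weights and band edges jointly.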

    Evolving spatial and frequency selection filters for brain-computer interfaces

    Proceedings of: 2010 IEEE World Congress on Computational Intelligence (WCCI 2010), Barcelona, Spain, July 18-23, 2010. Machine learning techniques are routinely applied to Brain-Computer Interfaces in order to learn a classifier for a particular user. However, research has shown that classification techniques perform better if the EEG signal is first preprocessed to provide high-quality attributes to the classifier. Spatial and frequency-selection filters can be applied for this purpose. In this paper, we propose to automatically optimize these filters by means of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The technique has been tested on data from the BCI-III competition, because both raw and manually filtered datasets were supplied, allowing a comparison between them. Results show that CMA-ES is able to obtain higher accuracies than those achieved with the manually tuned filters. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).

    Correcting and improving imitation models of humans for Robosoccer agents

    Proceedings of: 2005 IEEE Congress on Evolutionary Computation (CEC'05), Edinburgh, 2-5 September 2005. The Robosoccer simulator is a challenging environment in which a human introduces a team of agents into a virtual football environment. Typically, agents are programmed by hand, but it would be a great advantage to transfer human experience to the football agents. The first aim of this paper is to use machine learning techniques to obtain models of humans playing Robosoccer; these models can later be used to control a Robosoccer agent. However, the models did not play as smoothly and optimally as the human. To solve this problem, the second goal of this paper is to incrementally correct the models by means of evolutionary techniques, and to adapt them to opponents more difficult than the ones the human could beat.

