
    Lazy learning in radial basis neural networks: A way of achieving more accurate models

    Radial Basis Neural Networks have been used successfully in a large number of applications, with their rapid convergence time being one of their most important advantages. However, their level of generalization is usually poor and very dependent on the quality of the training data, because some of the training patterns can be redundant or irrelevant. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be approximated. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: an artificial regression problem and two time series prediction problems. Results have been compared to the standard training method using the complete training data set, and the new method shows better generalization abilities.
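    The core idea, selecting only the training patterns closest to the query and building a local approximation around it, can be illustrated with a minimal sketch. This is not the paper's exact algorithm (the k-nearest selection rule, kernel width and regularisation below are illustrative assumptions), but it shows the lazy, query-centred character of the method:

```python
import numpy as np

def lazy_rbf_predict(X_train, y_train, x_new, k=20, width=0.5):
    """Lazily build a local RBF approximation around x_new using only
    the k training patterns closest to it (illustrative selection rule)."""
    dist = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(dist)[:k]                       # most relevant patterns
    Xs, ys = X_train[idx], y_train[idx]
    # Gaussian kernel matrix among the selected patterns
    Phi = np.exp(-((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))
    w = np.linalg.solve(Phi + 1e-6 * np.eye(k), ys)  # regularised interpolation weights
    phi_new = np.exp(-((Xs - x_new) ** 2).sum(-1) / (2 * width ** 2))
    return phi_new @ w

# toy usage: approximate sin(x) from scattered samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0])
print(lazy_rbf_predict(X, y, np.array([0.5])))       # close to sin(0.5)
```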

    How the selection of training patterns can improve the generalization capability in Radial Basis Neural Networks

    It has been shown that selecting the training patterns most similar to a new sample can improve the generalization capability of Radial Basis Neural Networks. In previous works, the authors have proposed a learning method that automatically selects the most appropriate training patterns for the new sample to be predicted. However, the number of selected patterns, or the choice of neighborhood around the new sample, may influence the generalization accuracy. In addition, that neighborhood must be established according to the dimensionality of the input patterns. This work extends the authors' previous method to take these aspects into account. A real time-series prediction problem has been chosen to validate the selective learning method on an n-dimensional problem.
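    A minimal sketch of the neighbourhood-selection aspect discussed above. The exact rule relating neighbourhood size to input dimensionality is not reproduced here; dividing the distance by the square root of the dimension is only an illustrative way of keeping the threshold comparable across dimensionalities:

```python
import numpy as np

def select_neighbourhood(X_train, x_new, rel_radius=0.3):
    """Select the training patterns inside a radius around x_new, with the
    distance normalised by sqrt(dimension) (assumption for illustration)."""
    dim = X_train.shape[1]
    dist = np.linalg.norm(X_train - x_new, axis=1) / np.sqrt(dim)
    mask = dist <= rel_radius
    return X_train[mask], mask

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (500, 4))          # 4-dimensional input patterns
x_new = rng.uniform(0, 1, 4)
selected, mask = select_neighbourhood(X, x_new)
print(f"{mask.sum()} of {len(X)} patterns selected")
```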

    Time series forecasting by means of evolutionary algorithms

    IEEE International Parallel and Distributed Processing Symposium, Long Beach, CA, 26-30 March 2007. Many physical and artificial phenomena can be described by time series. Predicting such phenomena can be as complex as it is interesting. There are many time series forecasting methods, but most of them only look for general rules to predict the whole series. The main problem is that time series usually have local behaviours that do not allow the series to be forecast by general rules. In this paper, a new method for finding local prediction rules is presented. These local prediction rules can achieve better overall prediction accuracy. The method is based on the evolution of a rule system encoded following a Michigan approach. For testing this method, several time series domains have been used: a widely known artificial one, the Mackey-Glass time series, and two real-world ones, the Venice Lagoon and the sunspot time series.
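    In a Michigan-style encoding, each individual represents a single rule and the population as a whole forms the rule system. The sketch below shows one possible encoding of local prediction rules over lagged values of the series; the interval conditions, fitness and mutation-only loop are illustrative assumptions, not the paper's operators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each rule: IF lo_i <= y[t-i] <= hi_i for every lag i THEN predict y[t+1] = p
def random_rule(n_lags, lo=-1.5, hi=1.5):
    a, b = rng.uniform(lo, hi, n_lags), rng.uniform(lo, hi, n_lags)
    return {"low": np.minimum(a, b), "high": np.maximum(a, b),
            "pred": rng.uniform(lo, hi)}

def matches(rule, window):
    return np.all((window >= rule["low"]) & (window <= rule["high"]))

def fitness(rule, series, n_lags):
    """Mean absolute error on the windows the rule matches (rules matching
    nothing get the worst possible score)."""
    errs = [abs(rule["pred"] - series[t + 1])
            for t in range(n_lags, len(series) - 1)
            if matches(rule, series[t - n_lags + 1:t + 1])]
    return np.mean(errs) if errs else np.inf

# toy series and a tiny mutation-only evolutionary loop
series = np.sin(np.linspace(0, 20, 400))
n_lags = 3
pop = [random_rule(n_lags) for _ in range(30)]
for gen in range(20):
    pop.sort(key=lambda r: fitness(r, series, n_lags))
    for r in pop[15:]:                    # replace the worst half with mutated elites
        parent = pop[rng.integers(0, 15)]
        r["low"] = parent["low"] + rng.normal(0, 0.05, n_lags)
        r["high"] = np.maximum(r["low"], parent["high"] + rng.normal(0, 0.05, n_lags))
        r["pred"] = parent["pred"] + rng.normal(0, 0.05)
print("best rule MAE:", fitness(pop[0], series, n_lags))
```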

    A selective learning method to improve the generalization of multilayer feedforward neural networks.

    Multilayer feedforward neural networks trained with the backpropagation algorithm have been used successfully in many applications. However, their level of generalization is heavily dependent on the quality of the training data; that is, some of the training patterns can be redundant or irrelevant. It has been shown that careful dynamic selection of training patterns may yield better generalization performance. Nevertheless, such selection is usually carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time series prediction problem. Results have been compared to standard backpropagation using the complete training data set, and the new method shows better generalization abilities.
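    A minimal sketch of the selective strategy for feedforward networks: pick the training patterns nearest to the novel sample and train a small network only on that subset. The k-nearest selection, network size and use of scikit-learn's MLPRegressor are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lazy_mlp_predict(X_train, y_train, x_new, k=40):
    """Train a small feedforward network only on the k training patterns
    closest to the novel sample, then predict that sample."""
    idx = np.argsort(np.linalg.norm(X_train - x_new, axis=1))[:k]
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X_train[idx], y_train[idx])
    return net.predict(x_new.reshape(1, -1))[0]

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (400, 2))
y = X[:, 0] ** 2 - X[:, 1]
print(lazy_mlp_predict(X, y, np.array([1.0, 0.5])))   # roughly 1.0**2 - 0.5 = 0.5
```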

    A first attempt at constructing genetic programming expressions for EEG classification

    Proceedings of: 15th International Conference on Artificial Neural Networks (ICANN 2005), Poland, 11-15 September 2005. In BCI (Brain Computer Interface) research, the classification of EEG signals is a domain where raw data has to undergo some preprocessing so that the right attributes for classification are obtained. Several transformational techniques have been used for this purpose: Principal Component Analysis, the Adaptive Autoregressive Model, FFT or Wavelet Transforms, etc. However, it would be useful to automatically build significant attributes appropriate for each particular problem. In this paper, we use Genetic Programming to evolve projections that translate EEG data into a new vectorial space (the coordinates of this space being the new attributes), where the projected data can be more easily classified. Although our method is applied here in a straightforward way to check for feasibility, it has achieved reasonable classification results, comparable to those obtained by other state-of-the-art algorithms. In the future, we expect that by carefully choosing primitive functions, Genetic Programming will be able to give original results that cannot be matched by other machine learning classification algorithms.
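    The idea of evolving projections can be sketched as a set of expression trees, one per coordinate of the new space, applied to the original attribute vector. The primitive set, tree depth and encoding below are illustrative assumptions; in the paper these expressions would be evolved by Genetic Programming so that a classifier performs well on the projected data:

```python
import numpy as np

rng = np.random.default_rng(4)

# arithmetic primitives: (name, arity, function) — an assumed, minimal set
PRIMS = [("add", 2, lambda a, b: a + b),
         ("sub", 2, lambda a, b: a - b),
         ("mul", 2, lambda a, b: a * b)]

def random_tree(n_attrs, depth=3):
    """Grow a random expression tree over the original attributes."""
    if depth == 0 or rng.random() < 0.3:
        return ("x", int(rng.integers(n_attrs)))       # terminal: input attribute
    name, arity, fn = PRIMS[rng.integers(len(PRIMS))]
    return (name, fn, [random_tree(n_attrs, depth - 1) for _ in range(arity)])

def eval_tree(tree, x):
    if tree[0] == "x":
        return x[tree[1]]
    _, fn, children = tree
    return fn(*[eval_tree(c, x) for c in children])

def project(trees, X):
    """Map every pattern into the evolved space (one tree per coordinate)."""
    return np.array([[eval_tree(t, x) for t in trees] for x in X])

# usage: 8 original attributes projected into a 3-dimensional space
X = rng.normal(size=(100, 8))
trees = [random_tree(8) for _ in range(3)]
Z = project(trees, X)
print(Z.shape)   # (100, 3); a GP loop would evolve `trees` to maximise
                 # classification accuracy on Z
```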

    Optimizing linear and quadratic data transformations for classification tasks

    Proceedings of: Ninth International Conference on Intelligent Systems Design and Applications (ISDA '09), Nov. 30 - Dec. 2, 2009. Many classification algorithms use the concept of distance or similarity between patterns. Previous work has shown that it is advantageous to optimize general Euclidean distances (GED). In this paper, we optimize data transformations, which is equivalent to searching for GEDs but can be applied to any learning algorithm, even if it does not use distances explicitly. Two optimization techniques have been used: a simple Local Search (LS) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an advanced evolutionary method for optimization in difficult continuous domains. Both diagonal and complete matrices have been considered, and the method has also been extended to a quadratic non-linear transformation. Results show that, in general, the transformation methods described here either outperform or match the classifier working on the original data. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
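    The simplest variant described above, a diagonal linear transformation tuned by Local Search, can be sketched as follows; the 1-NN classifier, cross-validation protocol and hill-climbing step size are assumptions for illustration (the paper also considers full and quadratic transformations and CMA-ES):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# One weight per attribute: scaling the data is equivalent to a weighted
# Euclidean distance, but the transformed data can feed any learner.
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(5)

def accuracy(weights):
    Z = X * weights                        # transform the data, then evaluate the learner on Z
    clf = KNeighborsClassifier(n_neighbors=1)
    return cross_val_score(clf, Z, y, cv=5).mean()

w = np.ones(X.shape[1])
best = accuracy(w)
for step in range(200):                    # simple hill climbing (Local Search)
    cand = np.abs(w + rng.normal(0, 0.2, size=w.shape))
    acc = accuracy(cand)
    if acc >= best:
        w, best = cand, acc
print("weights:", np.round(w, 2), "cv accuracy:", round(best, 3))
```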

    Sistema Multiagente para el Diseño de Redes de Neuronas de Base Radial Óptimas

    Radial Basis Function Neural Networks (RBFNNs) perform very well in function approximation, and their convergence is extremely fast compared with multilayer perceptron networks. However, designing an RBFNN to solve a given problem is neither simple nor immediate, and the number of hidden-layer neurons is a critical factor in the behaviour of this type of network. In this work, the design of an RBFNN is based on the cooperation of n+1 agents: n agents, each consisting of an RBF network (the RBF agents), and one arbiter agent. These n+1 agents are organized as a Multiagent System. The training process is distributed among the n RBF agents, each of which has a different number of neurons. Each RBF agent trains for a fixed number of cycles (a stage) when the arbiter agent sends it a message. The whole process is governed by the arbiter, which decides which RBF agent is the best at each stage. Experimental results show an important reduction in the number of training cycles when the proposed multiagent strategy is used instead of a sequential strategy.
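    A minimal sketch of the arbiter/agents scheme: several candidate RBF networks with different hidden-layer sizes are trained in short stages, and the arbiter tracks the best one after each stage. The candidate sizes, fixed centres and gradient-descent training of the output weights are illustrative assumptions, not the paper's exact training rule:

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf_features(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

class RBFAgent:
    """One candidate RBF network with a fixed number of hidden units."""
    def __init__(self, n_hidden, X):
        self.centers = X[rng.choice(len(X), n_hidden, replace=False)]
        self.w = np.zeros(n_hidden)

    def train_stage(self, X, y, epochs=50, lr=0.05):
        """Train the output weights by gradient descent for a limited number of cycles."""
        Phi = rbf_features(X, self.centers)
        for _ in range(epochs):
            self.w -= lr * Phi.T @ (Phi @ self.w - y) / len(y)

    def error(self, X, y):
        return np.mean((rbf_features(X, self.centers) @ self.w - y) ** 2)

# toy data split into training and validation parts
X = rng.uniform(-3, 3, size=(250, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=250)
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

agents = [RBFAgent(h, X_tr) for h in (5, 10, 20, 40)]     # n RBF agents
for stage in range(10):                                   # arbiter loop
    for a in agents:
        a.train_stage(X_tr, y_tr)                         # "message" telling the agent to train one stage
    best = min(agents, key=lambda a: a.error(X_val, y_val))
print("best hidden size:", len(best.w), "validation MSE:", round(best.error(X_val, y_val), 4))
```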

    Evolving spatial and frequency selection filters for brain-computer interfaces

    Proceedings of: 2010 IEEE World Congress on Computational Intelligence (WCCI 2010), Barcelona, Spain, July 18-23, 2010. Machine Learning techniques are routinely applied to Brain Computer Interfaces in order to learn a classifier for a particular user. However, research has shown that classification techniques perform better if the EEG signal is first preprocessed to provide high-quality attributes to the classifier. Spatial and frequency-selection filters can be applied for this purpose. In this paper, we propose to automatically optimize these filters by means of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The technique has been tested on data from the BCI-III competition, because both raw and manually filtered datasets were supplied, allowing them to be compared. Results show that CMA-ES is able to obtain higher accuracies than the manually tuned filters. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
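    A sketch of how a spatial filter could be optimised with CMA-ES. It assumes the third-party `cma` package (pip install cma), toy random arrays in place of real EEG epochs, log-variance features and a nearest-mean classifier; none of these details beyond the use of CMA-ES to minimise classification error come from the paper:

```python
import numpy as np
import cma   # third-party CMA-ES implementation, assumed available

rng = np.random.default_rng(7)

# toy EEG-like data: trials of (channels x samples) with binary labels
n_trials, n_channels, n_samples = 60, 8, 128
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)

def features(W, epoch):
    """Project an epoch through the spatial filter W and take log-variances
    of the projected signals (a common BCI feature; assumed here)."""
    return np.log((W @ epoch).var(axis=1) + 1e-12)

def error(flat_w, n_filters=2):
    """Training error of a nearest-mean classifier on the filtered features."""
    W = np.asarray(flat_w).reshape(n_filters, n_channels)
    F = np.array([features(W, e) for e in epochs])
    m0, m1 = F[labels == 0].mean(0), F[labels == 1].mean(0)
    pred = (np.linalg.norm(F - m1, axis=1) < np.linalg.norm(F - m0, axis=1)).astype(int)
    return float(np.mean(pred != labels))

es = cma.CMAEvolutionStrategy(rng.normal(size=2 * n_channels), 0.5,
                              {"maxiter": 50, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [error(c) for c in candidates])
print("training error of the best evolved filter:", error(es.result.xbest))
```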

    Applying evolution strategies to preprocessing EEG signals for brain–computer interfaces

    An appropriate preprocessing of EEG signals is crucial to obtain high classification accuracy for Brain–Computer Interfaces (BCI). The raw EEG data are continuous signals in the time domain that can be transformed by means of filters. Among them, spatial filters and the selection of the most appropriate frequency bands are known to improve classification accuracy. However, because of the high variability among users, the filters must be properly adjusted to each user's data before competitive results can be obtained. In this paper, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for automatically tuning the filters. Spatial and frequency-selection filters are evolved to minimize both the classification error and the number of frequency bands used. This evolutionary approach to filter optimization has been tested on data from different users of the BCI-III competition. The evolved filters provide higher accuracy than the approaches used in the competition, and results are consistent across different runs of CMA-ES. This work has been funded by the Spanish Ministry of Science under contracts TIN2008-06491-C04-03 (MSTAR project) and TIN2011-28336 (MOVES project).
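    The frequency-selection side of the objective, classification error plus a penalty on the number of bands used, can be sketched with a binary band mask. A simple bit-flip hill climb stands in here for the paper's CMA-ES, and the band list, sampling rate, FFT band-power features and nearest-mean classifier are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28)]   # candidate bands in Hz
FS = 128                                                            # assumed sampling rate

def band_powers(epoch, mask):
    """Average spectral power per channel in each selected band."""
    spec = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2           # (channels, freq bins)
    freqs = np.fft.rfftfreq(epoch.shape[-1], 1 / FS)
    feats = [spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for (lo, hi), keep in zip(BANDS, mask) if keep]
    return np.concatenate(feats) if feats else np.zeros(epoch.shape[0])

def fitness(mask, epochs, labels, penalty=0.01):
    """Classification error of a nearest-mean classifier plus a band-count penalty."""
    F = np.array([band_powers(e, mask) for e in epochs])
    m0, m1 = F[labels == 0].mean(0), F[labels == 1].mean(0)
    pred = (np.linalg.norm(F - m1, axis=1) < np.linalg.norm(F - m0, axis=1)).astype(int)
    return np.mean(pred != labels) + penalty * mask.sum()

# toy data and a tiny (1+1) search over the band mask
epochs = rng.normal(size=(40, 8, 256))
labels = rng.integers(0, 2, 40)
mask = np.ones(len(BANDS), dtype=bool)
best = fitness(mask, epochs, labels)
for _ in range(100):
    cand = mask.copy()
    cand[rng.integers(len(BANDS))] ^= True                    # flip one band on/off
    f = fitness(cand, epochs, labels)
    if f <= best:
        mask, best = cand, f
print("selected bands:", [b for b, keep in zip(BANDS, mask) if keep], "fitness:", round(best, 3))
```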

    Correcting and improving imitation models of humans for Robosoccer agents

    Proceedings of: 2005 IEEE Congress on Evolutionary Computation (CEC'05), Edinburgh, 2-5 September 2005. The Robosoccer simulator is a challenging environment in which a human introduces a team of agents into a virtual football environment. Typically, agents are programmed by hand, but it would be a great advantage to transfer human experience into football agents. The first aim of this paper is to use machine learning techniques to obtain models of humans playing Robosoccer. These models can later be used to control a Robosoccer agent. However, the models did not play as smoothly or as well as the human. To solve this problem, the second goal of this paper is to incrementally correct the models by means of evolutionary techniques, and to adapt them to opponents more difficult than those the human could beat.
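    The correction step can be sketched as a small evolution strategy over the parameters of the learned imitation model, keeping a perturbation only when it plays better. The `simulate_match` fitness below is a hypothetical stand-in for running the Robosoccer simulator against a given opponent:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_match(theta):
    """Hypothetical fitness: in the real setting this would be the performance
    of the agent controlled by parameters `theta` in simulated matches."""
    target = np.linspace(-1, 1, theta.size)        # stand-in optimum for the toy example
    return -np.sum((theta - target) ** 2)

theta = rng.normal(size=10)                        # parameters of the learned imitation model
best = simulate_match(theta)
for generation in range(100):                      # (1+lambda)-style correction loop
    offspring = [theta + rng.normal(0, 0.1, theta.size) for _ in range(8)]
    scores = [simulate_match(o) for o in offspring]
    if max(scores) > best:                         # keep the corrected model only if it plays better
        theta, best = offspring[int(np.argmax(scores))], max(scores)
print("fitness after correction:", round(best, 3))
```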