
    Knowledge representation issues in control knowledge learning

    Seventeenth International Conference on Machine Learning, Stanford, CA, USA, 29 June-2 July, 2000. Knowledge representation is a key issue for any machine learning task. There have already been many comparative studies of knowledge representation with respect to machine learning in classification tasks. However, apart from some work on reinforcement learning techniques in relation to state representation, very few studies have concentrated on the effect of knowledge representation for machine learning applied to problem solving and, more specifically, to planning. In this paper, we present an experimental comparative study of the effect of changing the input representation of planning domain knowledge on control knowledge learning. We show results in two classical domains using three different machine learning systems that have previously shown their effectiveness at learning planning control knowledge: a pure EBL mechanism, a combination of EBL and induction (HAMLET), and a Genetic Programming based system (EVOCK).

    Learning to solve planning problems efficiently by means of genetic programming

    Declarative problem solving, such as planning, poses interesting challenges for Genetic Programming (GP). Recent attempts to apply GP to planning fit into two approaches: (a) using GP to search in plan space, or (b) using GP to evolve a planner. In this article, we propose to evolve only the heuristics that make a particular planner more efficient. This approach is more feasible than (b) because it does not have to build a planner from scratch, but can take advantage of already existing planning systems. It is also more efficient than (a) because, once the heuristics have been evolved, they can be used to solve a whole class of different planning problems in a planning domain, instead of running GP for every new planning problem. Empirical results show that our approach (EVOCK) is able to evolve heuristics in two planning domains (the blocks world and the logistics domain) that improve PRODIGY4.0 performance. Additionally, we experiment with a new genetic operator, Instance-Based Crossover, that is able to use traces of the base planner as raw genetic material to be injected into the evolving population.
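    The evolve-the-heuristic idea can be illustrated with a toy sketch. None of this is EVOCK's actual code: the domain, the two heuristic features, and the GA settings are invented for illustration. A fixed greedy solver is made more efficient by evolving the weights of its node-scoring heuristic, with fitness measured as the solver's total effort over a set of training problems.

```python
import random

# Toy stand-in for a planning domain: reach `target` from `start` using
# the operators +1 and *2. The evolved heuristic scores successor states.

def greedy_solve(start, target, weights, budget=50):
    """Return the number of steps the greedy solver needs (capped at budget)."""
    state, steps = start, 0
    while state != target and steps < budget:
        succs = [s for s in (state + 1, state * 2) if s <= target]
        # linear heuristic over two hand-picked features
        state = min(succs, key=lambda s: weights[0] * abs(target - s)
                                         + weights[1] * s)
        steps += 1
    return steps

def fitness(weights, problems):
    """Total solver effort over the training problems (lower is better)."""
    return sum(greedy_solve(s, t, weights) for s, t in problems)

random.seed(0)
problems = [(1, random.randint(20, 60)) for _ in range(10)]
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for _ in range(30):                       # (mu + lambda)-style evolution
    pop.sort(key=lambda w: fitness(w, problems))
    survivors = pop[:10]
    pop = survivors + [[g + random.gauss(0, 0.2) for g in s] for s in survivors]
best = min(pop, key=lambda w: fitness(w, problems))
print(fitness(best, problems))            # total effort of the evolved heuristic
```

    The key point mirrored from the article: the solver is never modified, only its heuristic, so the evolved weights can be reused on every new problem from the same domain.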

    Immediate transfer of global improvements to all individuals in a population compared to automatically defined functions for the EVEN-5,6-PARITY problems

    Proceedings of: First European Workshop, EuroGP'98, Paris, France, April 14–15, 1998. Koza has shown how automatically defined functions (ADFs) can reduce computational effort in the GP paradigm. In Koza's ADF, as well as in standard GP, an improvement in a part of a program (an ADF or a main body) can only be transferred via crossover. In this article, we consider whether it is a good idea to transfer immediately any improvement found by a single individual to the whole population. A system that implements this idea has been proposed and tested on the EVEN-5-PARITY and EVEN-6-PARITY problems. Results are very encouraging: computational effort is reduced (compared to Koza's ADFs) and the system seems to be less prone to early stagnation. Finally, our work suggests further research in which less extreme variants of this idea could be tested.

    A selective learning method to improve the generalization of multilayer feedforward neural networks.

    Multilayer feedforward neural networks with the backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data; some of the training patterns can be redundant or irrelevant. It has been shown that, with careful dynamic selection of training patterns, better generalization performance may be obtained. Nevertheless, this selection is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. This method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time-series prediction problem. Results have been compared to standard backpropagation using the complete training data set, and the new method shows better generalization abilities.
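    A rough illustration of the lazy strategy described above (the data, neighbourhood size, and local model are assumptions, not the paper's setup): for each novel sample, keep only the k training patterns nearest to it and fit a local model on that subset alone.

```python
import numpy as np

# Synthetic 1-D regression problem standing in for the training set.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 200)

def lazy_predict(x_new, k=15):
    """Predict x_new from a local linear fit on its k nearest training patterns."""
    # select the k training patterns closest to the novel sample
    idx = np.argsort(np.abs(X[:, 0] - x_new))[:k]
    # least-squares line fitted only on the selected neighbourhood
    A = np.vstack([X[idx, 0], np.ones(k)]).T
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[0] * x_new + coef[1]

print(lazy_predict(1.0))   # should land near sin(1.0)
```

    Nothing is learned until a query arrives, which is what makes the strategy "lazy": each prediction builds its own approximation centered on the novel sample.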

    CEREZO GALÁN, P.: La voluntad de aventura.

    No abstract available

    A first attempt at constructing genetic programming expressions for EEG classification

    Proceedings of: 15th International Conference on Artificial Neural Networks, ICANN 2005, Poland, 11-15 September, 2005. In BCI (Brain Computer Interface) research, the classification of EEG signals is a domain where raw data has to undergo some preprocessing, so that the right attributes for classification are obtained. Several transformation techniques have been used for this purpose: Principal Component Analysis, the Adaptive Autoregressive Model, FFT or Wavelet Transforms, etc. However, it would be useful to automatically build significant attributes appropriate for each particular problem. In this paper, we use Genetic Programming to evolve projections that translate EEG data into a new vectorial space (the coordinates of this space being the new attributes), where the projected data can be more easily classified. Although our method is applied here in a straightforward way to check for feasibility, it has achieved reasonable classification results, comparable to those obtained by other state-of-the-art algorithms. In the future, we expect that, by carefully choosing the primitive functions, Genetic Programming will be able to produce original results that cannot be matched by other machine learning classification algorithms.

    A competence-performance based model to develop a syntactic language for artificial agents

    The hypothesis of language use is an attractive theory for explaining how natural languages evolve and develop in social populations. In this paper we present a model, partially based on the idea of language games, in which a group of artificial agents is able to produce and share a symbolic language with syntactic structure. Grammatical structure is induced by grammatical evolution of stochastic regular grammars with learning capabilities, while language development is refined by means of language games in which the agents apply on-line probabilistic reinforcement learning. Within this framework, the model adapts the concepts of competence and performance in language, as they have been proposed in some linguistic theories. The first experiments in this article are organized around the linguistic description of visual scenes with the possibility of changing the referential situations. A second, more complicated experimental setting is also analyzed, where linguistic descriptions are forced to keep word-order constraints. The second author has been supported by the Spanish Ministry of Science under contract ENE2014-56126-C2-2-R (AOPRIN-SOL).

    Optimizing linear and quadratic data transformations for classification tasks

    Proceedings of: Ninth International Conference on Intelligent Systems Design and Applications (ISDA '09), Nov. 30-Dec. 2, 2009. Many classification algorithms use the concept of distance or similarity between patterns. Previous work has shown that it is advantageous to optimize general Euclidean distances (GED). In this paper, we optimize data transformations, which is equivalent to searching for GEDs, but can be applied to any learning algorithm, even if it does not use distances explicitly. Two optimization techniques have been used: a simple Local Search (LS) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an advanced evolutionary method for optimization in difficult continuous domains. Both diagonal and complete matrices have been considered. The method has also been extended to a quadratic non-linear transformation. Results show that, in general, the transformation methods described here either outperform or match the classifier working on the original data. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
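    A minimal sketch of the transformation search, assuming a diagonal matrix and a plain greedy local search in place of the paper's LS/CMA-ES machinery (the dataset, step sizes, and iteration count are invented): scaling each feature is the diagonal case of a linear transformation, and the evaluation here is leave-one-out 1-NN accuracy on the transformed data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
# two informative features plus one pure-noise feature with large variance
X = np.hstack([rng.normal(0, 1, (n, 2)), rng.normal(0, 5, (n, 1))])
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def loo_1nn_accuracy(Xt):
    """Leave-one-out accuracy of a 1-NN classifier on the transformed data."""
    d = np.linalg.norm(Xt[:, None, :] - Xt[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # a point may not be its own neighbour
    return float(np.mean(y[d.argmin(axis=1)] == y))

w = np.ones(3)                             # diagonal transform = feature scaling
best = loo_1nn_accuracy(X * w)
for _ in range(200):                       # greedy local search over the scales
    cand = w * np.exp(rng.normal(0, 0.3, 3))
    acc = loo_1nn_accuracy(X * cand)
    if acc >= best:
        w, best = cand, acc
print(best >= loo_1nn_accuracy(X))         # True: accepted moves never get worse
```

    The search should learn to shrink the noisy third feature, which matches the paper's point that optimizing the transformation can only help (or at worst match) the classifier on the original data.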

    Experimentación en programación genética multinivel

    Genetic Programming (GP) is a machine learning technique based on the evolution of computer programs by means of a genetic algorithm. An advanced variant of GP tries to exploit the regularities of the target domain by simultaneously learning subroutines that encode those regularities. This variant, known as ADF (Automatic Definition of Functions), allows a subroutine to be reused several times within the same individual. However, the same subroutine could also be reused by several individuals of the same population. Several systems could, in principle, discover subroutines that are valid for many individuals of a population. One of the most advanced is the dynamic lattice DLGP, proposed by Racine, Schoenauer, and Dague in 1998. Unfortunately, so far there is no empirical evaluation of DLGP. The goal of this article is to present an extensive experimental study of several aspects of DLGP, together with its analysis. Additionally, this article contains an up-to-date survey of the different techniques for evolving subroutines in GP.

    Using a Mahalanobis-like distance to train Radial Basis Neural Networks

    Proceedings of: International Work-Conference on Artificial Neural Networks (IWANN 2005). Radial Basis Neural Networks (RBNN) can approximate any regular function and have a faster training phase than other similar neural networks. However, the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. Therefore, the activation function is symmetrical and all attributes are considered equally relevant. This could be solved by altering the metric used in the activation function (i.e., using non-symmetrical metrics). The Mahalanobis distance is such a metric: it takes into account the variability of the attributes and their correlations. However, this distance is computed directly from the variance-covariance matrix and does not consider the accuracy of the learning algorithm. In this paper, we propose to use a generalized Euclidean metric, following the Mahalanobis structure, but evolved by a Genetic Algorithm (GA). This GA searches for the distance matrix that minimizes the error produced by a fixed RBNN. Our approach has been tested on two domains, and positive results have been observed in both cases.
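    The generalized Euclidean activation can be sketched as follows. The center and matrix values below are illustrative only; in the paper the matrix entries would be evolved by the GA rather than fixed by hand.

```python
import numpy as np

def rbf_activation(x, center, M):
    """RBF neuron using a generalized (Mahalanobis-like) distance
    (x - c)^T M (x - c), where M is symmetric positive-definite."""
    diff = x - center
    return np.exp(-diff @ M @ diff)

c = np.array([0.0, 0.0])
identity = np.eye(2)                      # plain Euclidean special case
M = np.array([[1.0, 0.0],                 # attribute 1 weighted normally,
              [0.0, 0.1]])                # attribute 2 nearly irrelevant

x = np.array([0.0, 2.0])
print(round(rbf_activation(x, c, identity), 3))  # exp(-4) ≈ 0.018
print(round(rbf_activation(x, c, M), 3))         # exp(-0.4) ≈ 0.67
```

    With the identity matrix the neuron barely fires for a pattern two units away along the second attribute; with a small weight on that attribute, the same pattern still activates the neuron strongly, which is exactly the per-attribute relevance the evolved metric provides.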