3,753 research outputs found

    Ensemble Learning for Free with Evolutionary Algorithms?

    Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as the final result. Meanwhile, Ensemble Learning, one of the most effective approaches in supervised Machine Learning over the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing classifier diversity, is presented. Second, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble either from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-the-art Boosting while generating smaller classifier ensembles.
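
    A minimal sketch of the margin-based extraction idea described above, assuming binary +/-1 labels and an already evolved population of classifiers; the greedy mean-margin criterion is an illustrative interpretation, not the paper's exact procedure:

        # Greedy margin-based ensemble extraction (illustrative sketch).
        import numpy as np

        def ensemble_margin(preds, y):
            # preds: (n_members, n_samples) array of +/-1 votes; y: +/-1 targets.
            # The margin of each sample lies in [-1, 1].
            return (preds * y).mean(axis=0)

        def extract_ensemble(population_preds, y, max_size=10):
            """population_preds: list of +/-1 prediction vectors, one per evolved classifier."""
            chosen, remaining = [], list(range(len(population_preds)))
            while remaining and len(chosen) < max_size:
                # add the classifier whose inclusion yields the largest mean margin
                best_i, best_score = None, -np.inf
                for i in remaining:
                    trial = np.array([population_preds[j] for j in chosen + [i]])
                    score = ensemble_margin(trial, y).mean()
                    if score > best_score:
                        best_i, best_score = i, score
                chosen.append(best_i)
                remaining.remove(best_i)
            return chosen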

    Evolutionary artificial neural network based on Chemical Reaction Optimization

    Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transitions of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks. © 2011 IEEE. The 2011 IEEE Congress on Evolutionary Computation (CEC 2011), New Orleans, LA, 5-8 June 2011. In Proceedings of CEC 2011, 2011, p. 2083-209
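
    A heavily simplified sketch of the general idea of replacing backpropagation with a population-based metaheuristic search over the network weights; only a single local perturbation operator (in the spirit of CRO's on-wall ineffective collision) is shown, and the toy 2-5-1 architecture and squared-error "potential energy" are illustrative assumptions rather than the paper's setup:

        import numpy as np

        rng = np.random.default_rng(0)
        D_IN, D_HID = 2, 5                       # toy architecture: 2-5-1
        N_W = D_IN * D_HID + 2 * D_HID + 1       # total number of weights

        def forward(w, X):
            W1 = w[:D_IN * D_HID].reshape(D_IN, D_HID)
            b1 = w[D_IN * D_HID:D_IN * D_HID + D_HID]
            W2 = w[D_IN * D_HID + D_HID:D_IN * D_HID + 2 * D_HID]
            b2 = w[-1]
            return np.tanh(X @ W1 + b1) @ W2 + b2

        def energy(w, X, y):                     # mean squared error as "potential energy"
            return np.mean((forward(w, X) - y) ** 2)

        def metaheuristic_train(X, y, pop_size=20, iters=2000, step=0.1):
            pop = [rng.normal(0.0, 1.0, N_W) for _ in range(pop_size)]   # the "molecules"
            for _ in range(iters):
                i = rng.integers(pop_size)
                candidate = pop[i] + rng.normal(0.0, step, N_W)          # local perturbation
                if energy(candidate, X, y) < energy(pop[i], X, y):       # keep improvements
                    pop[i] = candidate
            return min(pop, key=lambda w: energy(w, X, y))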

    Explorations of the semantic learning machine neuroevolution algorithm: dynamic training data use and ensemble construction methods

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. As the world's technology evolves, the power to implement new and more efficient algorithms increases, but so does the complexity of the problems at hand. Neuroevolution algorithms fit into this context in the sense that they are able to evolve Artificial Neural Networks (ANNs). The recently proposed neuroevolution algorithm called Semantic Learning Machine (SLM) has the advantage of searching over unimodal error landscapes in any supervised learning task where the error is measured as a distance to the known targets. The absence of local optima in the search space results in more efficient learning when compared to other neuroevolution algorithms. This work studies how different approaches to dynamically using the training data affect the generalization of the SLM algorithm. Results show that these methods can be useful in offering different alternatives to achieve superior generalization. These approaches are evaluated experimentally on fifteen real-world binary classification data sets. Across these fifteen data sets, results show that the SLM is able to outperform the Multilayer Perceptron (MLP) in 13 of the 15 considered problems with statistical significance, after parameter tuning was applied to both algorithms. Furthermore, this work also considers how different ensemble construction methods, such as a simple averaging approach, Bagging, and Boosting, affect the resulting generalization of the SLM and MLP algorithms. Results suggest that the stochastic nature of the SLM offers enough diversity to the base learners that a simple averaging method can be competitive when compared to more complex techniques like Bagging and Boosting.
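
    A minimal sketch of the simple-averaging ensemble idea discussed above: several independently seeded stochastic base learners are trained and their predicted class probabilities are averaged. A scikit-learn MLP stands in for the SLM base learner here, and the hyperparameters are illustrative:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def train_averaging_ensemble(X, y, n_members=10):
            members = []
            for seed in range(n_members):
                m = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
                members.append(m.fit(X, y))
            return members

        def predict_averaged(members, X):
            # average the class-probability estimates of all members
            proba = np.mean([m.predict_proba(X) for m in members], axis=0)
            return members[0].classes_[proba.argmax(axis=1)]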

    Neuronal assembly dynamics in supervised and unsupervised learning scenarios

    The dynamic formation of groups of neurons—neuronal assemblies—is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system’s variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
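
    For reference, the Kuramoto model that inspires the network model updates each oscillator's phase according to dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i). A small numerical sketch with purely illustrative parameters:

        import numpy as np

        def kuramoto_step(theta, omega, K, dt=0.01):
            # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
            coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            return theta + dt * (omega + (K / len(theta)) * coupling)

        rng = np.random.default_rng(0)
        theta = rng.uniform(0.0, 2.0 * np.pi, 50)      # initial phases of 50 oscillators
        omega = rng.normal(0.0, 1.0, 50)               # natural frequencies
        for _ in range(5000):
            theta = kuramoto_step(theta, omega, K=2.0)
        r = abs(np.exp(1j * theta).mean())             # order parameter: 1 = full synchrony
        print(f"synchronization r = {r:.2f}")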

    Advancing the Applicability of Reinforcement Learning to Autonomous Control

    With data-efficient reinforcement learning (RL) methods, impressive results could be achieved, e.g., in the context of gas turbine control. In practice, however, the application of RL still requires much human intervention, which hinders the application of RL to autonomous control. This thesis addresses some of the remaining problems, particularly regarding the reliability of the policy generation process. The thesis first discusses RL problems with discrete state and action spaces. In that context, an MDP is often estimated from observations. It is described how to incorporate the estimators' uncertainties into the policy generation process. This information can then be used to reduce the risk of obtaining a poor policy due to flawed MDP estimates. Moreover, it is discussed how to use the knowledge of uncertainty for efficient exploration and for assessing policy quality without requiring the policy's execution. The thesis then moves on to continuous state problems and focuses on methods based on fitted Q-iteration (FQI), particularly neural fitted Q-iteration (NFQ). Although NFQ has proven to be very data-efficient, it is not as reliable as required for autonomous control. The thesis proposes to use ensembles to increase reliability. Several ways of using ensembles in an NFQ context are discussed and evaluated on a number of benchmark domains. It is shown that in all considered domains, ensembles allow good policies to be produced more reliably. Next, policy assessment in continuous domains is discussed. The thesis proposes to use fitted policy evaluation (FPE), an adaptation of FQI to policy evaluation, combined with a different function approximator and/or a different dataset, to obtain a measure of policy quality. Experimental results show that extra-tree FPE, applied to policies generated by NFQ, produces value functions that can be used to reason about the true policy quality. Finally, the thesis combines ensembles and policy assessment to derive methods that can deal with changing environments. The major contribution is the evolving ensemble, whose policy changes slowly as new policies are added and old policies removed. It turns out that the evolving ensemble approach works considerably better than simpler approaches such as single policies learned from recent observations or simple ensembles.
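
    A compact sketch of a fitted Q-iteration loop with an ensemble of extra-tree regressors, acting greedily with respect to the averaged Q-estimate; terminal-state handling is omitted, and the regressor type, aggregation rule, and hyperparameters are illustrative choices rather than those of the thesis:

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        def fit_q_ensemble(transitions, n_actions, n_members=5, n_iters=20, gamma=0.95):
            # transitions: tuple of arrays (states, actions, rewards, next_states)
            s, a, r, s2 = transitions
            members = []
            for seed in range(n_members):
                q = None
                for _ in range(n_iters):                          # fitted Q-iteration loop
                    if q is None:
                        target = r
                    else:
                        q_next = np.column_stack(
                            [q.predict(np.column_stack([s2, np.full(len(r), b)]))
                             for b in range(n_actions)])
                        target = r + gamma * q_next.max(axis=1)
                    q = ExtraTreesRegressor(n_estimators=50, random_state=seed)
                    q.fit(np.column_stack([s, a]), target)
                members.append(q)
            return members

        def greedy_action(members, state, n_actions):
            # average the members' Q-estimates and act greedily on the average
            q_vals = [np.mean([m.predict([list(state) + [b]])[0] for m in members])
                      for b in range(n_actions)]
            return int(np.argmax(q_vals))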

    Evolutionary design of deep neural networks

    Mención Internacional en el título de doctor. For three decades, neuroevolution has applied evolutionary computation to the optimization of the topology of artificial neural networks, with most works focusing on very simple architectures. However, times have changed, and nowadays convolutional neural networks are the standard in industry and academia for solving a variety of problems, many of which remained unsolved before the discovery of this kind of network. Convolutional neural networks involve complex topologies, and the manual design of these topologies for solving a problem at hand is expensive and inefficient. In this thesis, our aim is to use neuroevolution in order to evolve the architecture of convolutional neural networks. To do so, we have decided to try two different techniques: genetic algorithms and grammatical evolution. We have implemented a niching scheme for preserving genetic diversity, in order to ease the construction of ensembles of neural networks. These techniques have been validated against the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%, and the OPPORTUNITY data set for human activity recognition, attaining an F1 score of 0.9275. Both results have proven very competitive when compared with the state of the art. Also, in all cases, ensembles have proven to perform better than individual models. Later, the topologies learned for MNIST were tested on EMNIST, a database introduced in 2017, which includes more samples and a set of letters for character recognition. Results have shown that the topologies optimized for MNIST perform well on EMNIST, proving that architectures can be reused across domains with similar characteristics. In summary, neuroevolution is an effective approach for automatically designing topologies for convolutional neural networks. However, it still remains a largely unexplored field due to hardware limitations. Current advances, however, should constitute the fuel that empowers the emergence of this field, and further research should start as of today. This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship with identifier FPU13/03917. This research stay has been partially co-funded by the Spanish Ministry of Education, Culture and Sports under FPU short stay grant with identifier EST15/00260. Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Presidente: María Araceli Sanchís de Miguel. Secretario: Francisco Javier Segovia Pérez. Vocal: Simon Luca
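
    A bare-bones skeleton of the genetic-algorithm side of such an approach, with a genome encoded as a list of (n_filters, kernel_size) convolutional-layer genes; the fitness function is a placeholder so the skeleton stays self-contained, whereas a real run would decode the genome into a CNN, train it briefly, and return its validation accuracy (the gene choices and GA parameters are illustrative assumptions, not the thesis' encoding):

        import random

        random.seed(0)
        FILTER_CHOICES = [8, 16, 32, 64]
        KERNEL_CHOICES = [3, 5]

        def random_genome(max_layers=4):
            n_layers = random.randint(1, max_layers)
            return [(random.choice(FILTER_CHOICES), random.choice(KERNEL_CHOICES))
                    for _ in range(n_layers)]

        def mutate(genome):
            genes = list(genome)
            i = random.randrange(len(genes))
            genes[i] = (random.choice(FILTER_CHOICES), random.choice(KERNEL_CHOICES))
            return genes

        def fitness(genome):
            # Placeholder: a real evaluation would train the decoded CNN for a few
            # epochs and return validation accuracy; here smaller models score higher.
            return -sum(f * k * k for f, k in genome) * 1e-5

        def evolve(pop_size=10, generations=20):
            pop = [random_genome() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[:pop_size // 2]
                pop = survivors + [mutate(random.choice(survivors))
                                   for _ in range(pop_size - len(survivors))]
            return max(pop, key=fitness)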

    Re-purposing Heterogeneous Generative Ensembles with Evolutionary Computation

    Generative Adversarial Networks (GANs) are popular tools for generative modeling. The dynamics of their adversarial learning give rise to convergence pathologies during training, such as mode and discriminator collapse. In machine learning, ensembles of predictors demonstrate better results than a single predictor for many tasks. In this study, we apply two evolutionary algorithms (EAs) to create ensembles that re-purpose generative models, i.e., given a set of heterogeneous generators that were optimized for one objective (e.g., minimizing Fréchet Inception Distance), we create ensembles of them that optimize a different objective (e.g., maximizing the diversity of the generated samples). The first method is restricted to an exact ensemble size, while the second method only restricts the upper bound of the ensemble size. Experimental analysis on the MNIST image benchmark demonstrates that both EA-based ensemble creation methods can re-purpose the models without reducing their original functionality. The EA-based methods demonstrate significantly better performance than other heuristic-based methods. When comparing the two evolutionary methods, the one with only an upper bound on the ensemble size performs best. Comment: Accepted as a full paper for the Genetic and Evolutionary Computation Conference - GECCO'2
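
    An illustrative sketch (not the paper's exact algorithm) of the selection idea: a genetic algorithm evolves a binary mask over the pre-trained generators so that the diversity of the mixed samples is maximized, with the ensemble size only bounded above by the number of available generators. Generators are abstracted as functions returning a batch of samples, and mean pairwise distance is a stand-in diversity measure:

        import numpy as np

        rng = np.random.default_rng(0)

        def mixture_samples(generators, mask, n=256):
            active = [g for g, m in zip(generators, mask) if m]
            if not active:
                return None
            # draw an equal share of samples from each selected generator
            return np.vstack([g(n // len(active)) for g in active])

        def diversity(samples):
            # stand-in diversity measure: mean pairwise Euclidean distance
            diffs = samples[:, None, :] - samples[None, :, :]
            return np.sqrt((diffs ** 2).sum(-1)).mean()

        def evolve_mask(generators, pop_size=20, generations=30):
            n = len(generators)
            def fit(mask):
                s = mixture_samples(generators, mask)
                return -np.inf if s is None else diversity(s)
            pop = [rng.integers(0, 2, n) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fit, reverse=True)
                parents = pop[:pop_size // 2]
                children = []
                for p in parents:
                    child = p.copy()
                    child[rng.integers(0, n)] ^= 1       # single bit-flip mutation
                    children.append(child)
                pop = parents + children
            return max(pop, key=fit)

        # Toy usage with Gaussian "generators" at different means:
        # gens = [lambda n, m=m: rng.normal(m, 1.0, (n, 2)) for m in range(6)]
        # best_mask = evolve_mask(gens)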

    Neuroevolution under unimodal error landscapes: an exploration of the semantic learning machine algorithm

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Neuroevolution is a field in which evolutionary algorithms are applied with the goal of evolving Artificial Neural Networks (ANNs). These evolutionary approaches can be used to evolve ANNs with fixed or dynamic topologies. This paper studies the Semantic Learning Machine (SLM) algorithm, a recently proposed neuroevolution method that searches over unimodal error landscapes in any supervised learning problem where the error is measured as a distance to the known targets. SLM is compared with the topology-changing algorithm NeuroEvolution of Augmenting Topologies (NEAT) and with a fixed-topology neuroevolution approach. Experiments are performed on a total of 6 real-world datasets covering classification and regression tasks. The results show that the best SLM variants outperform the other neuroevolution approaches in terms of the generalization achieved, while also being more efficient in learning the training data. Further comparisons show that the best SLM variants also outperform the common backpropagation-based ANN approach under different topologies. A combination of the SLM with a recently proposed semantic stopping criterion also shows that it is possible to evolve competitive neural networks in a few seconds on the vast majority of the datasets considered.
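
    A highly simplified sketch of the semantic-search idea behind the SLM: each iteration proposes a new hidden neuron with random incoming weights and adds its scaled output to the network's current output semantics, keeping the change only if the training error decreases; the fixed learning step and acceptance rule here are illustrative simplifications, not the algorithm's exact definition:

        import numpy as np

        rng = np.random.default_rng(0)

        def slm_like_regression(X, y, iters=200, step=0.1):
            semantics = np.zeros(len(y))               # output of the (initially empty) network
            for _ in range(iters):
                w = rng.normal(0.0, 1.0, X.shape[1])   # random incoming weights of a new neuron
                neuron_out = np.tanh(X @ w)            # the new hidden neuron's semantics
                candidate = semantics + step * neuron_out
                if np.mean((candidate - y) ** 2) < np.mean((semantics - y) ** 2):
                    semantics = candidate              # keep the mutation only if error drops
            return semantics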

    Machine learning for network based intrusion detection: an investigation into discrepancies in findings with the KDD cup '99 data set and multi-objective evolution of neural network classifier ensembles from imbalanced data.

    For the last decade it has become commonplace to evaluate machine learning techniques for network based intrusion detection on the KDD Cup '99 data set. This data set has served well to demonstrate that machine learning can be useful in intrusion detection. However, it has undergone some criticism in the literature, and it is out of date. Therefore, some researchers question the validity of the findings reported based on this data set. Furthermore, as identified in this thesis, there are also discrepancies in the findings reported in the literature. In some cases the results are contradictory. Consequently, it is difficult to analyse the current body of research to determine the value in the findings. This thesis reports on an empirical investigation to determine the underlying causes of the discrepancies. Several methodological factors, such as choice of data subset, validation method and data preprocessing, are identified and are found to affect the results significantly. These findings have also enabled a better interpretation of the current body of research. Furthermore, the criticisms in the literature are addressed and future use of the data set is discussed, which is important since researchers continue to use it due to a lack of better publicly available alternatives. Due to the nature of the intrusion detection domain, there is an extreme imbalance among the classes in the KDD Cup '99 data set, which poses a significant challenge to machine learning. In other domains, researchers have demonstrated that well known techniques such as Artificial Neural Networks (ANNs) and Decision Trees (DTs) often fail to learn the minor class(es) due to class imbalance. However, this has not been recognized as an issue in intrusion detection previously. This thesis reports on an empirical investigation that demonstrates that it is the class imbalance that causes the poor detection of some classes of intrusion reported in the literature. An alternative approach to training ANNs is proposed in this thesis, using Genetic Algorithms (GAs) to evolve the weights of the ANNs, referred to as an Evolutionary Neural Network (ENN). When employing evaluation functions that calculate the fitness proportionally to the instances of each class, thereby avoiding a bias towards the major class(es) in the data set, significantly improved true positive rates are obtained whilst maintaining a low false positive rate. These findings demonstrate that the issues of learning from imbalanced data are not due to limitations of the ANNs; rather the training algorithm. Moreover, the ENN is capable of detecting a class of intrusion that has been reported in the literature to be undetectable by ANNs. One limitation of the ENN is a lack of control of the classification trade-off the ANNs obtain. This is identified as a general issue with current approaches to creating classifiers. Striving to create a single best classifier that obtains the highest accuracy may give an unfruitful classification trade-off, which is demonstrated clearly in this thesis. Therefore, an extension of the ENN is proposed, using a Multi-Objective GA (MOGA), which treats the classification rate on each class as a separate objective. This approach produces a Pareto front of non-dominated solutions that exhibit different classification trade-offs, from which the user can select one with the desired properties. The multi-objective approach is also utilised to evolve classifier ensembles, which yields an improved Pareto front of solutions. 
Furthermore, the selection of classifier members for the ensembles is investigated, demonstrating how this affects the performance of the resultant ensembles. This is key to explaining why some classifier combinations fail to give fruitful solutions.
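
    A minimal sketch of the fitness idea described above: score an evolved network by its per-class detection rates rather than by overall accuracy, so that the majority classes do not dominate the fitness signal; a multi-objective GA would keep the per-class rates as separate objectives instead of averaging them (function names are illustrative):

        import numpy as np

        def per_class_rates(y_true, y_pred, classes):
            # detection (true positive) rate for each class separately
            rates = []
            for c in classes:
                mask = (y_true == c)
                rates.append((y_pred[mask] == c).mean() if mask.any() else 0.0)
            return np.array(rates)

        def balanced_fitness(y_true, y_pred, classes):
            # single-objective variant: mean of the per-class detection rates
            return per_class_rates(y_true, y_pred, classes).mean()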