302 research outputs found

    An Effective Ensemble Approach for Spam Classification

    Get PDF
    The annoyance of spam increasingly plagues both individuals and organizations. Spam classification, the task of distinguishing spam from legitimate email, is therefore an important problem. This paper presents a neural network ensemble approach based on a specially designed cooperative coevolution paradigm. Each component network corresponds to a separate subpopulation, and all subpopulations are evolved simultaneously. The ensemble performance and the Q-statistic diversity measure are adopted as the objectives, and the component networks are evaluated using a multi-objective Pareto optimality measure. Experimental results show that the proposed algorithm outperforms traditional ensemble methods on spam classification problems.
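
    The abstract names two standard ingredients, the pairwise Q-statistic diversity measure and Pareto-based evaluation of the component networks, without giving implementation details. Below is a minimal sketch of both, assuming each component classifier is summarized by a boolean vector of per-sample correctness; the function names and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def q_statistic(correct_i, correct_j):
    """Pairwise Q-statistic between two classifiers.

    correct_i, correct_j: boolean arrays, True where the classifier labels
    the sample correctly. Q lies in [-1, 1]; values near 0 indicate weakly
    correlated errors, i.e. more diverse components.
    """
    n11 = np.sum(correct_i & correct_j)    # both correct
    n00 = np.sum(~correct_i & ~correct_j)  # both wrong
    n10 = np.sum(correct_i & ~correct_j)   # only i correct
    n01 = np.sum(~correct_i & correct_j)   # only j correct
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0

def dominates(a, b):
    """Pareto dominance for objective vectors that are maximized."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

# Toy usage: two component networks judged on five test emails (True = correct).
c1 = np.array([True, True, False, True, False])
c2 = np.array([True, False, True, True, False])
print(q_statistic(c1, c2))                   # pairwise diversity
print(dominates([0.95, 0.1], [0.93, 0.1]))   # True: better accuracy, equal diversity
```

    In a Pareto-based evaluation of this kind, a component network is kept when no other component is at least as good on both objectives (ensemble accuracy and diversity) and strictly better on one.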

    Astrocytes organize associative memory

    Get PDF
    We investigate one aspect of the functional role played by astrocytes in the neuron-astrocyte networks present in the mammalian brain. To highlight the effect of neuron-astrocyte interaction, we consider simplified networks with bidirectional neuron-astrocyte communication and without any connections between neurons. We show that the fact that an astrocyte covers several neurons, combined with the different (slower) time scale of calcium events in the astrocyte, can by itself lead to the appearance of neural associative memory. This mechanism makes the neural networks more flexible for learning and may therefore help explain why astrocytes have been evolutionarily necessary for the development of the mammalian brain.
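
    The abstract describes the mechanism only qualitatively. The sketch below is a purely schematic illustration of the two-timescale idea, assuming binary rate neurons with no neuron-to-neuron connections and a slow astrocytic calcium variable shared by the group of neurons each astrocyte covers; none of the variables, dynamics or parameter values come from the paper:

```python
import numpy as np

# Schematic two-timescale neuron-astrocyte toy model (illustrative only).
N_NEURONS = 12            # binary rate neurons, no neuron-neuron connections
GROUP = 4                 # each astrocyte covers a group of 4 neurons
N_ASTRO = N_NEURONS // GROUP
TAU_CA = 20.0             # slow astrocytic calcium time scale (in steps)
GAIN = 2.0                # strength of astrocyte-to-neuron feedback
STEPS = 100

rate = np.zeros(N_NEURONS)
calcium = np.zeros(N_ASTRO)

for t in range(STEPS):
    # External cue: stimulate only half of the first group, and only early on.
    cue = np.zeros(N_NEURONS)
    if t < 20:
        cue[:GROUP // 2] = 1.0

    # Each neuron receives the calcium-dependent feedback of its own astrocyte.
    feedback = GAIN * np.repeat(calcium, GROUP)

    # Fast neuronal dynamics: simple threshold on cue plus astrocytic feedback.
    rate = ((cue + feedback) > 0.5).astype(float)

    # Slow calcium dynamics: leaky integration of the covered group's activity.
    group_rate = rate.reshape(N_ASTRO, GROUP).mean(axis=1)
    calcium += (group_rate - calcium) / TAU_CA

# After the cue ends, the whole first group keeps firing together while the
# other groups stay silent: a pattern-completion, memory-like effect carried
# entirely by the slow astrocytic variable.
print(rate.reshape(N_ASTRO, GROUP))
print(np.round(calcium, 2))
```

    With these (arbitrary) parameters the partially cued group recruits its uncued members and keeps firing after the cue ends, which is the kind of pattern-completion behaviour the abstract attributes to the combination of astrocyte coverage and slow calcium dynamics.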

    Pareto-Based Multiobjective Machine Learning: An Overview and Case Studies

    Full text link

    Problem Decomposition and Adaptation in Cooperative Neuro-Evolution

    No full text
    One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution - a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains an advantage over other methods by evolving particular subcomponents independently of the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis proposes new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks on pattern classification tasks and recurrent networks on grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that generalizes to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution, presenting a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimization process changes during evolution in terms of diversity and variable interaction, so we also examine adapting the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods to training recurrent neural networks on chaotic time series problems, where the proposed methods again show better performance in terms of accuracy and robustness.
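
    The thesis abstract describes the decomposition only at a high level. Below is a minimal sketch of one neuron-level decomposition for a single-hidden-layer feedforward network, in the spirit of grouping interacting variables: each subcomponent holds the weights entering and leaving one hidden neuron, and each candidate is evaluated inside a full network assembled from representatives of the other subpopulations. The network sizes, fitness function and data are placeholders, not the thesis's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_HIDDEN, N_OUT = 4, 3, 1          # single hidden layer
SUB_SIZE = N_IN + 1 + N_OUT              # weights tied to one hidden neuron

def forward(genome, X):
    """Assemble the network from the flat genome and compute its outputs."""
    parts = genome.reshape(N_HIDDEN, SUB_SIZE)
    w_in, bias, w_out = parts[:, :N_IN], parts[:, N_IN], parts[:, N_IN + 1:]
    hidden = np.tanh(X @ w_in.T + bias)   # (n_samples, N_HIDDEN)
    return (hidden @ w_out).ravel()

def fitness(genome, X, y):
    """Negative mean squared error (higher is better)."""
    return -np.mean((forward(genome, X) - y) ** 2)

# Toy classification data.
X = rng.normal(size=(20, N_IN))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

# Neuron-level decomposition: one subpopulation of candidates per hidden neuron.
pops = [rng.normal(size=(10, SUB_SIZE)) for _ in range(N_HIDDEN)]
best = [pop[0].copy() for pop in pops]    # current representative of each subpopulation

# One round of cooperative evaluation: each candidate is scored inside a full
# genome built from the representatives of all other subcomponents.
for h, pop in enumerate(pops):
    for cand in pop:
        trial = best.copy()
        trial[h] = cand
        if fitness(np.concatenate(trial), X, y) > fitness(np.concatenate(best), X, y):
            best[h] = cand.copy()

print("fitness of assembled network:", fitness(np.concatenate(best), X, y))
```

    In a full cooperative-coevolution run this evaluation round would alternate with selection, crossover and mutation inside each subpopulation, and, as the thesis investigates, with occasional local search and adaptation of the decomposition itself.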