    Gonçalves, I., Seca, M., & Castelli, M. (2020). Explorations of the Semantic Learning Machine Neuroevolution Algorithm: Dynamic Training Data Use, Ensemble Construction Methods, and Deep Learning Perspectives. In W. Banzhaf, E. Goodman, L. Sheneman, L. Trujillo, & B. Worzel (Eds.), Genetic Programming Theory and Practice XVII (Genetic and Evolutionary Computation, Chapter 3, pp. 39-62). Springer. https://doi.org/10.1007/978-3-030-39958-0_3

    The recently proposed Semantic Learning Machine (SLM) neuroevolution algorithm is able to construct Neural Networks (NNs) over unimodal error landscapes in any supervised learning problem where the error is measured as a distance to the known targets. This chapter studies how different methods of dynamically using the training data affect the resulting generalization of the SLM algorithm. Across four real-world binary classification datasets, SLM is shown to outperform the Multi-layer Perceptron, with statistical significance, after parameter tuning is performed for both algorithms. Furthermore, this chapter also studies how different ensemble construction methods influence the resulting generalization. The results show that the stochastic nature of SLM already confers enough diversity on the ensembles that Bagging and Boosting cannot improve upon a simple averaging ensemble construction method. Finally, some initial results with SLM and Convolutional NNs are presented, and future Deep Learning perspectives are discussed.
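    For context on the averaging result above, the sketch below shows what a simple averaging ensemble looks like: predictions of several independently (stochastically) trained base models are averaged, and the mean probability is thresholded for a binary decision. The base models here are hypothetical stand-ins, not the SLM implementation from the chapter.

    ```python
    import random

    def make_model(seed):
        # Hypothetical stand-in for one stochastically trained network:
        # a linear score with a seed-dependent bias, clamped to [0, 1].
        rng = random.Random(seed)
        bias = rng.uniform(-0.1, 0.1)
        return lambda x: min(1.0, max(0.0, 0.5 + 0.4 * x + bias))

    def averaging_ensemble(models, x):
        # Simple averaging: mean of the individual predicted probabilities.
        return sum(m(x) for m in models) / len(models)

    models = [make_model(s) for s in range(10)]
    pred = averaging_ensemble(models, 0.5)   # mean probability in [0, 1]
    label = int(pred >= 0.5)                 # binary classification decision
    ```

    The diversity that Bagging or Boosting would normally inject comes, in the SLM case, from the stochasticity of the base learners themselves, which is why plain averaging suffices in the reported experiments.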