
    A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks

    Custode, L. L., Tecce, C. L., Bakurov, I., Castelli, M., Cioppa, A. D., & Vanneschi, L. (2020). A Greedy Iterative Layered Framework for Training Feed Forward Neural Networks. In P. A. Castillo, J. L. Jiménez Laredo, & F. Fernández de Vega (Eds.), Applications of Evolutionary Computation - 23rd European Conference, EvoApplications 2020, Held as Part of EvoStar 2020, Proceedings (pp. 513-529). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12104 LNCS). Springer. https://doi.org/10.1007/978-3-030-43722-0_33

    In recent years, neuroevolution has become a dynamic and rapidly growing research field. Interest in this discipline is motivated by the need to create ad-hoc networks whose topology and parameters are optimized for the particular problem at hand. Although neuroevolution-based techniques can contribute fundamentally to improving the performance of artificial neural networks (ANNs), they have a drawback: the massive amount of computational resources they require. This paper proposes a novel population-based framework aimed at finding the optimal set of synaptic weights for ANNs. The proposed method partitions the weights of a given network and, using an optimization heuristic, trains one layer at each step while "freezing" the remaining weights. In the experimental study, particle swarm optimization (PSO) was used as the underlying optimizer within the framework, and its performance was compared against standard training of the network with PSO (i.e., training that considers the whole set of weights) and backpropagation of the errors. Results show that training the weight sub-spaces in sequence reduces training time, achieves better generalization, and yields smaller variance in the architectural aspects of the network.
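    The layered scheme described above can be illustrated with a small sketch: a tiny fully connected network whose weights are split per layer, with a bare-bones PSO optimizing one layer's flattened weights while the others stay frozen. The network size, the PSO routine, and its parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of greedy layer-wise weight optimization with a tiny PSO.
# Illustrative assumptions throughout; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    """Forward pass through a list of weight matrices with tanh hidden units."""
    a = x
    for w in weights[:-1]:
        a = np.tanh(a @ w)
    return a @ weights[-1]          # linear output layer

def mse(weights, x, y):
    pred = forward(x, weights)
    return float(np.mean((pred - y) ** 2))

def pso_minimize(loss, dim, iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5):
    """Very small global-best PSO over a flat parameter vector."""
    pos = rng.normal(size=(swarm, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([loss(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Toy regression data and a 2-4-1 network.
x = rng.normal(size=(64, 2))
y = np.sin(x[:, :1]) + 0.1 * rng.normal(size=(64, 1))
shapes = [(2, 4), (4, 1)]
weights = [rng.normal(scale=0.5, size=s) for s in shapes]

# Greedy layered loop: optimize one layer at a time, freezing the others.
for layer, shape in enumerate(shapes):
    def layer_loss(flat, layer=layer, shape=shape):
        trial = [w.copy() for w in weights]
        trial[layer] = flat.reshape(shape)   # only this layer varies
        return mse(trial, x, y)
    best = pso_minimize(layer_loss, dim=int(np.prod(shape)))
    weights[layer] = best.reshape(shape)     # freeze the optimized layer
    print(f"layer {layer}: loss {mse(weights, x, y):.4f}")
```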

    A new evolutionary search strategy for global optimization of high-dimensional problems

    Global optimization of high-dimensional problems in practical applications remains a major challenge for the evolutionary computation research community. This paper demonstrates the weakness of randomization-based evolutionary algorithms in searching high-dimensional spaces. A new strategy, SP-UCI, is developed to treat the complexity caused by high dimensionality. This strategy features a slope-based searching kernel and a scheme for maintaining the particle population's capability of searching over the full search space. Examinations of this strategy on a suite of sophisticated composition benchmark functions demonstrate that SP-UCI surpasses two popular algorithms, the particle swarm optimizer (PSO) and differential evolution (DE), on high-dimensional problems. Experimental results also corroborate the argument that, in high-dimensional optimization, only problems with well-formative fitness landscapes are solvable, and slope-based schemes are preferable to randomization-based ones.
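    One ingredient the abstract highlights is keeping the population able to search over the full space. A hedged sketch of that general idea follows: use the principal directions of the current population to detect dimensions along which it has collapsed and re-inject variance along them. The routine, thresholds, and noise scale are assumptions for illustration, not the SP-UCI implementation.

```python
# Sketch of restoring a population's full-space search capability by
# detecting collapsed directions with an SVD/PCA of the population.
# Illustrative only; not the SP-UCI code.
import numpy as np

rng = np.random.default_rng(1)

def restore_full_dimensionality(pop, lower, upper, tol=1e-8):
    """If the population has numerically collapsed into a subspace,
    perturb it along the lost principal directions."""
    centered = pop - pop.mean(axis=0)
    # Principal directions and spreads of the current population.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    span = upper - lower
    for i, sigma in enumerate(s):
        if sigma < tol:                       # direction effectively lost
            direction = vt[i]
            # Re-inject a small random spread along the lost direction.
            noise = rng.normal(scale=0.05, size=pop.shape[0])
            pop = pop + np.outer(noise, direction * span)
    return np.clip(pop, lower, upper)

# Example: a 30-dimensional population that has collapsed in dimension 7.
dim, n = 30, 40
lower, upper = -5.0 * np.ones(dim), 5.0 * np.ones(dim)
pop = rng.uniform(lower, upper, size=(n, dim))
pop[:, 7] = 1.234                             # collapsed coordinate
pop = restore_full_dimensionality(pop, lower, upper)
print("std along dim 7 after restoration:", pop[:, 7].std())
```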

    Real-Time Automatic Object Classification and Tracking using Genetic Programming and NVIDIA® CUDA™

    Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision; in particular, object classification and tracking using a parallel GP system are discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches for visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better, and based on these results a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments. The goal of the experiments was to evolve a unique classifier for each texture pattern that existed in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video, and its performance was on par with real-time requirements.
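    To make the block- vs. pixel-classifier distinction concrete, the sketch below shows how a pixel-classifier would be applied: an evolved expression is evaluated at every pixel over simple local features and thresholded into a class decision. The feature set and the hand-written stand-in expression are illustrative assumptions, not the GP language studied in the thesis.

```python
# Sketch of applying a pixel-classifier: evaluate an (evolved) expression at
# every pixel on local features and threshold the result. Illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_features(img, size=5):
    """Per-pixel features: intensity, local mean, local variance."""
    mean = uniform_filter(img, size=size)
    sq_mean = uniform_filter(img * img, size=size)
    var = np.maximum(sq_mean - mean * mean, 0.0)
    return img, mean, var

def evolved_classifier(intensity, mean, var):
    """Stand-in for one evolved GP tree: an arithmetic expression over the
    per-pixel features whose sign gives the class label."""
    return (intensity - mean) + 0.5 * np.sqrt(var) - 0.05

def classify_pixels(img):
    feats = pixel_features(img)
    score = evolved_classifier(*feats)
    return score > 0                      # boolean mask: texture present / absent

# Synthetic frame: smooth background with one noisy (textured) patch.
rng = np.random.default_rng(2)
frame = np.full((64, 64), 0.5)
frame[20:40, 20:40] += rng.normal(scale=0.2, size=(20, 20))
mask = classify_pixels(frame)
print("pixels labelled as texture:", int(mask.sum()))
```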

    Feature Selection of Network Intrusion Data using Genetic Algorithm and Particle Swarm Optimization

    This paper describes the advantages of using evolutionary algorithms (EA) for feature selection on a network intrusion dataset. Most current Network Intrusion Detection Systems (NIDS) are unable to detect intrusions in real time because of the high-dimensional data produced during daily operation. Extracting knowledge from huge datasets such as intrusion data requires new approaches: the more complex the datasets, the higher the computation time and the harder they are to interpret and analyze. This paper investigates the performance of feature selection algorithms on network intrusion data. We used Genetic Algorithms (GA) and Particle Swarm Optimization (PSO) as feature selection algorithms. When applied to network intrusion datasets, both GA and PSO significantly reduce the number of features. Our experiments show that GA reduces the number of attributes from 41 to 15, while PSO reduces it from 41 to 9. Using k-Nearest Neighbour (k-NN) as a classifier, the GA-reduced dataset, which consists of 37% of the original attributes, improves accuracy from 99.28% to 99.70%, and its execution time is 4.8 times faster than that of the original dataset. Using the same classifier, the PSO-reduced dataset, which consists of 22% of the original attributes, has the fastest execution time (7.2 times faster than the original dataset); however, its accuracy is slightly reduced, by 0.02%, from 99.28% to 99.26%. Overall, both GA and PSO are good feature selection techniques because they reduce the number of features significantly while maintaining, and sometimes improving, classification accuracy as well as reducing computation time.
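    A minimal sketch of this wrapper setup follows: a bare-bones GA searches over binary feature masks and scores each mask by cross-validated k-NN accuracy. The dataset is synthetic (41 features, mirroring the intrusion data) and the GA operators and parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of GA-based feature selection with a k-NN wrapper fitness.
# Synthetic data and a bare-bones GA; illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=41, n_informative=10,
                           random_state=0)

def fitness(mask):
    """Cross-validated k-NN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()       # small penalty for larger subsets

def genetic_feature_selection(n_features, pop_size=20, generations=15):
    pop = rng.random((pop_size, n_features)) < 0.5   # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < 0.02      # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best = genetic_feature_selection(X.shape[1])
print(f"selected {int(best.sum())} of {X.shape[1]} features")
```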

    An in-depth analysis of SVM kernel learning and its components

    The performance of support vector machines in non-linearly-separable classification problems strongly relies on the kernel function. Towards an automatic machine learning approach for this technique, many research outputs have been produced dealing with the challenge of automatically learning good-performing kernels for support vector machines. However, these works have been carried out without a thorough analysis of the set of components that influence the behavior of support vector machines and their interaction with the kernel. These components are related in an intricate way, and it is difficult to provide a comprehensible analysis of their joint effect. In this paper we try to fill this gap by introducing the steps necessary to understand these interactions and by providing clues for the research community on where to place the emphasis. First, we identify all the factors that affect the final performance of support vector machines in relation to the elicitation of kernels. Next, we analyze the factors independently or in pairs and study the influence each component has on the final classification performance, providing recommendations and insights into the kernel setting for support vector machines.
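    The kind of component-wise analysis described above can be sketched with scikit-learn: vary the kernel together with other components (here the regularization parameter C and the kernel width gamma) and observe their joint effect on cross-validated accuracy. The factor grid and dataset are illustrative assumptions, not the paper's experimental design.

```python
# Sketch of a joint kernel/component analysis for SVMs using scikit-learn.
# The dataset and factor grid are illustrative assumptions.
from itertools import product

from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)

kernels = ["linear", "poly", "rbf"]
Cs = [0.1, 1.0, 10.0]
gammas = ["scale", 0.1, 1.0]

results = []
for kernel, C, gamma in product(kernels, Cs, gammas):
    clf = SVC(kernel=kernel, C=C, gamma=gamma)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    results.append((acc, kernel, C, gamma))

# The ranking makes the kernel/component interaction visible: the best C and
# gamma are generally not the same across kernel families.
for acc, kernel, C, gamma in sorted(results, key=lambda r: r[0], reverse=True)[:5]:
    print(f"{acc:.3f}  kernel={kernel:<6} C={C:<5} gamma={gamma}")
```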

    Multidimensional Particle Swarm Optimization for Machine Learning

    Particle Swarm Optimization (PSO) is a stochastic nature-inspired optimization method. It has been successfully used in several application domains since it was introduced in 1995. It has been especially successful when applied to complicated multimodal problems, where simpler optimization methods, e.g., gradient descent, are not able to find satisfactory results. Multidimensional Particle Swarm Optimization (MD-PSO) and Fractional Global Best Formation (FGBF) are extensions of the basic PSO. MD-PSO allows searching for an optimum also when the solution dimensionality is unknown. With a dedicated dimensional PSO process, MD-PSO can search for optimal solution dimensionality. An interleaved positional PSO process simultaneously searches for the optimal solution in that dimensionality. Both the basic PSO and its multidimensional extension MD-PSO are susceptible to premature convergence. FGBF is a plug-in to (MD-)PSO that can help avoid premature convergence and find desired solutions faster. This thesis focuses on applications of MD-PSO and FGBF in different machine learning tasks.

    Multiswarm versions of MD-PSO and FGBF are introduced to perform dynamic optimization tasks. In dynamic optimization, the search space slowly changes. The locations of optima move and a former local optimum may transform into a global optimum and vice versa. We exploit multiple swarms to track different optima.

    In order to apply MD-PSO for clustering tasks, two key questions need to be answered: 1) How to encode the particles to represent different data partitions? 2) How to evaluate the fitness of the particles to evaluate the quality of the solutions proposed by the particle positions? The second question is considered especially carefully in this thesis. An extensive comparison of Clustering Validity Indices (CVIs) commonly used as fitness functions in Particle Swarm Clustering (PSC) is conducted. Furthermore, a novel approach to carry out fitness evaluation, namely Fitness Evaluation with Computational Centroids (FECC), is introduced. FECC gives the same fitness to any particle positions that lead to the same data partition. Therefore, it may save some computational effort and, above all, it can significantly improve the results obtained by using any of the best performing CVIs as the PSC fitness function.

    MD-PSO can also be used to evolve different neural networks. The results of training Multilayer Perceptrons (MLPs) using the common Backpropagation (BP) algorithm and a global technique based on PSO are compared. The pros and cons of BP and (MD-)PSO in MLP training are discussed. For training Radial Basis Function Neural Networks (RBFNNs), a novel technique based on class-specific clustering of the training samples is introduced. The proposed approach is compared to the common input and input-output clustering approaches, and the benefits of using the class-specific approach are experimentally demonstrated. With the class-specific approach, the training complexity is reduced, while the classification performance of the trained RBFNNs may be improved.

    Collective Network of Binary Classifiers (CNBC) is an evolutionary semantic classifier consisting of several Networks of Binary Classifiers (NBCs) trained to recognize a certain semantic class. NBCs in turn consist of several Binary Classifiers (BCs), which are trained for a certain feature type. Thanks to its topology and the use of MD-PSO as its evolution technique, incremental training can be easily applied to add new training items, classes, and/or features.

    In feature synthesis, the objective is to exploit ground truth information to transform the original low-level features into more discriminative ones. To learn an efficient synthesis for a dataset, only a fraction of the data needs to be labeled. The learned synthesis can then be applied on unlabeled data to improve classification or retrieval results. In this thesis, two different feature synthesis techniques are introduced. In the first one, MD-PSO is directly used to find proper arithmetic operations to be applied on the elements of the original low-level feature vectors. In the second approach, feature synthesis is carried out using one-against-all perceptrons. In the latter technique, the best results were obtained when MD-PSO was used to train the perceptrons.

    In all the mentioned applications excluding MLP training, MD-PSO is used together with FGBF. Overall, MD-PSO and FGBF are indeed versatile tools in machine learning. However, computational limitations constrain their use in currently emerging machine learning systems operating on Big Data. Therefore, in the future, it is necessary to divide complex tasks into smaller subproblems and to conquer the large problems via solving the subproblems where the use of MD-PSO and FGBF becomes feasible. Several applications discussed in this thesis already exploit the divide-and-conquer operation model.
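    The FECC idea stated above (any particle positions inducing the same data partition receive the same fitness) can be sketched as follows: a particle encodes K centroids, the partition it induces is computed by nearest-centroid assignment, and fitness is evaluated on the centroids recomputed from that partition. The validity index below is a simple within-cluster sum of squares standing in for the CVIs compared in the thesis, and the whole routine is an illustrative assumption rather than the thesis' implementation.

```python
# Sketch of FECC-style fitness evaluation in particle swarm clustering:
# fitness depends only on the partition a particle induces, because it is
# computed from the "computational" centroids of that partition.
import numpy as np

def induced_partition(data, centroids):
    """Assign each sample to its nearest centroid."""
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def fecc_fitness(data, particle_position, k):
    centroids = particle_position.reshape(k, data.shape[1])
    labels = induced_partition(data, centroids)
    total = 0.0
    for c in range(k):
        members = data[labels == c]
        if len(members) == 0:
            continue
        comp_centroid = members.mean(axis=0)     # computational centroid
        total += float(((members - comp_centroid) ** 2).sum())
    return total                                  # lower is better

# Two different particle positions that induce the same partition get the
# same fitness, even though their raw centroid coordinates differ.
rng = np.random.default_rng(4)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
p1 = np.array([0.0, 0.0, 3.0, 3.0])              # centroids near the clusters
p2 = np.array([-0.5, 0.2, 3.4, 2.8])             # different, same partition
print(fecc_fitness(data, p1, k=2), fecc_fitness(data, p2, k=2))
```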