
    Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training

    BACKGROUND: Particle Swarm Optimization (PSO) is an established method for parameter optimization. It is a population-based adaptive optimization technique whose behavior is governed by several "strategy parameters". Choosing reasonable parameter values for the PSO is crucial for its convergence behavior and depends on the optimization task. We present a method for parameter meta-optimization based on PSO and its application to neural network training. The concept of Optimized Particle Swarm Optimization (OPSO) is to optimize the free parameters of the PSO by having swarms within a swarm. We assessed the performance of the OPSO method on a set of five artificial fitness functions and compared it to the performance of two popular PSO implementations. RESULTS: Our results indicate that PSO performance can be improved if meta-optimized parameter sets are applied. In addition, OPSO improved optimization speed and quality over the other PSO methods in the majority of our experiments. We applied the OPSO method to neural network training with the aim of building a quantitative model for predicting blood-brain barrier permeation of small organic molecules. On average, training time decreased by factors of four and two in comparison to the other two PSO methods, respectively. By applying the OPSO method, a prediction model showing good correlation with training, test and validation data was obtained. CONCLUSION: Optimizing the free parameters of the PSO method can result in performance gains. The OPSO approach yields parameter combinations that improve overall optimization performance. Its conceptual simplicity makes implementing the method a straightforward task.
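
    The sketch below illustrates the swarms-within-a-swarm idea in minimal form: an outer swarm proposes candidate strategy-parameter triples (inertia w, cognitive c1, social c2), and each candidate is scored by how well an inner PSO parameterized with it minimizes a benchmark function. All ranges, swarm sizes and the single outer evaluation round are illustrative assumptions, not the authors' exact OPSO setup.

```python
import numpy as np

# Minimal swarms-within-a-swarm sketch: an outer swarm of strategy-
# parameter triples (w, c1, c2), each scored by how well the inner PSO
# it parameterizes minimizes a benchmark. Ranges and sizes are assumed.

rng = np.random.default_rng(0)

def sphere(x):  # benchmark fitness function
    return float(np.sum(x ** 2))

def run_pso(w, c1, c2, dim=5, n=10, iters=50):
    """Run one inner PSO and return the best fitness it reaches."""
    pos = rng.uniform(-5, 5, (n, dim))
    vel = np.zeros((n, dim))
    pbest, pfit = pos.copy(), np.array([sphere(p) for p in pos])
    gbest = pbest[pfit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        fit = np.array([sphere(p) for p in pos])
        better = fit < pfit
        pbest[better], pfit[better] = pos[better], fit[better]
        gbest = pbest[pfit.argmin()].copy()
    return pfit.min()

# One evaluation round of the outer swarm (a full OPSO would iterate
# this, moving the meta-particles with PSO updates as well).
meta = rng.uniform([0.1, 0.5, 0.5], [1.0, 2.5, 2.5], (8, 3))
scores = [run_pso(*p) for p in meta]
print("best meta-parameters (w, c1, c2):", meta[int(np.argmin(scores))])
```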

    Atomistic Mathematical Theory for Metaheuristic Structures of Global Optimization Algorithms in Evolutionary Machine Learning for Power Systems

    Global optimization in the 4D nonlinear landscape generates kinds and types of particles, waves and extremals of power sets and singletons. In this chapter, these are demonstrated for a range of optimal problem-solving algorithms. Here, onts, particles, or atoms of the ontological blueprint are generated inherently from the fractional optimization algorithms in metaheuristic structures of computational evolutionary development. These stigmergetics are applicable to incremental machine learning regimes for computational power generation and relay, and for information management systems.

    Training Neural Networks for Financial Forecasting: Backpropagation vs Particle Swarm Optimization

    Neural network (NN) architectures can be used effectively to classify, forecast and recognize quantities of interest in, e.g., computer vision, machine translation, finance, etc. In the financial setting, forecasting procedures are often used as part of the decision-making process in both trading and portfolio strategy optimization. Unfortunately, training a NN is in general a challenging task, mainly because of the high number of parameters involved. In particular, a typical NN is based on a large number of layers, each of which may be composed of several neurons; moreover, for every component, normalization as well as training algorithms have to be performed. One of the most popular methods to overcome such difficulties is the so-called backpropagation algorithm. Other possibilities are represented by genetic algorithms, and, in this family, the particle swarm optimization method seems rather promising. In this paper we compare canonical backpropagation and the particle swarm optimization algorithm in minimizing the error surface created by financial time series, particularly for the task of forecasting up/down movements of the assets we are interested in.
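
    A hedged sketch of this comparison, assuming a toy setting: the same tiny 2-2-1 network is trained on synthetic up/down labels once by gradient descent (with a numerical gradient standing in for hand-coded backpropagation) and once by PSO over the flattened weight vector. The data, architecture and hyper-parameters below are illustrative, not the paper's.

```python
import numpy as np

# Toy backpropagation-vs-PSO comparison on the same 2-2-1 tanh network.
# A central-difference gradient stands in for hand-coded backprop here.

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 2))                      # toy features (e.g. returns)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy up/down labels

def forward(w, X):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]          # 9 weights in total
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))       # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# --- gradient descent (numerical gradient in place of backprop) ---
def grad(w, eps=1e-5):
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

w_bp = rng.normal(scale=0.5, size=9)
for _ in range(500):
    w_bp -= 0.5 * grad(w_bp)

# --- particle swarm optimization over the same weight vector ---
n = 20
pos = rng.normal(scale=0.5, size=(n, 9)); vel = np.zeros((n, 9))
pbest, pfit = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pfit.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, 9)), rng.random((n, 9))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([loss(p) for p in pos])
    better = fit < pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    gbest = pbest[pfit.argmin()].copy()

print(f"BP loss: {loss(w_bp):.4f}  PSO loss: {loss(gbest):.4f}")
```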

    A Modified Bayesian Optimization based Hyper-Parameter Tuning Approach for Extreme Gradient Boosting

    It is already reported in the literature that the performance of a machine learning algorithm is greatly impacted by performing proper hyper-parameter optimization. One way to perform hyper-parameter optimization is manual search, but that is time consuming. Some of the common approaches are grid search, random search and Bayesian optimization using Hyperopt. In this paper, we propose a new approach for hyper-parameter optimization, Randomized-Hyperopt, and then tune the hyper-parameters of XGBoost, i.e. the Extreme Gradient Boosting algorithm, on ten datasets by applying random search, Randomized-Hyperopt, Hyperopt and grid search. The performances of these four techniques were compared by taking both the prediction accuracy and the execution time into consideration. We find that Randomized-Hyperopt performs better than the other three conventional methods for hyper-parameter optimization of XGBoost.

    Comment: Pre-review version of the paper submitted to IEEE 2019 Fifteenth International Conference on Information Processing (ICINPRO). The paper is accepted for publication.
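
    Since the abstract does not specify the internals of Randomized-Hyperopt, the sketch below only reproduces the standard Hyperopt (TPE) baseline it is compared against, tuning XGBoost on a stock scikit-learn dataset; the search space and evaluation budget are assumptions.

```python
# Standard Hyperopt (TPE) baseline for XGBoost tuning; the proposed
# Randomized-Hyperopt variant is not reproduced here. Search space,
# dataset and budget are illustrative assumptions.
from hyperopt import fmin, tpe, hp, Trials
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)

space = {
    "max_depth": hp.quniform("max_depth", 2, 10, 1),
    "learning_rate": hp.loguniform("learning_rate", -5, 0),
    "n_estimators": hp.quniform("n_estimators", 50, 400, 50),
    "subsample": hp.uniform("subsample", 0.5, 1.0),
}

def objective(params):
    model = XGBClassifier(
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        n_estimators=int(params["n_estimators"]),
        subsample=params["subsample"],
        eval_metric="logloss",
    )
    # Hyperopt minimizes, so return the negative CV accuracy.
    return -cross_val_score(model, X, y, cv=3).mean()

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=25, trials=trials)
print("best hyper-parameters found:", best)
```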

    Optimization of ANN Structure Using Adaptive PSO & GA and Performance Analysis Based on Boolean Identities

    In this paper, a novel heuristic structure optimization technique is proposed for neural networks using adaptive PSO & GA on Boolean identities to improve the performance of the Artificial Neural Network (ANN). The selection of the optimal number of hidden layers and nodes, which has a significant impact on the performance of a neural network, is usually decided in an ad hoc manner. The optimization of the architecture and weights of a neural network is a complex task. In this regard, evolutionary techniques based on Adaptive Particle Swarm Optimization (APSO) & Adaptive Genetic Algorithm (AGA) are used for selecting an optimal number of hidden layers and nodes of the neural controller, for better performance and low training errors, through Boolean identities. The hidden nodes are adapted through the generations until they reach the optimal number. The Boolean operators AND, OR and XOR have been used for the performance analysis of this technique.
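
    As a rough illustration of the structure-search idea, the sketch below evolves the hidden-node count of a small network with a bare-bones genetic algorithm, scoring each candidate by its training error on the XOR truth table; the population size, mutation step and use of scikit-learn's MLPClassifier are assumptions, not the paper's APSO/AGA scheme.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Bare-bones GA over the hidden-node count, scored by training error on
# the XOR truth table. All GA settings are illustrative assumptions.

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR identity

def fitness(n_hidden):
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        max_iter=2000, random_state=0)
    net.fit(X, y)
    return (net.predict(X) != y).mean()  # training error

pop = rng.integers(1, 9, size=6)  # candidate hidden-node counts
for gen in range(5):
    errs = np.array([fitness(int(n)) for n in pop])
    parents = pop[np.argsort(errs)[:3]]           # select the fittest
    children = parents + rng.integers(-1, 2, 3)   # mutate by -1/0/+1 node
    pop = np.clip(np.concatenate([parents, children]), 1, 16)

errs = np.array([fitness(int(n)) for n in pop])
print("best hidden-node count:", int(pop[errs.argmin()]))
```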

    Particle Swarm Optimization Algorithm with a Bio-Inspired Aging Model

    A Particle Swarm Optimization with a Bio-inspired Aging Model (BAM-PSO) algorithm is proposed to alleviate the premature convergence problem of other PSO algorithms. Each particle within the swarm is subjected to aging based on the age-related changes observed in immune system cells. The proposed algorithm is tested on several popular and well-established benchmark functions, and its performance is compared to that of other evolutionary algorithms in both low- and high-dimensional scenarios. Simulation results reveal that, at the cost of computational time, the proposed algorithm has the potential to solve the premature convergence problem that affects PSO-based algorithms, showing good results for both low- and high-dimensional problems. This work suggests that aging mechanisms have further implications in computational intelligence.
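
    The following sketch captures only the generic aging idea: each particle carries an age that resets when it improves and grows otherwise, and particles exceeding a lifespan are reinitialized, which counteracts premature convergence. The immune-cell-based age dynamics of the actual BAM-PSO are more elaborate and are not reproduced here.

```python
import numpy as np

# PSO with a simple aging rule: stagnating particles age out and are
# reborn at random positions. Lifespan and swarm settings are assumed.

rng = np.random.default_rng(3)
def rastrigin(x):  # standard multimodal benchmark
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

n, dim, lifespan = 20, 5, 15
pos = rng.uniform(-5.12, 5.12, (n, dim)); vel = np.zeros((n, dim))
age = np.zeros(n, dtype=int)
pbest, pfit = pos.copy(), np.array([rastrigin(p) for p in pos])
gbest = pbest[pfit.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([rastrigin(p) for p in pos])
    improved = fit < pfit
    pbest[improved], pfit[improved] = pos[improved], fit[improved]
    age = np.where(improved, 0, age + 1)  # improving particles stay young
    old = age > lifespan                  # aged-out particles are reborn
    pos[old] = rng.uniform(-5.12, 5.12, (old.sum(), dim))
    vel[old] = 0; age[old] = 0
    gbest = pbest[pfit.argmin()].copy()

print("best Rastrigin value found:", round(pfit.min(), 4))
```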

    The Pursuit of Evolutionary Particle Swarm Optimization


    Survey of Meta-Heuristic Algorithms for Deep Learning Training

    Deep learning (DL) is a type of machine learning that mimics the thinking patterns of the human brain to learn new abstract features automatically through deep, hierarchical layers. DL is implemented by deep neural networks (DNNs), which have multiple hidden layers and developed from the traditional artificial neural network (ANN). However, the training process of DL is inefficient because of the very long training time required. Meta-heuristics aim to find good or near-optimal solutions at a reasonable computational cost. In this article, meta-heuristic algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) are reviewed for traditional neural network training and parameter optimization. Thereafter, the possibilities of applying meta-heuristic algorithms to DL training and parameter optimization are discussed.
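
    As a compact example of the kind of method such surveys review, the sketch below trains the weights of a tiny 2-2-1 network on XOR with a plain genetic algorithm (selection, uniform crossover, Gaussian mutation); all sizes and rates are illustrative assumptions.

```python
import numpy as np

# Plain GA evolving the 9-dimensional weight vector of a 2-2-1 network
# on XOR. Population size, elite count and mutation scale are assumed.

rng = np.random.default_rng(4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def loss(w):
    h = np.tanh(X @ w[:4].reshape(2, 2) + w[4:6])
    out = 1 / (1 + np.exp(-(h @ w[6:8] + w[8])))
    return float(np.mean((out - y) ** 2))

pop = rng.normal(scale=1.0, size=(30, 9))
for gen in range(200):
    errs = np.array([loss(w) for w in pop])
    elite = pop[np.argsort(errs)[:10]]               # selection
    mates = elite[rng.integers(0, 10, (20, 2))]
    mask = rng.random((20, 9)) < 0.5                 # uniform crossover
    kids = np.where(mask, mates[:, 0], mates[:, 1])
    kids += rng.normal(scale=0.1, size=kids.shape)   # mutation
    pop = np.vstack([elite, kids])

print("final XOR training loss:", round(min(loss(w) for w in pop), 4))
```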

    Dynamic particle swarm optimization of biomolecular simulation parameters with flexible objective functions

    Molecular simulations are a powerful tool to complement and interpret ambiguous experimental data on biomolecules to obtain structural models. Such data-assisted simulations often rely on parameters, the choice of which is highly non-trivial and crucial to performance. The key challenge is weighting experimental information with respect to the underlying physical model. We introduce FLAPS, a self-adapting variant of dynamic particle swarm optimization, to overcome this parameter selection problem. FLAPS is suited for the optimization of composite objective functions that depend on both the optimization parameters and additional, a priori unknown weighting parameters, which substantially influence the search-space topology. These weighting parameters are learned at runtime, yielding a dynamically evolving and iteratively refined search-space topology. As a practical example, we show how FLAPS can be used to find functional parameters for small-angle X-ray scattering-guided protein simulations.
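
    The sketch below illustrates the composite-objective setting FLAPS addresses, assuming a toy problem: the fitness is physics(x) + lambda * misfit(x), and the weight lambda is re-estimated each iteration (here simply by matching the spread of the two terms across the swarm), so the search-space topology evolves during the run. The actual FLAPS weighting scheme is more sophisticated and is not reproduced here.

```python
import numpy as np

# Dynamic PSO on a composite objective whose weighting parameter is
# adapted at runtime. The rescaling rule below is an illustrative
# stand-in for FLAPS's learned weights, not the published method.

rng = np.random.default_rng(5)
target = np.array([1.0, -2.0, 0.5])

def physics(x):  # stand-in for a physical energy term
    return float(np.sum(x ** 2))

def misfit(x):   # stand-in for disagreement with experimental data
    return float(np.sum((x - target) ** 2))

n, dim = 20, 3
pos = rng.uniform(-5, 5, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pfit = np.full(n, np.inf)
gbest = pos[0].copy()

for _ in range(200):
    e = np.array([physics(p) for p in pos])
    m = np.array([misfit(p) for p in pos])
    lam = (e.std() + 1e-12) / (m.std() + 1e-12)  # adapt the weight so
    fit = e + lam * m                            # neither term dominates;
    better = fit < pfit                          # the topology shifts as
    pbest[better], pfit[better] = pos[better], fit[better]  # lam evolves
    gbest = pbest[pfit.argmin()].copy()
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel

print("best parameters found:", np.round(gbest, 3))
```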