    Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks

    When training Convolutional Neural Networks (CNNs), there is a strong emphasis on creating efficient optimization algorithms and highly accurate networks. The state of the art in network optimization is gradient descent algorithms such as Stochastic Gradient Descent (SGD). However, gradient descent methods have limitations; the major drawback is a lack of exploration and an over-reliance on exploitation. This research therefore analyzes an alternative approach to optimizing neural network (NN) weights using population-based metaheuristic algorithms. A hybrid of the Grey Wolf Optimizer (GWO) and Genetic Algorithms (GA) is explored in conjunction with SGD, producing a Genetically Modified Wolf optimization algorithm boosted with SGD (GMW-SGD). This algorithm combines exploitation and exploration while also tackling the high dimensionality that hampers the performance of standard metaheuristic algorithms. The proposed algorithm was trained and tested on CIFAR-10, where it performs comparably to SGD, reaching high test accuracy, and significantly outperforms standard metaheuristic algorithms.
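    The abstract gives no code, but the division of labor it describes (population-based exploration via GWO and GA operators, plus SGD for local exploitation) can be sketched on a toy problem. Everything below is an illustrative assumption rather than the authors' GMW-SGD: the loss is a toy regression stand-in for a CNN, the GWO update is simplified, and GA crossover is omitted in favor of mutation alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem standing in for a network's loss surface.
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = X @ w_true

def loss(w):
    return np.mean((X @ w - y) ** 2)

def sgd_step(w, lr=0.01):
    i = rng.integers(len(X))
    grad = 2 * (X[i] @ w - y[i]) * X[i]  # one-sample squared-error gradient
    return w - lr * grad

pop = rng.normal(size=(20, 8))           # wolf pack: candidate weight vectors
for t in range(200):
    fit = np.array([loss(w) for w in pop])
    leaders = pop[np.argsort(fit)[:3]]   # GWO's alpha, beta, delta wolves
    a = 2 - 2 * t / 200                  # exploration factor decays over time
    for i in range(len(pop)):
        # Simplified GWO move: average of steps toward the three leaders.
        A = a * (2 * rng.random(leaders.shape) - 1)
        pop[i] = np.mean(leaders - A * np.abs(leaders - pop[i]), axis=0)
        # GA-style mutation keeps exploration alive in high dimensions.
        if rng.random() < 0.1:
            pop[i] += rng.normal(scale=0.1, size=8)
    if t % 10 == 0:                      # periodic SGD refinement of the best wolf
        best = int(np.argmin([loss(w) for w in pop]))
        for _ in range(5):
            pop[best] = sgd_step(pop[best])

print("final loss:", min(loss(w) for w in pop))
```

    Dropping the SGD branch turns this back into a pure metaheuristic; in the paper's framing, it is the gradient refinement that supplies the exploitation missing from the population search.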

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, both conventional and metaheuristic. It also connects the research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
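    As a concrete instance of the weight-optimization viewpoint the review surveys, the sketch below (not taken from the article) evolves the flattened weight vector of a tiny FNN with a generic (mu + lambda) evolution strategy in place of backpropagation; the network shape, data, and hyperparameters are all arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny FNN: 2 inputs -> 4 hidden (tanh) -> 1 output, weights in one flat vector.
def unpack(v):
    W1, b1 = v[:8].reshape(2, 4), v[8:12]
    W2, b2 = v[12:16].reshape(4, 1), v[16:17]
    return W1, b1, W2, b2

def forward(v, X):
    W1, b1, W2, b2 = unpack(v)
    return np.tanh(X @ W1 + b1) @ W2 + b2

X = rng.normal(size=(100, 2))
y = X[:, :1] * X[:, 1:]                   # XOR-like product target

def fitness(v):                           # lower is better
    return np.mean((forward(v, X) - y) ** 2)

# (mu + lambda) evolution strategy over the weight vector: no gradients used.
mu, lam, dim = 5, 20, 17
parents = rng.normal(size=(mu, dim))
for gen in range(300):
    children = parents[rng.integers(mu, size=lam)] + rng.normal(scale=0.1, size=(lam, dim))
    pool = np.vstack([parents, children])
    parents = pool[np.argsort([fitness(v) for v in pool])[:mu]]

print("best MSE:", fitness(parents[0]))
```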

    Supervised learning with hybrid global optimisation methods


    Glowworm swarm optimisation for training multi-layer perceptrons


    Parameters Identification for a Composite Piezoelectric Actuator Dynamics

    This work presents an approach for identifying the model of a composite piezoelectric (PZT) bimorph actuator's dynamics, with the objective of creating a robust model that can be used under various operating conditions. The actuator exhibits nonlinear behavior that can be described using backlash and hysteresis. A linear dynamic model with a damping matrix that incorporates the Bouc–Wen hysteresis model and backlash operators is developed. This work proposes identifying the actuator's model parameters using a hybrid master-slave genetic algorithm neural network (HGANN). In this algorithm, the neural network exploits the genetic algorithm's global search ability to optimize its structure, weights, biases, and transfer functions and so perform time series analysis efficiently. A total of nine datasets (cases), representing three voltage amplitudes excited at three frequencies, are used to train and validate the model. Four cases are used to train the NN architecture, connection weights, bias weights, and learning rules; the remaining five validate the model, which produced results that closely match the experimental ones. The analysis shows that the damping parameters are inversely proportional to the excitation frequency, which indicates that the suggested hysteresis model is too general for the PZT model in this work. It also suggests that backlash appears only when dynamic forces become dominant.
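    The Bouc–Wen model at the core of the identified dynamics is compact enough to sketch. The simulation below integrates the standard first-order Bouc–Wen hysteretic state with explicit Euler; all parameter values are generic placeholders, not the values identified in the paper, and the backlash operator is left out.

```python
import numpy as np

# Generic Bouc-Wen parameters (placeholders, not the paper's identified values).
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0
alpha, k = 0.5, 1.0

dt = 1e-3
t = np.arange(0, 2, dt)
x = np.sin(2 * np.pi * t)                 # sinusoidal displacement input
xdot = np.gradient(x, dt)

z = np.zeros_like(x)                      # internal hysteretic state
for i in range(1, len(t)):
    # Standard Bouc-Wen state equation:
    # z' = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n
    zdot = (A * xdot[i - 1]
            - beta * abs(xdot[i - 1]) * abs(z[i - 1]) ** (n - 1) * z[i - 1]
            - gamma * xdot[i - 1] * abs(z[i - 1]) ** n)
    z[i] = z[i - 1] + dt * zdot           # explicit Euler step

# Restoring force: elastic part plus hysteretic part.
force = alpha * k * x + (1 - alpha) * k * z
print("force range:", force.min(), force.max())
```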

    Using intelligent optimization methods to improve the group method of data handling in time series prediction

    In this paper we show how the performance of the basic Group Method of Data Handling (GMDH) algorithm can be improved using Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). The improved GMDH is then used to predict currency exchange rates: the US Dollar to the Euro. The performance of the hybrid GMDHs is compared with that of the conventional GMDH. Two performance measures, the root mean squared error and the mean absolute percentage error, show that the hybrid GMDH algorithms give more accurate predictions than the conventional GMDH algorithm.
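    The two performance measures are standard; a minimal implementation with made-up numbers (not the paper's exchange-rate data) looks like this:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large deviations quadratically.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Mean absolute percentage error: scale-free, in percent.
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Illustrative forecast comparison on a synthetic USD/EUR-style series.
rate = np.array([0.92, 0.93, 0.91, 0.94, 0.95])
pred = np.array([0.91, 0.94, 0.92, 0.93, 0.96])
print("RMSE:", rmse(rate, pred), "MAPE (%):", mape(rate, pred))
```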

    Website Phishing Technique Classification Detection with HSSJAYA Based MLP Training

    Website phishing is the theft of target users' personal information (ID numbers, social media account details, credit card information, etc.) through fake websites built by malicious actors to closely resemble genuine ones. Among the multiple methods for detecting phishing websites is the multilayer perceptron (MLP), a type of artificial neural network. An MLP consists of at least three layers (an input layer, at least one hidden layer, and an output layer), and its weights must be trained by passing data through the network. One family of training techniques uses metaheuristic algorithms: nature-inspired methods that aim to build more effective hybrids by combining the successful aspects of more than one algorithm. In this study, an MLP was trained with the hybrid Salp Swarm Jaya algorithm (HSSJAYA) and used to classify websites as suspicious, phishing, or legitimate. To assess the hybrid's success, MLPs trained with HSSJAYA, the Salp Swarm Algorithm (SSA), and Jaya (JAYA) were compared with MLPs trained with Cuckoo Search (CS), a Genetic Algorithm (GA), and the Firefly Algorithm (FFA). Experimental and statistical analysis determined that the MLP trained with HSSJAYA outperformed the other algorithms in detecting phishing websites.
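    The Jaya half of the hybrid is simple enough to illustrate. The sketch below trains an MLP's flattened weights with plain Jaya on random stand-in data; the salp-swarm component, the actual phishing features, and the paper's hyperparameters are all omitted or assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for phishing features: 10 inputs -> 6 hidden -> 3 classes
# (legitimate / suspicious / phishing). Random data, illustrative only.
X = rng.normal(size=(200, 10))
y = rng.integers(3, size=200)
dim = 10 * 6 + 6 + 6 * 3 + 3              # flattened MLP weight count

def error(v):                             # fitness: misclassification rate
    W1, b1 = v[:60].reshape(10, 6), v[60:66]
    W2, b2 = v[66:84].reshape(6, 3), v[84:]
    logits = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean(logits.argmax(axis=1) != y)

# Plain Jaya update: move toward the best solution and away from the worst.
pop = rng.normal(size=(30, dim))
for _ in range(100):
    errs = np.array([error(v) for v in pop])
    best, worst = pop[errs.argmin()], pop[errs.argmax()]
    r1, r2 = rng.random((2, 30, dim))
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    improve = np.array([error(v) for v in cand]) < errs
    pop[improve] = cand[improve]          # greedy acceptance of improvements

print("best training error:", min(error(v) for v in pop))
```

    Greedy acceptance (a candidate replaces its parent only if it lowers the error) is what leaves Jaya with no algorithm-specific parameters beyond population size and iteration count.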

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness occurs because they cannot refine the topology of the neural networks they produce, which limits generalization, especially when they are given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm significantly increases generalization compared to a standard connectionist theory-refinement system, as well as to our previous algorithm for growing knowledge-based networks.
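    REGENT's genetic operators act on knowledge-based networks; as a generic, shape-level analogue only, the sketch below applies crossover and mutation to topology genomes (lists of hidden-layer widths) with a placeholder fitness standing in for "train the network and measure validation accuracy".

```python
import random

random.seed(0)

# A topology genome: a list of hidden-layer widths.
def crossover(a, b):
    # One-point crossover on layer lists; never return an empty topology.
    cut_a, cut_b = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:cut_a] + b[cut_b:] or [4]

def mutate(g):
    g = list(g)
    if random.random() < 0.5 and len(g) < 5:
        g.insert(random.randrange(len(g) + 1), random.choice([2, 4, 8]))  # add a layer
    elif len(g) > 1:
        g.pop(random.randrange(len(g)))                                   # drop a layer
    return g

def fitness(g):
    # Placeholder for "train this topology, return validation accuracy":
    # rewards total width near 12 and mildly penalizes depth.
    return -abs(sum(g) - 12) - 0.1 * len(g)

pop = [[random.choice([2, 4, 8])] for _ in range(10)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:5]
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(5)]

print("best topology:", max(pop, key=fitness))
```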