47,350 research outputs found

    Go artificial intelligence: a scalable evolutionary approach

    Master's Project (M.S.), University of Alaska Fairbanks, 2016. This report covers scaling neural networks for training Go artificial intelligence. The Go board is broken up into subsections, allowing each subsection to be evaluated independently and then factored into an overall board evaluation. This modular approach allows subsection networks to be translated to larger board evaluations, retaining the knowledge gained. The methodology covered shows promise for a significant reduction in the training time required for unsupervised training of Go AI. A brief history of artificial neural networks and an overview of Go and the specific rules that were used in this project are presented. Experiment design and results are presented, showing a promising proof of concept for reducing the training time required for evolutionary Go AI. The codebase for the project is Apache 2.0 licensed and is available on GitHub.
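
    The report's modular idea can be illustrated with a small sketch: score fixed-size board subsections with one shared subsection network, then combine the subsection scores into a whole-board evaluation. This is a minimal illustration, not the project's actual codebase; the network sizes, the tiling scheme, and the simple summation used to combine scores are all assumptions.

```python
import numpy as np

def subsection_score(subsection, w1, w2):
    """Score one flattened k x k subsection with a tiny feedforward net.

    `subsection` encodes stones, e.g. +1 own, -1 opponent, 0 empty;
    w1 and w2 are the weights an evolutionary search would tune.
    """
    hidden = np.tanh(subsection @ w1)
    return float(np.tanh(hidden @ w2))

def board_evaluation(board, k, w1, w2):
    """Tile the board into k x k subsections and sum their scores.

    Summation is a placeholder for the report's combination step.
    """
    n = board.shape[0]
    total = 0.0
    for r in range(0, n - k + 1, k):
        for c in range(0, n - k + 1, k):
            sub = board[r:r + k, c:c + k].reshape(-1)
            total += subsection_score(sub, w1, w2)
    return total

# Example: a 9x9 board evaluated through 3x3 subsections.
rng = np.random.default_rng(0)
board = rng.integers(-1, 2, size=(9, 9)).astype(float)
w1 = rng.normal(size=(9, 8))   # 3*3 inputs -> 8 hidden units (illustrative)
w2 = rng.normal(size=(8,))
print(board_evaluation(board, 3, w1, w2))
```

    Because the same subsection weights are reused everywhere, whatever the subsection network learns transfers directly when the same tiling is applied to a larger board, which is the knowledge-retention property the report emphasizes.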

    µG2-ELM: an upgraded implementation of µG-ELM

    µG-ELM is a multiobjective evolutionary algorithm which looks for the best (in terms of MSE) and most compact artificial neural network using the ELM methodology. In this work we present µG2-ELM, an upgraded version of µG-ELM previously presented by the authors. The upgrade is based on three key elements: a specifically designed approach for initializing the weights of the initial artificial neural networks, the introduction of a re-sowing process when selecting the population to be evolved, and a change to the process used to modify the weights of the artificial neural networks. To test our proposal, we consider several state-of-the-art Extreme Learning Machine (ELM) algorithms and compare against them on a wide and well-known set of continuous regression and classification problems. The conducted experiments show that µG2-ELM achieves better overall performance than the previous version and than the other competitors. We therefore conclude that combining evolutionary algorithms with the ELM methodology is a promising subject of study, since together they allow the design of better training algorithms for artificial neural networks.
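
    The ELM methodology that µG-ELM and µG2-ELM build on is standard: hidden-layer weights are chosen without gradient training (randomly in plain ELM, evolved in the µG variants), and the output weights come from a single least-squares solve against the MSE objective. A minimal single-hidden-layer ELM sketch, with illustrative names and sizes:

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng):
    """Fit a basic Extreme Learning Machine.

    Hidden weights are random (the part mu-G2-ELM evolves instead);
    output weights come from a least-squares solve, so training is a
    single linear-algebra step rather than iterative backpropagation.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Tiny regression example.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = elm_fit(X, y, n_hidden=30, rng=rng)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

    In µG2-ELM an evolutionary loop would wrap a fit like this, scoring candidate hidden layers by MSE and compactness and applying the paper's initialization, re-sowing, and weight-modification steps; those steps are not reproduced here.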

    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise faster, lower-power operation compared to software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip: training is typically done using offline software simulations, and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNNs), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions; in the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs is also presented.
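
    To make "simultaneously optimize the network structure and the internal synaptic parameters" concrete, here is a small software-only genetic-algorithm sketch over a structure mask plus a weight vector. It does not model the FPGA platform, the BbNN grid topology, or on-chip evaluation; the encoding and operators are assumptions.

```python
import random

def random_individual(n_genes):
    """An individual: a binary structure mask plus real-valued synaptic weights."""
    structure = [random.random() < 0.5 for _ in range(n_genes)]
    weights = [random.uniform(-1, 1) for _ in range(n_genes)]
    return structure, weights

def fitness(individual, evaluate):
    structure, weights = individual
    # Only active connections contribute; `evaluate` is the task-specific
    # error measure (e.g. the error of the resulting network). Lower is better.
    active = [w if s else 0.0 for s, w in zip(structure, weights)]
    return evaluate(active)

def mutate(individual, p_flip=0.05, sigma=0.1):
    structure, weights = individual
    structure = [not s if random.random() < p_flip else s for s in structure]
    weights = [w + random.gauss(0, sigma) for w in weights]
    return structure, weights

def evolve(evaluate, n_genes=16, pop_size=20, generations=50):
    pop = [random_individual(n_genes) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, evaluate))
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return pop[0]

# Toy objective: prefer sparse structures whose active weights sum to 1.
def objective(active):
    return abs(sum(active) - 1.0) + 0.01 * sum(w != 0 for w in active)

best_structure, best_weights = evolve(objective)
print(best_structure)
```

    On the actual platform the fitness evaluation would run on the FPGA fabric itself, which is what makes the training intrinsic (on-chip) rather than offline.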

    Evolutionary artificial neural network based on Chemical Reaction Optimization

    Evolutionary algorithms (EAs) are very popular tools for designing and evolving artificial neural networks (ANNs), and especially for training them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirements when searching a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transitions of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks. © 2011 IEEE. The 2011 IEEE Congress on Evolutionary Computation (CEC 2011), New Orleans, LA, 5-8 June 2011. In Proceedings of CEC 2011, 2011, p. 2083-209.
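
    CRO's elementary reactions (on-wall ineffective collision, decomposition, inter-molecular collision, and synthesis) are more involved than a short listing allows, but the core move the abstract describes, replacing backpropagation with a population-based search over a fixed network's weight vector, can be sketched with a simplified accept-if-better loop. This is a generic stand-in, not the authors' CRO implementation; the network shape and parameters are illustrative.

```python
import numpy as np

def mlp_forward(x, w):
    """Forward pass of a fixed 2-4-1 network whose weights are flattened in w."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16:17]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def mse(w, X, y):
    return float(np.mean((mlp_forward(X, w).ravel() - y) ** 2))

def metaheuristic_train(X, y, n_weights=17, pop_size=30, iters=500, rng=None):
    """Population-based weight search standing in for CRO's reaction operators."""
    rng = rng or np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, n_weights))
    errs = np.array([mse(w, X, y) for w in pop])
    for _ in range(iters):
        i = rng.integers(pop_size)
        candidate = pop[i] + rng.normal(scale=0.1, size=n_weights)  # local perturbation
        e = mse(candidate, X, y)
        if e < errs[i]:                      # accept improving moves only
            pop[i], errs[i] = candidate, e
    best = int(np.argmin(errs))
    return pop[best], errs[best]

# XOR-like toy problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
w, err = metaheuristic_train(X, y)
print(f"final MSE: {err:.4f}")
```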

    Biologically inspired evolutionary temporal neural circuits

    Biological neural networks have always motivated the creation of new artificial neural networks, in this case a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short- and long-term memories as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas synaptic connection strengths as well as delayed feedback loops (ER circuits) can constitute longer-term memories (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) through the automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards the realization of general intelligent systems that need minimal or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and evolutionary search over architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using the Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations demonstrate these capabilities of GETnet.
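
    The short-term-memory idea, delayed copies of a signal weighted like FIR filter taps, can be sketched as a single nonlinear moving-average node. This is only an illustration of one building block, not the GETnet framework; the tap count, weights, and nonlinearity are assumptions, and the delays and weights are exactly the quantities GETnet would evolve and train.

```python
from collections import deque
import math

class TimeDelayNode:
    """A nonlinear moving-average node: short-term memory via a tapped delay line."""

    def __init__(self, weights):
        self.weights = weights                      # one weight per delay tap
        self.buffer = deque([0.0] * len(weights), maxlen=len(weights))

    def step(self, x):
        self.buffer.appendleft(x)                   # newest sample first, oldest dropped
        s = sum(w * v for w, v in zip(self.weights, self.buffer))
        return math.tanh(s)                         # nonlinear output

# Respond to a short input sequence with a 3-tap memory.
node = TimeDelayNode(weights=[0.6, 0.3, 0.1])
for t, x in enumerate([1.0, 0.0, 0.0, 0.0]):
    print(t, round(node.step(x), 3))
```

    Longer-term memory in this scheme comes from trained connection strengths and from delayed feedback (autoregressive) taps, which this purely feedforward node omits.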

    Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms

    Evolutionary artificial neural networks (EANNs) are a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture, and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, they very often miss the best local solutions in a complex solution space. In this paper, we propose a hybrid meta-heuristic learning approach combining evolutionary learning and local search methods (using first- and second-order error information) to improve learning and obtain faster convergence than a direct evolutionary approach. The proposed technique is tested on three different chaotic time series, and the results are compared with some popular neuro-fuzzy systems and a recently developed cutting angle method of global optimization. Empirical results reveal that the proposed technique is efficient in spite of its computational complexity.
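
    The hybrid the abstract describes is a memetic scheme: an evolutionary outer loop over weight vectors with gradient-based local refinement of each candidate. The sketch below uses only first-order information and writes the refined weights back into the population (a Lamarckian update); the paper also uses second-order information, which is not shown. The loss/gradient pair stands in for a concrete network and dataset.

```python
import numpy as np

def memetic_train(loss, grad, dim, pop_size=20, generations=30,
                  local_steps=10, lr=0.05, rng=None):
    """Evolutionary search over weight vectors with Lamarckian local refinement.

    `loss(w)` returns the training error and `grad(w)` its gradient; the
    refined weights are written back into the population.
    """
    rng = rng or np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        # Local search: a few first-order gradient steps per individual.
        for i in range(pop_size):
            w = pop[i]
            for _ in range(local_steps):
                w = w - lr * grad(w)
            pop[i] = w
        # Evolutionary step: keep the better half, refill with mutated copies.
        order = np.argsort([loss(w) for w in pop])
        elite = pop[order[: pop_size // 2]]
        children = elite + rng.normal(scale=0.1, size=elite.shape)
        pop = np.vstack([elite, children])
    return pop[int(np.argmin([loss(w) for w in pop]))]

# Toy quadratic in place of a network training error.
target = np.arange(5, dtype=float)
best = memetic_train(lambda w: np.sum((w - target) ** 2),
                     lambda w: 2 * (w - target), dim=5)
print(np.round(best, 3))
```

    Replacing the plain gradient steps with a quasi-Newton update would be one natural way to bring in the second-order information the paper mentions.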

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain well-generalizing FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it offers interesting research challenges for future work to cope with the present information-processing era.