
    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: optimization of the weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged out of FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting challenges for future research to cope with the demands of the present information-processing era.
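
    To make the weight-optimization viewpoint concrete, below is a minimal sketch (not any specific method from the review) of how a metaheuristic such as particle swarm optimization can train an FNN by searching its flattened weight vector directly, using training loss as fitness. The toy task, network sizes, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch: global-best PSO over the flattened weights of a tiny FNN.
# Everything here (task, sizes, constants) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

N_IN, N_HID, N_OUT = 1, 8, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + biases

def unpack(w):
    """Split a flat weight vector into the FNN's matrices and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    return W1, b1, W2, w[i:]

def loss(w):
    """Fitness = mean squared error of the FNN defined by weight vector w."""
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

SWARM, ITERS, INERTIA, C1, C2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.normal(0, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("final MSE:", loss(gbest))
```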

    Survey analysis for optimization algorithms applied to electroencephalogram

    This paper presents a survey of optimization approaches used to analyze and classify electroencephalogram (EEG) signals. Automatic analysis of EEG presents a significant challenge due to the high dimensionality and volume of the data. Optimization algorithms seek to achieve better accuracy by selecting useful features and discarding unwanted ones. Forty-seven reputable research papers are reviewed in this work, with emphasis on the developed and executed techniques, divided into seven groups based on the applied optimization algorithm: particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), grey wolf optimizer (GWO), Bat, Firefly, and other optimizer approaches. The main measures used to analyze these papers are accuracy, precision, recall, and F1-score. Several datasets are utilized in the included papers, such as the Bonn University EEG dataset, CHB-MIT, an electrocardiography (ECG) dataset, and others. The results show that the PSO and GWO algorithms achieved the highest accuracy rates, around 99%, compared with the other techniques.
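
    As an illustration of the feature-selection role these optimizers play, the sketch below applies a binary PSO with a sigmoid transfer function to toggle features on and off, scoring each mask by the accuracy of a simple classifier. The synthetic data, nearest-centroid classifier, and all constants are assumptions for illustration, not taken from the surveyed papers.

```python
# Sketch: binary PSO feature selection, fitness = classifier accuracy
# with a light penalty on subset size. All details are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for extracted EEG features: 200 trials x 30 features,
# of which only the first 5 actually separate the two classes.
n, d = 200, 30
labels = rng.integers(0, 2, n)
feats = rng.normal(0, 1, (n, d))
feats[:, :5] += labels[:, None] * 1.5

def accuracy(mask):
    """Nearest-centroid accuracy using only the features where mask == 1."""
    if mask.sum() == 0:
        return 0.0
    Xs = feats[:, mask.astype(bool)]
    c0, c1 = Xs[labels == 0].mean(0), Xs[labels == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == labels).mean()

def fitness(mask):
    # Reward accuracy, lightly penalize large feature subsets.
    return accuracy(mask) - 0.01 * mask.mean()

SWARM, ITERS = 20, 60
vel = rng.normal(0, 0.1, (SWARM, d))
pos = (rng.random((SWARM, d)) < 0.5).astype(int)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, SWARM, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer function
    pos = (rng.random((SWARM, d)) < prob).astype(int)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print("selected features:", np.flatnonzero(gbest), "accuracy:", accuracy(gbest))
```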

    Echo state network optimization using binary grey wolf algorithm

    The echo state network (ESN) is a powerful recurrent neural network for time series modelling. The ESN inherits the simplified structure and relatively straightforward training process of conventional neural networks, and shows strong computational capability on nonlinear problems. It maps low-dimensional input signals into a high-dimensional space for information extraction, but it is found that not every dimension of the reservoir output contributes directly to the model's generalization. This work aims to improve the generalization capability of the ESN model by reducing redundant reservoir output features. A novel hybrid model, the binary grey wolf echo state network (BGWO-ESN), is proposed, which optimises the ESN output connections via a feature-selection scheme. Specifically, a binary grey wolf optimisation (BGWO) feature-selection scheme is developed to improve the ESN output connection structure. The proposed method is evaluated on synthetic and financial data sets. Experimental results demonstrate that the proposed BGWO-ESN model is more effective than the other benchmarks and obtains the lowest generalization error.
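
    The sketch below illustrates the BGWO-ESN idea under stated assumptions: a small random reservoir produces high-dimensional states, and a binary grey wolf search (here a simplified sigmoid-transfer variant, not the authors' exact update rule) selects which reservoir outputs feed a ridge-regression readout, using held-out error as fitness. The toy series and all sizes are illustrative.

```python
# Sketch: ESN reservoir + binary grey-wolf selection of output features.
import numpy as np

rng = np.random.default_rng(2)

# Toy time series: noisy sine wave, one-step-ahead prediction.
t = np.arange(500)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
u, y = series[:-1], series[1:]

# Fixed random reservoir (the part ESN training leaves untouched).
N_RES, LEAK, RHO = 50, 0.3, 0.9
W_in = rng.uniform(-0.5, 0.5, N_RES)
W = rng.normal(0, 1, (N_RES, N_RES))
W *= RHO / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius

def reservoir_states(inp):
    x, states = np.zeros(N_RES), []
    for v in inp:
        x = (1 - LEAK) * x + LEAK * np.tanh(W_in * v + W @ x)
        states.append(x.copy())
    return np.array(states)

S = reservoir_states(u)
train, test = slice(50, 400), slice(400, None)

def gen_error(mask):
    """Held-out MSE of a ridge readout on the masked reservoir outputs."""
    if mask.sum() == 0:
        return np.inf
    Xs = S[:, mask.astype(bool)]
    A = Xs[train]
    w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y[train])
    return np.mean((Xs[test] @ w - y[test]) ** 2)

# Simplified binary grey wolf search: three leaders, sigmoid binarization.
WOLVES, ITERS = 12, 40
pos = (rng.random((WOLVES, N_RES)) < 0.5).astype(float)
for it in range(ITERS):
    fit = np.array([gen_error(p) for p in pos])
    alpha, beta, delta = pos[np.argsort(fit)[:3]]
    a = 2 - 2 * it / ITERS                      # exploration -> exploitation
    for leader in (alpha, beta, delta):
        A = a * (2 * rng.random((WOLVES, N_RES)) - 1)
        C = 2 * rng.random((WOLVES, N_RES))
        step = A * np.abs(C * leader - pos)
        prob = 1 / (1 + np.exp(-step))
        pos = np.where(rng.random((WOLVES, N_RES)) < prob, leader, pos)

best = pos[np.argmin([gen_error(p) for p in pos])]
print("kept outputs:", int(best.sum()), "test MSE:", gen_error(best))
```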

    An Improved Bees Algorithm for Training Deep Recurrent Networks for Sentiment Classification

    Recurrent neural networks (RNNs) are powerful tools for learning information from temporal sequences. Designing an optimal deep RNN is difficult due to configuration and training issues, such as vanishing and exploding gradients. In this paper, a novel metaheuristic optimisation approach is proposed for training deep RNNs for the sentiment classification task. The approach employs an enhanced Ternary Bees Algorithm (BA-3+), which handles large-dataset classification problems by considering only three individual solutions in each iteration. BA-3+ combines the collaborative search of three bees to find the optimal set of trainable parameters of the proposed deep recurrent learning architecture. Local learning with exploitative search uses a greedy selection strategy; stochastic gradient descent (SGD) learning with singular value decomposition (SVD) handles vanishing and exploding gradients of the decision parameters through an SVD-based stabilisation strategy; and global learning with explorative search achieves faster convergence without getting trapped in local optima. BA-3+ has been tested on the sentiment classification task, classifying symmetrically and asymmetrically distributed datasets from different domains, including Twitter, product reviews, and movie reviews. Comparative results were obtained against advanced deep language models and the Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms. BA-3+ converged to the global minimum faster than the DE and PSO algorithms, and it outperformed the SGD, DE, and PSO algorithms on the Turkish and English datasets. Accuracy and F1 measure improved by at least 30–40% over the standard SGD algorithm for all classification datasets. Accuracy rates of the RNN model trained with BA-3+ ranged from 80% to 90%, while the RNN trained with SGD achieved between 50% and 60% for most datasets. The performance of the RNN model trained with BA-3+ was as good as that of Tree-LSTM and Recursive Neural Tensor Network (RNTN) language models, which achieved accuracies of up to 90% on some datasets. The improved accuracy and convergence results show that BA-3+ is an efficient, stable algorithm for this complex classification task and can handle the vanishing and exploding gradients problem of deep RNNs.
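
    A hedged sketch of the three-bee scheme follows: each iteration evaluates exactly three candidates, an exploitative bee (small local perturbation with greedy acceptance), a gradient bee (here a finite-difference step standing in for the paper's SGD+SVD update), and an explorative scout (random restart). The tiny RNN, toy task, and all constants are assumptions for illustration.

```python
# Sketch of a ternary-bees training loop for a tiny RNN classifier.
import numpy as np

rng = np.random.default_rng(3)

# Toy "sentiment" task: classify whether a sequence's mean is positive.
SEQS = rng.normal(0, 1, (100, 10))
LABELS = (SEQS.mean(1) > 0).astype(float)

N_HID = 4
DIM = N_HID + N_HID * N_HID + N_HID + N_HID + 1  # W_in, W_h, b_h, w_out, b_out

def loss(theta):
    """Cross-entropy of a one-unit-input RNN defined by flat vector theta."""
    i = 0
    W_in = theta[i:i + N_HID]; i += N_HID
    W_h = theta[i:i + N_HID * N_HID].reshape(N_HID, N_HID); i += N_HID * N_HID
    b_h = theta[i:i + N_HID]; i += N_HID
    w_out = theta[i:i + N_HID]; i += N_HID
    b_out = theta[i]
    h = np.zeros((SEQS.shape[0], N_HID))
    for step in range(SEQS.shape[1]):            # unroll the RNN over time
        h = np.tanh(SEQS[:, step:step + 1] * W_in + h @ W_h + b_h)
    p = 1 / (1 + np.exp(-(h @ w_out + b_out)))
    return -np.mean(LABELS * np.log(p + 1e-9) + (1 - LABELS) * np.log(1 - p + 1e-9))

def num_grad(theta, eps=1e-4):
    """Finite-difference gradient (stand-in for backprop/SGD+SVD here)."""
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta); e[j] = eps
        g[j] = (loss(theta + e) - loss(theta - e)) / (2 * eps)
    return g

best = rng.normal(0, 0.5, DIM)
for it in range(200):
    local = best + rng.normal(0, 0.05, DIM)      # exploitative bee
    grad = best - 0.5 * num_grad(best)           # "gradient" bee
    scout = rng.normal(0, 0.5, DIM)              # explorative scout bee
    for cand in (local, grad, scout):
        if loss(cand) < loss(best):              # greedy selection
            best = cand

print("final loss:", loss(best))
```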

    Adapting Swarm Intelligence For The Self-Assembly And Optimization Of Networks

    While self-assembly is a fairly active area of research in swarm intelligence and robotics, relatively little attention has been paid to the issues surrounding the construction of network structures. Here, methods developed previously for modeling and controlling the collective movements of groups of agents are extended to serve as the basis for self-assembly or "growth" of networks, using neural networks as a concrete application to evaluate this novel approach. One of the central innovations incorporated into the model presented here is having network connections arise as persistent "trails" left behind by moving agents, trails that are reminiscent of pheromone deposits made by agents in ant colony optimization models. The resulting network connections are thus essentially a record of agent movements. The model's effectiveness is demonstrated by using it to produce two large networks that support subsequent learning of topographic and feature maps. Improvements produced by the incorporation of collective movements are also examined through computational experiments. These results indicate that methods for directing collective movements can be extended to support and facilitate network self-assembly. Additionally, the traditional self-assembly problem is extended to include the generation of network structures based on optimality criteria, rather than on target structures that are specified a priori. It is demonstrated that endowing the network components involved in the self-assembly process with the ability to engage in collective movements can be an effective means of generating computationally optimal network structures. This is confirmed on a number of challenging test problems from the domains of trajectory generation, time-series forecasting, and control. Further, this extension of the model is used to illuminate an important relationship between particle swarm optimization, which usually occurs in high-dimensional abstract spaces, and self-assembly, which is normally grounded in real and simulated 2D and 3D physical spaces.
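
    A minimal sketch of the trail mechanism, under assumed parameters: agents perform a collective movement among fixed node sites, and an edge is recorded between consecutively visited nodes, so the grown network is literally a record of agent movements, much like pheromone trails.

```python
# Sketch: network "growth" from agent movement trails. All quantities
# (node layout, step sizes, visit radius) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

nodes = rng.uniform(0, 10, (25, 2))          # fixed node positions in the plane
N_AGENTS, STEPS = 5, 400
pos = rng.uniform(0, 10, (N_AGENTS, 2))
vel = rng.normal(0, 0.2, (N_AGENTS, 2))
last_visit = [-1] * N_AGENTS                 # last node each agent touched
edges = set()

for _ in range(STEPS):
    centroid = pos.mean(0)
    # Collective movement: inertia + weak cohesion toward the group + noise.
    vel = 0.9 * vel + 0.02 * (centroid - pos) + rng.normal(0, 0.1, pos.shape)
    pos = (pos + vel) % 10                   # wrap around the arena
    for a in range(N_AGENTS):
        d = np.linalg.norm(nodes - pos[a], axis=1)
        near = d.argmin()
        if d[near] < 0.5 and near != last_visit[a]:
            if last_visit[a] >= 0:
                # The trail between the two nodes becomes a network edge.
                edges.add(tuple(sorted((last_visit[a], int(near)))))
            last_visit[a] = int(near)

print(f"grew a network with {len(nodes)} nodes and {len(edges)} edges")
```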

    A Brief Review on Mathematical Tools Applicable to Quantum Computing for Modelling and Optimization Problems in Engineering

    Since its emergence, quantum computing has enabled a wide spectrum of new possibilities and advantages, including the potential to accelerate computational processes exponentially. This has directed much research towards completely novel ways of solving a wide variety of engineering problems, especially through describing quantum versions of many mathematical tools such as Fourier and Laplace transforms, differential equations, systems of linear equations, and optimization techniques, among others. Exploration and development in this direction promise to revolutionize the world of engineering. In this manuscript, we review the state of the art of these emerging techniques from the perspective of quantum computer development and performance optimization, focusing on how the most common mathematical tools that support engineering applications can be applied to quantum computer development and performance improvement. The review also identifies the challenges and limitations related to the exploitation of quantum computing and outlines the main opportunities for future contributions. It aims to offer a valuable reference for researchers in fields of engineering that are likely to turn to quantum computing for solutions. DOI: 10.28991/ESJ-2023-07-01-020
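
    As a small worked example of one tool named above, the quantum Fourier transform on n qubits is the N x N unitary (N = 2^n) with entries exp(2*pi*i*j*k/N)/sqrt(N), i.e. the classical DFT matrix up to normalization and sign convention. The numerical check below is illustrative only; it builds the matrix classically rather than as a quantum circuit.

```python
# Sketch: construct the QFT matrix and verify two defining properties.
import numpy as np

def qft_matrix(n_qubits):
    """QFT[j, k] = exp(2*pi*i*j*k / N) / sqrt(N) with N = 2**n_qubits."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

Q = qft_matrix(3)                      # 8x8 unitary for 3 qubits
# Unitarity: Q^dagger Q = I (what makes it realizable as a quantum circuit).
assert np.allclose(Q.conj().T @ Q, np.eye(8))
# Agreement with the classical inverse DFT (NumPy's sign convention).
state = np.zeros(8); state[1] = 1.0
assert np.allclose(Q @ state, np.fft.ifft(state) * np.sqrt(8))
print("QFT is unitary and matches the normalized inverse DFT")
```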

    Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare

    Nature-Inspired Computing (NIC) is a relatively young field that tries to discover fresh methods of computing by studying how natural phenomena work, in order to solve complicated problems in many contexts. As a consequence, ground-breaking research has been conducted in a variety of domains, including artificial immune systems, neural networks, swarm intelligence, and evolutionary computation. NIC techniques are used in the domains of biology, physics, engineering, economics, and management. Metaheuristic algorithms are successful, efficient, and resilient on real-world classification, optimization, forecasting, and clustering problems, as well as on engineering and science issues. Two active NIC paradigms are the Gravitational Search Algorithm (GSA) and the Krill Herd (KH) algorithm. This publication gives a worldwide and historical review of the use of KH and GSA in medicine and healthcare. Comprehensive surveys have been conducted on other nature-inspired algorithms, and on KH and GSA in general; nonetheless, no survey of KH and GSA in the healthcare field has previously been undertaken. The various versions of the KH and GSA algorithms and their applications in healthcare are therefore thoroughly reviewed in the present article, which provides an in-depth examination of KH and GSA in terms of application, modification, and hybridization, to assist researchers in using them in diverse domains or hybridizing them with other popular algorithms. The goal of the study is to offer a viewpoint on GSA and KH, particularly for academics interested in investigating the capabilities and performance of these algorithms in the healthcare and medical domains.
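
    To ground the GSA side of the review, here is a hedged sketch of its core update: candidate solutions attract one another with forces proportional to fitness-derived masses, so better solutions pull the population toward them while gravity decays over time. The benchmark function and all constants are illustrative assumptions.

```python
# Sketch of the gravitational search algorithm (GSA) on a toy benchmark.
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):                          # toy minimization benchmark
    return np.sum(x ** 2, axis=-1)

N, DIM, ITERS, G0, EPS = 20, 5, 200, 100.0, 1e-9
X = rng.uniform(-5, 5, (N, DIM))
V = np.zeros((N, DIM))

for t in range(ITERS):
    fit = sphere(X)
    best, worst = fit.min(), fit.max()
    m = (worst - fit) / (worst - best + EPS)     # better fitness -> larger mass
    M = m / (m.sum() + EPS)
    G = G0 * np.exp(-20 * t / ITERS)             # gravity decays over time
    acc = np.zeros((N, DIM))
    for i in range(N):
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + EPS
        # Randomly weighted sum of attractions toward every other agent.
        acc[i] = np.sum(rng.random((N, 1)) * G * M[:, None] * diff
                        / dist[:, None], axis=0)
    V = rng.random((N, DIM)) * V + acc
    X = X + V

print("best value found:", sphere(X).min())
```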