A Cluster-Based Opposition Differential Evolution Algorithm Boosted by a Local Search for ECG Signal Classification
Electrocardiogram (ECG) signals, which capture the heart's electrical
activity, are used to diagnose and monitor cardiac problems. The accurate
classification of ECG signals, particularly for distinguishing among various
types of arrhythmias and myocardial infarctions, is crucial for the early
detection and treatment of heart-related diseases. This paper proposes a novel
approach based on an improved differential evolution (DE) algorithm to enhance
the performance of ECG signal classification. In the initial stages of
our approach, the preprocessing step is followed by the extraction of several
significant features from the ECG signals. These extracted features are then
provided as inputs to an enhanced multi-layer perceptron (MLP). While MLPs are
still widely used for ECG signal classification, gradient-based training
methods, the most common choice for the training process, have significant
disadvantages, such as the possibility of becoming stuck in local optima.
This paper instead employs an enhanced DE algorithm, one of the most
effective population-based algorithms, for the training process. To this
end, we improved DE with a clustering-based strategy,
opposition-based learning, and a local search. Clustering-based strategies can
act as crossover operators, while the goal of the opposition operator is to
improve the exploration of the DE algorithm. The weights and biases found by
the improved DE algorithm are then fed into six gradient-based local search
algorithms. In other words, the weights found by the DE are employed as an
initialization point. We therefore obtain six different training
algorithms, one per local search algorithm. In an
extensive set of experiments, we showed that our proposed training algorithm
could provide better results than conventional training algorithms. (44 pages, 9 figures)
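The two-stage scheme described in this abstract, a population-based global search followed by a gradient-based local search initialised at the DE solution, can be sketched as follows. The toy linear model, data, and parameter values are illustrative stand-ins, not the paper's actual MLP or settings:

```python
import random

# Toy stand-in for the MLP loss: squared error of a 2-weight linear model
# on a tiny dataset (y = 2x + 1). The paper optimises full MLP weight
# vectors; the two-stage structure is the same.
DATA = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

def loss(w):
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in DATA)

def de_optimize(fn, dim, pop_size=20, gens=200, F=0.5, CR=0.9):
    """Stage 1: classic DE/rand/1/bin global search."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            if fn(trial) < fn(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=fn)

def gradient_refine(fn, w, lr=0.01, steps=200, eps=1e-6):
    """Stage 2: gradient-based local search started from the DE solution."""
    w = list(w)
    for _ in range(steps):
        base = fn(w)
        grad = []
        for d in range(len(w)):
            w_eps = list(w)
            w_eps[d] += eps
            grad.append((fn(w_eps) - base) / eps)  # forward-difference gradient
        w = [wd - lr * g for wd, g in zip(w, grad)]
    return w

random.seed(0)
w_final = gradient_refine(loss, de_optimize(loss, dim=2))
```

In the paper's setting, the DE stage is further enhanced with clustering and opposition-based learning, and six different gradient-based local searches play the role of `gradient_refine`.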
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (such as quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analyses (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
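For reference, the canonical global-best PSO update that all of these variants modify can be sketched as follows; the inertia weight `w` and acceleration coefficients `c1`, `c2` are typical textbook values, not values taken from the survey:

```python
import random

def pso_minimize(fn, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical global-best PSO with an inertia weight."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]        # personal bests
    gbest = min(pbest, key=fn)[:]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity <- inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fn(pos[i]) < fn(pbest[i]):
                pbest[i] = pos[i][:]
                if fn(pos[i]) < fn(gbest):
                    gbest = pos[i][:]
    return gbest

random.seed(1)
best = pso_minimize(lambda v: sum(x * x for x in v), dim=3)  # sphere function
```

The modifications the survey catalogues change individual pieces of this loop: the topology determines which particles contribute to `gbest`, hybridizations replace or augment the velocity update, and the binary/discrete extensions change how `pos` is represented.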
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of the top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
A Proposed Framework to Improve Diagnosis of Covid-19 Based on Patient’s Symptoms using Feature Selection Optimization
Recently, an epidemic called COVID-19 appeared, and it was one of the largest epidemics to affect the world in all economic, educational, health, and other aspects due to its rapid spread worldwide. The surge in infection rates made traditional diagnostic methods ineffective, so systems for automatic diagnosis and detection are crucial for controlling the outbreak, and diagnostic and detection techniques beyond RT-PCR are needed. Individuals who receive positive test results often experience a range of symptoms, from mild to severe, including coughing, fever, sore throat, and body pains. In more extreme cases, infected individuals may exhibit severe symptoms that make breathing challenging, ultimately leading to catastrophic organ failure. A hybrid approach called SDO-NMR-Hill has been developed for diagnosing COVID-19 based on a patient’s initial symptoms. This approach incorporates traits from three models: two distinct feature-selection optimization methods and a local search. Supply-demand optimization (SDO) and the naked mole-rat (NMR) algorithm were preferred among metaheuristic methods because they have fewer parameters and a lower computing overhead, which helps identify superfluous and uninformative features. Hill climbing was preferred among local search methods to maximize a criterion over several candidate solutions. We used decision tree, random forest, and adaptive boosting machine-learning classifiers in various experiments on three COVID-19 datasets, and tuned the classifiers’ hyper-parameters to optimize outcomes. The optimal performance was attained using the adaptive boosting classifier, with accuracies of 88.88% and 98.98% for the first and third datasets, respectively. The optimal performance for the second dataset was attained using the random forest classifier, with an accuracy of 97.97%.
The suggested SDO-NMR-Hill model is also evaluated on nine benchmark UCI datasets and contrasted with 15 different optimization techniques.
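The hill-climbing local-search stage of such a pipeline can be sketched as follows; the toy scoring function stands in for cross-validated classifier accuracy and is not the paper's actual objective:

```python
# First-improvement hill climbing over a binary feature mask: flip one
# feature in or out at a time and keep the flip if the score improves.
def hill_climb_features(mask, score_fn):
    mask = list(mask)
    best = score_fn(mask)
    improved = True
    while improved:
        improved = False
        for i in range(len(mask)):
            mask[i] ^= 1                  # tentatively flip feature i
            s = score_fn(mask)
            if s > best:
                best, improved = s, True  # keep the flip
            else:
                mask[i] ^= 1              # revert
    return mask, best

# Toy score: reward selecting the informative features (indices 0 and 2)
# and charge 0.1 per selected feature -- a stand-in for classifier
# accuracy minus a sparsity penalty.
INFORMATIVE = {0, 2}

def toy_score(mask):
    hits = sum(1 for i, b in enumerate(mask) if b and i in INFORMATIVE)
    return hits - 0.1 * sum(mask)

final_mask, final_score = hill_climb_features([0, 1, 0, 1, 1, 0], toy_score)
```

In the SDO-NMR-Hill approach, the starting mask would come from the metaheuristic stage (SDO or NMR) rather than being chosen by hand, and hill climbing refines it toward a local optimum.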
Designing Artificial Neural Network Using Particle Swarm Optimization: A Survey
Neural network modeling has become of special interest to many engineers and scientists, being applied to different types of data such as time series, regression, and classification, and has been used to solve complicated practical problems in areas such as medicine, engineering, manufacturing, the military, and business. To utilize a prediction model based upon an artificial neural network (ANN), some challenges should be addressed, of which the optimal design and training of the ANN are major ones. ANN design can be framed as an optimization task because a network has many hyperparameters and weights that can be optimized. Metaheuristic algorithms, such as swarm intelligence-based methods, are a category of optimization methods that aim to find an optimal structure for an ANN and to train the network by optimizing its weights. One of the commonly used swarm intelligence-based algorithms is particle swarm optimization (PSO), which can be used for optimizing ANNs. In this study, we review the research conducted on optimizing ANNs using PSO. All studies are reviewed from two different perspectives: optimization of weights, and optimization of structure and hyperparameters.
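The core idea behind PSO-based weight optimization, flattening all of a network's weights and biases into one vector that a particle moves through, can be sketched as follows; the 2-2-1 network, the XOR task, and the parameter layout are illustrative choices, not drawn from any surveyed paper:

```python
import math

# A 2-2-1 MLP: its 9 parameters (4 hidden weights + 2 hidden biases +
# 2 output weights + 1 output bias) are flattened into one vector w --
# exactly the "particle position" a swarm optimiser flies through.
def mlp_forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    """Mean squared error over XOR: the objective a particle minimises."""
    return sum((mlp_forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

# With all-zero weights the network outputs 0 everywhere, so the MSE is
# 0.5; a PSO run over R^9 starts from points like this and moves the
# swarm toward weight vectors with lower fitness.
baseline = fitness([0.0] * 9)
```

Structure and hyperparameter optimization, the survey's second perspective, works the same way but encodes quantities such as layer sizes or learning rates into the particle vector instead of (or alongside) the weights.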
An Evaluation of Performance Enhancements to Particle Swarm Optimisation on Real-World Data
Swarm Computation is a relatively new optimisation paradigm. The basic premise is to model the collective behaviour of self-organised natural phenomena such as swarms, flocks and shoals, in order to solve optimisation problems. Particle Swarm Optimisation (PSO) is a type of swarm computation inspired by bird flocks or swarms of bees by modelling their collective social influence as they search for optimal solutions.
In many real-world applications of PSO, the algorithm is used as a data pre-processor for a neural network or similar post processing system, and is often extensively modified to suit the application. The thesis introduces techniques that allow unmodified PSO to be applied successfully to a range of problems, specifically three extensions to the basic PSO algorithm: solving optimisation problems by training a hyperspatial matrix, using a hierarchy of swarms to coordinate optimisation on several data sets simultaneously, and dynamic neighbourhood selection in swarms.
Rather than working directly with candidate solutions to an optimisation problem, the PSO algorithm is adapted to train a matrix of weights, to produce a solution to the problem from the inputs. The search space is abstracted from the problem data.
A single PSO swarm optimises a single data set and has difficulties where the data set comprises disjoint parts (such as time series data for different days). To address this problem, we introduce a hierarchy of swarms, where each child swarm optimises one section of the data set and its gbest particle is a member of the swarm above it in the hierarchy. The parent swarm(s) coordinate their children and encourage more exploration of the solution space. We show that hierarchical swarms of this type perform better than single swarm PSO optimisers on the disjoint data sets used.
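A loose sketch of that coordination idea follows, with a simple accept-if-better Gaussian perturbation search standing in for each child swarm; the data, step sizes, and iteration counts are invented for illustration and are not the thesis's configuration:

```python
import random

# Two disjoint sections ("days") of a data set generated by the same
# underlying line y = 2x + 1.
DAYS = [[(x, 2.0 * x + 1.0) for x in (0.0, 1.0, 2.0)],
        [(x, 2.0 * x + 1.0) for x in (3.0, 4.0, 5.0)]]

def mse(w, data):
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

def child_swarm(data, iters=600):
    """Child level: optimise one section only; its best solution (gbest)
    is passed up to the parent swarm. Accept-if-better perturbation
    stands in for a full PSO swarm here."""
    best = [random.uniform(-5, 5), random.uniform(-5, 5)]
    sigma = 0.3
    for _ in range(iters):
        cand = [best[0] + random.gauss(0, sigma), best[1] + random.gauss(0, sigma)]
        if mse(cand, data) < mse(best, data):
            best = cand
        sigma *= 0.995  # shrink the search radius over time
    return best

random.seed(3)
gbests = [child_swarm(day) for day in DAYS]
# Parent level: coordinate the children by judging their gbests on the
# whole data set and keeping the best overall.
parent_best = min(gbests, key=lambda w: sum(mse(w, d) for d in DAYS))
```

In the thesis's full scheme the parent is itself a swarm whose particles are the children's gbests, so coordination is continuous rather than a one-shot selection as here.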
PSO relies on interaction between particles within a neighbourhood to find good solutions. In many PSO variants, possible interactions are arbitrary and fixed on initialisation. Our third contribution is a dynamic neighbourhood selection: particles can modify their neighbourhood, based on the success of the candidate neighbour particle. As PSO is intended to reflect the social interaction of agents, this change significantly increases the ability of the swarm to find optimal solutions. Applied to real-world medical and cosmological data, this modification shows improvements over standard PSO approaches with fixed neighbourhoods.
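One plausible way to realise such success-based neighbourhood rewiring is sketched below; the success-count bookkeeping and the evict-worst/admit-random rule are illustrative assumptions, not the thesis's exact mechanism:

```python
import random

def update_neighbourhood(neigh, success, n_particles, i):
    """Dynamic neighbourhood step for particle i: evict the neighbour that
    has produced the fewest pbest improvements (tracked in `success`) and
    admit a random particle from outside the current neighbourhood."""
    outsiders = [j for j in range(n_particles) if j != i and j not in neigh]
    worst = min(neigh, key=lambda j: success[j])
    new_neigh = [j for j in neigh if j != worst]
    new_neigh.append(random.choice(outsiders))
    return new_neigh

# Particle 3 currently listens to particles 0, 1 and 2; particle 1 has
# never improved anyone's pbest, so it is evicted and particle 4 (the
# only outsider) joins.
random.seed(4)
neigh = update_neighbourhood([0, 1, 2], {0: 5, 1: 0, 2: 3}, n_particles=5, i=3)
```

Run once per particle per iteration (or on a slower schedule), this keeps the interaction graph adaptive instead of fixed at initialisation.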
Advances in Artificial Intelligence: Models, Optimization, and Machine Learning
The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning, and their widespread applications.
Applied Metaheuristic Computing
For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that impairs traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
Computational Optimizations for Machine Learning
The present book contains the ten articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of machine learning, neural networks and artificial intelligence. These topics include, among others, various machine learning classes, such as supervised, unsupervised and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVM, k-means clustering, Q-learning, temporal difference, deep adversarial networks and more. It is hoped that the book will be interesting and useful to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, as well as to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.