An improved data classification framework based on fractional particle swarm optimization
Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique in which particles move collectively over iterations to search for optimal solutions. However, conventional PSO is prone to poor convergence, and even stagnation, in complex high-dimensional search problems with multiple local optima. Therefore, this research proposes an improved Mutually-Optimized Fractional PSO (MOFPSO) algorithm based on fractional derivatives and small step lengths, which ensures convergence to global optima by striking a fine balance between exploration and exploitation. The proposed algorithm is tested and verified on ten benchmark functions against six established algorithms, with optimization performance compared in terms of Mean of Error and Standard Deviation. The proposed MOFPSO algorithm demonstrated the lowest Mean of Error values on all benchmark functions across all 30 runs (Ackley = 0.2, Rosenbrock = 0.2, Bohachevsky = 9.36E-06, Easom = -0.95, Griewank = 0.01, Rastrigin = 2.5E-03, Schaffer = 1.31E-06, Schwefel 1.2 = 3.2E-05, Sphere = 8.36E-03, Step = 0). Furthermore, the proposed MOFPSO algorithm is hybridized with Back-Propagation (BP), Elman Recurrent Neural Network (ERNN) and Levenberg-Marquardt (LM) Artificial Neural Networks (ANNs) to form an enhanced data classification framework. The proposed framework is then evaluated for classification accuracy, computational time and Mean Squared Error on five benchmark datasets against seven existing techniques. The simulation results show that the proposed MOFPSO-ERNN classification algorithm achieves good classification performance in terms of accuracy (Breast Cancer = 99.01%, EEG = 99.99%, PIMA Indian Diabetes = 99.37%, Iris = 99.6%, Thyroid = 99.88%) compared to the existing hybrid classification techniques.
Hence, the proposed technique can be employed to improve overall classification accuracy and reduce computational time in data classification applications.
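The baseline that MOFPSO improves upon can be sketched in a few lines. The following is a minimal conventional PSO (not the paper's fractional-derivative MOFPSO variant); the function name, hyperparameters, and bounds are illustrative assumptions, shown here only to make the "collective movement toward personal and global bests" mechanism concrete.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal conventional PSO sketch (illustrative, not MOFPSO)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()        # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # classic update: inertia + cognitive (pbest) + social (gbest) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Sphere is one of the ten benchmark functions used in the paper.
sphere = lambda p: float(np.sum(p ** 2))
best, best_f = pso(sphere)
```

On a low-dimensional Sphere function even this plain PSO converges near the optimum; the stagnation problems cited above appear as dimensionality and the number of local optima grow, which is where the fractional-order velocity update is claimed to help.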
A Brief Survey on Intelligent Swarm-Based Algorithms for Solving Optimization Problems
This chapter presents an overview of optimization techniques, followed by a brief survey of several swarm-based, nature-inspired algorithms introduced in the last decade. These techniques were inspired by the natural processes of plants, the foraging behaviors of insects and the social behaviors of animals. These swarm intelligence methods have been tested on various standard benchmark problems and are capable of solving a wide range of optimization problems, including stochastic, robust and dynamic problems.
Power System Stability Analysis using Neural Network
This work focuses on the design of modern power system controllers for automatic voltage regulators (AVR) and the application of machine learning (ML) algorithms to correctly classify the stability of the IEEE 14-bus system. The LQG controller exhibits the best time-domain characteristics compared to the PID and LQR controllers while the sensor and amplifier gains are varied dynamically. The IEEE 14-bus system is then modeled, and contingency scenarios are simulated in the Modelica-based Dymola environment. Application of the Monte Carlo principle with a modified Poisson probability distribution, reviewed from the literature, reduces the total number of contingencies from 1000k to 20k. The damping ratios of the contingencies are then extracted, pre-processed, and fed to ML algorithms such as logistic regression, support vector machine, decision trees, random forests, Naive Bayes, and k-nearest neighbor. Neural networks (NN) of one, two, three, five, seven, and ten hidden layers, with 25%, 50%, 75%, and 100% of the data, are considered to observe and compare prediction time, accuracy, precision, and recall. At the lower data size of 25%, the accuracy of the two-hidden-layer and single-hidden-layer networks becomes 95.70% and 97.38%, respectively. Increasing the number of hidden layers beyond two does not increase the overall score and takes a much longer prediction time, and thus can be discarded for similar analyses. Moreover, when five, seven, and ten hidden layers are used, the F1 score decreases. However, in practical scenarios where the dataset contains more features and a greater variety of classes, a higher data size is required for proper NN training. This research provides more insight into damping-ratio-based system stability prediction with traditional ML algorithms and neural networks.

Comment: Master's Thesis Dissertation