59,202 research outputs found
CSLM: Levenberg Marquardt based Back Propagation Algorithm Optimized with Cuckoo Search
Training an artificial neural network is an optimization task, since the goal of training is to find an optimal set of weights for the network. Traditional training algorithms such as back propagation have drawbacks such as getting stuck in local minima and slow convergence. This study combines the best features of two algorithms, Levenberg-Marquardt back propagation (LMBP) and Cuckoo Search (CS), to improve the convergence speed of artificial neural network (ANN) training. The proposed CSLM algorithm is trained on the XOR and OR datasets. The experimental results show that CSLM performs better than the other similar hybrid variants used in this study.
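The cuckoo-search side of such a hybrid can be sketched as follows: a minimal illustration of CS with Lévy flights optimizing the weights of a tiny 2-2-1 sigmoid network on XOR. The Levenberg-Marquardt refinement step of CSLM is omitted, and the nest count, step size, and abandon fraction are illustrative choices, not the paper's settings.

```python
# Minimal Cuckoo Search sketch over the 9 weights of a 2-2-1 XOR network.
# Hypothetical hyperparameters; the LM local-refinement step is omitted.
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

def loss(w):
    # Unpack a flat 9-parameter vector into a 2-2-1 sigmoid network.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

def levy(size, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

n_nests, dim, pa = 15, 9, 0.25
nests = rng.normal(0, 1, (n_nests, dim))
fitness = np.array([loss(w) for w in nests])
initial_best = fitness.min()

for _ in range(300):
    best = nests[fitness.argmin()]
    # New candidate solutions via Levy flights, kept only if they improve.
    for i in range(n_nests):
        cand = nests[i] + 0.1 * levy(dim) * (nests[i] - best)
        f = loss(cand)
        if f < fitness[i]:
            nests[i], fitness[i] = cand, f
    # Abandon a fraction pa of the worst nests and rebuild them randomly.
    worst = fitness.argsort()[-int(pa * n_nests):]
    nests[worst] = rng.normal(0, 1, (len(worst), dim))
    fitness[worst] = [loss(w) for w in nests[worst]]

best_loss = fitness.min()
```

Because candidates are accepted greedily and only the worst nests are abandoned, the best-so-far training error never increases; the LM step in CSLM would then polish this global-search result locally.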
The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems
The back propagation algorithm has been successfully applied to a wide range of practical problems. Because it uses a gradient descent method, it has some limitations: slow convergence and a tendency to settle into local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function, and the value of the gain in the activation function. Previous researchers demonstrated that in the feed-forward pass, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposes an algorithm that improves the current Gradient Descent Method with Adaptive Gain by also changing the momentum coefficient adaptively for each node. The influence of adaptive momentum together with adaptive gain on the learning ability of a neural network is analysed, using multilayer feed-forward neural networks. A physical interpretation of the relationship between the momentum value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm, compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain, was verified by means of simulation on three benchmark problems. The simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer dataset, 6.6 on the Mushroom problem, and a 36% improvement on the Soybean dataset. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back-propagation algorithm.
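The general idea of combining an adaptive gain with an adaptive momentum can be sketched on a single sigmoid unit. This is a minimal illustration, not the paper's per-node update rules: the gain is adapted by its own gradient, and the momentum is raised or lowered by a simple sign-agreement heuristic that is an assumption of this sketch.

```python
# Sketch: gradient descent on one sigmoid unit with an adaptive slope
# ('gain') and an adaptive momentum coefficient. Update rules here are
# illustrative heuristics, not the exact rules proposed in the abstract.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # linearly separable toy labels

w, b = np.zeros(2), 0.0
gain, momentum, lr = 1.0, 0.5, 0.5
vel_w, vel_b = np.zeros(2), 0.0
prev_gw = np.zeros(2)

def forward(w, b, gain):
    return 1 / (1 + np.exp(-gain * (X @ w + b)))   # sigmoid with slope 'gain'

initial_loss = np.mean((forward(w, b, gain) - y) ** 2)
for _ in range(200):
    out = forward(w, b, gain)
    dz = (out - y) * out * (1 - out)               # error signal per sample
    gw = gain * (X.T @ dz) / len(X)                # gradient w.r.t. weights
    gb = gain * dz.mean()                          # gradient w.r.t. bias
    gg = np.mean(dz * (X @ w + b))                 # gradient w.r.t. the gain
    # Adaptive momentum: raise it while successive gradients agree in
    # direction, shrink it when they oppose (oscillation detected).
    if gw @ prev_gw > 0:
        momentum = min(0.9, momentum * 1.05)
    else:
        momentum = max(0.1, momentum * 0.7)
    prev_gw = gw
    vel_w = momentum * vel_w - lr * gw
    vel_b = momentum * vel_b - lr * gb
    w, b = w + vel_w, b + vel_b
    gain = max(0.1, gain - lr * gg)                # gain follows its own gradient
final_loss = np.mean((forward(w, b, gain) - y) ** 2)
```

A steeper gain sharpens the activation slope and so scales every weight gradient, which is why the abstract treats gain, learning rate, and momentum as interacting quantities rather than independent knobs.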
Embedded Applications of MS-PSO-BP on Wind/Storage Power Forecasting
Higher wind power penetration has a great impact on grid operation and dispatching, so an intelligent hybrid algorithm is proposed to cope with inaccurate schedule forecasts. First, the hybrid MS-PSO-BP algorithm (Mathematical Statistics, Particle Swarm Optimization, Back Propagation neural network) is proposed to improve wind power prediction accuracy: MS is used to optimize the artificial neural network training sample, and PSO-BP (particle swarm optimization combined with a back propagation neural network) is employed for dynamic revision of the prediction error. Several intelligent algorithms (BP, RBP, PSO-BP, MS-BP, MS-RBP, MS-PSO-BP) are analysed and compared in terms of root mean square error (RMSE), mean absolute error (MAE), and convergence rate to verify the effectiveness of the proposed prediction method. Further, given the physical role of energy storage in improving the accuracy of schedule pre-fabrication, a mathematical statistical method is proposed to determine the optimal capacity of the storage batteries in power forecasting based on the historical statistical data of the wind farm. The feasibility of the algorithm is validated through simulation experiments and comparative analysis.
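The two forecast-error metrics used for the comparison are standard; computed on an illustrative pair of measured vs. predicted power series (the values below are made up for the demo):

```python
# RMSE and MAE on a hypothetical measured/predicted wind-power series (MW).
import math

measured  = [12.0, 15.5, 9.8, 20.1, 17.3]
predicted = [11.4, 16.2, 10.5, 19.0, 18.1]

errors = [m - p for m, p in zip(measured, predicted)]
mae  = sum(abs(e) for e in errors) / len(errors)            # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # root mean square error
```

RMSE weights large misses more heavily than MAE, so reporting both (plus convergence rate) separates occasional gross forecast errors from a uniformly biased forecast.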
The effect of data preprocessing on the performance of artificial neural networks techniques for classification problems
The artificial neural network (ANN) has recently been applied in many areas, such as medicine, biology, finance, economics, and engineering. It is known as an excellent classifier of nonlinear input and output numerical data. Improving the training efficiency of ANN-based algorithms is an active area of research, and numerous papers on it have been reviewed in the literature. The performance of a Multi-layer Perceptron (MLP) trained with the back-propagation artificial neural network (BP-ANN) method is highly influenced by the size of the datasets and the data-preprocessing techniques used. This work analyzes the advantages of preprocessing datasets with different techniques in order to improve ANN convergence. Specifically, the Min-Max, Z-Score, and Decimal Scaling normalization preprocessing techniques were evaluated. The simulation results showed that the computational efficiency of the ANN training process is greatly enhanced when coupled with these preprocessing techniques.
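The three normalization schemes named above are standard and can be shown on a small feature column (the values are made up for the demo):

```python
# Min-Max, Z-Score, and Decimal Scaling normalization of one feature column.
import statistics

x = [200.0, 300.0, 400.0, 600.0, 1000.0]   # illustrative raw feature values

# Min-Max: rescale linearly into [0, 1].
lo, hi = min(x), max(x)
min_max = [(v - lo) / (hi - lo) for v in x]

# Z-Score: zero mean, unit standard deviation (population std used here).
mu = statistics.mean(x)
sd = statistics.pstdev(x)
z_score = [(v - mu) / sd for v in x]

# Decimal Scaling: divide by 10**j, with j the smallest integer that brings
# every |value| strictly below 1.
j = 0
while max(abs(v) for v in x) / 10 ** j >= 1:
    j += 1
decimal = [v / 10 ** j for v in x]
```

All three keep the relative ordering of values; they differ in whether the output range is fixed ([0, 1] for Min-Max, below 1 in magnitude for Decimal Scaling) or the output distribution is standardized (Z-Score), which is what changes the conditioning of BP-ANN training.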
Predicting noise-induced hearing loss (NIHL) in TNB workers using GDAM algorithm
Noise is a form of pollution that has troubled occupational health experts for many decades due to its adverse effects on workers in industry. Noise-Induced Hearing Loss (NIHL) is one of many health hazards caused by excessive exposure to high-frequency noise emitted by machines. A number of studies have been carried out to find the significant factors involved in causing NIHL in industrial workers using Artificial Neural Networks (ANN). Despite providing useful information on hearing loss, these studies have neglected some important factors.
The traditional Back-propagation Neural Network (BPNN) is a supervised Artificial Neural Network (ANN) algorithm. It is widely used in solving many real-world problems, but BPNN suffers from slow convergence and network stagnancy. Previously, several modifications were suggested to improve the convergence rate of the Gradient Descent Back-propagation algorithm, such as careful selection of the initial weights and biases, learning rate, momentum, network topology, activation function, and 'gain' value in the activation function.
This research proposes an algorithm that improves the performance of the current Back-propagation algorithm by adaptively changing the momentum value while keeping the 'gain' parameter fixed for all nodes in the neural network. The performance of the proposed method, known as 'Gradient Descent Method with Adaptive Momentum (GDAM)', is compared with 'Gradient Descent Method with Adaptive Gain (GDM-AG)' (Nazri, 2007) and 'Gradient Descent with Simple Momentum (GDM)' by performing simulations on classification problems. The results show that GDAM is a better approach than the previous methods, with an accuracy ratio of 1.0 on classification problems such as Thyroid disease, Heart disease, Breast Cancer, Pima Indian Diabetes, Wine Quality, the Australian Credit-card approval problem, and the Mushroom problem.
The efficiency of the proposed GDAM is further verified by means of simulations on Noise-Induced Hearing Loss (NIHL) audiometric data obtained from Tenaga Nasional Berhad (TNB). The proposed GDAM shows improved prediction results for both ears and will be helpful in improving the declining health condition of industrial workers in Malaysia. At present, only a few studies have emerged that predict NIHL using ANN, and they have failed to achieve high accuracy. The achievements made by GDAM pave the way for indicating NIHL in workers before it becomes severe and cripples them for life. GDAM is also helpful in educating blue-collar employees to avoid noisy environments, and remedies against exposure to excessive noise can be taken in the future to prevent hearing damage.
Causative factors of construction and demolition waste generation in Iraq Construction Industry
The construction industry harms the environment through the waste generated during construction activities, which calls for serious measures to determine the causative factors of construction waste generation. There are limited studies on the factors causing construction and demolition (C&D) waste generation, and these limited studies only focused on the quantification of construction waste. This study set out to identify the causative factors of C&D waste generation, to determine the risk level of each causal factor, and to identify the most important minimization methods for avoiding waste generation. The study was carried out using a quantitative approach. A total of 39 factors that cause construction waste generation, identified from the literature review, were considered and then clustered into 4 groups. The questionnaire was refined by 38 construction experts (consultants, contractors, and clients) during the pilot study. The actual survey was conducted with a total of 380 questionnaires, achieving a response rate of 83.3%. Data analysis was performed using SPSS software. Ranking analysis using the mean-score approach found the five most significant causative factors to be poor site management, poor planning, lack of experience, rework, and poor controlling. The results also indicated that the majority of the identified factors have a high risk level, and that the best minimization method is environmental awareness. A structural model was developed based on the 4 groups of causative factors using the Partial Least Squares-Structural Equation Modelling (PLS-SEM) technique. The model was found to fit, with a goodness of fit of 0.658, exceeding the 0.36 threshold for a substantial fit. Based on the outcome of this study, 39 factors were relevant to the generation of construction and demolition waste in Iraq. These groups of factors should be avoided during construction works to reduce the waste generated. The findings of this study are helpful to authorities and stakeholders in formulating laws and regulations, and they provide opportunities for future researchers to conduct additional research on the factors that contribute to construction waste generation.
Customer profiling using classification approach for bank telemarketing
Telemarketing is a type of direct marketing in which a salesperson contacts customers to sell products or services over the phone. The database of prospective customers comes from a direct marketing database. It is important for the company to predict the set of customers with the highest probability of accepting the sale or offer based on their personal characteristics or behaviour during shopping. Recently, companies have started to resort to data mining approaches for customer profiling. This project focuses on helping banks increase the accuracy of their customer profiling through classification, as well as on identifying a group of customers who have a high probability of subscribing to a long-term deposit. In the experiments, three classification algorithms are used: Naïve Bayes, Random Forest, and Decision Tree. The experiments measured accuracy, precision, and recall, and showed that classification is useful for predicting customer profiles and increasing telemarketing sales.
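The three evaluation measures are computed from the confusion counts of predicted vs. actual outcomes; the labels below are an illustrative toy sample, not the project's data ("yes" meaning the customer subscribed to the long-term deposit):

```python
# Accuracy, precision, and recall from a toy prediction-vs-actual sample.
actual    = ["yes", "no", "yes", "no", "yes", "no", "no", "yes", "no", "no"]
predicted = ["yes", "no", "no",  "no", "yes", "yes", "no", "yes", "no", "no"]

tp = sum(a == "yes" and p == "yes" for a, p in zip(actual, predicted))
tn = sum(a == "no"  and p == "no"  for a, p in zip(actual, predicted))
fp = sum(a == "no"  and p == "yes" for a, p in zip(actual, predicted))
fn = sum(a == "yes" and p == "no"  for a, p in zip(actual, predicted))

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # of customers flagged "yes", how many subscribed
recall    = tp / (tp + fn)   # of real subscribers, how many were flagged
```

For telemarketing, precision controls wasted calls to unlikely subscribers while recall controls missed prospects, which is why both are reported alongside accuracy.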
Corporation robots
Nowadays, various robots are built to perform multiple tasks, and having multiple robots work together on a single task has become important. One of the key elements for multiple robots to work together is that a robot must be able to follow another robot. This project is mainly concerned with the design and construction of robots that can follow a line; it focuses on building leader and slave line-following robots. Both of these robots follow the line and carry a load. A single robot is limited in its load-handling capacity: it cannot handle a heavy load or a long load. An easier way to overcome this limitation is to have a group of mobile robots working together to accomplish an aim that no single robot can achieve alone.
- …