
    Training a Feed-forward Neural Network with Artificial Bee Colony Based Backpropagation Method

    The back-propagation algorithm is one of the most widely used techniques for training feed-forward neural networks. Nature-inspired meta-heuristic algorithms also provide derivative-free solutions to complex optimization problems. The artificial bee colony (ABC) algorithm is one such meta-heuristic, mimicking the foraging and food-source-searching behaviour of bees in a colony, and it has been applied in several domains to improve optimization outcomes. This paper proposes an improved artificial bee colony algorithm combined with back-propagation for neural network training, aimed at a faster and better convergence rate for the hybrid learning method. The results are compared with a genetic-algorithm-based back-propagation method, another hybrid procedure of the same kind. The analysis, performed on standard data sets, demonstrates the efficiency of the proposed method in terms of convergence speed and rate. Comment: 14 pages, 11 figures
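
    A minimal NumPy sketch of the ABC-over-weights idea: each food source is a candidate weight vector for a tiny 2-4-1 network scored by training MSE. The network size, toy data, and colony settings are assumptions; the onlooker phase and the back-propagation refinement of the paper's hybrid are omitted for brevity, so this is an illustration rather than the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (64, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy target for a 2-4-1 network

    def unpack(w):
        W1, b1 = w[:8].reshape(2, 4), w[8:12]
        W2, b2 = w[12:16].reshape(4, 1), w[16:17]
        return W1, b1, W2, b2

    def mse(w):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))
        return np.mean((out.ravel() - y) ** 2)

    dim, n_food, limit = 17, 20, 30
    foods = rng.uniform(-1, 1, (n_food, dim))      # each food source = one weight vector
    fit = np.array([mse(f) for f in foods])
    trials = np.zeros(n_food)

    for cycle in range(200):
        for i in range(n_food):                    # employed-bee phase (onlooker phase omitted)
            k, j = rng.integers(n_food), rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            c = mse(cand)
            if c < fit[i]:
                foods[i], fit[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        worn = trials > limit                      # scout-bee phase: abandon exhausted sources
        foods[worn] = rng.uniform(-1, 1, (int(worn.sum()), dim))
        fit[worn] = [mse(f) for f in foods[worn]]
        trials[worn] = 0

    print("best training MSE:", round(float(fit.min()), 4))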

    Automated classification of blasts in acute leukemia blood samples using HMLP network

    This paper presents a study on the classification of blasts in acute leukemia blood samples using an artificial neural network. Acute leukemia has two major forms: acute myelogenous leukemia (AML) and acute lymphocytic leukemia (ALL). Six morphological features were extracted from acute leukemia blood images and used as neural network inputs for the classification. A Hybrid Multilayer Perceptron (HMLP) neural network was used to perform the classification task and was trained with the modified RPE (MRPE) training algorithm on 1474 data samples. The HMLP network achieved 97.04% classification accuracy. The result indicates the promising capability of the HMLP neural network trained with the MRPE algorithm for classifying and distinguishing blasts in acute leukemia blood samples.
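
    A minimal NumPy sketch of an HMLP-style classifier: a one-hidden-layer perceptron with additional direct linear connections from the inputs to the output node, which is the "hybrid" part. Plain gradient descent stands in for the MRPE training algorithm, and the six morphological features and labels below are random placeholders.

    import numpy as np

    rng = np.random.default_rng(5)
    n, d, H = 300, 6, 5                      # samples, morphological features, hidden units
    X = rng.normal(size=(n, d))
    y = (X[:, :3].sum(axis=1) > 0).astype(float)   # placeholder blast / non-blast labels

    W1, b1 = rng.normal(0, 0.3, (d, H)), np.zeros(H)
    w2, w3, b2 = rng.normal(0, 0.3, H), rng.normal(0, 0.3, d), 0.0

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        z = h @ w2 + X @ w3 + b2             # hidden path + direct linear input-output path
        return h, 1 / (1 + np.exp(-z))

    for _ in range(500):                     # plain gradient descent on cross-entropy
        h, p = forward(X)
        dz = (p - y) / n
        dh = (dz[:, None] * w2) * (1 - h ** 2)
        w2 -= 0.5 * (h.T @ dz); w3 -= 0.5 * (X.T @ dz); b2 -= 0.5 * dz.sum()
        W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

    _, p = forward(X)
    print("training accuracy:", round(float(((p > 0.5) == y).mean()), 3))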

    CSLM: Levenberg Marquardt based Back Propagation Algorithm Optimized with Cuckoo Search

    Training an artificial neural network is an optimization task, since the goal is to find an optimal set of weights for the network during the training process. Traditional training algorithms such as back-propagation have drawbacks, including getting stuck in local minima and slow convergence. This study combines the best features of two algorithms, Levenberg-Marquardt back-propagation (LMBP) and Cuckoo Search (CS), to improve the convergence speed of artificial neural network (ANN) training. The proposed CSLM algorithm is trained on XOR and OR datasets. The experimental results show that CSLM performs better than the other similar hybrid variants used in this study.
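
    A minimal sketch of the two-stage CS + LM idea on the XOR task, assuming a 2-2-1 network: a pared-down cuckoo search (Gaussian random walks instead of Levy flights) explores the weight space, and SciPy's MINPACK-based Levenberg-Marquardt refines the best nest. This illustrates the combination, not the authors' CSLM implementation.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([0, 1, 1, 0], float)                     # XOR targets

    def forward(w, X):
        W1, b1 = w[:4].reshape(2, 2), w[4:6]
        W2, b2 = w[6:8], w[8]
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def residuals(w):
        # MINPACK's LM needs #residuals >= #parameters, so small ridge
        # terms on the weights are appended to the 4 prediction errors
        return np.concatenate([forward(w, X) - y, 1e-3 * w])

    # global exploration: simplified cuckoo search over 9-dimensional weight vectors
    nests = rng.normal(0, 1, (15, 9))
    cost = np.array([np.sum(residuals(n) ** 2) for n in nests])
    for _ in range(100):
        i = rng.integers(len(nests))
        cand = nests[i] + rng.normal(0, 0.5, 9)           # random-walk "cuckoo egg"
        c = np.sum(residuals(cand) ** 2)
        if c < cost[i]:
            nests[i], cost[i] = cand, c
        worst = cost.argmax()                             # abandon the worst nest
        nests[worst] = rng.normal(0, 1, 9)
        cost[worst] = np.sum(residuals(nests[worst]) ** 2)

    # local refinement: Levenberg-Marquardt starting from the best nest
    best = nests[cost.argmin()]
    fit = least_squares(residuals, best, method="lm")
    print("XOR predictions:", np.round(forward(fit.x, X), 3))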

    Evaluation of Hybrid Prediction Models for Accurate Rate of Penetration (ROP) Prediction in Drilling Operations

    The precise prediction of the rate of penetration (ROP) is of utmost importance for optimizing drilling operations, minimizing costs, and increasing efficiency. However, the complex and nonlinear nature of the drilling process poses significant challenges for accurate ROP prediction. To address this challenge, multiple hybrid prediction models were developed and their accuracy in ROP prediction compared. Three models were created: Artificial Neural Network-Genetic Algorithm (ANN-GA), Artificial Neural Network-Particle Swarm Optimization (ANN-PSO), and Support Vector Regression (SVR). These models were trained and tested on drilling data collected from surface sensors, including weight on bit (WOB), revolutions per minute (RPM), flow rate, ROP, and drilling torque, and from these parameters they were able to estimate ROP for the given drilling conditions and lithologies. Evaluating the performance of the three algorithms, the study shows that SVR outperformed the ANN-based models in accuracy and precision when predicting the target variable, consistently capturing the underlying patterns in the data. While ANN-GA performed better than ANN-PSO on the training dataset, it exhibited lower accuracy during testing, which highlights the importance of evaluating algorithm performance in both training and testing scenarios. The results also emphasize that greater complexity does not always lead to better predictions. SVR offers a promising choice for accurate and reliable predictions, but further research is needed to explore the contrasting performances and to optimize these algorithms.
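
    A minimal scikit-learn sketch of the SVR baseline on the four surface-sensor inputs the abstract lists (WOB, RPM, flow rate, torque) with ROP as the target. The data below are synthetic placeholders with arbitrary ranges and a made-up linear relation; the study itself uses field drilling records.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(42)
    n = 500
    wob = rng.uniform(5, 35, n)            # weight on bit (placeholder range)
    rpm = rng.uniform(60, 180, n)          # rotary speed
    flow = rng.uniform(300, 900, n)        # flow rate
    torque = rng.uniform(2, 12, n)         # drilling torque
    rop = 0.4 * wob + 0.1 * rpm + 0.02 * flow - 0.5 * torque + rng.normal(0, 3, n)

    X = np.column_stack([wob, rpm, flow, torque])
    X_tr, X_te, y_tr, y_te = train_test_split(X, rop, test_size=0.2, random_state=0)

    # scale the inputs, then fit an RBF-kernel support vector regressor
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X_tr, y_tr)
    print("test R2:", round(r2_score(y_te, model.predict(X_te)), 3))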

    Deep learning enhanced solar energy forecasting with AI-driven IoT

    Short-term photovoltaic (PV) energy generation forecasting models are important for stabilizing the integration of PV power into the smart grid in artificial intelligence (AI)-driven Internet of Things (IoT) modeling of smart cities. With the recent development of AI and IoT technologies, deep learning techniques can achieve more accurate energy generation forecasts for PV systems. Traditional PV forecasting methods have difficulty accounting for external feature variables such as seasonality. In this study, we propose a hybrid deep learning method that combines clustering techniques, a convolutional neural network (CNN), long short-term memory (LSTM), and an attention mechanism with a wireless sensor network to overcome these difficulties. The proposed method is divided into three stages: clustering, training, and forecasting. In the clustering stage, correlation analysis and self-organizing maps are employed to select the most relevant factors in the historical data. In the training stage, a CNN, an LSTM network, and an attention mechanism are combined into a hybrid deep learning model that performs the forecasting task. In the forecasting stage, the most appropriate trained model is selected based on the month of the test data. The experimental results showed significantly higher prediction accuracy for all time intervals compared with existing methods, including traditional artificial neural networks, LSTM networks, and an algorithm combining an LSTM network with an attention mechanism.
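
    A minimal tf.keras sketch of the CNN + LSTM + attention forecaster described in the training stage. The 24-step input window, the four input features, the layer sizes, and the single-step-ahead target are illustrative assumptions, and the clustering and month-based model-selection stages are omitted.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    steps, features = 24, 4                     # e.g. past power, irradiance, temperature, humidity
    inp = layers.Input(shape=(steps, features))
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
    x = layers.LSTM(64, return_sequences=True)(x)
    att = layers.Attention()([x, x])            # self-attention over the LSTM outputs
    x = layers.GlobalAveragePooling1D()(att)
    out = layers.Dense(1)(x)                    # next-interval PV output

    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")

    # synthetic placeholder sequences, just to show the training and prediction calls
    X = np.random.rand(256, steps, features).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(X[:1], verbose=0).shape)    # (1, 1)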

    An adaptive neuro-fuzzy propagation model for LoRaWAN

    This article proposes an adaptive-network-based fuzzy inference system (ANFIS) model for accurate estimation of signal propagation in LoRaWAN. By using ANFIS, basic knowledge of propagation is embedded into the proposed model, which reduces the training complexity of artificial neural network (ANN)-based models; as a result, the size of the training dataset is reduced by 70% compared to an ANN model. The proposed model consists of an efficient clustering method to identify the optimum number of fuzzy nodes and avoid overfitting, and a hybrid training algorithm to train and optimize the ANFIS parameters. Finally, the proposed model is benchmarked against extensive practical data, where it achieves superior accuracy compared to deterministic models and better generalization compared to ANN models. The proposed model outperforms the nondeterministic models in accuracy, has the flexibility to account for new modeling parameters, is easier to use because it does not require a model of the propagation environment, is resistant to data-collection inaccuracies and uncertain environmental information, has excellent generalization capability, and features a knowledge-based implementation that alleviates the training process. This work will facilitate network planning and propagation prediction in complex scenarios.
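
    A minimal sketch of the kind of first-order Sugeno fuzzy inference pass that ANFIS implements: Gaussian memberships, product firing strengths, normalized rule weights, and linear consequents. The two inputs (say, distance and antenna height), the membership parameters, and the consequent coefficients are arbitrary placeholders; the clustering and hybrid training described above are not shown.

    import numpy as np

    def gauss(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def anfis_forward(x1, x2, mf1, mf2, conseq):
        # layers 1-2: membership degrees and rule firing strengths (product T-norm)
        m1 = [gauss(x1, c, s) for c, s in mf1]
        m2 = [gauss(x2, c, s) for c, s in mf2]
        w = np.array([a * b for a in m1 for b in m2])        # 4 rules
        # layers 3-5: normalize, apply linear consequents, aggregate
        wn = w / w.sum()
        f = np.array([p * x1 + q * x2 + r for p, q, r in conseq])
        return float(wn @ f)

    mf1 = [(100.0, 80.0), (500.0, 200.0)]        # centres/sigmas for input 1 (placeholder)
    mf2 = [(2.0, 1.0), (10.0, 4.0)]              # centres/sigmas for input 2 (placeholder)
    conseq = [(-0.05, -1.0, -40.0), (-0.08, -0.8, -50.0),
              (-0.06, -1.2, -45.0), (-0.09, -0.9, -60.0)]    # per-rule linear parameters
    print("ANFIS output for (250, 5):", round(anfis_forward(250.0, 5.0, mf1, mf2, conseq), 2))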

    A hybrid ANN-GA model to prediction of bivariate binary responses: Application to joint prediction of occurrence of heart block and death in patients with myocardial infarction

    Background: In medical studies, when the joint occurrence of two events must be predicted, a statistical bivariate model is used. Because of the limitations of the usual statistical models, other methods such as Artificial Neural Networks (ANN) and hybrid models can be used. In this paper, we propose a hybrid Artificial Neural Network-Genetic Algorithm (ANN-GA) model to predict the occurrence of heart block and death in myocardial infarction (MI) patients simultaneously. Methods: For fitting and comparing the models, 263 new patients with a definite diagnosis of MI hospitalized in the Cardiology Ward of Hajar Hospital, Shahrekord, Iran, from March 2014 to March 2016 were enrolled. Occurrence of heart block and death were used as the bivariate binary outcomes. Bivariate Logistic Regression (BLR), ANN, and hybrid ANN-GA models were fitted to the data, and prediction accuracy was used to compare them. The code was written in Matlab 2013a and the Zelig package in R 3.2.2. Results: The prediction accuracy of the BLR, ANN, and hybrid ANN-GA models was 77.7%, 83.69%, and 93.85% on the training data and 78.48%, 84.81%, and 96.2% on the test data, respectively. On both the training and test sets, the hybrid ANN-GA model had the best accuracy. Conclusions: An ANN model can be a suitable alternative for modeling and predicting bivariate binary responses when the presuppositions of statistical models are not met by the actual data. In addition, optimization methods such as the hybrid ANN-GA model can improve the precision of the ANN model.
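
    A minimal NumPy sketch of the GA-trains-the-network idea for two binary outcomes (one sigmoid output per event). The placeholder data, network size, and GA settings (truncation selection, arithmetic crossover, Gaussian mutation, cross-entropy fitness) are assumptions for illustration, not the study's configuration.

    import numpy as np

    rng = np.random.default_rng(7)
    n, d = 200, 6                              # placeholder patients and predictors
    X = rng.normal(size=(n, d))
    Y = (X @ rng.normal(size=(d, 2)) + rng.normal(0, 0.5, (n, 2)) > 0).astype(float)

    H = 8                                      # hidden units
    dim = d * H + H + H * 2 + 2                # length of the flattened weight vector

    def predict(w, X):
        W1 = w[:d * H].reshape(d, H); b1 = w[d * H:d * H + H]
        W2 = w[d * H + H:d * H + H + H * 2].reshape(H, 2); b2 = w[-2:]
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def fitness(w):                            # lower is better: mean binary cross-entropy
        p = np.clip(predict(w, X), 1e-7, 1 - 1e-7)
        return -np.mean(Y * np.log(p) + (1 - Y) * np.log(1 - p))

    pop = rng.normal(0, 0.5, (40, dim))
    for gen in range(150):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[scores.argsort()[:20]]   # truncation selection
        kids = []
        for _ in range(20):
            a, b = parents[rng.integers(20, size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b                          # arithmetic crossover
            child += rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.1)   # sparse Gaussian mutation
            kids.append(child)
        pop = np.vstack([parents, kids])

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    acc = ((predict(best, X) > 0.5) == Y).mean()
    print("element-wise training accuracy over both outcomes:", round(float(acc), 3))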

    A new neural network training algorithm based on artificial bee colony algorithm for nonlinear system identification

    Artificial neural networks (ANNs), one of the most important artificial intelligence techniques, are used extensively to model many types of problems. A successful training process is required to create effective ANN models, and an effective training algorithm is essential to that process. In this study, a new neural network training algorithm, the hybrid artificial bee colony algorithm based on an effective scout bee stage (HABCES), is proposed. The HABCES algorithm includes four fundamental changes. Arithmetic crossover is used in the solution generation mechanisms of the employed bee and onlooker bee stages, so that knowledge of the global best solution is exploited; this solution generation mechanism also has an adaptive step size. The limit is an important control parameter: in the standard ABC algorithm it is constant throughout the optimization, whereas in the HABCES algorithm it is determined dynamically depending on the number of generations. Finally, unlike the standard ABC algorithm, the HABCES algorithm uses a solution generation mechanism based on the global best solution in the scout bee stage. Through these features, the HABCES algorithm has strong local and global convergence ability. The performance of the HABCES algorithm was first analyzed on global optimization problems; applications to ANN training were then carried out, with ANNs trained using HABCES for the identification of nonlinear static and dynamic systems. The performance of the HABCES algorithm was compared with the standard ABC, aABC, and ABCES algorithms, and the results showed that HABCES was better in terms of solution quality and convergence speed. A performance increase of up to 69.57% was achieved by using the HABCES algorithm in the identification of static systems; for dynamic systems the figure is 46.82%.
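
    A hypothetical rendering of two of the HABCES ingredients described above: a candidate solution built by arithmetic crossover with the global best plus an adaptive step, and a limit that changes with the generation counter. The abstract does not give the exact formulas, so both expressions below are illustrative guesses rather than the published update rules.

    import numpy as np

    rng = np.random.default_rng(3)

    def candidate(x_i, x_k, gbest, gen, max_gen):
        alpha = rng.random()                           # arithmetic-crossover weight
        step = 1.0 - gen / max_gen                     # adaptive step size (assumed linear decay)
        base = alpha * x_i + (1 - alpha) * gbest       # pull the candidate toward the global best
        return base + step * rng.uniform(-1, 1, x_i.size) * (x_i - x_k)

    def dynamic_limit(gen, max_gen, dim, n_food):
        # assumed schedule: start from the classic dim * n_food value and tighten over time
        return int(dim * n_food * (1.0 - 0.5 * gen / max_gen))

    x_i, x_k, gbest = rng.normal(size=(3, 10))
    print(candidate(x_i, x_k, gbest, gen=50, max_gen=200).shape, dynamic_limit(50, 200, 10, 20))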

    Evolutionary optimization of neural networks with heterogeneous computation: study and implementation

    In the optimization of artificial neural networks (ANNs) via evolutionary algorithms, and in the implementation of the training required by the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions on general-purpose processors tend to be slow because they do not exploit the inherent parallelism, whereas hardware realizations usually rely on optimizations that reduce the range of applicable network topologies or attempt to increase processing efficiency through low-precision data representation. This paper presents, first, a study that shows the need for a heterogeneous platform (CPU-GPU-FPGA) to accelerate the optimization of ANNs using genetic algorithms and, second, an implementation of a platform based on embedded systems with hardware accelerators implemented in a Field Programmable Gate Array (FPGA). Implementing the individuals on a remote low-cost Altera FPGA allowed a 3x-4x acceleration compared with a 2.83 GHz Intel Xeon Quad-Core and 6x-7x compared with a 2.2 GHz AMD Opteron Quad-Core 2354.
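
    A small Python sketch of the dispatch pattern the paper studies: a genetic algorithm evolves candidate networks while the expensive fitness evaluation (training each individual) is farmed out concurrently, here to a thread pool standing in for the remote FPGA boards. The toy fitness, individual encoding, and pool size are assumptions for illustration only.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    rng = np.random.default_rng(11)

    def evaluate_on_accelerator(weights):
        # placeholder for "train this individual's network remotely and return its error"
        return float(np.sum((weights - 0.3) ** 2))

    pop = rng.uniform(-1, 1, (16, 12))                    # 16 individuals, 12 genes each
    with ThreadPoolExecutor(max_workers=4) as pool:       # 4 "boards" evaluated in parallel
        for gen in range(30):
            fitness = np.fromiter(pool.map(evaluate_on_accelerator, pop), float)
            parents = pop[fitness.argsort()[:8]]          # keep the best half
            alpha = rng.random((8, 1))
            children = alpha * parents + (1 - alpha) * parents[rng.permutation(8)]
            children += rng.normal(0, 0.05, children.shape)   # Gaussian mutation
            pop = np.vstack([parents, children])

    print("best error:", evaluate_on_accelerator(pop[0]))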