5 research outputs found

    Automated-tuned hyper-parameter deep neural network by using arithmetic optimization algorithm for Lorenz chaotic system

    Deep neural networks (DNNs) are highly dependent on their parameterization, and experts are normally required to choose a tuning method and set the hyper-parameter values. This study proposes an automated hyper-parameter tuning approach for DNNs using a metaheuristic optimization algorithm, the arithmetic optimization algorithm (AOA). AOA exploits the distribution properties of the primary arithmetic operators: multiplication, division, addition, and subtraction. AOA is mathematically modeled and implemented to optimize processes across a broad range of search spaces; its performance has been evaluated against 29 benchmark functions, and several real-world engineering design problems demonstrate its applicability. The hyper-parameter tuning framework consists of a set of Lorenz chaotic system datasets, a hybrid DNN architecture, and the AOA, which runs automatically. As a result, AOA produced the highest accuracy on the test dataset with a combination of optimized hyper-parameters for the DNN architecture. Boxplot analysis also showed that ten AOA particles yielded the most accurate choices: AOA with ten particles had the smallest boxplot spread for all hyper-parameters, indicating the best solution. In particular, the proposed system outperformed the same architecture tuned with particle swarm optimization.
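The abstract above describes AOA's use of the four arithmetic operators to balance exploration (division, multiplication) and exploitation (subtraction, addition). A minimal sketch of that search loop, written from the standard AOA formulation (the accelerator MOA and probability MOP schedules, and the per-operator update rules) and shown here minimizing a simple sphere function rather than tuning DNN hyper-parameters — all function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def aoa_minimise(obj, lb, ub, dim, n_particles=10, max_iter=200,
                 alpha=5.0, mu=0.499, moa_min=0.2, moa_max=1.0, seed=0):
    """Sketch of the Arithmetic Optimization Algorithm (AOA)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_particles, dim))   # candidate solutions
    fitness = np.apply_along_axis(obj, 1, X)
    best = X[fitness.argmin()].copy()
    best_f = float(fitness.min())
    eps = np.finfo(float).eps
    for t in range(1, max_iter + 1):
        moa = moa_min + t * (moa_max - moa_min) / max_iter  # accelerator: shifts toward exploitation
        mop = 1.0 - (t / max_iter) ** (1.0 / alpha)         # probability: shrinks step sizes
        for i in range(n_particles):
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                scale = mu * (ub - lb) + lb
                if r1 > moa:   # exploration phase: division or multiplication
                    X[i, j] = (best[j] / (mop + eps) * scale if r2 < 0.5
                               else best[j] * mop * scale)
                else:          # exploitation phase: subtraction or addition
                    X[i, j] = (best[j] - mop * scale if r3 < 0.5
                               else best[j] + mop * scale)
            X[i] = np.clip(X[i], lb, ub)                    # keep within bounds
            f = obj(X[i])
            if f < best_f:
                best_f, best = float(f), X[i].copy()
    return best, best_f

# Usage: minimise the sphere function in 3 dimensions.
best, best_f = aoa_minimise(lambda x: float(np.sum(x * x)),
                            lb=-10.0, ub=10.0, dim=3)
```

In hyper-parameter tuning, `obj` would instead train the DNN with the decoded hyper-parameters and return the validation error.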

    A new hybrid deep neural networks (DNN) algorithm for Lorenz chaotic system parameter estimation in image encryption

    One of the greatest discoveries of the 20th century was the chaotic phenomenon, which has remained a popular area of study to this day. The Lorenz attractor is a mathematical model that describes a chaotic system. It is a solution to a set of differential equations known as the Lorenz equations, originally introduced by Edward N. Lorenz. Hybridizing a Deep Neural Network (DNN) with the K-Means clustering algorithm can increase accuracy and reduce the data complexity of the Lorenz dataset. The hyperparameters of the DNN must then be tuned to find the best setting for a given problem, and it becomes crucial to evaluate them to verify whether the model can accurately categorize the data. Furthermore, conventional encryption methods such as the Data Encryption Standard (DES) are poorly suited to image data because of its high redundancy and large volume. The first research objective is to develop a new deep learning algorithm by hybridizing DNN and K-Means clustering for estimating the Lorenz chaotic system. This study then aims to optimize the hyperparameters of the developed DNN model using the Arithmetic Optimization Algorithm (AOA) and, lastly, to evaluate the performance of the newly proposed deep learning model with the Simulated Kalman Filter (SKF) algorithm in solving an image encryption application. This work uses a Lorenz dataset from Professor Roberto Barrio of the University of Zaragoza in Spain and focuses on multi-class classification. The dataset was split into training, testing, and validation datasets, comprising 70%, 15%, and 15% of the total. The research starts with developing the hybrid deep learning model consisting of a DNN and the K-Means clustering algorithm. The developed algorithm is then implemented to estimate the parameters of the Lorenz system. In addition, the hyperparameter tuning problem is considered in this research to improve the developed hybrid model by using the AOA algorithm. Lastly, a new hybrid technique is proposed to tackle the image encryption application by using the estimated parameters of the chaotic system with an optimization algorithm, the SKF algorithm. The fitness function used in the SKF algorithm is the correlation function, which optimizes the cipher image produced using the Lorenz system. The thesis then discusses the findings of this study. As for accuracy, the developed model obtained 72.27% compared to 66.47% for the baseline model. Besides, the baseline model's loss value is 0.3661, while the developed model's is 0.1712, lower than the standalone model. Hence, the clustering algorithm performed well in enhancing the accuracy of the model, as stated in the first objective. The combination of the first two objectives obtained an R2 value of 0.8054 and a ρ value of 0.9912, both higher than the standalone DNN model. For the hybrid model, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) values are 0.1964 and 0.0045, respectively; both error values are lower than the baseline model's, 0.2913 and 0.1976. The findings showed that the hybrid approach improved the model's effectiveness and could predict the outcome accurately. This study also presents a detailed analysis of the developed combined image encryption, including the statistical, security, and robustness analyses related to the third objective. Comparisons between seven image encryption schemes are discussed at the end of the subtopic. Based on the cropping attack's findings, the proposed technique obtained higher Peak Signal-to-Noise Ratio (PSNR) values for two conditions, the 1/16 and 1/4 cropping ratios, while Zhou et al. achieved a higher PSNR value for the 1/2 cropping ratio only. In conclusion, the hybrid DNN with the K-Means clustering algorithm is proven to resolve parameter estimation of the chaotic system by producing an accurate prediction model.
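The pipeline described above starts by generating Lorenz-system data and using K-Means clustering to reduce its complexity before classification. A minimal sketch of those first two steps, assuming simple Euler integration of the standard Lorenz equations and a plain NumPy K-Means (the thesis's actual dataset, integrator, and cluster count may differ):

```python
import numpy as np

def lorenz_trajectory(sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                      x0=(1.0, 1.0, 1.0), dt=0.01, steps=2000):
    """Integrate the Lorenz equations with simple Euler steps."""
    traj = np.empty((steps, 3))
    x, y, z = x0
    for i in range(steps):
        dx = sigma * (y - x)        # dx/dt = sigma (y - x)
        dy = x * (rho - z) - y      # dy/dt = x (rho - z) - y
        dz = x * y - beta * z       # dz/dt = x y - beta z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

def kmeans_labels(X, k=2, iters=50, seed=0):
    """Plain NumPy K-Means; returns a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid, then recompute centroids
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

traj = lorenz_trajectory()
labels = kmeans_labels(traj, k=2)   # e.g. separating regions of the attractor
```

The cluster labels (or cluster-relative features) would then be fed alongside the raw states into the DNN, which is the hybridization step the abstract credits for the accuracy gain.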

    KungFu: Making Training in Distributed Machine Learning Adaptive

    When using distributed machine learning (ML) systems to train models on a cluster of worker machines, users must configure a large number of parameters: hyper-parameters (e.g. the batch size and the learning rate) affect model convergence; system parameters (e.g. the number of workers and their communication topology) impact training performance. In current systems, adapting such parameters during training is ill-supported: users must set system parameters at deployment time and provide fixed adaptation schedules for hyper-parameters in the training program. We describe KungFu, a distributed ML library for TensorFlow that is designed to enable adaptive training. KungFu allows users to express high-level Adaptation Policies (APs) that describe how to change hyper- and system parameters during training. APs take real-time monitored metrics (e.g. signal-to-noise ratios and noise scale) as input and trigger control actions (e.g. cluster rescaling or synchronisation strategy updates). For execution, APs are translated into monitoring and control operators, which are embedded in the dataflow graph. APs exploit an efficient asynchronous collective communication layer, which ensures concurrency and consistency of monitoring and adaptation operations.
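The core idea of an Adaptation Policy — monitor a metric each step and trigger a control action when a condition holds — can be sketched in plain Python. This is a conceptual illustration only, not the real KungFu API; the class, threshold, and callback names are all hypothetical:

```python
import random

class AdaptationPolicy:
    """Conceptual sketch of an adaptation policy (hypothetical, not the
    KungFu API): watch a monitored metric and fire a control action
    whenever it crosses a threshold."""

    def __init__(self, noise_threshold, rescale):
        self.noise_threshold = noise_threshold
        self.rescale = rescale  # control action, e.g. change batch size or cluster size

    def on_step(self, step, noise_scale):
        # In a real system the metric would come from monitoring operators
        # embedded in the dataflow graph.
        if noise_scale > self.noise_threshold:
            self.rescale(step, noise_scale)

actions = []
policy = AdaptationPolicy(
    noise_threshold=0.8,
    rescale=lambda step, ns: actions.append((step, ns)))

random.seed(0)
for step in range(100):
    policy.on_step(step, noise_scale=random.random())  # stand-in for a real metric
```

In KungFu, the equivalent policy logic is compiled into monitoring and control operators inside the dataflow graph rather than run as a Python loop, which is what lets adaptation happen without stopping training.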