207 research outputs found

    Meta-Heuristic Optimization Methods for Quaternion-Valued Neural Networks

    In recent years, real-valued neural networks have demonstrated promising, and often striking, results across a broad range of domains. This has driven a surge of applications utilizing high-dimensional datasets. While many techniques exist to alleviate issues of high dimensionality, they all incur a cost in terms of network size or computational runtime. This work examines the use of quaternions, a form of hypercomplex number, in neural networks. The constructed networks demonstrate the ability of quaternions to encode high-dimensional data in an efficient neural network structure, showing that hypercomplex neural networks reduce the number of total trainable parameters compared to their real-valued equivalents. Finally, this work introduces a novel training algorithm using a meta-heuristic approach that bypasses the need for analytic quaternion loss or activation functions. This algorithm allows for a broader range of activation functions than current quaternion networks and presents a proof of concept for future work.
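
    The parameter saving described above comes from the weight sharing of the Hamilton product: a quaternion dense layer mapping 4*n_in real inputs to 4*n_out real outputs needs only 4*n_in*n_out real weights, versus 16*n_in*n_out for a real-valued layer of the same shape. The NumPy sketch below is a minimal illustration of that counting, not the authors' implementation; the function and variable names are placeholders.

```python
import numpy as np

def quaternion_dense(x, w):
    """Quaternion 'dense' layer via the Hamilton product.

    x: (4, n_in)        -- real, i, j, k components of n_in input quaternions
    w: (4, n_in, n_out) -- components of the quaternion weight matrix
    """
    xr, xi, xj, xk = x
    wr, wi, wj, wk = w
    # Hamilton product of inputs and weights, summed over the n_in axis
    yr = xr @ wr - xi @ wi - xj @ wj - xk @ wk
    yi = xr @ wi + xi @ wr + xj @ wk - xk @ wj
    yj = xr @ wj - xi @ wk + xj @ wr + xk @ wi
    yk = xr @ wk + xi @ wj - xj @ wi + xk @ wr
    return np.stack([yr, yi, yj, yk])        # shape (4, n_out)

n_in, n_out = 8, 16
y = quaternion_dense(np.random.randn(4, n_in), np.random.randn(4, n_in, n_out))
print(y.shape)                               # (4, 16)
print("quaternion weights:", 4 * n_in * n_out,                       # 512
      "equivalent real-valued weights:", (4 * n_in) * (4 * n_out))   # 2048
```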

    Multilayer perceptron network optimization for chaotic time series modeling

    Chaotic time series are widely present in practice, but due to characteristics such as internal randomness, nonlinearity, and long-term unpredictability, it is difficult to achieve high-precision intermediate- or long-term predictions. Multi-layer perceptron (MLP) networks are an effective tool for chaotic time series modeling. Focusing on chaotic time series modeling, this paper presents a method for approximating the generalized degrees of freedom of an MLP. We then obtain its Akaike information criterion (AIC), which is used as the loss function for training, thereby developing an overall framework for chaotic time series analysis that includes phase space reconstruction, model training, and model selection. To verify the effectiveness of the proposed method, it is applied to two artificial chaotic time series and two real-world chaotic time series. The numerical results show that the proposed method is effective in selecting the best model from a group of candidates. Moreover, the optimized models perform very well in multi-step prediction tasks. This research was funded in part by NSFC grant numbers 61972174 and 62272192, the Science-Technology Development Plan Project of Jilin Province grant number 20210201080GX, the Jilin Province Development and Reform Commission grant number 2021C044-1, the Guangdong Universities’ Innovation Team grant number 2021KCXTD015, and Key Disciplines Projects grant number 2021ZDJS138.
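
    A minimal sketch of the selection loop described above, assuming a simple delay embedding for phase space reconstruction and using the raw trainable-parameter count as a stand-in for the paper's generalized-degrees-of-freedom approximation; the toy series, candidate sizes, and function names are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def delay_embed(series, dim, tau):
    """Rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]; target is the next value."""
    n = len(series) - (dim - 1) * tau
    X = np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])
    return X[:-1], series[(dim - 1) * tau + 1:]

def aic(rss, n, k):
    """Akaike information criterion for Gaussian residuals: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

series = np.sin(0.3 * np.arange(2000)) + 0.05 * np.random.randn(2000)  # toy series
X, y = delay_embed(series, dim=4, tau=2)

best = None
for hidden in [4, 8, 16, 32]:                       # candidate model sizes
    mlp = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000).fit(X, y)
    rss = np.sum((mlp.predict(X) - y) ** 2)
    k = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
    score = aic(rss, len(y), k)
    if best is None or score < best[0]:
        best = (score, hidden)
print("selected hidden units:", best[1])
```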

    Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction

    Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms to solve them. Cooperative coevolution has been effective for evolving neural networks. Different problem decomposition methods in cooperative coevolution determine how a neural network is decomposed and encoded, which affects its performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which in a neural network are the synapses. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz, and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improvement in accuracy when compared to some of the methods from the literature.
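
    As a concrete illustration of problem decomposition, the sketch below groups the weights of a small Elman-style network at the neuron level, one subcomponent per hidden or output neuron; this grouping and the network sizes are assumptions for illustration, not the paper's exact decomposition methods.

```python
import numpy as np

n_in, n_hidden, n_out = 3, 5, 1

def decompose(n_in, n_hidden, n_out):
    """Neuron-level decomposition: index groups over one flat weight vector."""
    groups, idx = [], 0
    for _ in range(n_hidden):                # hidden neuron: inputs + recurrent context + bias
        size = n_in + n_hidden + 1
        groups.append(np.arange(idx, idx + size)); idx += size
    for _ in range(n_out):                   # output neuron: hidden inputs + bias
        size = n_hidden + 1
        groups.append(np.arange(idx, idx + size)); idx += size
    return groups, idx                       # idx == total number of weights

def assemble(groups, members, total):
    """Rebuild a full weight vector from one chosen member per subcomponent,
    as cooperative evaluation would do with the best members of other groups."""
    w = np.empty(total)
    for g, m in zip(groups, members):
        w[g] = m
    return w

groups, total = decompose(n_in, n_hidden, n_out)
print(len(groups), "subcomponents,", total, "weights in total")
w = assemble(groups, [np.random.randn(len(g)) for g in groups], total)
print(w.shape)
```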

    Parameter Prediction for Lorenz Attractor by using Deep Neural Network

    Nowadays, most modern deep learning models are based on artificial neural networks. This research uses a deep neural network to learn from a high-precision database of the strange Lorenz attractor. The Lorenz system is one of the simplest chaotic systems; it is nonlinear and characterized by unstable dynamic behavior. The research aims to predict whether or not a given set of parameters produces a strange Lorenz attractor. The primary method implemented in this paper is a deep neural network built with the Python Keras library. Different numbers of hidden layers are used to compare the prediction accuracy of the system. A set of data is used as the input of the neural network, and the prediction accuracy is measured at the output. The testing results show that 100% correct prediction is achieved on the training data, while only 60% correct prediction is achieved on new random data.
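
    A rough sketch of this setup, assuming the classic Lorenz parameters (sigma = 10, beta = 8/3, rho = 28) define the "strange attractor" class and that flattened trajectories are fed to the network; the labeling rule, layer sizes, and training details are placeholders rather than the paper's configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from tensorflow import keras

def lorenz_trajectory(rho, sigma=10.0, beta=8.0 / 3.0, n=200):
    """Integrate the Lorenz system and flatten the (x, y, z) samples into one vector."""
    f = lambda t, s: [sigma * (s[1] - s[0]),
                      s[0] * (rho - s[2]) - s[1],
                      s[0] * s[1] - beta * s[2]]
    sol = solve_ivp(f, (0, 20), [1.0, 1.0, 1.0], t_eval=np.linspace(0, 20, n))
    return sol.y.T.ravel()

rhos = np.random.uniform(0, 50, 400)
X = np.stack([lorenz_trajectory(r) for r in rhos])
y = (np.abs(rhos - 28.0) < 1.0).astype(float)     # assumed binary "chaotic parameter" label

model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),    # vary the number of hidden layers to compare accuracy
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)
```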

    Trajectory prediction of moving objects by means of neural networks

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 1997. Includes bibliographical references (leaves 103-105). Text in English; abstract in Turkish and English. viii, 105 leaves.
    Estimating the three-dimensional motion of an object from a sequence of object positions and orientations is of significant importance in a variety of applications in control and robotics. For instance, autonomous navigation, manipulation, servoing, tracking, planning, and surveillance require prediction of motion parameters. Although "motion estimation" is an old problem (the formulations date back to the beginning of the century), only recently have scientists been provided with tools from nonlinear system estimation theory to solve it. Neural networks are among those that have recently been used in many nonlinear dynamic system parameter estimation contexts. The approximating ability of the neural network is used to identify the relation between system variables and parameters of a dynamic system. The position, velocity, and acceleration of the object are estimated by several neural networks using the II most recent measurements of the object coordinates as input to the system. Several neural network topologies with different configurations are introduced and utilized in the solution of the problem. Training schemes for each configuration are given in detail. Simulation results for prediction of motion having different characteristics via different architectures with alternative configurations are presented comparatively.
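
    To make the input-output arrangement concrete, the sketch below feeds a sliding window of recent 3-D positions to a small feed-forward network that predicts the next position, with velocity recovered by finite differences; the window length, network shape, and toy trajectory are assumptions, not the thesis's configurations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def windows(traj, w):
    """traj: (T, 3) positions -> inputs of w stacked positions, targets = next position."""
    X = np.stack([traj[i : i + w].ravel() for i in range(len(traj) - w)])
    return X, traj[w:]

t = np.linspace(0, 10, 500)[:, None]
traj = np.hstack([np.cos(t), np.sin(t), 0.5 * t])   # toy helical motion
X, y = windows(traj, w=5)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)
pred = net.predict(X[-1:])[0]                       # predicted next position
dt = t[1, 0] - t[0, 0]
vel = (pred - traj[-1]) / dt                        # finite-difference velocity estimate
print("next position:", pred, "velocity estimate:", vel)
```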