
    SOLVING HEAT EQUATION IN ONE DIMENSION USING JORDAN RECURRENT NEURAL NETWORK

    A methodology is presented for solving boundary value problems, namely the one-dimensional heat equation, using a Jordan artificial neural network, and simulation results are reported. The network's cost function is built on the Crank–Nicolson method. The architecture has an almost classical recurrent structure in which the nodes of the output layer feed back into a former layer; the difference is an additional hidden layer, introduced to improve the convergence rate of training and to make the output of the previous time step available. The recurrent network is trained with an extension of the standard back-propagation algorithm known as back-propagation through time, derived by unfolding the temporal operation of the network into a layered feedforward network whose topology grows by one layer at every time step. In effect, this algorithm applies the chain rule of differentiation, propagating the derivative of the cost function with respect to neuron outputs and weights backwards through the whole network. The purpose of training is to minimize the residual of the original equation that describes the problem.
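The Crank–Nicolson scheme that underlies the network's cost function can be sketched directly. Below is a minimal NumPy implementation of one Crank–Nicolson time step for the 1D heat equation u_t = α u_xx with fixed Dirichlet boundaries; the grid size and the value of r are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def crank_nicolson_step(u, r):
    """Advance the 1D heat equation one time step with Crank-Nicolson.

    u holds boundary + interior nodes; r = alpha * dt / dx**2.
    Dirichlet boundary values u[0], u[-1] are kept fixed.
    Interior scheme: (I + r/2 L) u_new = (I - r/2 L) u_old.
    """
    n = len(u) - 2                                   # interior nodes
    # Tridiagonal left-hand-side matrix for the implicit half of the step
    A = (np.diag((1 + r) * np.ones(n))
         + np.diag(-r / 2 * np.ones(n - 1), 1)
         + np.diag(-r / 2 * np.ones(n - 1), -1))
    # Explicit half of the step applied to the old solution
    b = r / 2 * u[:-2] + (1 - r) * u[1:-1] + r / 2 * u[2:]
    b[0] += r / 2 * u[0]                             # fold fixed boundaries
    b[-1] += r / 2 * u[-1]                           # into the right-hand side
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, b)
    return u_new

# usage: diffuse an initial temperature spike one step
u = np.zeros(11)
u[5] = 1.0
u = crank_nicolson_step(u, r=0.5)
```

In the paper's setting, the residual of this implicit system, rather than the direct solve, is what the Jordan network's training minimizes.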

    Design and performance analysis of a fuzzy controller for an automatic voltage regulation system

    This thesis addresses voltage control during the integration of Renewable Energy Generation (REG) systems, specifically wind turbines and solar panels (PV). Control of REG units is required when they are connected to distribution networks (low voltage, LV) or operate in isolated grids, in order to maintain the stability and quality of the energy supplied to consumers. The purpose of integrating these REG units is to supply the power that the main installation cannot provide. Too high a power penetration causes overvoltage problems, depending on the network's load situation, and can lead to violations of the limits permitted by the electrical code. First, the two REG types, wind and PV, are studied along with their control schemes for active and reactive power flow. Then a new self-adaptive regulation method is developed, based on coupling a setpoint-adaptation block with the P/Q and P/V control modes; the design of the adaptation block is based on fuzzy logic. Four scenarios are studied and simulated: •two wind turbines connected to the LV network at two different locations; •two solar panels connected to the LV network at two different locations; •both REG types combined, connected to the distribution network; •both REG types combined, connected to an autonomous network. Numerous simulations are presented and clearly demonstrate the effectiveness of the controller.
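The fuzzy setpoint-adaptation idea can be illustrated with a tiny Mamdani-style rule base that maps a voltage deviation to a reactive-power correction. The membership breakpoints and singleton outputs below are illustrative assumptions, not the thesis's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_q_adjust(dv):
    """Map a voltage deviation dv (in p.u.) to a normalized reactive-power
    correction via three rules: voltage low -> inject Q (+1),
    voltage ok -> hold (0), voltage high -> absorb Q (-1)."""
    mu_low  = tri(dv, -0.10, -0.05, 0.00)
    mu_ok   = tri(dv, -0.05,  0.00, 0.05)
    mu_high = tri(dv,  0.00,  0.05, 0.10)
    # Weighted-average (centroid over singleton outputs) defuzzification
    num = mu_low * 1.0 + mu_ok * 0.0 + mu_high * (-1.0)
    den = mu_low + mu_ok + mu_high
    return num / den if den else 0.0
```

A setpoint-adaptation block of this shape would add the scaled correction to the P/Q or P/V reference before it reaches the inner control loop.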

    Connectionist Learning Based Numerical Solution of Differential Equations

    It is well known that differential equations are the backbone of many physical systems. Many real-world problems in science and engineering can be modeled by ordinary or partial differential equations. When such problems cannot be solved by exact/analytical methods, they may be solved by approximate methods such as Euler, Runge-Kutta, predictor-corrector, finite difference, finite element, boundary element and other numerical techniques. Although these methods provide good approximations to the solution, they require a discretization of the domain via meshing, which may be challenging in problems of two or more dimensions. These procedures provide solutions only at pre-defined points, and their computational complexity increases with the number of sampling points. In recent decades, various machine intelligence methods, in particular connectionist learning or Artificial Neural Network (ANN) models, have been used to solve a variety of real-world problems because of their excellent learning capacity. Recently, much attention has been given to using ANNs for solving differential equations. The approximate solution of differential equations by an ANN is found to be advantageous, though it depends on the ANN model one considers. Here our target is to solve ordinary as well as partial differential equations using ANNs. The ANN approach has several inherent benefits over other numerical methods: (i) the approximate solution is differentiable in the given domain; (ii) computational complexity does not increase considerably with the number of sampling points or the dimension of the problem; (iii) it can be applied to linear as well as nonlinear Ordinary Differential Equations (ODEs) and Partial Differential Equations (PDEs).
    Moreover, traditional numerical methods are usually iterative in nature, with the step size fixed before the computation starts. If, after the solution is obtained, we want the solution between steps, the procedure must be repeated from the initial stage. ANNs may be one way to overcome this repetition: after training, the model can be used as a black box to obtain numerical results at any arbitrary point in the domain. A few authors have solved ordinary and partial differential equations by combining a feed-forward neural network with an optimization technique. As stated above, the objective of this thesis is to solve various types of ODEs and PDEs using efficient neural networks. Algorithms are developed in which no target values are known and the output of the model is generated by training alone. The architectures of existing neural models are usually problem dependent, and the number of nodes is typically chosen by trial and error. Training also depends on the weights of the connecting nodes, which are generally initialized with random numbers that dictate the training. In this investigation, a new method, the Regression Based Neural Network (RBNN), is first developed to handle differential equations. In the RBNN model, the number of hidden-layer nodes may be fixed using regression: the input and output data are first fitted with polynomials of various degrees using regression analysis, and the resulting coefficients are taken as initial weights for neural training. The number of hidden nodes is thus fixed by the degree of the polynomial. We have applied the RBNN model to different ODEs with initial/boundary conditions, using a feed-forward neural model and an unsupervised error back-propagation algorithm to minimize the error function and update the parameters (weights and biases) without any separate optimization technique.
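The core idea of unsupervised, residual-driven training can be sketched in a few lines: a trial solution that satisfies the initial condition by construction, and a tiny network trained so that the ODE residual vanishes at sample points. The example below solves u' + u = 0, u(0) = 1 (whose solution is e^(-x)); the architecture, learning rate and crude numeric gradients are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# 3 sigmoid hidden units, flattened as w = [W1(3), b1(3), W2(3), b2(1)]
w = rng.normal(scale=0.5, size=10)

def net(x, w):
    """Tiny feed-forward network N(x, w) with one hidden layer."""
    h = 1.0 / (1.0 + np.exp(-(np.outer(x, w[:3]) + w[3:6])))
    return h @ w[6:9] + w[9]

def trial(x, w):
    """Trial solution u(x) = 1 + x * N(x): satisfies u(0) = 1 exactly."""
    return 1.0 + x * net(x, w)

def loss(w, x, eps=1e-4):
    """Mean squared residual of u' + u = 0 at the sample points
    (u' taken by central differences for simplicity)."""
    du = (trial(x + eps, w) - trial(x - eps, w)) / (2 * eps)
    return np.mean((du + trial(x, w)) ** 2)

x = np.linspace(0.0, 1.0, 20)
for _ in range(3000):                     # plain gradient descent, numeric grads
    g = np.array([(loss(w + 1e-5 * e, x) - loss(w - 1e-5 * e, x)) / 2e-5
                  for e in np.eye(10)])
    w -= 0.2 * g
# trial(x, w) now approximates exp(-x) on [0, 1]
```

No target values appear anywhere: the residual of the equation itself is the only training signal, which is exactly the "unsupervised" character the abstract describes.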
    Next, a single-layer Functional Link Artificial Neural Network (FLANN) architecture is developed for solving differential equations for the first time. In FLANN, the hidden layer is replaced by a functional expansion block that enhances the input patterns using orthogonal polynomials such as Chebyshev, Legendre and Hermite polynomials. Computation becomes efficient because no hidden layer is needed, so the number of network parameters is smaller than in a traditional ANN model. A variety of differential equations are solved with the above methods to show their reliability, power and easy computer implementation. In particular, singular nonlinear initial value problems such as Lane-Emden and Emden-Fowler type equations are solved using a Chebyshev Neural Network (ChNN) model. A single-layer Legendre Neural Network (LeNN) model is also developed to handle the Lane-Emden equation, Boundary Value Problems (BVPs) and systems of coupled ordinary differential equations. The unforced Duffing oscillator and unforced Van der Pol-Duffing oscillator equations are solved with a Simple Orthogonal Polynomial based Neural Network (SOPNN) model, and a Hermite Neural Network (HeNN) model is proposed to handle the Van der Pol-Duffing oscillator equation. Finally, a single-layer Chebyshev Neural Network (ChNN) model is also implemented to solve partial differential equations.
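The functional expansion block at the heart of ChNN/FLANN can be sketched concretely: the input is expanded into a Chebyshev basis via the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), and the network output is just a single weighted sum of these basis values, with no hidden layer. The basis size (five polynomials) and the least-squares fit below are illustrative choices.

```python
import numpy as np

def chebyshev_expand(x, n=5):
    """Return the first n Chebyshev polynomials T_0(x)..T_{n-1}(x)
    for x in [-1, 1], stacked as columns."""
    T = [np.ones_like(x), x]
    for _ in range(2, n):
        T.append(2 * x * T[-1] - T[-2])   # recurrence T_{k+1} = 2x T_k - T_{k-1}
    return np.stack(T[:n], axis=-1)

def flann_output(x, w):
    """Single-layer FLANN: the expansion replaces the hidden layer,
    so the output is simply expansion(x) @ w."""
    return chebyshev_expand(x, len(w)) @ w

# usage: least-squares fit of sin(pi*x) on [-1, 1] to show the basis's power
x = np.linspace(-1.0, 1.0, 50)
w, *_ = np.linalg.lstsq(chebyshev_expand(x), np.sin(np.pi * x), rcond=None)
```

With only five trainable weights the model already tracks sin(πx) closely, which is the parameter saving over a conventional hidden-layer ANN that the abstract emphasizes.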