2 research outputs found

    Fresh approaches to the construction of parameterized neural network solutions of a stiff differential equation

    This paper states and solves a number of new fundamental problems extending the methodology of Vasiliev and Tarkhov for neural network models constructed from differential equations and other data. The possibility of extending the parameter range of a single neural network model without loss of accuracy was studied. The influence on solution accuracy of a new approach to choosing test points and of using heterogeneous complementary data was analyzed. In addition to point data, supplementary conditions in equation form, derived from an asymptotic decomposition, were used. The classical and non-classical formulations of the problem were compared by introducing a parameter into the complementary data. A new sampling scheme for choosing test points at different stages of minimization (a test point regeneration procedure) was investigated under various initial conditions. A way of combining the classical and neural network approaches based on the Adams PECE method was also considered.
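    The abstract mentions combining the classical and neural network approaches on the basis of the Adams PECE (predict-evaluate-correct-evaluate) method. As a point of reference, the sketch below shows a minimal two-step Adams-Bashforth-Moulton PECE integrator for a scalar ODE; the function name and test problem are illustrative, not taken from the paper.

    ```python
    import math

    def adams_pece_2(f, t0, y0, h, n):
        """Two-step Adams-Bashforth-Moulton PECE integrator.

        P: y*_{k+1} = y_k + h/2 * (3 f_k - f_{k-1})   (explicit AB2)
        E: f*_{k+1} = f(t_{k+1}, y*_{k+1})
        C: y_{k+1}  = y_k + h/2 * (f*_{k+1} + f_k)    (implicit trapezoidal AM)
        E: f_{k+1}  = f(t_{k+1}, y_{k+1})
        """
        ts, ys, fs = [t0], [y0], [f(t0, y0)]
        # Bootstrap the multistep scheme with one Euler predictor + trapezoid corrector.
        y1_pred = y0 + h * fs[0]
        y1 = y0 + h / 2 * (fs[0] + f(t0 + h, y1_pred))
        ts.append(t0 + h)
        ys.append(y1)
        fs.append(f(t0 + h, y1))
        for k in range(1, n):
            t_next = ts[k] + h
            # P: predict with the explicit two-step Adams-Bashforth formula
            y_pred = ys[k] + h / 2 * (3 * fs[k] - fs[k - 1])
            # E: evaluate the right-hand side at the prediction
            f_pred = f(t_next, y_pred)
            # C: correct with the implicit trapezoidal (Adams-Moulton) formula
            y_corr = ys[k] + h / 2 * (f_pred + fs[k])
            # E: final evaluation, stored for the next step
            ts.append(t_next)
            ys.append(y_corr)
            fs.append(f(t_next, y_corr))
        return ts, ys

    # Test problem y' = -y, y(0) = 1, exact solution exp(-t).
    ts, ys = adams_pece_2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
    err = abs(ys[-1] - math.exp(-1.0))
    print(err)
    ```

    In the hybrid setting described by the abstract, a trained network could supply the starting values or the predictor, with the corrector retaining the classical formula; the scheme above shows only the classical side.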

    MULTILAYER PARAMETRIC MODELS OF PROCESSES IN A POROUS CATALYST PELLET

    In this paper, we perform a comparative analysis of new methods for constructing approximate solutions of differential equations. As a test problem, we chose the boundary value problem for a substantially nonlinear second-order differential equation. This problem arose when modeling the processes of heat and mass exchange in a flat pellet of a porous catalyst. Previously, we solved this problem with the help of artificial neural networks, using it as a model problem for testing the methods we developed. Our generic neural network approach was applied to this problem both for constant parameters and for parameters varying over certain intervals. In the case of constant parameters, the results coincided with the data available in the literature on the subject. Models with variable parameters, which enter as inputs of the neural networks, were first built in our works. One significant drawback of this approach is the high resource intensity of the neural network training process. In this paper, we consider a new approach that dispenses with the training procedure. It is based on a modification of known numerical methods: the classical formulas for the numerical solution of ordinary differential equations are applied to an integration interval with a variable upper limit. The result is an approximate mathematical model in the form of a function whose arguments include the parameters of the problem. We showed that the new methods have significant advantages and considered two such methods. One is based on a neural network modification of the shooting method; the second differs in that the shooting is conducted from both ends of the interval. The resulting models are characterized by their simplicity and by the wide range of parameters for which they are suitable. The models we have built can be easily adapted to observations of real objects.
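    The abstract's first method builds on the shooting approach: guess the unknown initial value, integrate the ODE across the interval with a classical formula, and adjust the guess until the far boundary condition is met. As a stand-in for the substantially nonlinear catalyst equation (which the abstract does not state), the sketch below solves the isothermal first-order pellet problem y'' = φ²y with y'(0) = 0, y(1) = 1, shooting on the centre value y(0) with RK4 integration and bisection; all names here are illustrative, and the analytic solution y(0) = 1/cosh(φ) serves as a check.

    ```python
    import math

    def integrate_pellet(a, phi, n=200):
        """Integrate y'' = phi^2 * y on [0, 1] with y(0) = a, y'(0) = 0,
        using classic RK4 on the first-order system (y, v), v = y'. Returns y(1)."""
        h = 1.0 / n
        x, y, v = 0.0, a, 0.0

        def f(x, y, v):
            return v, phi * phi * y  # derivatives (y', v')

        for _ in range(n):
            k1y, k1v = f(x, y, v)
            k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
            k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
            k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            x += h
        return y

    def shoot(phi, tol=1e-10):
        """Bisect on the centre concentration a = y(0) until y(1) = 1.
        For this problem y(1) grows monotonically with a, so bisection suffices."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if integrate_pellet(mid, phi) < 1.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    phi = 2.0  # Thiele modulus, kept as an explicit argument of the model
    a = shoot(phi)
    print(a, 1.0 / math.cosh(phi))
    ```

    Because φ stays an explicit argument, `shoot(phi)` is itself a parametric model in the sense of the abstract: evaluating it for a new φ requires only re-integration, not retraining. Shooting from both ends of the interval, as in the paper's second method, would run a second integration backwards from x = 1 and match the two trajectories at an interior point.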