3 research outputs found

    MONITORING DATA AGGREGATION OF DYNAMIC SYSTEMS USING INFORMATION TECHNOLOGIES

    The subject matter of the article is the models, methods, and information technologies of monitoring data aggregation. The goal of the article is to determine the best deep learning model for reducing the dimensionality of dynamic system monitoring data. The following tasks were solved: analysis of existing dimensionality reduction approaches, description of the general architecture of vanilla and variational autoencoders, development of their architectures, development of software for training and testing the autoencoders, and evaluation of the autoencoders' performance on the dimensionality reduction problem. The following models and methods were used: data processing and preparation, and data dimensionality reduction. The software was developed in Python; scikit-learn, Pandas, PyTorch, NumPy, argparse, and other auxiliary libraries were used. Obtained results: the work presents a classification of models and methods for dimensionality reduction and general reviews of vanilla and variational autoencoders, including a description of the models, their properties, their loss functions, and their application to the dimensionality reduction problem. Custom autoencoder architectures were also created, including visual representations of the autoencoder architecture and descriptions of each component. The software for training and testing autoencoders was developed, and the dynamic system monitoring data set and the steps for preprocessing it were described. The metric for evaluating model quality is also described, and the configuration and training of the autoencoders are considered. Conclusions: the vanilla autoencoder reconstructs the data much better than the variational one. Given that the two architectures are identical apart from the features specific to each autoencoder type, it can be noted that the vanilla autoencoder compresses the data better, keeping more useful information in the bottleneck for later reconstruction. Additionally, by training with different bottleneck sizes, one can determine the size at which the data is reconstructed best, i.e., at which the most important variables are preserved. Overall, the autoencoders work effectively for the dimensionality reduction task, and the reconstruction quality metric shows that they recover the data well, with errors in the third to fourth decimal place (on the order of 10^-3 to 10^-4). In conclusion, the vanilla autoencoder is the best deep learning model for aggregating monitoring data of dynamic systems.
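    The abstract does not reproduce the authors' implementation; the following is a minimal sketch of a vanilla autoencoder for tabular monitoring data in PyTorch, one of the libraries the article names. The layer widths, the n_features and bottleneck parameters, and the use of mean squared reconstruction error as the quality metric are illustrative assumptions.

        import torch
        import torch.nn as nn

        class VanillaAutoencoder(nn.Module):
            """Fully connected autoencoder: the encoder compresses the input
            to a low-dimensional bottleneck, the decoder reconstructs it."""
            def __init__(self, n_features: int, bottleneck: int):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_features, 64), nn.ReLU(),
                    nn.Linear(64, bottleneck),      # compressed representation
                )
                self.decoder = nn.Sequential(
                    nn.Linear(bottleneck, 64), nn.ReLU(),
                    nn.Linear(64, n_features),      # reconstruction
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # Hypothetical training loop; x stands in for a batch of records.
        model = VanillaAutoencoder(n_features=20, bottleneck=4)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()                      # reconstruction error
        x = torch.randn(128, 20)
        for _ in range(100):
            optimizer.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            optimizer.step()

    Training the same architecture with several bottleneck values and comparing the final reconstruction error, as the article describes, identifies the smallest dimensionality that still preserves the important variables.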

    Numerical Evaluation of Probability of Harmful Impact Caused by Toxic Spill Emergencies

    The purpose of the work is to assess the degree of inhalation damage to a person exposed to a toxic cloud produced by liquefied gas evaporating from spill spots of various shapes. A mathematical model was developed for the evaporation of a liquefied gas spill arising from the accidental destruction of a storage vessel and for the subsequent dispersion of the gas impurity in the atmospheric surface layer. A computational technology was developed for determining, on the basis of probit analysis, the fields of conditional probability of human inhalation damage by a toxic gas. The mathematical model takes into account flow compressibility, complex terrain, the three-dimensional nature of the dispersion process, and evaporation of the toxic liquid from an arbitrarily shaped spill spot with varying intensity. The model yields the space-time distributions of the relative mass concentration of the toxic gas and of the inhaled toxic dose, which are needed to determine the fields of human damage probability by probit analysis. For different ellipticities of an elliptical hydrogen cyanide spill spot, the fields of the probability of lethal human damage are obtained, and the influence of spot ellipticity on the scale of the consequences of this type of accident is analysed. The developed technology allows automated analysis and forecasting, in time and space, of the damage probability of a person exposed to the toxic gas as a safety indicator of the technogenic object.
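    The abstract describes the probit step without formulas. A common formulation, shown here as a hedged sketch in Python (matching the toolchain of the previous abstract), integrates the concentration history into a toxic load D = ∫ C(t)^n dt, converts it to a probit value Pr = a + b·ln D, and maps that to a damage probability through the standard normal CDF, P = Φ(Pr − 5). The constants a, b, n below are placeholders, not the hydrogen cyanide values used in the paper.

        import numpy as np
        from scipy.stats import norm

        def damage_probability(conc, t, a=-9.8, b=1.0, n=2.4):
            """Conditional probability of inhalation damage via probit analysis.

            conc    : concentration history at a point (e.g. mg/m^3)
            t       : matching time stamps (minutes)
            a, b, n : probit constants -- placeholders, substance-specific.
            """
            dose = np.trapz(conc**n, t)      # toxic load D = integral of C^n dt
            pr = a + b * np.log(dose)        # probit value
            return norm.cdf(pr - 5.0)        # probability from the probit scale

        # Hypothetical concentration history at one receptor point:
        t = np.linspace(0.0, 30.0, 61)       # minutes
        conc = 50.0 * np.exp(-0.1 * t)       # decaying cloud concentration
        print(damage_probability(conc, t))

    Evaluating such a function over a grid of receptor points and times yields the probability fields the abstract refers to.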

    Estimation of performance parameters of turbine engine components using experimental data in parametric uncertainty conditions

    Gas Path Analysis and the matching of turbine engine models to experimental data are inverse problems of mathematical modelling that are characterized by parametric uncertainty. This uncertainty results from the fact that the number of measured parameters is significantly smaller than the number of component performance parameters needed to describe the real engine. In these conditions, even small measurement errors can cause large variations in the results, and the obtained efficiencies, loss factors, etc. can fall outside the physically meaningful range. The paper presents a new method for setting a priori information about the engine and its performance in the form of fuzzy sets, forming objective functions, and synthesizing scalar convolutions of these functions to estimate the parameters of the gas-path components. A comparison of the proposed approach with traditional methods showed that its main advantage is the high stability of the estimates under parametric uncertainty. It reduces scattering, excludes incorrect solutions that do not correspond to the a priori assumptions, and also makes Gas Path Analysis feasible with a limited number of measured parameters.
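    The abstract names the ingredients (fuzzy a priori sets, objective functions, a scalar convolution) without giving formulas. The sketch below, in Python like the examples above, shows one plausible reading: a measurement-misfit term and fuzzy membership functions for the a priori parameter ranges are combined into a single scalar objective and minimized. The triangular membership, the penalty weight, and engine_model are illustrative assumptions, not the authors' formulation.

        import numpy as np
        from scipy.optimize import minimize

        def triangular_membership(x, lo, peak, hi):
            """Fuzzy membership: 1 at the a priori expected value,
            falling to 0 outside the admissible range [lo, hi]."""
            if x <= lo or x >= hi:
                return 0.0
            return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

        def engine_model(theta):
            """Hypothetical stand-in mapping component parameters
            (efficiencies, loss factors) to measured gas-path parameters."""
            A = np.array([[1.2, 0.4], [0.3, 0.9], [0.5, 0.5]])
            return A @ theta

        measured = np.array([1.05, 0.78, 0.66])            # noisy measurements
        priors = [(0.80, 0.88, 0.95), (0.30, 0.40, 0.55)]  # a priori fuzzy ranges

        def objective(theta):
            misfit = np.sum((engine_model(theta) - measured) ** 2)
            # Scalar convolution: penalize leaving the a priori fuzzy sets.
            membership = [triangular_membership(v, *p)
                          for v, p in zip(theta, priors)]
            return misfit + 0.1 * sum((1.0 - m) ** 2 for m in membership)

        result = minimize(objective, x0=np.array([0.85, 0.45]),
                          method="Nelder-Mead")
        print(result.x)

    The fuzzy penalty is what keeps the estimates inside the physically meaningful range even when the measurements alone underdetermine the parameters, which matches the stability advantage the abstract claims.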