ON A DEFUZZIFICATION PROCESS OF FUZZY CONTROLLERS
In this paper, innovations in the field of automatic control systems with fuzzy controllers are considered. After a short introduction to fuzzy controllers, four different defuzzification methods are introduced and verified on a simulation of a nuclear reactor fuzzy controller. The default Matlab fuzzy toolbox solution is the most time-consuming, while two solutions based on the defuzzification of trapezoidal fuzzy numbers have an advantage in the calculation of crisp values. Also, a solution based on determining the line that divides the obtained polygon into two parts of equal area is presented.
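The speed advantage of the trapezoidal approach comes from having a closed-form crisp value instead of a discretised integral. As an illustrative sketch (the specific formulas are standard textbook results, not reproduced from the paper), the centroid of a trapezoidal membership function with support [a, d] and core [b, c] can be computed directly and cross-checked against brute-force numerical defuzzification:

```python
import numpy as np

def trapezoid_centroid(a, b, c, d):
    """Closed-form centroid of a trapezoidal membership function
    with support [a, d] and core [b, c] (height 1)."""
    return (c**2 + d**2 - a**2 - b**2 + c * d - a * b) / (3.0 * (c + d - a - b))

def centroid_numeric(a, b, c, d, n=200001):
    """Brute-force centroid by discretising the membership function,
    mimicking what a generic toolbox centroid defuzzifier does."""
    x = np.linspace(a, d, n)
    mu = np.clip(np.minimum((x - a) / max(b - a, 1e-12),
                            (d - x) / max(d - c, 1e-12)), 0.0, 1.0)
    return float((mu * x).sum() / mu.sum())
```

The closed form needs a handful of arithmetic operations per output, while the discretised version touches every grid point, which is consistent with the reported timing gap.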
On numerical and analytical solutions of the generalized Burgers-Fisher equation
In this paper, the semi-analytic iterative and modified simple equation methods have been implemented to obtain solutions to the generalized Burgers-Fisher equation. To demonstrate the accuracy, efficacy, and reliability of the methods in finding the exact solution of the equation, a selection of numerical examples is given and a comparison is made with other well-known methods from the literature, such as the variational iteration method, the homotopy perturbation method, and the diagonally implicit Runge-Kutta method. The results show that, of the two proposed methods, the modified simple equation method is faster, easier, more concise, and more straightforward for solving nonlinear partial differential equations, as it does not require symbolic computation software such as Maple or Mathematica. Additionally, the iterative procedure of the semi-analytic iterative method has the merit that each iterate improves on the previous one; as more iterations are taken, the solution converges to the exact solution of the equation.
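For readers wanting a quick numerical baseline to compare such solutions against, a plain explicit finite-difference march for the generalized Burgers-Fisher equation u_t + a·u^d·u_x = u_xx + b·u(1 − u^d) fits in a few lines. This is a sketch for illustration only; the parameter values, the tanh front initial condition, and the zero-gradient boundary treatment are our assumptions, not the paper's examples:

```python
import numpy as np

def burgers_fisher_step(u, dx, dt, alpha=1.0, beta=1.0, delta=1.0):
    """One explicit Euler step of u_t + a*u^d*u_x = u_xx + b*u*(1 - u^d),
    with zero-gradient (Neumann) boundaries via edge padding."""
    up = np.pad(u, 1, mode="edge")
    ux = (up[2:] - up[:-2]) / (2.0 * dx)                  # central first derivative
    uxx = (up[2:] - 2.0 * u + up[:-2]) / dx**2            # central second derivative
    return u + dt * (uxx - alpha * u**delta * ux + beta * u * (1.0 - u**delta))

x = np.linspace(-20.0, 20.0, 401)
dx = x[1] - x[0]
u = 0.5 * (1.0 - np.tanh(x / 4.0))   # smooth front connecting 1 (left) to 0 (right)
for _ in range(2000):                # march to t = 2 with dt satisfying dt/dx^2 <= 1/2
    u = burgers_fisher_step(u, dx, dt=1e-3)
```

The explicit step-size restriction dt ≲ dx²/2 is exactly the kind of cost that motivates the analytic and semi-analytic methods the paper studies.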
XIST IS A DANGER SIGNAL UNDERLYING THE SEX BIAS IN SYSTEMIC AUTOIMMUNITY
Female sex is associated with enhanced immune responses to viral infection and vaccinations, but also an increased risk of autoimmune disease. A major reason for these sex differences is that females display stronger Type-I interferon (IFN) responses than males. While IFN is critical in alerting the adaptive immune system to viral infections, excessive or inappropriate IFN signaling can drive systemic autoimmunity. Many systemic autoimmune diseases are characterized by aberrant IFN signaling, and the IFN response is particularly prominent in systemic lupus erythematosus (SLE).
SLE is one of the most sex-biased diseases identified to date, affecting nine times more women than men. SLE is caused by aberrant IFN signaling due to the chronic, inappropriate activation of toll-like receptors (TLRs) by self-RNA and self-DNA. The single-stranded RNA (ssRNA) sensor TLR7 is especially heavily implicated in SLE pathogenesis, as genetic abnormalities in TLR7 have been shown to induce SLE in both mice and humans.
X-inactive specific transcript (XIST) is a long non-coding RNA (lncRNA) that is responsible for X-inactivation, the process by which X-linked gene expression is normalized between biological women, who have two X chromosomes, and biological men, who have one. XIST has been studied for a protective role in SLE due to its role in restricting TLR7 expression, but it has also been found to be overexpressed in SLE. Furthermore, recent studies revealed abnormal XIST localization in SLE and other autoimmune diseases like systemic sclerosis (SSc).
In this thesis, we elucidate a novel, proinflammatory function for the XIST RNA as a TLR7-dependent danger signal. In Chapter II, we show that XIST RNA is capable of inducing IFN, and that its expression correlates with SLE disease status, clinical disease activity, and the IFN signature in female SLE patients. In Chapter III, we explore XIST expression in SSc and show that its expression and failure to fully inactivate TLR7 may underlie certain SSc disease subtypes. In Chapter IV, we explore neutrophils and neutrophil extracellular traps (NETs) as a potentially important source of extracellular XIST RNA in systemic autoimmunity.
The generation, propagation, and mixing of oceanic lee waves
Lee waves are generated when oceanic flows interact with rough seafloor topography. They extract momentum and energy from the geostrophic flow, causing drag and enhancing turbulent mixing in the ocean interior when they break. Mixing across density surfaces (diapycnal mixing) driven by lee waves and other topographic interaction processes in the abyssal ocean plays an important role in upwelling the densest waters in the global ocean, thus sustaining the lower cell of the meridional overturning circulation.
Lee waves are generated at spatial scales that are unresolved by global models, so their impact on the momentum and buoyancy budgets of the ocean through drag and diapycnal mixing must be parameterised. Linear theory is often used to estimate the generation rate of lee waves and to construct global maps of lee wave generation. However, this calculation and subsequent inferences of lee wave mixing rely on several restrictive assumptions. Furthermore, observations suggest that lee wave mixing in the deep ocean is significantly overestimated by this theory. In this thesis, we remove some common assumptions at each stage of the lee wave lifecycle to investigate the reasons for this discrepancy and to motivate and inform future climate model parameterisations.
Firstly, we investigate the way that seafloor topography is represented in lee wave parameterisations, finding that typical spectral methods can lead to an overestimate of wave energy flux. Next, we make the case for considering lee waves as a full water column process by modelling the effect of vertically varying background flows and the ocean surface on lee wave propagation. Finally, we take a holistic view of topographic mixing in the abyssal ocean, finding that deep stratified water mass interfaces may modify the nature of the lee wave field, and themselves contribute to mixing and upwelling in the deep ocean through topographic interaction.
Proximal Galerkin: A structure-preserving finite element method for pointwise bound constraints
The proximal Galerkin finite element method is a high-order, low iteration
complexity, nonlinear numerical method that preserves the geometric and
algebraic structure of bound constraints in infinite-dimensional function
spaces. This paper introduces the proximal Galerkin method and applies it to
solve free boundary problems, enforce discrete maximum principles, and develop
scalable, mesh-independent algorithms for optimal design. The paper leads to a
derivation of the latent variable proximal point (LVPP) algorithm: an
unconditionally stable alternative to the interior point method. LVPP is an
infinite-dimensional optimization algorithm that may be viewed as having an
adaptive barrier function that is updated with a new informative prior at each
(outer loop) optimization iteration. One of the main benefits of this algorithm
is witnessed when analyzing the classical obstacle problem. Therein, we find
that the original variational inequality can be replaced by a sequence of
semilinear partial differential equations (PDEs) that are readily discretized
and solved with, e.g., high-order finite elements. Throughout this work, we
arrive at several unexpected contributions that may be of independent interest.
These include (1) a semilinear PDE we refer to as the entropic Poisson
equation; (2) an algebraic/geometric connection between high-order
positivity-preserving discretizations and certain infinite-dimensional Lie
groups; and (3) a gradient-based, bound-preserving algorithm for two-field
density-based topology optimization. The complete latent variable proximal
Galerkin methodology combines ideas from nonlinear programming, functional
analysis, tropical algebra, and differential geometry and can potentially lead
to new synergies among these areas as well as within variational and numerical
analysis.
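The flavor of the latent variable proximal point idea can be conveyed in the simplest possible setting (a one-dimensional toy of our own construction, not the paper's formulation): minimize ½(x − a)² subject to x ≥ 0. Substituting the latent variable ψ = log x enforces the bound by construction, and each entropic proximal step reduces to a smooth scalar equation solved by Newton's method:

```python
import numpy as np

def lvpp_scalar(a, alpha=1.0, iters=30, newton_iters=50):
    """Entropic proximal point for min_{x>=0} 0.5*(x-a)^2.
    The latent variable psi = log(x) keeps x = exp(psi) > 0 automatically.
    Each outer step solves  exp(psi) - a + (psi - psi_k)/alpha = 0  by Newton."""
    psi = 0.0  # start from x = 1
    for _ in range(iters):
        psi_k = psi
        for _ in range(newton_iters):
            g = np.exp(psi) - a + (psi - psi_k) / alpha   # prox optimality condition
            gp = np.exp(psi) + 1.0 / alpha                # its derivative (>= 1/alpha)
            psi -= g / gp
        # each outer iteration is a semismooth, unconstrained solve
    return np.exp(psi)
```

For a = 2 the iterates converge to the interior minimizer x = 2; for a = −1 they approach the active bound x = 0 from the feasible side without ever violating it, mirroring the unconditional feasibility that distinguishes this family of methods from naive projections.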
On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver
The large-scale simulation of dynamical systems is critical in numerous
scientific and engineering disciplines. However, traditional numerical solvers
are limited by the choice of step sizes when estimating integrals, resulting
in a trade-off between accuracy and computational efficiency. To address this
challenge, we introduce a deep learning-based corrector called Neural Vector
(NeurVec), which can compensate for integration errors and enable larger time
step sizes in simulations. Our extensive experiments on a variety of complex
dynamical system benchmarks demonstrate that NeurVec exhibits remarkable
generalization capability on a continuous phase space, even when trained using
limited and discrete data. NeurVec significantly accelerates traditional
solvers, achieving speeds tens to hundreds of times faster while maintaining
high levels of accuracy and stability. Moreover, NeurVec's simple-yet-effective
design, combined with its ease of implementation, has the potential to
establish a new paradigm for fast-solving differential equations based on deep
learning.
Comment: Accepted by Scientific Reports
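The core mechanism can be illustrated with explicit Euler on dx/dt = λx. NeurVec learns the correction term from data; in this toy sketch of ours, the trained network is replaced by its ideal analytic target, the leading truncation-error term of Euler's method, so that the corrected step beats plain Euler at the same large step size:

```python
import numpy as np

def euler_step(f, x, h):
    """Plain forward Euler: local error O(h^2)."""
    return x + h * f(x)

def corrected_step(f, x, h, corrector):
    """Euler plus an additive correction; NeurVec would supply `corrector`
    as a trained network err(x, h). Here it is the exact leading error term."""
    return x + h * f(x) + corrector(x, h)

lam = -1.0
f = lambda x: lam * x
ideal = lambda x, h: 0.5 * h**2 * lam**2 * x   # (h^2/2) * f'(x) f(x)

h, x0 = 0.5, 1.0                               # deliberately large step
exact = x0 * np.exp(lam * h)
err_euler = abs(euler_step(f, x0, h) - exact)
err_corr = abs(corrected_step(f, x0, h, ideal) - exact)
```

Because the correction absorbs the dominant truncation error, the step size can be enlarged without the usual loss of accuracy, which is the trade-off the abstract describes.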
Experimental and industrial method of synthesis of optimal control of the temperature region of cupola melting
The object of research is the temperature regime of melting in a cupola. The synthesis of optimal control of such an object is associated with the presence of a problem consisting in the complexity of its mathematical description and the absence of procedures that allow one to obtain optimal control laws. These problems are due to the presence of links with a pure delay, non-additive random drift, and difficulties in controlling the process parameters, in particular, accurately determining the temperature profile along the horizons and the periphery of the working space of the cupola.
The proposed conceptual solution for the synthesis of optimal temperature control allows the use of two levels of control: the level controller solves the problem of maintaining the constant height of the idle charge, and the problem of increasing the temperature of cast iron is solved by controlling the air supply to the tuyere box.
It is shown that the problem of regulating the upper level of an idle charge can be solved by reducing the model of the regulation process to a typical form, followed by the use of the Pontryagin maximum principle.
A procedure for the synthesis of optimal air flow control is proposed, which makes it possible to obtain the temperature regime control law on the basis of experimental industrial studies preceding the synthesis process. This takes into account the time delay between the impact on the object and its reaction, which makes it possible to predict the temperature value one step ahead, equal to the time interval during which the lower surface of the fuel charge reaches the upper level of the idle charge.
A procedure for temperature profile control based on the use of D-optimal plans for selecting sensor installation points is proposed. Due to this, it becomes possible to determine the temperature profile of the cupola according to its horizons and the periphery of the working space of the cupola with maximum accuracy.
The proposed synthesis method can be used in iron foundries equipped with cupolas, as it is a tool for studying a real production process, taking into account its specific conditions. This will allow developing or improving control systems for cupola melting, implementing different control modes: manual, automated, or automatic.
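D-optimal selection of sensor installation points of the kind described can be sketched with a coordinate-exchange search that maximizes det(XᵀX) for an assumed response model. The quadratic model along one normalized horizon, the candidate grid, and the exchange heuristic below are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def design_matrix(pts):
    x = np.asarray(pts)
    # assumed quadratic temperature model along one normalized cupola horizon
    return np.column_stack([np.ones_like(x), x, x**2])

def d_criterion(pts):
    """D-optimality criterion: determinant of the information matrix."""
    X = design_matrix(pts)
    return np.linalg.det(X.T @ X)

def d_optimal_exchange(candidates, k, sweeps=50):
    """Coordinate-exchange search for a k-point D-optimal design:
    repeatedly swap one design point for any candidate that raises det(X^T X)."""
    design = list(candidates[:k])
    for _ in range(sweeps):
        improved = False
        for i in range(k):
            for c in candidates:
                trial = design[:i] + [c] + design[i + 1:]
                if d_criterion(trial) > d_criterion(design) + 1e-12:
                    design, improved = trial, True
        if not improved:
            break
    return sorted(design)

sensors = d_optimal_exchange(np.linspace(0.0, 1.0, 11), k=3)
```

For a quadratic model on [0, 1] the exchange search recovers the classical D-optimal three-point design {0, 0.5, 1}: the ends plus the midpoint of the measured span.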
Machine Learning Enhanced Hankel Dynamic-Mode Decomposition
While the acquisition of time series has become more straightforward,
developing dynamical models from time series is still a challenging and
evolving problem domain. Within the last several years, to address this
problem, there has been a merging of machine learning tools with what is called
the dynamic mode decomposition (DMD). This general approach has been shown to
be an especially promising avenue for accurate model development. Building on
this prior body of work, we develop a deep learning DMD based method which
makes use of the fundamental insight of Takens' Embedding Theorem to build an
adaptive learning scheme that better approximates higher dimensional and
chaotic dynamics. We call this method the Deep Learning Hankel DMD (DLHDMD). We
likewise explore how our method learns mappings which tend, after successful
training, to significantly change the mutual information between dimensions in
the dynamics. This appears to be a key feature in enhancing the DMD overall,
and it should help provide further insight for developing other deep learning
methods for time series analysis and model generation.
Nonlinear dimensionality reduction then and now: AIMs for dissipative PDEs in the ML era
This study presents a collection of purely data-driven workflows for
constructing reduced-order models (ROMs) for distributed dynamical systems. The
ROMs we focus on are data-assisted models inspired by, and templated upon, the
theory of Approximate Inertial Manifolds (AIMs); the particular motivation is
the so-called post-processing Galerkin method of Garcia-Archilla, Novo and
Titi. Its applicability can be extended: the need for accurate truncated
Galerkin projections and for deriving closed-form corrections can be
circumvented using machine learning tools. When the right latent variables are
not a priori known, we illustrate how autoencoders as well as Diffusion Maps (a
manifold learning scheme) can be used to discover good sets of latent variables
and test their explainability. The proposed methodology can express the ROMs in
terms of (a) theoretical (Fourier coefficients), (b) linear data-driven (POD
modes) and/or (c) nonlinear data-driven (Diffusion Maps) coordinates. Both
Black-Box and (theoretically-informed and data-corrected) Gray-Box models are
described; the necessity for the latter arises when truncated Galerkin
projections are so inaccurate as to not be amenable to post-processing. We use
the Chafee-Infante reaction-diffusion and the Kuramoto-Sivashinsky dissipative
partial differential equations to illustrate and successfully test the overall
framework.
Comment: 27 pages, 22 figures
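As a concrete instance of coordinate choice (b), POD modes are simply the left singular vectors of the snapshot matrix, and for data lying on a low-dimensional linear subspace the truncated reconstruction is exact. The synthetic two-mode field below is our own illustration, not data from the paper:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 1.0, 100)
# snapshot matrix built from two spatial modes with time-varying coefficients
U = (np.outer(np.sin(x), np.cos(2 * np.pi * t))
     + 0.3 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * t)))

def pod_modes(snapshots, r):
    """Leading r POD modes via SVD of the snapshot matrix (space x time)."""
    Phi, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return Phi[:, :r], s

Phi, s = pod_modes(U, 2)
U_r = Phi @ (Phi.T @ U)                  # project snapshots onto the POD subspace
err = np.linalg.norm(U - U_r) / np.linalg.norm(U)
```

The nonlinear data-driven route (c) replaces these linear modes with Diffusion Maps coordinates when the underlying manifold is curved, at the cost of needing a separate lifting/restriction step.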
Physics-informed neural networks modeling for systems with moving immersed boundaries: application to an unsteady flow past a plunging foil
Recently, physics-informed neural networks (PINNs) have been explored
extensively for solving various forward and inverse problems and for
facilitating querying applications in fluid mechanics. However, work on PINNs
for unsteady flows past moving bodies, such as flapping wings, is scarce.
Earlier studies mostly relied on transferring to a body attached frame of
reference which is restrictive towards handling multiple moving bodies or
deforming structures. Hence, in the present work, an immersed boundary aware
framework has been explored for developing surrogate models for unsteady flows
past moving bodies. Specifically, simultaneous pressure recovery and velocity
reconstruction from immersed boundary method (IBM) simulation data has been
investigated. While the efficacy of the velocity reconstruction has been tested
against fine-resolution IBM data, as a further step, the recovered pressure
was compared with that of an arbitrary Lagrangian-Eulerian (ALE) based solver.
Under this framework, two PINN variants, (i) a moving-boundary-enabled standard
Navier-Stokes based PINN (MB-PINN), and, (ii) a moving-boundary-enabled IBM
based PINN (MB-IBM-PINN) have been formulated. A fluid-solid partitioning of
the physics losses in MB-IBM-PINN has been allowed, in order to investigate the
effects of solid body points while training. This enables MB-IBM-PINN to match
the performance of MB-PINN under certain loss weighting conditions.
MB-PINN is found to be superior to MB-IBM-PINN when a priori knowledge of
the solid body position and velocity is available. To improve the data
efficiency of MB-PINN, a physics based data sampling technique has also been
investigated. It is observed that a suitable combination of physics constraint
relaxation and physics based sampling can achieve a model performance
comparable to the case of using all the data points, under a fixed training
budget.
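The fluid-solid partitioning of the physics losses can be sketched independently of any network: evaluate a physics residual on a grid, then weight fluid and solid points differently before averaging. The analytic divergence-free velocity field, the circular body mask, and the weights below are illustrative stand-ins of ours for IBM data and learned fields:

```python
import numpy as np

x = np.linspace(0.0, np.pi, 101)
y = np.linspace(0.0, np.pi, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
# analytic divergence-free field standing in for a reconstructed velocity field
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

# continuity (mass conservation) residual du/dx + dv/dy, ~0 up to FD error
ux = np.gradient(u, x, axis=0)
vy = np.gradient(v, y, axis=1)
residual = ux + vy

# hypothetical circular body and fluid/solid loss partition weights
solid = (X - np.pi / 2) ** 2 + (Y - np.pi / 2) ** 2 < 0.25
w_fluid, w_solid = 1.0, 0.1
loss = np.mean(np.where(solid, w_solid, w_fluid) * residual**2)
```

Down-weighting the solid-region residual, as in the MB-IBM-PINN loss partitioning, prevents points inside the body (where the flow equations do not hold) from dominating the physics loss during training.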