3,282 research outputs found
A hands-on introduction to Physics-Informed Neural Networks for solving partial differential equations with benchmark tests taken from astrophysics and plasma physics
I provide an introduction to the application of deep learning and neural
networks for solving partial differential equations (PDEs). The approach, known
as physics-informed neural networks (PINNs), involves minimizing the residual
of the equation evaluated at various points within the domain. Boundary
conditions are incorporated either by introducing soft constraints with
corresponding boundary data values in the minimization process or by strictly
enforcing the solution with hard constraints. PINNs are tested on diverse PDEs
extracted from two-dimensional physical/astrophysical problems. Specifically,
we explore Grad-Shafranov-like equations that capture magnetohydrodynamic
equilibria in magnetically dominated plasmas. Lane-Emden equations that model
the internal structure of stars in self-gravitating hydrostatic equilibrium are
also
considered. The flexibility of the method to handle various boundary conditions
is illustrated through several examples, as well as its ease in solving
parametric and inverse problems. The corresponding Python codes based on
PyTorch/TensorFlow libraries are made available
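The residual-minimization idea described above can be illustrated without any deep-learning machinery. The toy sketch below (not the authors' code) replaces the neural network with a sine-series ansatz whose coefficients are fitted by least squares; the ansatz hard-enforces the boundary conditions, matching the "hard constraint" route mentioned in the abstract.

```python
import numpy as np

# Toy version of the PINN recipe: minimize the PDE residual at collocation
# points. The trial solution is u(x) = sum_k a_k sin(k*pi*x), which satisfies
# u(0) = u(1) = 0 exactly (hard boundary constraints). Model problem:
# u''(x) = -pi^2 sin(pi*x) on [0, 1], with exact solution u(x) = sin(pi*x).

K = 5                                    # number of basis functions
x = np.linspace(0.05, 0.95, 50)          # interior collocation points

# u'' of each basis function at the collocation points. The residual is
# linear in the coefficients, so least squares stands in for the gradient
# descent a real PINN would run on network weights.
A = np.stack([-(k * np.pi) ** 2 * np.sin(k * np.pi * x)
              for k in range(1, K + 1)], axis=1)
b = -np.pi ** 2 * np.sin(np.pi * x)      # forcing term (right-hand side)

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimize summed squared residual

basis = np.stack([np.sin(k * np.pi * x) for k in range(1, K + 1)], axis=1)
u_approx = basis @ coeffs
err = np.max(np.abs(u_approx - np.sin(np.pi * x)))
```

Because the exact solution lies in the span of the basis, the fit recovers the first coefficient as 1 and the rest as 0; a genuine PINN replaces the linear ansatz with a network and the least-squares solve with stochastic optimization of the same residual loss.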
Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications
Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance
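One of the analytical checks the paper alludes to, verifying stability of a controller produced by a learning algorithm, can be sketched for the linear discrete-time case. The example below is purely illustrative (the matrices and gains are hypothetical, not from the paper): a candidate feedback gain is accepted only if the closed-loop spectral radius is below 1.

```python
import numpy as np

# Illustrative stability check: a learned gain K for the discrete-time model
# x[t+1] = A x[t] + B u[t] with u = -K x is stabilizing iff every eigenvalue
# of the closed-loop matrix (A - B K) lies strictly inside the unit circle.

def is_stabilizing(A, B, K, margin=1e-9):
    closed_loop = A - B @ K
    return bool(np.max(np.abs(np.linalg.eigvals(closed_loop))) < 1.0 - margin)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # hypothetical double-integrator-like plant
B = np.array([[0.0],
              [0.1]])
K_good = np.array([[1.0, 2.0]])  # candidate gain (closed-loop radius 0.9)
K_bad = np.array([[0.0, 0.0]])   # no feedback: marginally unstable

ok_good = is_stabilizing(A, B, K_good)
ok_bad = is_stabilizing(A, B, K_bad)
```

A verification tool of the kind the paper describes would run such a certificate after every adaptation step, rejecting updates that leave the closed loop outside the stability region.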
Data comparison schemes for Pattern Recognition in Digital Images using Fractals
Pattern recognition in digital images is a common problem with application in
remote sensing, electron microscopy, medical imaging, seismic imaging and
astrophysics for example. Although this subject has been researched for over
twenty years there is still no general solution which can be compared with the
human cognitive system in which a pattern can be recognised subject to
arbitrary orientation and scale.
The application of Artificial Neural Networks can in principle provide a very
general solution, provided suitable training schemes are implemented.
However, this approach raises some major issues in practice. First, the CPU
time required to train an ANN for a grey-level or colour image can be very
large, especially if the object has a complex structure with no clear
geometrical features, such as those that arise in remote sensing applications.
Secondly, both the core and file-space memory required to represent large
images and their associated data lead to a number of problems in which the use
of virtual memory is paramount.
The primary goal of this research has been to assess methods of image data
compression for pattern recognition using a range of different compression
methods. In particular, this research has resulted in the design and
implementation of a new algorithm for general pattern recognition based on
the use of fractal image compression.
This approach has, for the first time, allowed the pattern recognition problem
to be solved in a way that is invariant to rotation and scale. It allows both
ANNs and correlation to be used, subject to appropriate pre- and
post-processing techniques for digital image processing, an aspect for which a
dedicated programmer's workbench has been developed using X-Designer
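The core step of fractal (PIFS) image compression that the thesis builds on can be sketched compactly. The code below is a minimal grayscale illustration, not the thesis algorithm: each small "range" block is encoded by the best-matching larger "domain" block together with a least-squares affine intensity map; the rotation/scale handling and the ANN/correlation stages described above are omitted.

```python
import numpy as np

# Minimal partitioned-IFS encoder: every r x r "range" block is matched
# against downsampled 2r x 2r "domain" blocks under the affine intensity
# transform q(d) = s*d + o, with (s, o) fitted by least squares.

def encode(img, r=4):
    h, w = img.shape
    d = 2 * r                                   # domain blocks are 2x larger
    domains = []
    for y in range(0, h - d + 1, d):
        for x in range(0, w - d + 1, d):
            blk = img[y:y + d, x:x + d]
            # Downsample to range-block size by 2x2 averaging.
            domains.append(blk.reshape(r, 2, r, 2).mean(axis=(1, 3)))
    code = []
    for y in range(0, h, r):
        for x in range(0, w, r):
            rng = img[y:y + r, x:x + r].ravel()
            best = None
            for i, dom in enumerate(domains):
                dflat = dom.ravel()
                A = np.stack([dflat, np.ones_like(dflat)], axis=1)
                (s, o), *_ = np.linalg.lstsq(A, rng, rcond=None)
                err = np.sum((s * dflat + o - rng) ** 2)
                if best is None or err < best[0]:
                    best = (err, i, s, o)
            code.append(best[1:])               # (domain index, contrast, brightness)
    return code

img = np.arange(64, dtype=float).reshape(8, 8)  # tiny synthetic test image
code = encode(img)                              # 4 range blocks, 1 domain block
```

The compressed representation is just the list of (domain index, s, o) triples; decoding iterates the maps from any starting image, and it is the scale relation between domain and range blocks that gives the method its resolution independence.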
What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives
Intelligent Mesh Generation (IMG) represents a novel and promising field of
research, utilizing machine learning techniques to generate meshes. Despite its
relative infancy, IMG has significantly broadened the adaptability and
practicality of mesh generation techniques, delivering numerous breakthroughs
and unveiling potential future pathways. However, a noticeable void exists in
the contemporary literature concerning comprehensive surveys of IMG methods.
This paper endeavors to fill this gap by providing a systematic and thorough
survey of the current IMG landscape. With a focus on 113 preliminary IMG
methods, we undertake a meticulous analysis from various angles, encompassing
core algorithm techniques and their application scope, agent learning
objectives, data types, targeted challenges, as well as advantages and
limitations. We have curated and categorized the literature, proposing three
unique taxonomies based on key techniques, output mesh unit elements, and
relevant input data types. This paper also underscores several promising future
research directions and challenges in IMG. To augment reader accessibility, a
dedicated IMG project page is available at
\url{https://github.com/xzb030/IMG_Survey}
Hypersonic Vehicle Trajectory Optimization and Control
Two classes of neural networks have been developed for the study of hypersonic vehicle trajectory optimization and control. The first is called an 'adaptive critic'. The uniqueness and main features of this approach are that: (1) it needs no external training; (2) it allows variability of initial conditions; and (3) it can serve as feedback control. It is used to solve a 'free final time' two-point boundary value problem that maximizes the mass at rocket burn-out while satisfying the pre-specified burn-out conditions in velocity, flight-path angle, and altitude. The second neural network is a recurrent network. An interesting feature of this formulation is that when its inputs are the coefficients of the dynamics and control matrices, the network outputs are the Kalman sequences (with a quadratic cost function); the same network is also used for identifying the coefficients of the dynamics and control matrices. Consequently, we can use it to control a system whose parameters are uncertain. Numerical results are presented that illustrate the potential of these methods
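For a quadratic cost, the "Kalman sequences" the recurrent network produces are the time-varying feedback gains of finite-horizon LQR. The sketch below computes the same sequence classically, by the backward Riccati recursion, from the dynamics matrix A and control matrix B; the matrices are illustrative, not from the paper.

```python
import numpy as np

# Backward Riccati recursion for finite-horizon discrete-time LQR:
# gain sequence K_t = (R + B'PB)^{-1} B'PA with cost-to-go update
# P <- Q + A'P(A - B K). This is the target the recurrent network learns.

def lqr_gains(A, B, Q, R, horizon):
    P = Q.copy()                      # terminal cost-to-go
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        gains.append(K)
        P = Q + A.T @ P @ (A - B @ K)
    return gains[::-1]                # reorder to t = 0 .. horizon-1

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical linearized dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
Ks = lqr_gains(A, B, Q, R, horizon=50)

# By t = 0 the gains have essentially converged to the stationary LQR gain,
# so the closed loop A - B K is stable.
rho = np.max(np.abs(np.linalg.eigvals(A - B @ Ks[0])))
```

Feeding (A, B) in and reading the gain sequence out is exactly the input/output contract the abstract describes for the recurrent network, which is what lets the same network double as an identifier when the matrices are uncertain.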
GIT-Net: Generalized Integral Transform for Operator Learning
This article introduces GIT-Net, a deep neural network architecture for
approximating Partial Differential Equation (PDE) operators, inspired by
integral transform operators. GIT-Net harnesses the fact that differential
operators commonly used for defining PDEs can often be represented
parsimoniously when expressed in specialized functional bases (e.g., Fourier
basis). Unlike rigid integral transforms, GIT-Net parametrizes adaptive
generalized integral transforms with deep neural networks. When compared to
several recently proposed alternatives, GIT-Net's computational and memory
requirements scale gracefully with mesh discretizations, facilitating its
application to PDE problems on complex geometries. Numerical experiments
demonstrate that GIT-Net is a competitive neural network operator, exhibiting
small test errors and low evaluation costs across a range of PDE problems. This
stands in contrast to existing neural network operators, which typically excel
in just one of these areas
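The premise GIT-Net generalizes, that differential operators are parsimonious in the right functional basis, can be shown without any learning. On a periodic grid, the second-derivative operator is a dense matrix in physical space but acts diagonally in the Fourier basis, multiplying each mode by minus the squared wavenumber:

```python
import numpy as np

# Diagonal action of d^2/dx^2 in the Fourier basis on a periodic grid.
# A fixed integral transform (the FFT) plays the role that GIT-Net's
# learned, adaptive transform generalizes.

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)                        # test function; exact u'' = -9 sin(3x)

# Angular wavenumbers k = 0, 1, ..., n/2-1, -n/2, ..., -1 on this grid.
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi

# Transform, scale each mode by -(k^2), transform back: one diagonal matrix
# in Fourier space instead of a dense matrix in physical space.
u_xx = np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real

err = np.max(np.abs(u_xx - (-9 * np.sin(3 * x))))
```

Replacing the fixed FFT with a parametrized, neural-network transform, so the basis itself adapts to the problem and the mesh, is the step that distinguishes GIT-Net from rigid spectral methods.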
Machine Learning Techniques for Electrical Validation Enhancement Processes
Post-Silicon system margin validation consumes a significant amount of time and resources. To overcome this, a reduced validation plan for derivative products has previously been used. However, a certain amount of validation is still needed to avoid escapes, and this is prone to subjective bias by the validation engineer comparing a reduced set of derivative validation data against the base product data. Machine Learning techniques make it possible to perform automatic decisions and predictions based on already available historical data. In this work, we present an efficient methodology implemented with Machine Learning to make an automatic risk assessment decision and eye-margin estimation measurements for derivative products, considering a large set of parameters obtained from the base product. The proposed methodology yields high performance on the risk assessment decision and the estimation by regression, which translates into a significant reduction in time, effort, and resources
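The regression step of such a methodology can be sketched in a few lines. Everything below is hypothetical stand-in data, not the paper's parameters or models: a derivative product's eye-margin measurement is estimated by a linear least-squares fit on parameters measured on the base product.

```python
import numpy as np

# Illustrative estimation-by-regression sketch: predict a derivative
# product's eye margin from base-product parameters. The synthetic data
# merely demonstrates the fitting machinery.

rng = np.random.default_rng(0)
n_units, n_params = 200, 4
base_params = rng.normal(size=(n_units, n_params))   # base-product measurements
true_w = np.array([0.8, -0.3, 0.5, 0.1])             # hypothetical ground truth
eye_margin = base_params @ true_w + 2.0 + rng.normal(scale=0.01, size=n_units)

# Ordinary least squares with an intercept column.
X = np.column_stack([base_params, np.ones(n_units)])
w, *_ = np.linalg.lstsq(X, eye_margin, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - eye_margin) ** 2))
```

In practice the risk-assessment decision would sit on top of such an estimator, flagging units whose predicted margin falls below a specification limit for full validation instead of the reduced plan.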
Computer Simulation of a Nitric Oxide-Releasing Catheter with a Novel Stable Convection-Diffusion Equation Solver and Automatic Quantification of Lung Ultrasound Comets by Machine Learning
Biological transport processes often involve a boundary acting as a separation of flow, most commonly in transport involving blood-contacting medical devices. The separation of flow creates two different scenarios of mass transport across the interface. No flow exists within the medical device, and diffusion governs mass transport; both convection and diffusion exist when flow is present. The added convection creates a large concentration gradient around the interface. Computer simulation of such cases proves to be difficult and requires proper shock-capturing methods for the solutions to be stable, which are typically lacking in commercial solvers. In this thesis, we propose a second-order accurate numerical method for solving the convection-diffusion equation by using a gradient-limited Godunov-type convective flux and the multi-point flux approximation (MPFA) L-method for the diffusion flux. We applied our solver to the simulation of a nitric oxide-releasing intravascular catheter.
Intravascular catheters are essential for long-term vascular access in both diagnosis and treatment. Use of catheters is associated with risks of infection and thrombosis. Because infection and thrombosis lead to impaired flow and potentially life-threatening systemic infections, they increase morbidity and mortality, requiring catheters to be replaced, among other treatments for these complications. Nitric oxide (NO) is a potent antimicrobial and antithrombotic agent produced by vascular endothelial cells. The production level in vivo is so low that the physiological effects can only be seen around the endothelial cells. The catheter can incorporate an NO source in two major ways: by impregnating the catheter with NO-releasing compounds such as S-nitroso-N-acetylpenicillamine (SNAP) or by using electrochemical reactions to generate NO from nitrites. We applied our solver to both situations to guide the design of the catheter. Simulations revealed that dissolved NO inside the catheter is depleted after 12 minutes without resupply, and electrochemical release of NO requires 10.5 minutes to reach steady state.
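The flavor of such a convection-diffusion solver can be conveyed by a greatly simplified 1D sketch. This is not the thesis method: the gradient limiter, the second-order reconstruction, and the MPFA L-method for unstructured meshes are all omitted, and the parameters are illustrative. What remains is the combination the thesis describes, an upwind (Godunov-type) convective flux plus a separate diffusive flux, in conservative finite-volume form.

```python
import numpy as np

# Explicit finite-volume update for dc/dt + v dc/dx = D d2c/dx2 on [0, L]
# with zero-flux walls: first-order upwind convective flux, central
# diffusive flux, forward-Euler time stepping under a stability constraint.

nx, L = 100, 1.0
dx = L / nx
v, D = 1.0, 0.001                          # convection speed, diffusivity
dt = 0.4 * min(dx / v, dx * dx / (2 * D))  # stable explicit time step

xc = np.linspace(dx / 2, L - dx / 2, nx)   # cell centers
c = np.exp(-((xc - 0.2) ** 2) / 0.005)     # initial concentration pulse
mass0 = c.sum() * dx

for _ in range(100):
    f_conv = v * c[:-1]                    # upwind: v > 0 takes the left cell
    f_diff = -D * (c[1:] - c[:-1]) / dx    # central diffusive flux
    flux = np.concatenate([[0.0], f_conv + f_diff, [0.0]])  # zero-flux walls
    c = c - dt / dx * (flux[1:] - flux[:-1])

mass = c.sum() * dx                        # conserved exactly by the flux form
peak = xc[np.argmax(c)]                    # pulse advected from x=0.2 toward 0.6
```

The conservative flux-difference form guarantees that mass is preserved to machine precision, and the upwind choice keeps the solution free of the spurious oscillations that motivate the shock-capturing machinery in the full method.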
Lung edema is often present in patients with end-stage renal disease due to reduced filtration functions of the kidney. These patients require regular dialysis sessions to manage their fluid status. The clinical gold standard to quantify lung edema is to use CT, which exposes patients to high amounts of radiation and is not cost efficient. Fluid management in such patients becomes very challenging without a clear guideline of fluid to be removed during dialysis sessions. Hypotension during dialysis can limit fluid removal, even in the setting of ongoing fluid overload or congestive heart failure. Accurate assessment of the pulmonary fluid status is needed, so that fluid overload and congestive heart failure can be detected, especially in the setting of hypotension, allowing dialysis to be altered to improve fluid removal.
Recently, reverberations in ultrasound signals, referred to as ``lung comets,'' have emerged as a potential quantitative way to measure lung edema. Increased presence of lung comets is associated with higher amounts of pulmonary edema, higher mortality, and more adverse cardiac events. However, lung comets are often counted by hand by physicians from single frames of lung ultrasound, and the counts have been found to be highly subjective across physicians. We applied image processing and neural network techniques in an attempt to provide an objective and accurate measurement of the amount of lung comets present. Our quantitative results are significantly correlated with diastolic blood pressure and ejection fraction.
PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/163182/1/micw_1.pd
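A crude version of the objective-counting idea can be sketched on synthetic data. This is an illustration only, not the thesis pipeline: lung comets appear as bright vertical streaks, so the sketch thresholds the per-column mean brightness and counts connected runs of bright columns.

```python
import numpy as np

# Toy comet counter on a synthetic "ultrasound frame": each run of
# consecutive columns whose mean brightness exceeds a threshold is counted
# as one candidate comet.

def count_comets(frame, thresh=0.5):
    col_mean = frame.mean(axis=0)              # brightness profile across columns
    bright = col_mean > thresh
    # A run starts wherever a bright column follows a dark one (or at col 0).
    return int(np.sum(bright[1:] & ~bright[:-1]) + bright[0])

frame = np.zeros((64, 64))
frame[:, 10:13] = 1.0                          # two synthetic vertical streaks
frame[:, 40:42] = 1.0
count = count_comets(frame)                    # -> 2
```

Real frames need the image-processing and neural-network stages the thesis describes to handle noise, pleural-line localization, and partially merged comets, but the output contract, a single objective count per frame, is the same.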