Data-driven model reduction-based nonlinear MPC for large-scale distributed parameter systems
This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record.
Model predictive control (MPC) has been effectively applied in the process industries since the 1990s. Models in the form of closed equation sets are normally needed for MPC, but it is often difficult to obtain such formulations for large nonlinear systems. To extend nonlinear MPC (NMPC) to nonlinear distributed parameter systems (DPS) with unknown dynamics, a data-driven model reduction-based approach is followed. The proper orthogonal decomposition (POD) method is first applied off-line to compute a set of basis functions. A series of artificial neural networks (ANNs) is then trained to efficiently compute the POD time coefficients. NMPC, using sequential quadratic programming, is then applied. The novelty of our methodology lies in the application of POD's highly efficient linear decomposition to convert any distributed multi-dimensional state-space model into a reduced one-dimensional model, dependent only on time, which can be handled effectively as a black box through ANNs. We hence construct a paradigm that allows the application of NMPC to complex nonlinear high-dimensional systems, even input/output systems handled by black-box solvers, with significant computational efficiency. This paradigm combines elements of gain scheduling, NMPC, model reduction, and ANNs for effective control of nonlinear DPS. The stabilization/destabilization of a tubular reactor with recycle is used as an illustrative example to demonstrate the efficiency of our methodology. Case studies with inequality constraints are also presented.
The authors would like to acknowledge the financial support of the EC FP6 project CONNECT [COOP-2006-31638] and the EC FP7 project CAFE [KBBE-212754].
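The off-line POD step described in the abstract amounts to a singular value decomposition of a snapshot matrix: the leading left singular vectors form the spatial basis, and projecting each snapshot onto that basis yields the time coefficients that the ANNs are trained to predict. The following is a minimal NumPy sketch of that decomposition on synthetic data; the snapshot matrix, its dimensions, and the 99% energy threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the spatial state at one time step.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 50))  # 200 spatial points, 50 snapshots

# POD basis via SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Retain enough modes to capture, e.g., 99% of the snapshot energy (assumed cutoff).
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                      # spatial basis functions

# Time coefficients: orthogonal projection of each snapshot onto the basis.
coeffs = basis.T @ snapshots          # shape (r, n_snapshots)

# Reduced-order reconstruction; the error is bounded by the discarded energy.
recon = basis @ coeffs
```

The `coeffs` trajectories are what a reduced model (here, the trained ANNs) must evolve in time; the full field is recovered by multiplying with `basis`.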
Electrical Impedance Tomography: A Fair Comparative Study on Deep Learning and Analytic-based Approaches
Electrical Impedance Tomography (EIT) is a powerful imaging technique with
diverse applications, e.g., medical diagnosis, industrial monitoring, and
environmental studies. The EIT inverse problem is about inferring the internal
conductivity distribution of an object from measurements taken on its boundary.
It is severely ill-posed, necessitating advanced computational methods for
accurate image reconstructions. Recent years have witnessed significant
progress, driven by innovations in analytic-based approaches and deep learning.
This review explores techniques for solving the EIT inverse problem, focusing
on the interplay between contemporary deep learning-based strategies and
classical analytic-based methods. Four state-of-the-art deep learning
algorithms are rigorously examined, harnessing the representational
capabilities of deep neural networks to reconstruct intricate conductivity
distributions. In parallel, two analytic-based methods, rooted in mathematical
formulations and regularisation techniques, are dissected for their strengths
and limitations. These methodologies are evaluated through various numerical
experiments, encompassing diverse scenarios that reflect real-world
complexities. A suite of performance metrics is employed to assess the efficacy
of these methods. These metrics collectively provide a nuanced understanding of
the methods' ability to capture essential features and delineate complex
conductivity patterns. One novel feature of the study is the incorporation of
variable conductivity scenarios, introducing a level of heterogeneity that
mimics textured inclusions. This departure from uniform conductivity
assumptions reflects realistic settings in which tissues or materials exhibit
spatially varying electrical properties. Exploring how each method responds to
such variable conductivity scenarios opens avenues for understanding their
robustness and adaptability.
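Among the classical analytic-based approaches to ill-posed inverse problems like EIT, the simplest baseline is a linearized reconstruction with Tikhonov regularisation: a sensitivity matrix maps conductivity perturbations to boundary voltage perturbations, and the ill-posedness is tamed by a penalty term. Below is a minimal sketch of that idea on synthetic data; the random sensitivity matrix, the inclusion location, and the regularisation weight are illustrative assumptions (in practice the sensitivity matrix comes from a forward solver).

```python
import numpy as np

# Hypothetical linearized EIT setup: J maps pixelwise conductivity
# perturbations to boundary voltage perturbations.
rng = np.random.default_rng(1)
n_meas, n_pix = 120, 400
J = rng.standard_normal((n_meas, n_pix))

# Ground-truth perturbation: a small inclusion of raised conductivity.
true_dsigma = np.zeros(n_pix)
true_dsigma[180:220] = 1.0
dv = J @ true_dsigma + 0.01 * rng.standard_normal(n_meas)  # noisy measurements

# Tikhonov-regularised least squares:
#   (J^T J + lam * I) dsigma = J^T dv
lam = 1.0
dsigma = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ dv)
```

Because there are fewer measurements than pixels, the reconstruction is a smoothed, damped version of the true perturbation; the deep learning methods surveyed in the abstract aim to recover sharper features than such a linear baseline can.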
SCEE 2008 book of abstracts : the 7th International Conference on Scientific Computing in Electrical Engineering (SCEE 2008), September 28 – October 3, 2008, Helsinki University of Technology, Espoo, Finland
This report contains abstracts of presentations given at the SCEE 2008 conference.
The ADMM-PINNs Algorithmic Framework for Nonsmooth PDE-Constrained Optimization: A Deep Learning Approach
We study the combination of the alternating direction method of multipliers
(ADMM) with physics-informed neural networks (PINNs) for a general class of
nonsmooth partial differential equation (PDE)-constrained optimization
problems, where additional regularization can be employed for constraints on
the control or design variables. The resulting ADMM-PINNs algorithmic framework
substantially enlarges the applicable range of PINNs to nonsmooth cases of
PDE-constrained optimization problems. The application of the ADMM makes it
possible to untie the PDE constraints and the nonsmooth regularization terms
for iterations. Accordingly, at each iteration, one of the resulting
subproblems is a smooth PDE-constrained optimization which can be efficiently
solved by PINNs, and the other is a simple nonsmooth optimization problem which
usually has a closed-form solution or can be efficiently solved by various
standard optimization algorithms or pre-trained neural networks. The ADMM-PINNs
algorithmic framework does not require solving PDEs repeatedly, and it is
mesh-free, easy to implement, and scalable to different PDE settings. We
validate the efficiency of the ADMM-PINNs algorithmic framework by different
prototype applications, including inverse potential problems, source
identification in elliptic equations, control-constrained optimal control of
the Burgers equation, and sparse optimal control of parabolic equations.
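The splitting the abstract describes — a smooth subproblem handled by one solver and a nonsmooth subproblem with a closed-form proximal step — is the standard ADMM pattern. The toy sketch below applies it to a least-squares problem with an L1 penalty, where the smooth x-update is a linear solve (standing in for the PINN-solved PDE-constrained subproblem of the paper) and the nonsmooth z-update is soft-thresholding. All problem sizes and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Closed-form proximal operator of t * ||.||_1 (the nonsmooth subproblem).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy smooth part: a least-squares fit. In the ADMM-PINNs framework this
# role is played by a PDE-constrained problem solved with a PINN.
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 30))
x_true = np.zeros(30)
x_true[::10] = 2.0                 # sparse ground truth
b = A @ x_true

lam, rho = 0.1, 1.0                # L1 weight and ADMM penalty (assumed values)
x = np.zeros(30); z = np.zeros(30); w = np.zeros(30)
M = A.T @ A + rho * np.eye(30)
Atb = A.T @ b
for _ in range(200):
    x = np.linalg.solve(M, Atb + rho * (z - w))  # smooth subproblem
    z = soft_threshold(x + w, lam / rho)         # nonsmooth subproblem, closed form
    w = w + x - z                                # scaled dual update
```

After convergence `x` and `z` agree and recover the sparse signal; swapping the linear solve for a PINN training step, and the soft-thresholding for another proximal map or a pre-trained network, gives the structure of the framework described above.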
Variational Physics Informed Neural Networks: the role of quadratures and test functions
In this work we analyze how quadrature rules of different precisions and
piecewise polynomial test functions of different degrees affect the convergence
rate of Variational Physics Informed Neural Networks (VPINN) with respect to
mesh refinement, while solving elliptic boundary-value problems. Using a
Petrov-Galerkin framework relying on an inf-sup condition, we derive an a
priori error estimate in the energy norm between the exact solution and a
suitable high-order piecewise interpolant of a computed neural network.
Numerical experiments confirm the theoretical predictions and highlight the
importance of the inf-sup condition. Our results suggest, somewhat
counterintuitively, that for smooth solutions the best strategy to achieve a
high decay rate of the error consists in choosing test functions of the lowest
polynomial degree, while using quadrature formulas of suitably high precision.
Comment: 20 pages, 22 figures
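The objects the abstract discusses — piecewise polynomial test functions and quadrature rules of a chosen precision — appear in the variational residual that a VPINN minimises. The sketch below evaluates one such residual for the 1D Poisson problem -u'' = f with a piecewise-linear hat test function and elementwise Gauss-Legendre quadrature, using the exact solution in place of a trained network so the residual should vanish up to quadrature error; the mesh width, quadrature order, and test-function placement are illustrative assumptions.

```python
import numpy as np

# Exact solution of -u'' = f on (0,1) with u(0) = u(1) = 0,
# standing in for the neural network approximation.
u = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Piecewise-linear "hat" test function of width h centred at xk (assumed values).
h, xk = 0.1, 0.5
def v(x):  return np.maximum(0.0, 1.0 - np.abs(x - xk) / h)
def dv(x): return np.where(np.abs(x - xk) < h, -np.sign(x - xk) / h, 0.0)

# Gauss-Legendre quadrature applied per element, so the kink of v is respected.
def integrate(g, a, b, n=5):
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * g(x))

# Variational residual r_k = int u' v' dx - int f v dx, split at the kink xk.
r = sum(integrate(lambda x: du(x) * dv(x), a, b)
        - integrate(lambda x: f(x) * v(x), a, b)
        for a, b in [(xk - h, xk), (xk, xk + h)])
```

In a VPINN loss one sums the squares of such residuals over all test functions; the paper's point is that the polynomial degree of v and the precision of the quadrature rule jointly control the attainable convergence rate under mesh refinement.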