Guaranteed parameter estimation in nonlinear dynamic systems using improved bounding techniques
This paper is concerned with guaranteed parameter estimation in nonlinear dynamic systems in a context of bounded measurement error. The problem consists of finding - or approximating as closely as possible - the set of all possible parameter values such that the predicted outputs match the corresponding measurements within prescribed error bounds. An exhaustive search procedure is applied, whereby the parameter set is successively partitioned into smaller boxes and exclusion tests are performed to eliminate some of these boxes, until a prespecified threshold on the approximation level is met. Exclusion tests rely on the ability to bound the solution set of the dynamic system for a given parameter subset, and the tightness of these bounds is therefore paramount. Equally important is the time required to compute the bounds, thereby defining a trade-off. It is the objective of this paper to investigate this trade-off by comparing various bounding techniques based on interval arithmetic, Taylor model arithmetic and ellipsoidal calculus. When applied to a simple case study, ellipsoidal and Taylor model approaches are found to reduce the number of iterations significantly compared to interval analysis, yet the overall computational time is only reduced for tight approximation levels due to the computational overhead. © 2013 EUCA.
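The partition-and-exclude procedure described above can be sketched in a few lines. The following is a hypothetical toy example, not the paper's implementation: it bounds the parameter p of the scalar model y(t) = exp(-p t) that reproduces interval measurements, using the monotonicity of the model to get exact interval bounds on each parameter box.

```python
# Minimal SIVIA-style set-inversion sketch (toy example, not the paper's code):
# find all p such that y(t) = exp(-p * t) matches interval measurements.
import math

def model_bounds(p_lo, p_hi, t):
    # exp(-p*t) is monotonically decreasing in p for t > 0,
    # so exact interval bounds follow from the box endpoints.
    return math.exp(-p_hi * t), math.exp(-p_lo * t)

def sivia(p_lo, p_hi, data, eps):
    """Partition [p_lo, p_hi] into accepted and undecided boxes."""
    accepted, undecided, stack = [], [], [(p_lo, p_hi)]
    while stack:
        lo, hi = stack.pop()
        inside = True
        for t, y_lo, y_hi in data:
            m_lo, m_hi = model_bounds(lo, hi, t)
            if m_hi < y_lo or m_lo > y_hi:       # exclusion test: disjoint
                inside = None
                break
            if not (y_lo <= m_lo and m_hi <= y_hi):
                inside = False                    # overlaps but not contained
        if inside is None:
            continue                              # box eliminated
        if inside:
            accepted.append((lo, hi))             # box entirely consistent
        elif hi - lo < eps:
            undecided.append((lo, hi))            # approximation threshold met
        else:
            mid = 0.5 * (lo + hi)
            stack += [(lo, mid), (mid, hi)]       # bisect and recurse
    return accepted, undecided

# Measurements generated from p* = 0.5 with +/-0.05 error bounds.
data = [(t, math.exp(-0.5 * t) - 0.05, math.exp(-0.5 * t) + 0.05)
        for t in (1.0, 2.0, 4.0)]
acc, und = sivia(0.0, 2.0, data, eps=1e-3)
```

Tighter bounding methods (Taylor models, ellipsoids) would replace `model_bounds`, which is exactly where the trade-off studied in the paper arises.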
Chebyshev model arithmetic for factorable functions
This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair: a multivariate polynomial approximating the factorable function, and an interval remainder term bounding the gap between the function and this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but this comes at the price of extra computational burden.
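To illustrate what a Chebyshev model stores, the following toy sketch (not the MC++ implementation) builds the polynomial part by Chebyshev interpolation and then estimates the remainder interval by dense sampling; a real Chebyshev model would instead propagate a rigorous remainder enclosure through each factorable operation.

```python
# Toy Chebyshev "model" of exp on [-1, 1]: a degree-5 polynomial approximant
# plus a sampled (non-rigorous) estimate of the remainder interval.
import numpy as np

order = 5
# Interpolate exp at the Chebyshev points of the first kind on [-1, 1].
coeffs = np.polynomial.chebyshev.chebinterpolate(np.exp, order)

# Estimated remainder interval bounding exp(x) - poly(x) over [-1, 1].
x = np.linspace(-1.0, 1.0, 10_001)
err = np.exp(x) - np.polynomial.chebyshev.chebval(x, coeffs)
remainder = (err.min(), err.max())
width = remainder[1] - remainder[0]
```

Even at order 5 the remainder width is on the order of 1e-4, which is the tightness advantage over a Taylor expansion of the same degree that the case studies quantify.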
Batch process optimization via run-to-run constraints adaptation
In the batch process industry, the available models carry a large amount of uncertainty and can seldom be used to directly optimize real processes. Several measurement-based optimization methods have been proposed to deal with model mismatch and process disturbances. Constraints often play a dominant role in the dynamic optimization of batch processes. In their presence, the optimal input profiles are characterized by a set of arcs, switching times and active path and terminal constraints. This paper presents a novel method tailored to those problems where the potential of optimization arises mainly from the correct set of path and terminal constraints being active. The input profiles are computed between successive runs by dynamic optimization of a fixed nominal model, and the constraints in the optimization problem are adapted using measured information from previous batches. Note that, unlike many existing optimization schemes, the measurements are not used to update the process model. Moreover, the proposed approach has the potential to uncover the optimal input structure. This is demonstrated on a simple semi-batch reactor example. © 2007 EUCA.
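The key idea - shift the constraint bound by the measured model-plant mismatch instead of updating the model - can be sketched on a scalar problem. This is a hypothetical illustration, not the paper's semi-batch case study: the nominal model under-predicts the plant constraint, yet the run-to-run bias correction still drives the plant constraint to its true limit.

```python
# Scalar sketch of run-to-run constraint adaptation (hypothetical example):
# the nominal model is never updated; only the constraint bound is shifted
# by the mismatch measured on the previous batch.
import math

def model_g(u):  return u * u            # nominal constraint model
def plant_g(u):  return u * u + 0.2 * u  # "measured" plant constraint

def optimize_nominal(bound):
    # maximize u subject to model_g(u) <= bound (closed form for this toy case)
    return math.sqrt(max(bound, 0.0))

bias = 0.0
for run in range(20):
    u = optimize_nominal(1.0 - bias)     # nominal optimization, adapted bound
    bias = plant_g(u) - model_g(u)       # mismatch measured after the batch

# At the fixed point the *plant* constraint is active: plant_g(u) == 1.
```

The iteration converges to the plant optimum even though `model_g` remains wrong throughout, which is precisely the point of adapting constraints rather than the model.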
Global optimization in Hilbert space
We propose a complete-search algorithm for solving a class of non-convex, possibly infinite-dimensional, optimization problems to global optimality. We assume that the optimization variables are in a bounded subset of a Hilbert space, and we determine worst-case run-time bounds for the algorithm under certain regularity conditions of the cost functional and the constraint set. Because these run-time bounds are independent of the number of optimization variables and, in particular, are valid for optimization problems with infinitely many optimization variables, we prove that the algorithm converges to an ε-suboptimal global solution within finite run-time for any given termination tolerance ε > 0. Finally, we illustrate these results for a problem from the calculus of variations.
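The notion of complete search to ε-suboptimality can be made concrete in one dimension. The sketch below is a finite-dimensional illustration of the idea, not the paper's Hilbert-space algorithm: branch-and-bound with a Lipschitz-based lower bound, terminating once no box can improve the incumbent by more than ε.

```python
# Complete search to epsilon-suboptimality on an interval (illustrative
# finite-dimensional analogue, not the infinite-dimensional algorithm).
import math

def complete_search(f, lipschitz, lo, hi, eps):
    """Return a point whose value is within eps of the global minimum of f."""
    best_x, best_f = lo, f(lo)
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        # Valid lower bound on f over [a, b] from the Lipschitz constant:
        # f(x) >= f(mid) - L * |x - mid| >= f(mid) - L * (b - a) / 2.
        lower = fm - lipschitz * 0.5 * (b - a)
        if lower < best_f - eps:          # box may still improve: split it
            boxes += [(a, mid), (mid, b)]
    return best_x, best_f

# Multimodal test function on [0, 4]; |f'| <= 3 + 0.4 <= 3.5 on this interval.
f = lambda x: math.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2
x_star, f_star = complete_search(f, lipschitz=3.5, lo=0.0, hi=4.0, eps=1e-4)
```

The finite run-time guarantee mirrors the paper's result: boxes stop being split once their width falls below 2ε/L, so termination is assured for any ε > 0.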
Polymineralic rear zone of the near-vein metasomatic halo at the Zun-Kholba mesothermal gold deposit (Eastern Sayan)
Estimating the parameters of a dynamical system based on measurements is an important task in industrial and scientific practice. Since a model's quality is directly linked to its parameter values, obtaining globally rather than locally optimal values is especially important in this context. In practice, however, local methods are used almost exclusively. This is mainly due to the high computational cost of global dynamic parameter estimation, which limits its application to relatively small problems comprising no more than a few equations and parameters. In addition, there is still a lack of software packages that allow global parameter estimation in dynamical systems without expert knowledge. Therefore, we propose an efficient computational method for obtaining globally optimal parameter estimates of dynamical systems using well-established, user-friendly software packages. The method is based on the so-called incremental identification procedure, in combination with deterministic global optimization tools for nonlinear programs.
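The incremental identification idea can be sketched on a hypothetical toy problem: first estimate the state derivatives from the measured trajectory, then solve the resulting algebraic estimation problem. When the model is linear in the parameters, that second stage is a least-squares problem whose global optimum is available in closed form, which is what makes the decomposition attractive.

```python
# Two-stage incremental identification sketch (toy problem, not the paper's
# method end to end): derivative estimation followed by algebraic fitting.
import numpy as np

# Synthetic data from dx/dt = -p * x with true p = 0.7.
p_true = 0.7
t = np.linspace(0.0, 2.0, 201)
x = np.exp(-p_true * t)

# Stage 1: estimate dx/dt from the measurements (central differences here;
# in practice a smoothing/regularization step would precede this).
dxdt = np.gradient(x, t)

# Stage 2: fit dx/dt = -p * x by linear least squares - globally optimal
# in closed form for this parameter-linear model.
p_hat = -np.dot(dxdt, x) / np.dot(x, x)
```

For models that are nonlinear in the parameters, stage 2 becomes a (low-dimensional, algebraic) nonlinear program, which is where the deterministic global optimization tools mentioned above come in.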
- …