
    Control vector parameterization with sensitivity based refinement applied to baking optimization

    In bakery production, product quality attributes such as crispness, brownness, crumb and water content are developed by the transformations that occur during baking, which are initiated by heating. A quality-driven procedure requires process optimization to improve bakery production and to find operating procedures for new products. Control vector parameterization (CVP) is an effective method for this optimization; however, accurate optimization with a large number of parameters demands a long computation time. In this work, an improved method for direct dynamic optimization using CVP is presented. The method uses a sensitivity-based step-size refinement for the selection of control input parameters. The optimization starts with a coarse discretization level for the control input in time. In successive iterations the step size is refined for those parameters for which the performance index has a sensitivity above a threshold value. With this selection, optimization continues for the selected group of input parameters while the other, non-sensitive parameters (below threshold) are kept constant. Increasing the threshold value lowers the computation time, but the obtained performance index also deteriorates. A threshold value in the range of 10–20% of the mean sensitivity works well. The method gives a better solution at lower computational effort than a single-run optimization with a large number of parameters or refinement procedures without selection.
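
    As one concrete reading of the selection step described above, the sketch below (Python/NumPy; the function names, the 15% threshold and the midpoint refinement rule are illustrative assumptions, not the authors' implementation) keeps only those control parameters whose performance-index sensitivity exceeds a fraction of the mean absolute sensitivity, and refines the time grid only over the intervals they control:

```python
import numpy as np

def select_for_refinement(sensitivities, frac=0.15):
    """Return the indices of control intervals whose performance-index
    sensitivity |dJ/du_i| exceeds frac * mean(|dJ/du|).  frac = 0.15 is a
    value inside the 10-20% range reported above."""
    s = np.abs(np.asarray(sensitivities, dtype=float))
    return np.flatnonzero(s > frac * s.mean())

def refine_grid(t_grid, selected):
    """Insert a midpoint into each selected control interval (one
    sensitivity entry per interval of t_grid), leaving the non-sensitive
    intervals on the coarse grid."""
    midpoints = [0.5 * (t_grid[i] + t_grid[i + 1]) for i in selected]
    return np.sort(np.concatenate([t_grid, midpoints]))
```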

    Iterative design of dynamic experiments in modeling for optimization of innovative bioprocesses

    Finding optimal operating conditions fast with a scarce budget of experimental runs is a key problem in speeding up the development and scale-up of innovative bioprocesses. In this paper, a novel iterative methodology for the model-based design of dynamic experiments in modeling for optimization is developed and successfully applied to the optimization of a fed-batch bioreactor for the production of r-interleukin-11 (rIL-11), whose DNA sequence has been cloned in an Escherichia coli strain. At each iteration, the proposed methodology resorts to a library of tendency models to increasingly bias bioreactor operating conditions towards an optimum. By selecting the ‘most informative’ tendency model at each step, the next dynamic experiment is defined by re-optimizing the input policy and calculating optimal sampling times. Model selection is based on minimizing an error measure which distinguishes between parametric and structural uncertainty, so as to selectively bias data gathering towards improved operating conditions. The parametric uncertainty of tendency models is iteratively reduced using Global Sensitivity Analysis (GSA) to pinpoint which parameters are key for estimating the objective function. Results obtained after just a few iterations are very promising. Fil: Cristaldi, Mariano Daniel; Grau, Ricardo José Antonio; Martínez, Ernesto Carlos (CONICET, Santa Fe, Argentina).
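
    A minimal sketch of how the iterative loop could be organized, under the assumption that every tendency model exposes hypothetical combined_error, optimize_policy and fit routines (the actual error measure and re-optimization procedure are those of the paper and are not reproduced here):

```python
def iterative_design(models, run_experiment, n_iter=5):
    """Schematic outer loop: pick the 'most informative' tendency model,
    re-optimize the input policy with it, run the dynamic experiment, and
    refit the whole library on the enlarged data set.  combined_error,
    optimize_policy and fit are hypothetical placeholders for the
    operations described in the abstract."""
    data, best = [], models[0]
    for _ in range(n_iter):
        best = min(models, key=lambda m: m.combined_error(data))  # parametric + structural error
        policy, sampling_times = best.optimize_policy()           # define the next dynamic experiment
        data.append(run_experiment(policy, sampling_times))       # one scarce, informative run
        for m in models:
            m.fit(data)                                           # reduce parametric uncertainty
    return best, data
```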

    Monitoring Control Updating Period In Fast Gradient Based NMPC

    In this paper, a method is proposed for on-line monitoring of the control updating period in fast-gradient-based Model Predictive Control (MPC) schemes. Such schemes are currently under intense investigation as a way to accommodate real-time requirements when dealing with systems showing fast dynamics. The method needs only cheap computations that exploit the algorithm's on-line behavior in order to recover the updating period that is optimal in terms of cost function decrease. A simple example of a constrained triple integrator is used to illustrate the proposed method and to assess its efficiency. Comment: 6 pages, 8 figures.
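
    For context, the inner solver in such schemes is typically Nesterov's fast gradient method applied to the box-constrained MPC subproblem. The sketch below shows that solver only, with the iteration budget n_iter standing in for whatever a given control updating period allows; it is an illustrative assumption rather than the paper's code, and the monitoring rule itself is not reproduced:

```python
import numpy as np

def fast_gradient_qp(H, f, lo, hi, n_iter, u0=None):
    """Nesterov fast-gradient method for the box-constrained MPC
    subproblem  min 0.5*u'Hu + f'u  s.t.  lo <= u <= hi  (H assumed
    symmetric positive definite).  A longer updating period buys a larger
    iteration budget n_iter; trading that budget against reaction time is
    what the on-line monitoring scheme adjusts."""
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]                          # strong-convexity / Lipschitz constants
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    u = np.zeros_like(f) if u0 is None else u0.copy()
    y, u_prev = u.copy(), u.copy()
    for _ in range(n_iter):
        u_new = np.clip(y - (H @ y + f) / L, lo, hi)   # projected gradient step
        y = u_new + beta * (u_new - u_prev)            # momentum extrapolation
        u_prev = u_new
    return u_prev
```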

    Connections Between Adaptive Control and Optimization in Machine Learning

    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output-error formulations, similarities in update law modifications are examined. Concepts in stability, performance, and learning that are common to both fields are then discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis are provided. In particular, a specific problem related to higher-order learning is solved through insights obtained from these intersections. Comment: 18 pages.
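
    The simplest instance of this correspondence is the gradient update law for a linear-in-the-parameters output-error model, which in discrete time is literally a stochastic-gradient step. The sketch below is illustrative, with a momentum-filtered variant standing in loosely for "higher-order learning"; it is not the paper's exact algorithm:

```python
import numpy as np

def adaptive_update(theta, phi, y, gamma=0.05):
    """Gradient (MIT-rule-style) parameter update for the linear-in-the-
    parameters output-error model y_hat = theta @ phi.  The discrete-time
    form is exactly a stochastic-gradient step on the loss 0.5 * e**2."""
    e = theta @ phi - y                 # output (prediction) error
    return theta - gamma * e * phi      # gradient step

def momentum_update(theta, v, phi, y, gamma=0.05, beta=0.9):
    """Illustrative 'higher-order' variant: the same output-error gradient
    passed through a momentum state v, mirroring momentum methods in
    machine learning."""
    e = theta @ phi - y
    v = beta * v + gamma * e * phi      # filtered gradient (momentum state)
    return theta - v, v
```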

    A continuous analogue of the tensor-train decomposition

    We develop new approximation algorithms and data structures for representing and computing with multivariate functions using the functional tensor-train (FT), a continuous extension of the tensor-train (TT) decomposition. The FT represents functions using a tensor-train ansatz by replacing the three-dimensional TT cores with univariate matrix-valued functions. The main contribution of this paper is a framework to compute the FT that employs adaptive approximations of univariate fibers, and that is not tied to any tensorized discretization. The algorithm can be coupled with any univariate linear or nonlinear approximation procedure. We demonstrate that this approach can generate multivariate function approximations that are several orders of magnitude more accurate, for the same cost, than those based on the conventional approach of compressing the coefficient tensor of a tensor-product basis. Our approach is in the spirit of other continuous computation packages such as Chebfun, and yields an algorithm which requires the computation of "continuous" matrix factorizations, such as the LU and QR decompositions of vector-valued functions. To support these developments, we describe continuous versions of an approximate maximum-volume cross approximation algorithm and of a rounding algorithm that re-approximates an FT by one of lower ranks. We demonstrate that our technique improves the accuracy and robustness of high-dimensional integration, differentiation, and approximation of functions with local features such as discontinuities and other nonlinearities, compared to TT and quantics-TT approaches with fixed parameterizations.
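
    To make the ansatz concrete: a functional tensor-train represents f(x_1, ..., x_d) as a product A_1(x_1) A_2(x_2) ... A_d(x_d) of univariate matrix-valued cores with boundary ranks equal to one. The evaluation sketch below is an illustrative assumption about how such cores might be stored (plain Python callables), not the data structure used in the paper:

```python
import numpy as np

def ft_eval(cores, x):
    """Evaluate a functional tensor-train at a point x = (x_1, ..., x_d).
    Each entry of `cores` is a univariate matrix-valued function
    A_k: R -> R^{r_{k-1} x r_k}, with r_0 = r_d = 1, so the product of the
    d core matrices collapses to a scalar.  How each A_k is represented
    (e.g. an adaptively approximated fiber) is left abstract here."""
    result = np.ones((1, 1))
    for A, xk in zip(cores, x):
        result = result @ A(xk)          # chain the 1 x r_k running product
    return float(result[0, 0])

# Illustrative rank-2 FT for f(x, y) = sin(x)*cos(y) + x*y:
cores = [
    lambda x: np.array([[np.sin(x), x]]),        # 1 x 2 core
    lambda y: np.array([[np.cos(y)], [y]]),      # 2 x 1 core
]
print(ft_eval(cores, (0.3, 1.2)))  # equals sin(0.3)*cos(1.2) + 0.3*1.2
```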

    Progressive construction of a parametric reduced-order model for PDE-constrained optimization

    An adaptive approach to using reduced-order models as surrogates in PDE-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given Reduced-Order Model (ROM) is defined with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained by sampling the High-Dimensional Model (HDM) at the solutions of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust region based on a residual error indicator to keep the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing sensitivities of the reduced-order model that minimize the distance to the corresponding HDM sensitivities in a suitable norm. The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5, compared to solving the optimization problem using only the HDM, with errors in the optimal solution far less than 0.1%.
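
    A schematic of the outer loop, with hdm.solve, build_rom, rom.optimize and rom.error_indicator as hypothetical placeholders for the HDM solves, ROB training, trust-region-constrained reduced optimization and residual error indicator described above (the acceptance and radius-adjustment rule shown is a generic trust-region update, not necessarily the paper's):

```python
def progressive_rom_optimization(hdm, build_rom, mu0, n_outer=10):
    """Hypothetical sketch of the progressive ROM framework: each reduced
    problem is solved inside a trust region where a residual-based error
    indicator certifies the ROM, and the ROM is retrained from new HDM
    samples collected at the reduced optimum."""
    samples = [hdm.solve(mu0)]                            # initial HDM snapshot
    mu, radius = mu0, 1.0
    for _ in range(n_outer):
        rom = build_rom(samples)                          # train ROB from snapshots so far
        mu_new = rom.optimize(center=mu, radius=radius)   # reduced problem in the trust region
        samples.append(hdm.solve(mu_new))                 # query the HDM at the reduced optimum
        if rom.error_indicator(mu_new) <= radius:         # indicator acceptable: accept, expand
            mu, radius = mu_new, 2.0 * radius
        else:                                             # indicator too large: reject, shrink
            radius *= 0.5
    return mu
```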