
    Control Relevant System Identification Using Orthonormal Basis Filter Models

    Models are extensively used in advanced process control system design and implementation. Nearly all optimal control design techniques, including the widely used model predictive control techniques, rely on a model of the system to be controlled. There are several linear model structures that are commonly used in control relevant problems in process industries. Some of these model structures are: Auto Regressive with Exogenous Input (ARX), Auto Regressive Moving Average with Exogenous Input (ARMAX), Finite Impulse Response (FIR), Output Error (OE) and Box-Jenkins (BJ) models. The selection of the appropriate model structure depends, among other factors, on the consistency of the model parameters, the number of parameters required to describe a system with acceptable accuracy, and the computational load in estimating the model parameters. ARX and ARMAX models suffer from consistency problems in most open-loop identification problems. FIR models require a large number of parameters to describe linear systems with acceptable accuracy. BJ, OE and ARMAX models involve nonlinear optimization in estimating their parameters. In addition, all of the above conventional linear models, except FIR, require the time delay of the system to be estimated separately and included in the estimation of the parameters. Orthonormal Basis Filter (OBF) models have several advantages over the other conventional linear models. They are consistent in parameters for most open-loop identification problems. They are parsimonious in parameters if the dominant pole(s) of the system are used in their development. The model parameters are easily estimated using the linear least-squares method. Moreover, time delay estimation can be easily integrated into the model development. However, several problems are not yet addressed. Some of the outstanding problems are: (i) developing parsimonious OBF models when the dominant poles of the system are not known; (ii) obtaining a better estimate of the time delay for second- or higher-order systems; (iii) including an explicit noise model in the framework of OBF model structures and determining its parameters and multi-step-ahead predictions; and (iv) closed-loop identification problems in this new OBF-plus-noise-model framework. This study presents novel schemes that address the above problems. The first problem is addressed by formulating an iterative scheme where one or two of the dominant pole(s) of the system are estimated and used to develop parsimonious OBF models. A unified scheme is formulated where an OBF deterministic model and an explicit AR or ARMA stochastic (noise) model are developed to address the second problem. The closed-loop identification problem is addressed by developing schemes based on the direct and indirect approaches using OBF-based structures. For all the proposed OBF prediction model structures, methods for estimating the model parameters and computing multi-step-ahead predictions are developed. All the proposed schemes are demonstrated with the help of simulation and real plant case studies. The accuracy of the developed OBF-based models is verified using appropriate validation procedures and residual analysis.
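    The linear least-squares step that makes OBF identification attractive is easy to illustrate. Below is a minimal sketch, assuming a Laguerre basis with a guessed pole and a toy first-order plant (both assumptions for illustration, not taken from the thesis): the input is filtered through the orthonormal filters and the model weights are obtained in a single least-squares solve.

```python
import numpy as np
from scipy.signal import lfilter

# Minimal sketch: fit a Laguerre-basis OBF model by linear least squares.
# The pole `a`, the number of filters, and the toy "true" plant below are
# illustrative assumptions.

rng = np.random.default_rng(0)
N, a, n_filters = 500, 0.7, 4            # samples, Laguerre pole guess, number of basis filters

u = rng.standard_normal(N)               # persistently exciting input
# Toy plant: first-order system plus measurement noise (assumption for the demo)
y = lfilter([0.2], [1.0, -0.8], u) + 0.02 * rng.standard_normal(N)

def laguerre_regressors(u, a, n):
    """Filter the input through a cascade of discrete Laguerre filters."""
    cols = []
    x = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)   # L1(q) u(k)
    cols.append(x)
    for _ in range(n - 1):
        x = lfilter([-a, 1.0], [1.0, -a], x)         # all-pass section (q^-1 - a)/(1 - a q^-1)
        cols.append(x)
    return np.column_stack(cols)

Phi = laguerre_regressors(u, a, n_filters)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # OBF coefficients via least squares
y_hat = Phi @ theta
print("coefficients:", np.round(theta, 3))
print("fit (1 - NRMSE): %.3f" % (1 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean())))
```

    With the pole fixed, the model is linear in its parameters, which is why OBF identification avoids the nonlinear optimization needed for the OE, BJ and ARMAX structures; the iterative scheme described in the abstract refines the pole estimate itself.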

    Hybrid Gauss Pseudospectral and Generalized Polynomial Chaos Algorithm to Solve Stochastic Optimal Control Problems

    A numerical algorithm combining the Gauss Pseudospectral Method (GPM) with a Generalized Polynomial Chaos (gPC) method to solve nonlinear stochastic optimal control problems with constraint uncertainties is presented. The GPM and gPC have been shown to be spectrally accurate numerical methods for solving deterministic optimal control problems and stochastic differential equations, respectively. The gPC uses collocation nodes to sample the random space; the samples are inserted into the differential equations and solved with standard solvers to generate a set of deterministic solutions, which characterize the distribution of the solution through a polynomial representation of the output as a function of the uncertain parameters. The proposed algorithm investigates using GPM optimization software in place of the deterministic differential equation solvers traditionally used in the gPC, providing minimum-cost deterministic solutions that meet path, control, and boundary constraints. A trajectory optimization problem is considered where the objectives are to find the path through a two-dimensional space that minimizes the probability that a vehicle will be 'killed' by lethal threats whose locations are uncertain, and to characterize the effects those uncertainties have on the solution by estimating its statistical properties.
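    The gPC collocation step described above can be sketched in a few lines. In the sketch below, a toy scalar cost function stands in for the GPM optimal-control solve, and the uncertain threat parameter is taken to be standard normal; the function name, the cost expression and the expansion order are all illustrative assumptions.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Minimal sketch of the gPC collocation idea: a deterministic problem is solved
# at Gauss-Hermite quadrature nodes of an uncertain parameter, and the outputs
# are projected onto Hermite polynomials to get a surrogate whose moments
# approximate the statistics of the solution. "deterministic_solve" is a
# stand-in for a GPM optimal-control solve (an assumption for illustration).

def deterministic_solve(xi):
    """Stand-in for a GPM solve whose cost depends on an uncertain parameter xi ~ N(0, 1)."""
    threat_pos = 1.0 + 0.3 * xi                      # uncertain threat location (illustrative)
    return float(np.exp(-0.5 * threat_pos**2))       # toy 'probability of kill' for a fixed path

order, n_nodes = 4, 8
nodes, weights = hermegauss(n_nodes)                 # probabilists' Gauss-Hermite quadrature
weights = weights / np.sqrt(2.0 * np.pi)             # normalize to a standard-normal measure

samples = np.array([deterministic_solve(x) for x in nodes])

# Projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
coeffs = [np.sum(weights * samples * hermeval(nodes, [0.0] * k + [1.0])) / math.factorial(k)
          for k in range(order + 1)]

mean = coeffs[0]
variance = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"gPC estimate of the cost: mean = {mean:.4f}, variance = {variance:.6f}")
```

    Each call to the stand-in solver plays the role of one deterministic GPM solve at a collocation node; the resulting Hermite coefficients give the mean and variance of the output quantity.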

    Design Issues for Generalized Linear Models: A Review

    Generalized linear models (GLMs) have been used quite effectively in the modeling of a mean response under nonstandard conditions, where discrete as well as continuous data distributions can be accommodated. The choice of design for a GLM is a very important task in the development and building of an adequate model. However, one major problem that handicaps the construction of a GLM design is its dependence on the unknown parameters of the fitted model. Several approaches have been proposed in the past 25 years to solve this problem. These approaches, however, have provided only partial solutions that apply in only some special cases, and the problem, in general, remains largely unresolved. The purpose of this article is to focus attention on the aforementioned dependence problem. We provide a survey of various existing techniques dealing with the dependence problem. This survey includes discussions concerning locally optimal designs, sequential designs, Bayesian designs and the quantile dispersion graph approach for comparing designs for GLMs. Published at http://dx.doi.org/10.1214/088342306000000105 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
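    The dependence problem is easy to see numerically. The sketch below uses an illustrative one-covariate logistic model, a guessed parameter vector and three hypothetical candidate designs, and evaluates the D-criterion (log-determinant of the Fisher information) locally at that guess; all of these choices are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the "dependence problem" for GLM designs: for a logistic
# regression the Fisher information, and hence any design criterion, depends on
# the unknown coefficients, so a locally D-optimal design must be evaluated at
# a prior guess beta0.

def log_det_information(design_points, beta0):
    """log|X' W X| for a one-covariate logistic regression at parameter guess beta0."""
    X = np.column_stack([np.ones_like(design_points), design_points])
    prob = 1.0 / (1.0 + np.exp(-(X @ beta0)))
    W = np.diag(prob * (1.0 - prob))                 # GLM weights depend on beta0
    return np.linalg.slogdet(X.T @ W @ X)[1]

beta_guess = np.array([0.0, 1.0])                    # local guess at the unknown parameters
designs = {
    "equally spaced":    np.linspace(-3.0, 3.0, 6),
    "clustered at ends": np.array([-3.0, -3.0, -3.0, 3.0, 3.0, 3.0]),
    "moderate doses":    np.array([-1.5, -1.5, -1.5, 1.5, 1.5, 1.5]),
}
for name, pts in designs.items():
    print(f"{name:>20}: log|M(beta0)| = {log_det_information(pts, beta_guess):.3f}")
```

    Re-running with a different beta_guess can reorder the candidate designs, which is precisely the dependence on unknown parameters that locally optimal, sequential and Bayesian approaches try to work around.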

    Actor-Critic Algorithms for Risk-Sensitive MDPs

    In many sequential decision-making problems, we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance-related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms for estimating the gradient and updating the policy parameters in the ascent direction. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application.
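    A stripped-down version of the mean-variance trade-off can be written as a Monte Carlo policy-gradient update on a two-armed bandit. This is not the paper's incremental actor-critic algorithm; the bandit, the likelihood-ratio gradient estimators and the step sizes below are illustrative assumptions, but the objective being ascended, E[R] - lambda*Var(R), is of the variance-penalized type the paper studies.

```python
import numpy as np

# Minimal sketch of a variance-penalized policy-gradient update on a two-armed
# bandit, using batch Monte Carlo likelihood-ratio estimates of the gradients
# of E[R] and Var(R). All problem data are illustrative assumptions.

rng = np.random.default_rng(1)
# Arm 0: higher mean, high variance; arm 1: lower mean, low variance (assumption)
arm_mean, arm_std = np.array([1.0, 0.8]), np.array([1.0, 0.1])

theta = np.zeros(2)                      # softmax policy parameters
lam, lr, batch = 0.5, 0.05, 256          # risk-aversion weight, step size, batch size

for it in range(300):
    p = np.exp(theta - theta.max()); p /= p.sum()
    actions = rng.choice(2, size=batch, p=p)
    rewards = rng.normal(arm_mean[actions], arm_std[actions])

    # score function: d log pi(a) / d theta = onehot(a) - p
    score = np.eye(2)[actions] - p
    grad_mean = (rewards[:, None] * score).mean(axis=0)          # grad of E[R]
    grad_msq = ((rewards**2)[:, None] * score).mean(axis=0)      # grad of E[R^2]
    grad_var = grad_msq - 2 * rewards.mean() * grad_mean         # grad of Var(R)

    theta += lr * (grad_mean - lam * grad_var)   # ascend E[R] - lambda * Var(R)

p = np.exp(theta - theta.max()); p /= p.sum()
print("final policy probabilities:", np.round(p, 3))
```

    With the variance penalty active, the learned policy favours the lower-variance arm even though its mean reward is smaller, which is the behaviour a risk-sensitive criterion is meant to induce.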

    Optimal treatment allocations in space and time for on-line control of an emerging infectious disease

    A key component in controlling the spread of an epidemic is deciding where, when and to whom to apply an intervention. We develop a framework for using data to inform these decisions in real time. We formalize a treatment allocation strategy as a sequence of functions, one per treatment period, that map up-to-date information on the spread of an infectious disease to a subset of locations where treatment should be allocated. An optimal allocation strategy optimizes some cumulative outcome, e.g. the number of uninfected locations, the geographic footprint of the disease or the cost of the epidemic. Estimation of an optimal allocation strategy for an emerging infectious disease is challenging because spatial proximity induces interference between locations, the number of possible allocations is exponential in the number of locations, and because disease dynamics and intervention effectiveness are unknown at outbreak. We derive a Bayesian on-line estimator of the optimal allocation strategy that combines simulation-optimization with Thompson sampling. The estimator proposed performs favourably in simulation experiments. This work is motivated by and illustrated using data on the spread of white-nose syndrome, which is a highly fatal infectious disease devastating bat populations in North America.
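    The interplay of Thompson sampling and simulation-optimization can be sketched on a toy network. Everything in the snippet below is an illustrative assumption rather than the paper's model: five locations on a line, a budget of two treatments per period, a single unknown transmission probability with a Beta posterior, and allocations scored by brute-force forward simulation.

```python
import numpy as np
from itertools import combinations

# Minimal sketch of Thompson sampling plus simulation-optimization for treatment
# allocation: each period, draw the unknown transmission probability from its
# posterior, score every feasible allocation by forward simulation under that
# draw, apply the best-scoring allocation, then update the posterior with the
# observed spread. All model details are illustrative assumptions.

rng = np.random.default_rng(7)
n_loc, budget, periods = 5, 2, 15
true_p = 0.45                                        # unknown per-neighbour transmission prob
infected = np.zeros(n_loc, dtype=bool); infected[0] = True
new_inf_total, at_risk_total = 0, 0                  # sufficient statistics for the Beta posterior

def at_risk_sites(inf, treat):
    """Untreated, uninfected sites with at least one infected neighbour on the line graph."""
    return [j for j in range(n_loc)
            if not inf[j] and not treat[j]
            and ((j > 0 and inf[j - 1]) or (j < n_loc - 1 and inf[j + 1]))]

def step(inf, treat, p, rng):
    """One period of spread: each at-risk site becomes infected with probability p."""
    new = inf.copy()
    for j in at_risk_sites(inf, treat):
        if rng.random() < p:
            new[j] = True
    return new

for t in range(periods):
    p_draw = rng.beta(1 + new_inf_total, 1 + at_risk_total - new_inf_total)  # Thompson draw
    best_treat, best_cost = None, np.inf
    for subset in combinations(range(n_loc), budget):            # enumerate feasible allocations
        treat = np.zeros(n_loc, dtype=bool); treat[list(subset)] = True
        cost = np.mean([step(infected, treat, p_draw, rng).sum() for _ in range(100)])
        if cost < best_cost:
            best_treat, best_cost = treat, cost
    risk = at_risk_sites(infected, best_treat)
    after = step(infected, best_treat, true_p, rng)               # true epidemic advances
    new_inf_total += int(after.sum() - infected.sum())
    at_risk_total += len(risk)
    infected = after

print("infected locations after", periods, "periods:", int(infected.sum()))
```

    Posterior sampling keeps the allocation rule exploring while the transmission probability is still uncertain, and each period's chosen allocation is the one that looks best under the sampled parameters, mirroring the combination of simulation-optimization and Thompson sampling described in the abstract.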

    Control Variates for Reversible MCMC Samplers

    A general methodology is introduced for the construction and effective application of control variates to estimation problems involving data from reversible MCMC samplers. We propose the use of a specific class of functions as control variates, and we introduce a new, consistent estimator for the values of the coefficients of the optimal linear combination of these functions. The form and proposed construction of the control variates is derived from our solution of the Poisson equation associated with a specific MCMC scenario. The new estimator, which can be applied to the same MCMC sample, is derived from a novel, finite-dimensional, explicit representation for the optimal coefficients. The resulting variance-reduction methodology is primarily applicable when the simulated data are generated by a conjugate random-scan Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that the corresponding reduction in the estimation variance is significant, and that in some cases it can be quite dramatic. Extensions of this methodology in several directions are given, including certain families of Metropolis-Hastings samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding simulation examples are presented illustrating the utility of the proposed methods. All methodological and asymptotic arguments are rigorously justified under easily verifiable and essentially minimal conditions.
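    A small worked example helps make the construction concrete. In the sketch below, a random-scan Gibbs sampler targets a bivariate Gaussian, the control variates are built as U_j = G_j - P G_j from the basis functions G_1(x) = x1 and G_2(x) = x2 (P being the analytically known one-step kernel), and the coefficients are fitted from the same run by ordinary least squares; the target, the basis and the least-squares fit are illustrative choices rather than the paper's exact coefficient estimator.

```python
import numpy as np

# Minimal sketch of control variates for a random-scan Gibbs sampler targeting
# a bivariate Gaussian with correlation rho. The control variates U_j = G_j - P G_j
# have stationary mean zero by construction; their coefficients are fitted from
# the same run. rho, the run length and the regression fit are illustrative.

rng = np.random.default_rng(3)
rho, n, reps = 0.9, 5000, 30

def run_chain():
    x, out = np.zeros(2), np.empty((n, 2))
    for t in range(n):
        i = rng.integers(2)                          # random scan: pick a coordinate at random
        x[i] = rho * x[1 - i] + np.sqrt(1 - rho**2) * rng.standard_normal()
        out[t] = x
    return out

plain, cv = [], []
for _ in range(reps):
    c = run_chain()
    x1, x2 = c[:, 0], c[:, 1]
    f = x1                                           # estimate E[X1]; the true value is 0
    # U_j = G_j - P G_j, using P x1 = 0.5*x1 + 0.5*rho*x2 (and symmetrically for x2)
    U = np.column_stack([x1 - (0.5 * x1 + 0.5 * rho * x2),
                         x2 - (0.5 * x2 + 0.5 * rho * x1)])
    X = np.column_stack([np.ones(n), U])             # regression with an intercept
    coef, *_ = np.linalg.lstsq(X, f, rcond=None)
    theta = coef[1:]                                 # fitted control-variate coefficients
    plain.append(f.mean())
    cv.append(f.mean() - U.mean(axis=0) @ theta)

print(f"std of plain ergodic averages:    {np.std(plain):.5f}")
print(f"std of control-variate estimates: {np.std(cv):.5f}")
```

    For this linear target the fitted combination reproduces the function of interest almost exactly, so the variance of the modified estimator collapses to essentially zero, an extreme instance of the kind of dramatic reduction the abstract reports; for general targets the reduction is partial.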

    Methodology for Analyzing and Characterizing Error Generation in Presence of Autocorrelated Demands in Stochastic Inventory Models

    Most techniques that describe and solve stochastic inventory problems rely upon the assumption of identically and independently distributed (IID) demands. Stochastic inventory formulations that fail to capture serially-correlated components in the demand lead to serious errors. This dissertation provides a robust method that approximates solutions to the stochastic inventory problem where the control review system is continuous, the demand contains autocorrelated components, and the lost-sales case is considered. A simulation optimization technique based on simulated annealing (SA), pattern search (PS), and ranking and selection (R&S) is developed and used to generate near-optimal solutions. The proposed method accounts for the randomness and dependency of the demand as well as for the inherent constraints of the inventory model. The impact of serially-correlated demand is investigated for discrete and continuous dependent input models. For the discrete dependent model, the autocorrelated demand is assumed to behave as a discrete Markov-modulated chain (DMC), while a first-order autoregressive AR(1) process is assumed for describing the continuous demand. The effects of these demand patterns, combined with structural cost variations, on estimates of both total costs and control policy parameters were examined. Results demonstrated that formulations that ignore the serially-correlated component performed worse than those that considered it. In this setting, the effect of holding cost and its interaction with penalty cost become stronger and more significant as the serially-correlated component increases. The growth rate of the error generated in total costs by formulations that ignore dependency components is significant and fits exponential models. To verify the effectiveness of the proposed simulation optimization method in finding the near-optimal inventory policy, total costs and stockout rates were estimated at different levels of the autocorrelation factor. The results provide additional evidence that serially-correlated components in the demand have a relevant impact on determining inventory control policies and estimating measures of performance.
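    A bare-bones version of the simulation-optimization loop is sketched below: AR(1) demand feeds an (s, S) lost-sales simulation, and a simple simulated-annealing search adjusts the policy. The cost parameters, the AR(1) coefficient, the periodic-review simplification and the annealing schedule are all illustrative assumptions; the dissertation's method additionally uses pattern search and ranking-and-selection under continuous review.

```python
import numpy as np

# Minimal sketch of simulation optimization for an (s, S) lost-sales inventory
# policy under serially-correlated AR(1) demand, searched with a bare-bones
# simulated-annealing loop. All numbers are illustrative assumptions.

rng = np.random.default_rng(11)
phi, mu, sigma = 0.7, 20.0, 5.0          # AR(1) demand: d_t = mu + phi*(d_{t-1} - mu) + eps_t
h, p, K, lead = 1.0, 10.0, 50.0, 2       # holding, lost-sale penalty, fixed order cost, lead time

def ar1_demand(T, rng_local):
    d, out = mu, np.empty(T)
    for t in range(T):
        d = mu + phi * (d - mu) + rng_local.normal(0.0, sigma)
        out[t] = max(d, 0.0)             # demand cannot be negative
    return out

def avg_cost(s, S, T=3000, seed=0):
    """Simulate the (s, S) lost-sales policy and return the average cost per period."""
    rng_local = np.random.default_rng(seed)
    demand = ar1_demand(T, rng_local)
    on_hand, orders, cost = float(S), [], 0.0
    for t in range(T):
        on_hand += sum(q for due, q in orders if due == t)   # receive arriving orders
        orders = [(due, q) for due, q in orders if due > t]
        position = on_hand + sum(q for _, q in orders)
        if position <= s:                                    # reorder up to S
            orders.append((t + lead, S - position))
            cost += K
        sold = min(on_hand, demand[t])
        cost += p * (demand[t] - sold)                       # lost-sales penalty
        on_hand -= sold
        cost += h * on_hand                                  # holding cost on ending stock
    return cost / T

# Bare-bones simulated annealing over integer (s, S) pairs.
s, S, temp = 20, 80, 5.0
cur = avg_cost(s, S, seed=1)
for it in range(200):
    s_new = int(max(0, s + rng.integers(-5, 6)))
    S_new = int(max(s_new + 1, S + rng.integers(-5, 6)))
    cand = avg_cost(s_new, S_new, seed=1)                    # common random numbers
    if cand < cur or rng.random() < np.exp((cur - cand) / temp):
        s, S, cur = s_new, S_new, cand
    temp *= 0.97
print(f"(s, S) found: ({s}, {S}), estimated average cost per period: {cur:.2f}")
```

    Re-running the same search with phi = 0 (independent demand) and then applying the resulting policy to correlated demand illustrates the kind of cost error, from ignoring the serially-correlated component, that motivates the dissertation.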