
    Solvability in Discrete, Nonstationary, Infinite Horizon Optimization

    For several time-staged operations management problems, the optimal immediate decision depends on the choice of problem horizon. When that horizon is very long or indefinite, an appropriate modeling technique is infinite horizon optimization. For problems that have stationary data over time, optimizing system performance over an infinite horizon is generally no more difficult than optimizing over a finite horizon. However, restricting problem data to be stationary can render the models unrealistic, failing to capture nonstationary aspects of the real world. The primary difficulty in nonstationary, infinite horizon optimization is that the problem to solve can never be known in its entirety. Thus, solution techniques must rely upon increasingly longer finite horizon problems. Ideally, the optimal immediate decisions to these finite horizon problems converge to an infinite horizon optimum. When finite detection of that optimal decision is possible, we call the underlying infinite horizon problem well-posed. The literature on nonstationary, infinite horizon optimization has generally relied upon either uniqueness of the optimal immediate decision or monotonicity of that decision as a function of horizon length. In this thesis, we require neither of these, instead developing a more general structural condition called coalescence that is equivalent to well-posedness. Chapters 2-4 study infinite horizon variants of three deterministic optimization applications: concave cost production planning, single machine replacement, and capacitated inventory planning. For each problem, we show that coalescence is equivalent to well-posedness. We also give a solution procedure for each application that will uncover an infinite horizon optimal immediate decision for any well-posed problem. In Chapter 5, we generalize the results of these applications to a generic class of optimization problems expressible as dynamic programs.
    Under two different sets of assumptions concerning the finiteness of and reachability between states, we show that coalescence and well-posedness are equivalent. We also give solution procedures that solve any well-posed problem under each set of assumptions. Finally, in Chapter 6, we introduce a stochastic application, the infinite horizon asset selling problem, again show that coalescence and well-posedness are equivalent, and give a solution procedure to solve any such well-posed problem.
    Ph.D. Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60810/1/tlortz_1.pd
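    The solution idea described above — solve ever-longer finite horizon problems and stop once the optimal immediate decision stabilizes — can be sketched on a toy discounted production-planning instance. All data below is hypothetical, and the simple repetition check is only a heuristic: in the thesis it is the coalescence condition, not repetition, that guarantees finite detection for well-posed problems.

```python
from functools import lru_cache

# Hypothetical nonstationary data: periodic demand and a slowly drifting unit cost.
def demand(t):
    return 2 + (t % 2)          # 2, 3, 2, 3, ...

def prod_cost(q, t):
    # setup charge plus mildly time-varying unit cost
    return 0.0 if q == 0 else 5.0 + (1.0 + 0.01 * (t % 3)) * q

HOLD, CAP, BETA = 0.5, 6, 0.8   # holding cost, capacity, discount factor

def solve_horizon(T):
    """Backward DP for the T-period problem; returns (optimal cost, first decision)."""
    @lru_cache(maxsize=None)
    def V(t, inv):
        if t == T:
            return (0.0, None)
        best = (float("inf"), None)
        for q in range(CAP + 1):
            nxt = inv + q - demand(t)
            if nxt < 0:                      # demand must be met each period
                continue
            c = prod_cost(q, t) + HOLD * nxt + BETA * V(t + 1, nxt)[0]
            if c < best[0]:
                best = (c, q)
        return best
    return V(0, 0)

def first_decision(max_T=25, window=5):
    """Roll the horizon forward; report the immediate decision once it repeats
    over `window` consecutive horizons, or None if no stable decision is
    detected within max_T (loosely mirroring a problem that is not well-posed)."""
    decisions = [solve_horizon(T)[1] for T in range(1, max_T + 1)]
    for i in range(len(decisions) - window + 1):
        if len(set(decisions[i:i + window])) == 1:
            return decisions[i]
    return None
```

    The discount factor makes end-of-horizon effects fade, which is what lets the first-period decision settle; the thesis treats the harder question of when such finite detection is possible at all.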

    Optimal Output Regulation for Square, Over-Actuated and Under-Actuated Linear Systems

    This paper considers two different problems in trajectory tracking control for linear systems. First, if the control is not unique, which control is most input-energy efficient? Second, if exact tracking is infeasible, which control performs most accurately? These are typical challenges for over-actuated systems and for under-actuated systems, respectively. We formulate both goals as optimal output regulation problems. Then we contribute two new sets of regulator equations to output regulation theory that provide the desired solutions. A thorough study establishes solvability and uniqueness under weak assumptions. For example, we can always determine the solution of the classical regulator equations that is most input-energy efficient, which is of great value when there are infinitely many solutions. We derive our results by a linear quadratic tracking approach and establish a useful link to output regulation theory.
    Comment: 8 pages, 0 figures, final version to appear in IEEE Transactions on Automatic Control
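    For reference, the classical regulator equations the abstract builds on can be stated in textbook output-regulation notation (plant \( \dot{x} = Ax + Bu + Pw \), exosystem \( \dot{w} = Sw \), tracking error \( e = Cx + Du + Qw \)); the symbols here follow standard convention and are not necessarily the paper's own:

```latex
% Classical (Francis) regulator equations: find \Pi, \Gamma such that
\Pi S = A \Pi + B \Gamma + P, \qquad
0 = C \Pi + D \Gamma + Q.
% A feedforward u = \Gamma w (plus any stabilizing feedback on x - \Pi w)
% achieves exact asymptotic tracking. When B has a nontrivial null space
% (over-actuation), \Gamma is non-unique -- exactly the degree of freedom
% an energy-optimal selection can exploit.
```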

    Optimal control of linear, stochastic systems with state and input constraints

    In this paper we extend the work presented in our previous papers (2001), where we considered optimal control of a linear, discrete-time system subject to input constraints and stochastic disturbances. Here we study essentially the same problem but additionally consider state constraints. We discuss several approaches for incorporating state constraints in a stochastic optimal control problem. In particular, we consider a soft-constraint formulation in which state-constraint violation is punished by a hefty penalty in the cost function. Because of the stochastic nature of the problem, the penalty on state-constraint violation cannot be made arbitrarily high. We derive a condition on the growth of the state-violation cost that has to be satisfied for the optimization problem to be solvable. This condition gives a link between the problem that we consider and the well-known H∞ control problem.
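    A minimal sketch of the soft-constraint idea, assuming a hypothetical scalar system and a saturated linear feedback (none of the numbers or the policy come from the paper; the quadratic weight rho below merely plays the role of the state-violation penalty):

```python
import random

# Hypothetical scalar system x_{t+1} = a*x_t + b*u_t + w_t with Gaussian noise,
# an input constraint |u| <= u_max, and a soft state constraint x <= x_max whose
# violation is penalized quadratically with weight rho in the stage cost.
a, b, u_max, x_max = 1.2, 1.0, 1.0, 2.0

def stage_cost(x, u, rho):
    violation = max(0.0, x - x_max)
    return x * x + u * u + rho * violation * violation

def avg_cost(gain, rho, T=50, reps=2000):
    """Monte Carlo average cost of the saturated feedback u = clip(-gain*x, +-u_max)."""
    random.seed(0)                     # same noise for every rho, for comparability
    total = 0.0
    for _ in range(reps):
        x = 0.0
        for _ in range(T):
            u = max(-u_max, min(u_max, -gain * x))
            total += stage_cost(x, u, rho)
            x = a * x + b * u + random.gauss(0.0, 0.3)
    return total / reps
```

    Because the penalty term is nonnegative, the cost of a fixed policy is nondecreasing in rho; the paper's growth condition concerns the harder question of when the penalized stochastic problem remains solvable at all as the penalty grows.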

    Controllability Metrics on Networks with Linear Decision Process-type Interactions and Multiplicative Noise

    This paper studies controllability properties and induced controllability metrics on complex networks governed by a class of (discrete-time) linear decision processes with multiplicative noise. The dynamics are given by a pair consisting of a Markov trend and a linear decision process for which both the "deterministic" and the noise components rely on trend-dependent matrices. We discuss approximate, approximate null- and exact null-controllability. Several examples are given to illustrate the links between these concepts and to compare our results with their continuous-time counterpart (given in [16]). We introduce a class of backward stochastic Riccati difference schemes (BSRDS) and study their solvability for particular frameworks. These BSRDS allow one to introduce Gramian-like controllability metrics. As an application of these metrics, we propose a minimal intervention-targeted reduction in the study of gene networks.
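    The deterministic analogue of a Gramian-like controllability metric can be sketched for a plain discrete-time pair (A, B). The toy data below is hypothetical, and this simple sum does not capture the paper's setting (multiplicative noise and backward stochastic Riccati difference schemes); it only illustrates what a Gramian-based metric measures:

```python
# Finite-horizon controllability Gramian W_T = sum_{k=0}^{T-1} A^k B B^T (A^k)^T
# for a deterministic discrete-time pair (A, B); its positive definiteness
# certifies reachability on the horizon, and its smallest eigenvalue bounds the
# input energy needed to reach unit-norm states.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def gramian(A, B, T):
    n = len(A)
    W = [[0.0] * n for _ in range(n)]
    Ak_B = [row[:] for row in B]        # A^k B, starting at k = 0
    for _ in range(T):
        W = mat_add(W, mat_mul(Ak_B, transpose(Ak_B)))
        Ak_B = mat_mul(A, Ak_B)
    return W

A = [[0.9, 1.0], [0.0, 0.8]]            # toy 2x2 network dynamics
B = [[0.0], [1.0]]                       # single input acting on the second node

W = gramian(A, B, 10)
# For a 2x2 symmetric Gramian, positive definiteness can be checked via the
# leading principal minors.
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
controllable = W[0][0] > 0 and det > 0
```

    In the paper, analogous Gramian-like quantities are built from the BSRDS solutions rather than from powers of a fixed A, so that the metric accounts for the trend-dependent and noisy dynamics.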