Optimal Control of Energy Efficient Buildings
The building sector consumes a large share of the energy used in the United States and is responsible for nearly 40% of greenhouse gas emissions. It is therefore economically and environmentally important to reduce building energy consumption and realize large energy savings. Commercial buildings are complex, multi-physics, and highly stochastic dynamic systems, and recent work has focused on integrating modern modeling, simulation, and control techniques to solve this challenging problem. The overall focus of this thesis is the design of an energy-efficient building through room-temperature control. One approach is based on a distributed parameter model, a three-dimensional (3D) heat equation for a room with a heater/cooler located at the ceiling. The finite element method is implemented as part of a novel solution to this problem. A reduced-order model with only a few states is derived using Proper Orthogonal Decomposition (POD). A Linear Quadratic Regulator (LQR) is computed from the reduced model and applied to the full-order model to control room temperature. In addition, a receding horizon constrained linear quadratic Gaussian (LQG) controller is developed that minimizes the energy cost of heating and cooling while satisfying hard and probabilistic temperature constraints. A stochastic receding horizon controller (RHC) is employed to solve the optimization problem with so-called chance constraints on temperature violation probabilities. Furthermore, a constrained stochastic linear quadratic control (SLQC) approach is developed for the same purpose. The cost function to be minimized is quadratic, and two cases are considered. The first assumes a Gaussian disturbance and minimizes the expected cost subject to a linear constraint and a probabilistic constraint. The second assumes a norm-bounded disturbance of unknown distribution and is formulated as a min-max problem.
Using SLQC, both problems are reduced to semidefinite optimization problems, for which the optimal control can be computed efficiently. Extensions of SLQC to further requirements are then discussed. Simulation and numerical results demonstrate the validity of the techniques proposed in this thesis.
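The POD-plus-LQR pipeline described above can be sketched in a few lines. This is a minimal illustration on a toy 1D heat equation, not the thesis's 3D finite element model; all names and the model itself are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def pod_basis(snapshots, r):
    # POD modes are the leading left singular vectors of the snapshot matrix.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduced_lqr_gain(A, B, Phi, Q, R):
    # Project the dynamics onto the POD subspace, solve the reduced CARE,
    # and lift the gain back to the full state: u = -K x with K = Kr Phi^T.
    Ar, Br = Phi.T @ A @ Phi, Phi.T @ B
    P = solve_continuous_are(Ar, Br, Phi.T @ Q @ Phi, R)
    Kr = np.linalg.solve(R, Br.T @ P)
    return Kr @ Phi.T

# Toy stand-in for the room model: a 1D heat equation x_dot = A x + B u,
# with the heater/cooler acting on the last node.
n = 40
A = ((n + 1) ** 2 / 50.0) * (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
B = np.zeros((n, 1)); B[-1, 0] = 1.0

# Snapshots from an uncontrolled forward-Euler simulation of a smooth
# initial temperature profile (three sine modes).
s = np.linspace(0.0, 1.0, n)
x = (np.sin(np.pi * s) + 0.5 * np.sin(2 * np.pi * s)
     + 0.3 * np.sin(3 * np.pi * s)).reshape(-1, 1)
snaps = [x.copy()]
dt = 1e-4
for _ in range(200):
    x = x + dt * (A @ x)
    snaps.append(x.copy())

Phi = pod_basis(np.hstack(snaps), r=3)
# A small state weight keeps the gain modest, so the reduced-design
# controller also stabilises the full-order model in this example.
K = reduced_lqr_gain(A, B, Phi, Q=0.01 * np.eye(n), R=np.eye(1))
```

The reduced CARE is solved in only r = 3 dimensions, which is the computational point of the POD reduction; the resulting gain is then applied to all 40 states of the full model.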
Dynamic equilibrium in games with randomly arriving players
This note follows our previous work on games with randomly arriving players [3] and [5]. In contrast to those two articles, here we seek a dynamic equilibrium, using the tools of piecewise deterministic control systems. The resulting discrete Isaacs equation is rather involved. As usual, it yields an explicit algorithm in the finite-horizon, linear-quadratic case via a kind of discrete Riccati equation. The infinite-horizon problem is briefly considered; it seems to be manageable only if one limits the number of players present in the game. In that case, the linear-quadratic problem seems solvable via essentially the same algorithm, although we have no convergence proof, only very convincing numerical evidence. We extend the solution to more general entry processes and, more importantly, to cases where players may leave the game, investigating several stochastic exit mechanisms. We then consider the continuous-time case, with a Poisson arrival process. While the general Isaacs equation is as involved as in the discrete-time case, the linear-quadratic case is simpler and, provided again that we bound the maximum number of players allowed in the game, it yields an explicit algorithm with a convergence proof to the solution of the infinite-horizon case, subject to a condition reminiscent of that found in [20]. As in the discrete-time case, we examine the situation where players may leave the game, investigating several possible stochastic exit mechanisms. MSC: 91A25, 91A06, 91A20, 91A23, 91A50, 91A60, 49N10, 93E03. Foreword: This report is a version of the CRESE working paper in mathematical economics by the same authors, reference [2], in which players minimize instead of maximizing, and the linear-quadratic examples are somewhat different. We determine equilibrium strategies in a dynamic game in which identical players arrive at random, such as conspecifics arriving at a shared resource. Various random exit mechanisms are also considered. We obtain existence theorems and computational algorithms, which are more explicit in the particular linear-quadratic case. The entire study is carried out in finite and infinite horizon, and in discrete and continuous time.
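The discrete Riccati backbone of such finite-horizon linear-quadratic algorithms can be sketched as follows. This is the standard single-player backward recursion, not the paper's arrival-augmented version; the function name and the scalar example are illustrative.

```python
import numpy as np

def backward_riccati(A, B, Q, R, Qf, N):
    # Finite-horizon discrete-time Riccati recursion, run backwards from
    # the terminal cost Qf; returns the time-indexed gains and P at time 0.
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains)), P

# Scalar example: with A = B = Q = R = Qf = 1, the recursion
# P <- 1 + P/(1 + P) converges to the golden ratio.
I = np.eye(1)
gains, P0 = backward_riccati(I, I, I, I, I, 50)
```

In the paper's setting the update at each stage would additionally take an expectation over the random arrival (and exit) events, but the algebraic structure per stage is the same.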
Transformation Method for Solving Hamilton-Jacobi-Bellman Equation for Constrained Dynamic Stochastic Optimal Allocation Problem
In this paper we propose and analyze a method based on the Riccati
transformation for solving the evolutionary Hamilton-Jacobi-Bellman equation
arising from the stochastic dynamic optimal allocation problem. We show how the
fully nonlinear Hamilton-Jacobi-Bellman equation can be transformed into a
quasi-linear parabolic equation whose diffusion function is obtained as the
value function of a certain parametric convex optimization problem. Although the
diffusion function need not be sufficiently smooth, we are able to prove
existence and uniqueness of, and derive useful bounds on, classical H\"older smooth
solutions. We furthermore construct a fully implicit iterative numerical scheme
based on finite volume approximation of the governing equation. A numerical
solution is compared to a semi-explicit traveling wave solution by means of the
convergence ratio of the method. We compute optimal strategies for a portfolio
investment problem motivated by the German DAX 30 Index as an example
application of the method.
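A fully implicit time step of the kind the abstract describes can be sketched on a model problem. This is a minimal illustration, assuming the simplified quasi-linear equation u_t = (alpha(u))_xx with zero Dirichlet boundary values, not the paper's transformed HJB equation; on a uniform grid the finite-volume and finite-difference Laplacians coincide, and the implicit step is solved here by fixed-point iteration on the nonlinearity (a contraction for small dt).

```python
import numpy as np

def implicit_fv_step(u, dt, dx, alpha, iters=50):
    # One backward-Euler step for u_t = (alpha(u))_xx with zero Dirichlet
    # boundary values; the implicit equation v = u + dt * L(alpha(v)) is
    # solved by fixed-point iteration, which contracts when dt is small.
    def lap(v):
        w = np.zeros_like(v)
        w[1:-1] = (v[:-2] - 2.0 * v[1:-1] + v[2:]) / dx ** 2
        return w
    v = u.copy()
    for _ in range(iters):
        v = u + dt * lap(alpha(v))
    return v

# Degenerate diffusion alpha(u) = u^2 on a 21-point grid.
u0 = np.sin(np.pi * np.linspace(0.0, 1.0, 21))
u1 = implicit_fv_step(u0, dt=1e-4, dx=0.05, alpha=lambda u: u ** 2)
```

In the paper the role of alpha is played by the value function of a parametric convex optimization problem evaluated at each cell; any such monotone alpha can be dropped into the same step.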
Stochastic Model Predictive Control with Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law to minimise a quadratic cost
function subject to a chance constraint. The chance constraint is defined as a
discounted sum of violation probabilities on an infinite horizon. By penalising
violation probabilities close to the initial time and ignoring violation
probabilities in the far future, this form of constraint enables the
feasibility of the online optimisation to be guaranteed without an assumption
of boundedness of the disturbance. A computationally convenient MPC
optimisation problem is formulated using Chebyshev's inequality, and we
introduce an online constraint-tightening technique to ensure recursive
feasibility based on knowledge of a suboptimal solution. The closed loop system
is guaranteed to satisfy the chance constraint and a quadratic stability
condition.
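The two ingredients named in the abstract, Chebyshev's inequality and the discounted sum of violation probabilities, can be sketched as follows. This is a minimal illustration of the bounds involved, not the paper's MPC optimisation; the function names are illustrative.

```python
import math

def chebyshev_backoff(sigma, eps):
    # Chebyshev's inequality gives P(|e| >= c) <= sigma**2 / c**2 for a
    # zero-mean error e with standard deviation sigma, so enforcing
    # sigma**2 / c**2 <= eps yields the back-off c = sigma / sqrt(eps).
    return sigma / math.sqrt(eps)

def discounted_violation_bound(probs, gamma):
    # Discounted sum of per-step violation-probability bounds,
    # sum_k gamma**k * p_k: early violations are penalised heavily,
    # far-future ones are discounted away.
    return sum(gamma ** k * p for k, p in enumerate(probs))
```

Tightening each nominal constraint by `chebyshev_backoff` turns the chance constraint into a deterministic one, and the discounted sum is what the MPC law keeps below its budget; no boundedness of the disturbance is needed, only its variance.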