Average Optimal Stationary Policies: Convexity And Convergence Conditions In Linear Stochastic Control Systems
Authors
Do Val J.B.R.
Vargas A.N.
Publication date
26 November 2015
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Abstract
This paper provides a set of conditions for the existence of an optimal stationary policy in the long-run average cost control problem of linear stochastic systems. The main conditions are based on convexity of the cost by stage and convergence of trajectories. The discrete-time system is assumed to be linear with respect to the state but the controls take an abstract state-feedback structure, possibly a nonlinear one. An application is considered to illustrate the derived theory. ©2009 IEEE.
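The setting described in the abstract — a discrete-time system that is linear in the state, driven by noise, controlled by a state-feedback policy, and evaluated by a long-run average cost — can be illustrated with a minimal scalar sketch. All numeric values below (the system coefficients, quadratic cost weights, feedback gain, and noise level) are illustrative assumptions, not taken from the paper:

```python
import random

# Scalar instance of the long-run average cost control setting:
#   x_{k+1} = a*x_k + b*u_k + w_k,   u_k = -gain*x_k  (linear state feedback),
#   stage cost c(x, u) = q*x**2 + r*u**2.
# All numbers here are illustrative assumptions, not values from the paper.

def average_cost(gain, n_steps=20000, seed=0):
    """Monte Carlo estimate of the long-run average cost
    J = lim (1/N) * sum_k c(x_k, u_k) under the feedback u = -gain*x."""
    a, b = 1.2, 1.0          # open-loop unstable system (|a| > 1)
    q, r = 1.0, 0.5          # quadratic stage-cost weights
    noise_std = 0.1          # std. dev. of the i.i.d. Gaussian noise w_k
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        u = -gain * x
        total += q * x**2 + r * u**2
        x = a * x + b * u + rng.gauss(0.0, noise_std)
    return total / n_steps

# The closed loop x_{k+1} = (a - b*gain)*x_k + w_k is stable whenever
# |a - b*gain| < 1, so the trajectory converges in distribution and the
# running average of the stage cost settles to a finite value.
print(average_cost(gain=0.7))
```

With the stabilizing gain 0.7 the closed-loop coefficient is 0.5, and the running average cost converges to a small finite value; an unstable feedback choice would instead let the trajectory, and hence the average cost, blow up. This mirrors the role the trajectory-convergence condition plays in the paper's existence result.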
Available Versions
Repositorio da Producao Cientifica e Intelectual da Unicamp
oai:repositorio.unicamp.br:REP...
Last updated on 10/04/2020