
    On Separation Between Learning and Control in Partially Observed Markov Decision Processes

    Cyber-physical systems (CPS) encounter a large volume of data that is added to the system gradually in real time rather than all at once in advance. As the volume of data increases, the domain of the control strategies also grows, and it becomes challenging to search for an optimal strategy. Even if an optimal control strategy is found, implementing strategies over such growing domains is burdensome. To derive an optimal control strategy for a CPS, we typically assume an ideal model of the system. Such model-based control approaches cannot effectively deliver optimal solutions with performance guarantees because of the discrepancy between the model and the actual CPS. Alternatively, traditional supervised learning approaches cannot always yield robust solutions using data derived offline. Similarly, applying reinforcement learning approaches directly to the actual CPS might have significant implications for the safety and robust operation of the system. The goal of this chapter is to provide a theoretical framework that separates the control and learning tasks, allowing us to combine offline model-based control with online learning approaches and thus circumvent the challenges in deriving optimal control strategies for CPS.

    Comment: 18 pages, 5 figures. arXiv admin note: text overlap with arXiv:2101.1099

    On Social Optimal Routing Under Selfish Learning

    No full text