
    Distributive Power Control Algorithm for Multicarrier Interference Network over Time-Varying Fading Channels - Tracking Performance Analysis and Optimization

    Full text link
    Distributed power control over interference-limited networks has received increasing interest over the past few years. Distributed solutions (such as iterative water-filling and gradient projection) have been intensively investigated under \emph{quasi-static} channels. However, since such distributed solutions involve iterative updating and explicit message passing, it is unrealistic to assume that the wireless channel remains unchanged during the iterations. Unfortunately, the behavior of these distributed solutions under \emph{time-varying} channels is in general unknown. In this paper, we investigate a distributed scaled gradient projection algorithm (DSGPA) in a $K$-pair multicarrier interference network under a finite-state Markov channel (FSMC) model. We analyze the \emph{convergence property} as well as the \emph{tracking performance} of the proposed DSGPA. Our analysis shows that the proposed DSGPA converges to a limit region rather than a single point under the FSMC model. We also show that the order of growth of the tracking error is $\mathcal{O}(1/\bar{N})$, where $\bar{N}$ is the \emph{average sojourn time} of the FSMC. Based on this analysis, we derive the \emph{tracking-error-optimal scaling matrices} via Markov decision process modeling and show that they can be implemented distributively at each transmitter. Numerical results show the superior performance of the proposed DSGPA over three baseline schemes, including the gradient projection algorithm with a constant step size.
    Comment: To appear in the IEEE Transactions on Signal Processing
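    The abstract does not spell out the DSGPA update itself. As a rough illustration only, the sketch below shows the generic scaled gradient projection step that such algorithms iterate, here for a sum-log-rate utility with a per-transmitter power budget; the function names, the utility, and the projection set are my own assumptions, not the paper's construction.

```python
import numpy as np

def project_budget(p, p_max):
    """Euclidean projection onto the set {p >= 0, sum(p) <= p_max}."""
    p = np.maximum(p, 0.0)
    if p.sum() <= p_max:
        return p
    # Sort-based projection onto the simplex {p >= 0, sum(p) = p_max}.
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - p_max)[0][-1]
    theta = (css[rho] - p_max) / (rho + 1.0)
    return np.maximum(p - theta, 0.0)

def dsgp_step(P, G, noise, k, step, scale, p_max):
    """One scaled-gradient-projection update for transmitter k.

    P     : (K, N) current powers of all K users over N subcarriers
    G     : (K, K) link gains, G[j, k] = gain from tx j to rx k
    scale : (N,) diagonal of a positive scaling matrix for user k
    """
    interference = noise + G[:, k] @ P - G[k, k] * P[k]   # per subcarrier
    grad = G[k, k] / (interference + G[k, k] * P[k])      # d(log-rate)/dp
    return project_budget(P[k] + step * scale * grad, p_max)
```

    In a distributed deployment, each transmitter k would run dsgp_step using locally measured per-subcarrier interference; the paper's actual contribution of choosing the scaling matrices to minimize tracking error under the FSMC is not captured by this fixed-scaling sketch.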

    A Hybrid-Adaptive Dynamic Programming Approach for the Model-Free Control of Nonlinear Switched Systems

    Full text link

    Optimal Tracking in Switched Systems With Free Final Time and Fixed Mode Sequence Using Approximate Dynamic Programming

    Get PDF
    Optimal tracking in switched systems with a fixed mode sequence and free final time is studied in this article. In the optimal control problem formulation, the switching times and the final time are treated as parameters. The optimal control problem is solved using approximate dynamic programming (ADP). The ADP solution uses an inner loop to converge to the optimal policy at each time step. To decrease the computational burden of the solution, a new method is introduced that uses evolving suboptimal policies (rather than the optimal policies) to learn the optimal solution. The effectiveness of the proposed solutions is evaluated through numerical simulations.
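    To make the evolving-suboptimal-policy idea concrete, here is a minimal sketch under assumptions of my own: a 1-D switched linear system with a known mode sequence, a grid-stored value function, and the final time T held fixed as a parameter. A small n_inner takes only a few critic updates per time stage instead of converging the inner loop, which is the computational shortcut the abstract describes.

```python
import numpy as np

def adp_switched_tracking(modes, switch_times, T, r=1.0, n_inner=1, lr=0.5):
    """Backward ADP sweep for tracking a reference r in a switched system.

    modes        : list of (a, b) pairs, dynamics x+ = a*x + b*u per mode
    switch_times : sorted time steps at which the mode index increments
    n_inner      : critic updates per stage; small values carry an
                   evolving suboptimal policy instead of a converged one
    """
    xs = np.linspace(-3.0, 3.0, 121)       # state grid (critic support)
    us = np.linspace(-2.0, 2.0, 81)        # candidate controls
    V_next = np.zeros_like(xs)             # terminal value V_T = 0
    policies = []
    for t in reversed(range(T)):
        a, b = modes[np.searchsorted(switch_times, t, side='right')]
        V_t = V_next.copy()                # warm start from stage t+1
        for _ in range(n_inner):           # inner loop, truncated on purpose
            x_next = a * xs[:, None] + b * us[None, :]
            Q = ((xs[:, None] - r) ** 2 + 0.1 * us[None, :] ** 2
                 + np.interp(x_next, xs, V_next))
            V_t += lr * (Q.min(axis=1) - V_t)    # relax critic toward backup
        policies.append(us[Q.argmin(axis=1)])    # greedy feedback on the grid
        V_next = V_t
    return xs, policies[::-1]
```

    The paper additionally optimizes the switching times and the free final time as parameters; those outer optimizations are omitted from this sketch.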

    Formal Controller Synthesis for Continuous-Space MDPs via Model-Free Reinforcement Learning

    Full text link
    A novel reinforcement learning scheme to synthesize policies for continuous-space Markov decision processes (MDPs) is proposed. This scheme enables one to apply model-free, off-the-shelf reinforcement learning algorithms for finite MDPs to compute optimal strategies for the corresponding continuous-space MDPs without explicitly constructing the finite-state abstraction. The proposed approach is based on abstracting the system by a finite MDP (without constructing it explicitly) with unknown transition probabilities, synthesizing strategies over the abstract MDP, and then mapping the results back to the concrete continuous-space MDP with approximate optimality guarantees. The properties of interest for the system belong to a fragment of linear temporal logic known as syntactically co-safe linear temporal logic (scLTL), and the synthesis requirement is to maximize the probability of satisfaction within a given bounded time horizon. A key contribution of the paper is to leverage the classical convergence results for reinforcement learning on finite MDPs and provide control strategies maximizing the probability of satisfaction over unknown, continuous-space MDPs while providing probabilistic closeness guarantees. Automata-based reward functions are often sparse; we present a novel potential-based reward shaping technique that produces dense rewards to speed up learning. The effectiveness of the proposed approach is demonstrated by applying it to three physical benchmarks: regulation of a room's temperature, control of a road traffic cell, and control of a 7-dimensional nonlinear model of a BMW 320i car.
    Comment: This work has been accepted at the 11th ACM/IEEE Conference on Cyber-Physical Systems (ICCPS)
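    As background for the shaping step, the sketch below shows standard potential-based reward shaping (in the sense of Ng et al.) on top of tabular Q-learning; the env interface and the potential phi (e.g. progress of the scLTL automaton toward acceptance) are hypothetical placeholders, not the paper's construction.

```python
import random

def shaped_q_learning(env, phi, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning with potential-based reward shaping.

    The shaped reward r + gamma*phi(s') - phi(s) densifies a sparse
    automaton-based reward without changing the optimal policy.
    `env` is assumed to expose reset(), actions(s), and step(a).
    """
    Q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < eps:                       # epsilon-greedy
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda b: Q.get((s, b), 0.0))
            s2, r, done = env.step(a)
            r += gamma * phi(s2) - phi(s)                   # shaping term
            best = 0.0 if done else max(Q.get((s2, b), 0.0)
                                        for b in env.actions(s2))
            q = Q.get((s, a), 0.0)
            Q[(s, a)] = q + alpha * (r + gamma * best - q)  # TD update
            s = s2
    return Q
```

    Because the shaping term telescopes along any trajectory, it changes returns only by phi of the start state and so preserves optimal policies, which is what makes it safe to use for densifying sparse automaton rewards.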

    Optimization-based Estimation and Control Algorithms for Quadcopter Applications

    Get PDF