
    Convergence of trajectories and optimal buffer sizing for AIMD congestion control

    We study the interaction between multi-socket AIMD (Additive Increase Multiplicative Decrease) congestion control and a bottleneck router with a Drop Tail buffer. We consider the problem in the framework of deterministic hybrid models. First, we show that trajectories always converge to limiting cycles, and we characterize these cycles. Necessary and sufficient conditions for the absence of multiple jumps in the same cycle are obtained. Then, we propose an analytical framework for the optimal choice of the router buffer size. We formulate this problem as a multi-criteria optimization problem, in which the Lagrange function corresponds to a linear combination of the average goodput and the average delay in the queue. Our analytical results are confirmed by simulations performed with MATLAB Simulink.
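
    As a concrete illustration of the hybrid model, here is a minimal simulation sketch, assuming a single fluid AIMD source with additive-increase rate alpha and multiplicative-decrease factor beta feeding a bottleneck of capacity C through a Drop Tail buffer of size B; all parameter names and values are illustrative, not taken from the paper.

```python
# Minimal sketch of the deterministic hybrid AIMD / Drop Tail model:
# a fluid AIMD source feeding a bottleneck of capacity C through a
# buffer of size B. Parameters (alpha, beta, C, B, dt, T) are
# illustrative assumptions, not the paper's notation.

def simulate_aimd(alpha=1.0, beta=0.5, C=10.0, B=20.0, dt=0.01, T=60.0):
    """Euler simulation; returns lists of time, rate, and queue samples."""
    x, q = 1.0, 0.0                          # sending rate and queue length
    ts, xs, qs = [], [], []
    t = 0.0
    while t < T:
        ts.append(t); xs.append(x); qs.append(q)
        x += alpha * dt                      # additive increase
        q = max(0.0, q + (x - C) * dt)       # fluid queue at the bottleneck
        if q >= B:                           # buffer overflow: Drop Tail loss
            x *= beta                        # multiplicative decrease (jump)
            q = B
        t += dt
    return ts, xs, qs

ts, xs, qs = simulate_aimd()
# After a transient, (rate, queue) settles into a periodic orbit,
# i.e. the limiting cycle whose existence the paper establishes.
print("rate range over the last 20 s:", min(xs[-2000:]), max(xs[-2000:]))
```

    Varying B in such a simulation and averaging goodput and delay over the cycle is one way to explore the trade-off behind the buffer-sizing problem.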

    Convergence of trajectories and optimal buffer sizing for MIMD congestion control

    We study the interaction between MIMD (Multiplicative Increase Multiplicative Decrease) congestion control and a bottleneck router with a Drop Tail buffer. We consider the problem in the framework of deterministic hybrid models. We study conditions under which the system trajectories converge to limiting cycles with a single jump. Following that, we consider the problem of optimal buffer sizing in the framework of multi-criteria optimization, in which the Lagrange function corresponds to a linear combination of the average throughput and the average delay in the queue. As case studies, we consider the Slow Start phase of TCP New Reno and Scalable TCP for high-speed networks. © 2009 Elsevier B.V. All rights reserved.
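
    A plausible scalarized form of the buffer-sizing objective used in this and the previous abstract; the symbols B, \bar{\tau}, \bar{d}, and \lambda are assumed here, not the papers' notation:

```latex
% Buffer size B chosen to trade average throughput against average delay:
\max_{B \ge 0} \; L(B) \;=\; \bar{\tau}(B) \;-\; \lambda\, \bar{d}(B),
\qquad \lambda \ge 0,
```

    where \bar{\tau}(B) is the average throughput (goodput in the AIMD case) and \bar{d}(B) the average queueing delay over the limiting cycle.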

    Sufficiency of Deterministic Policies for Atomless Discounted and Uniformly Absorbing MDPs with Multiple Criteria

    This paper studies Markov decision processes (MDPs) with atomless initial state distributions and atomless transition probabilities; such MDPs are called atomless. The initial state distribution is considered to be fixed. We show that for discounted MDPs with bounded one-step reward vector-functions, for each policy there exists a deterministic (that is, nonrandomized and stationary) policy with the same performance vector. This fact is proved for the more general class of uniformly absorbing MDPs with expected total rewards, and then extended under certain assumptions to MDPs with unbounded rewards. For problems with multiple criteria and constraints, these results imply that for the atomless MDPs studied in this paper it is sufficient to consider only deterministic policies, whereas without the atomless assumption it is well known that randomized policies can outperform deterministic ones. We also provide an example of an MDP demonstrating that, for vector measures defined on a standard Borel space, Lyapunov's convexity theorem is a special case of the described results.
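
    A rough sketch of the statement, in notation assumed here rather than taken from the paper:

```latex
% Performance vector of a policy \pi under a fixed atomless initial
% distribution \mu, discount factor \beta, and bounded rewards r_1,\dots,r_K:
V_k(\pi) \;=\; \mathbb{E}^{\pi}_{\mu} \sum_{t=0}^{\infty} \beta^{t}\, r_k(x_t, a_t),
\qquad k = 1, \dots, K.
% Sufficiency of deterministic policies for atomless MDPs:
\{\, V(\pi) : \pi \ \text{arbitrary} \,\} \;=\; \{\, V(\varphi) : \varphi \ \text{deterministic} \,\}.
```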

    Dynamic programming in constrained Markov decision processes

    We consider a discounted Markov Decision Process (MDP) supplemented with the requirement that another discounted loss must not exceed a specified value, almost surely. We show that the problem can be reformulated as a standard MDP and solved using the Dynamic Programming approach. An example on a controlled queue is presented. In the last section, we briefly outline the connection of the Dynamic Programming approach to another closely related problem statement and present the corresponding example. Several other types of constraints are discussed as well.
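
    In assumed notation (not the paper's), the problem reads:

```latex
% Discounted MDP with an almost-sure constraint on a second discounted loss:
\max_{\pi} \; \mathbb{E}^{\pi}\Big[ \sum_{t=0}^{\infty} \beta^{t}\, r(x_t, a_t) \Big]
\quad \text{subject to} \quad
\sum_{t=0}^{\infty} \beta^{t}\, c(x_t, a_t) \;\le\; d \quad \text{almost surely.}
```

    One natural route to the reformulation, consistent with the abstract, is to augment the state with the constraint budget still available, after which the problem becomes an ordinary MDP amenable to Dynamic Programming.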

    Optimal control of random sequences in problems with constraints


    Multiple objective nonatomic Markov decision processes with total reward criteria

    Get PDF
    We consider a Markov decision process with an uncountable state space and multiple rewards. For each policy, its performance is evaluated by a vector of total expected rewards. Under the standard continuity assumptions and the additional assumption that all initial and transition probabilities are nonatomic, we prove that the set of performance vectors for all policies is equal to the set of performance vectors for (nonrandomized) Markov policies. This result implies the existence of optimal (nonrandomized) Markov policies for nonatomic constrained Markov decision processes with total rewards. We provide two examples of applications of our results to constrained multiple objective problems in inventory control and finance.
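
    In the same assumed notation as in the earlier sketch, but with total (undiscounted) expected rewards:

```latex
% Total-reward performance vector of a policy \pi:
W_k(\pi) \;=\; \mathbb{E}^{\pi} \sum_{t=0}^{\infty} r_k(x_t, a_t),
\qquad k = 1, \dots, K.
% With nonatomic initial and transition probabilities:
\{\, W(\pi) : \pi \ \text{arbitrary} \,\} \;=\; \{\, W(\sigma) : \sigma \ \text{Markov, nonrandomized} \,\}.
```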