3,411 research outputs found

    Active Queue Management for Fair Resource Allocation in Wireless Networks

    This paper investigates the interaction between end-to-end flow control and MAC-layer scheduling on wireless links. We consider a wireless network with multiple users receiving information from a common access point; each user suffers fading, and a scheduler allocates the channel based on channel quality, but subject to fairness and latency considerations. We show that the fairness property of the scheduler is compromised by the transport-layer flow control of TCP New Reno. We provide a receiver-side control algorithm, CLAMP, that remedies this situation. CLAMP works at a receiver to control a TCP sender by setting the TCP receiver's advertised window limit, and this allows the scheduler to allocate bandwidth fairly between the users.
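    A receiver-side controller of this kind can be pictured as capping the advertised window at roughly the product of a target fair rate and the measured round-trip time. The sketch below is illustrative only, not the published CLAMP algorithm; the parameter names and the example numbers are assumptions.

```python
# Illustrative sketch only: limit a TCP sender by shrinking the receiver's
# advertised window toward (target fair rate x RTT). This is NOT the
# published CLAMP controller; names and numbers are assumptions.

MSS = 1460  # bytes per segment (typical Ethernet MSS)

def advertised_window(target_rate_bps: float, rtt_s: float,
                      buffer_limit_bytes: int) -> int:
    """Return an advertised window (bytes) that limits the sender to
    roughly target_rate_bps over a path with round-trip time rtt_s."""
    bdp_bytes = target_rate_bps / 8.0 * rtt_s          # bandwidth-delay product
    window = int(min(bdp_bytes, buffer_limit_bytes))   # never exceed the receive buffer
    return max(window, 2 * MSS)                        # keep at least two segments in flight

if __name__ == "__main__":
    # A user whose fair share is 2 Mbit/s on a 60 ms wireless RTT:
    print(advertised_window(2e6, 0.060, 64 * 1024))    # -> 15000 bytes
```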

    Optimal queue-size scaling in switched networks

    We consider a switched (queuing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal queue-size scaling for a class of switched networks including input-queued switches. In particular, this establishes the validity of a conjecture (documented in Shah, Tsitsiklis and Zhong [Queueing Syst. 68 (2011) 375-384]) about optimal queue-size scaling for input-queued switches.
    Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/13-AAP970
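    The paper's new policies are not reproduced here; as a reference point for what a "scheduling policy" means in an input-queued switch, the sketch below implements the classical MaxWeight rule (serve the input/output matching with the largest total backlog). The brute-force search over matchings is an assumption made only to keep the example short.

```python
# Baseline illustration of a scheduling policy for an N x N input-queued
# switch: MaxWeight matching. This is NOT the paper's new policy, only a
# well-known reference point. Brute force is fine for small N.
from itertools import permutations

def maxweight_matching(q):
    """q[i][j] = packets at input i destined to output j.
    Returns the matching (output assigned to each input) whose queues
    hold the most packets in total."""
    n = len(q)
    best, best_weight = None, -1
    for perm in permutations(range(n)):              # every input-output matching
        weight = sum(q[i][perm[i]] for i in range(n))
        if weight > best_weight:
            best, best_weight = perm, weight
    return best

if __name__ == "__main__":
    queues = [[3, 0, 1],
              [0, 5, 2],
              [4, 1, 0]]
    print(maxweight_matching(queues))  # -> (2, 1, 0): total backlog 1 + 5 + 4 = 10
```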

    Coded Computation Against Processing Delays for Virtualized Cloud-Based Channel Decoding

    The uplink of a cloud radio access network architecture is studied in which decoding at the cloud takes place via network function virtualization on commercial off-the-shelf servers. In order to mitigate the impact of straggling decoders in this platform, a novel coding strategy is proposed, whereby the cloud re-encodes the received frames via a linear code before distributing them to the decoding processors. Transmission of a single frame is considered first, and upper bounds on the resulting frame unavailability probability as a function of the decoding latency are derived by assuming a binary symmetric channel for uplink communications. Then, the analysis is extended to account for random frame arrival times. In this case, the trade-off between average decoding latency and the frame error rate is studied for two different queuing policies, whereby the servers carry out per-frame decoding or continuous decoding, respectively. Numerical examples demonstrate that the bounds are useful tools for code design and that coding is instrumental in obtaining a desirable compromise between decoding latency and reliability.
    Comment: 11 pages and 12 figures, submitted.
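    The intuition behind coding against stragglers can be checked with a small Monte Carlo experiment: with an (n, k) code, the cloud only needs the k fastest of n servers to finish. The exponential service-time model, the specific (n, k) values, and the fact that the linear code itself is abstracted away are all assumptions of this sketch, not results from the paper.

```python
# Monte Carlo sketch of why coding helps against straggling decoders.
# Assumptions (not from the paper): i.i.d. exponential processing times,
# and an (n, k) code that recovers the frame from the k fastest servers.
import random

def decode_latency(n: int, k: int, mean_service: float = 1.0) -> float:
    """Latency until k of n servers finish (k = n models the uncoded case)."""
    times = sorted(random.expovariate(1.0 / mean_service) for _ in range(n))
    return times[k - 1]                     # k-th fastest server completes decoding

def average_latency(n: int, k: int, trials: int = 20000) -> float:
    return sum(decode_latency(n, k) for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(0)
    print("uncoded, 8 of 8 :", round(average_latency(8, 8), 3))    # waits for the slowest server
    print("coded,   8 of 10:", round(average_latency(10, 8), 3))   # two redundant servers absorb stragglers
```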

    Analysis of Multiple Flows using Different High Speed TCP protocols on a General Network

    We develop analytical tools for performance analysis of multiple TCP flows (which could be using TCP CUBIC, TCP Compound, or TCP New Reno) passing through a multi-hop network. We first compute the average window size for a single TCP connection (using CUBIC or Compound TCP) under random losses. We then consider two techniques to compute steady-state throughput for different TCP flows in a multi-hop network. In the first technique, we approximate the queues as M/G/1 queues. In the second technique, we use an optimization program whose solution approximates the steady-state throughput of the different flows. Our results match well with ns2 simulations.
    Comment: Submitted to Performance Evaluation.
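    The first technique mentioned above, approximating each router queue as M/G/1, has a standard closed form: the Pollaczek-Khinchine formula gives the mean queueing delay from the arrival rate and the first two moments of the packet service time. The link speed and packet size in the example are assumptions chosen only for illustration.

```python
# Sketch of the M/G/1 approximation: the Pollaczek-Khinchine formula for
# mean waiting time. Example numbers below are assumptions, not values
# taken from the paper.

def mg1_mean_wait(arrival_rate: float, es: float, es2: float) -> float:
    """Mean time spent waiting in queue (excluding service).
    arrival_rate: packets/s, es: E[S] in s, es2: E[S^2] in s^2."""
    rho = arrival_rate * es                              # utilisation
    if rho >= 1.0:
        raise ValueError("queue is unstable (utilisation >= 1)")
    return arrival_rate * es2 / (2.0 * (1.0 - rho))      # Pollaczek-Khinchine

if __name__ == "__main__":
    # 1500-byte packets on a 10 Mbit/s link: E[S] = 1.2 ms. Assuming a
    # deterministic service time, E[S^2] = E[S]^2.
    es = 1500 * 8 / 10e6
    print(mg1_mean_wait(arrival_rate=600.0, es=es, es2=es ** 2))  # ~1.5 ms of queueing delay
```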

    Order batching in multi-server pick-and-sort warehouses.

    In many warehouses, customer orders are batched to profit from a reduction in the order picking effort. This reduction has to be offset against an increase in sorting effort. This paper studies the impact of the order batching policy on average customer order throughput time in warehouses where the picking and sorting functions are executed separately by either a single operator or multiple parallel operators. We present a throughput time estimation model based on Whitt's queuing network approach, assuming that the number of order lines per customer order follows a discrete probability distribution and that the warehouse uses a random storage strategy. We show that the model is adequate in approximating the optimal pick batch size, minimizing average customer order throughput time. Next, we use the model to explore the different factors influencing the optimal batch size, the optimal allocation of workers to picking and sorting, and the impact of different order picking strategies such as sort-while-pick (SWP) versus pick-and-sort (PAS).
    Keywords: Order batching; Order picking and sorting; Queueing; Warehousing
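    A core building block of Whitt's queueing network approach is the GI/G/m waiting-time approximation: scale the M/M/m waiting time by the average of the squared coefficients of variation of inter-arrival and service times. The sketch below applies it to a single picking (or sorting) station; all station parameters are assumptions for illustration, not values from the paper.

```python
# Sketch of the GI/G/m waiting-time approximation used in QNA-style models.
# Station parameters below are assumptions chosen only for illustration.
import math

def erlang_c(m: int, rho: float) -> float:
    """Probability that an arriving batch must wait (M/M/m Erlang C)."""
    a = m * rho
    summation = sum(a ** k / math.factorial(k) for k in range(m))
    top = a ** m / (math.factorial(m) * (1.0 - rho))
    return top / (summation + top)

def gi_g_m_wait(lam: float, es: float, m: int, ca2: float, cs2: float) -> float:
    """Approximate mean waiting time at an m-server station (e.g. m pickers)."""
    rho = lam * es / m
    if rho >= 1.0:
        raise ValueError("station overloaded")
    wq_mmm = erlang_c(m, rho) * es / (m * (1.0 - rho))   # M/M/m waiting time
    return (ca2 + cs2) / 2.0 * wq_mmm                    # variability correction

if __name__ == "__main__":
    # 20 batches/hour arriving at 3 pickers, 7.5 min mean pick time per batch,
    # squared coefficients of variation of 1.0 (arrivals) and 0.6 (service):
    print(round(gi_g_m_wait(lam=20 / 60, es=7.5, m=3, ca2=1.0, cs2=0.6), 2), "minutes waiting")
```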

    Job Selection in a Network of Autonomous UAVs for Delivery of Goods

    This article analyzes two classes of job selection policies that control how a network of autonomous aerial vehicles delivers goods from depots to customers. Customer requests (jobs) occur according to a spatio-temporal stochastic process not known by the system. If job selection uses a policy in which the first job (FJ) is served first, the system may collapse into instability when just one vehicle is removed. Policies that serve the nearest job (NJ) first show such threshold behavior only in some settings and can be implemented in a distributed manner. The timing of job selection has a significant impact on delivery time and stability for NJ, while it has no impact for FJ. Based on these findings, we introduce a methodological approach for decision-making support to set up and operate such a system, taking into account the trade-off between monetary cost and service quality. In particular, we compute a lower bound for the infrastructure expenditure required to achieve a certain expected delivery time. The approach covers three time horizons: long-term decisions on the number of depots to deploy in the service area, mid-term decisions on the number of vehicles to use, and short-term decisions on the policy to operate the vehicles.
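    The distinction between the two policy classes can be made concrete with a toy simulation: FJ serves requests in arrival order, NJ serves the open request nearest to the vehicle. A single vehicle on a unit square, unit speed, and Poisson arrivals are assumptions made only for this illustration; the article's model and results are not reproduced here.

```python
# Toy contrast of the two job-selection policies: FJ (arrival order) vs
# NJ (nearest open request first). One vehicle, unit square, unit speed,
# Poisson arrivals -- assumptions for illustration only.
import math, random

def simulate(policy: str, rate: float = 0.8, horizon: float = 2000.0) -> float:
    random.seed(1)
    t, pos, pending, delays = 0.0, (0.5, 0.5), [], []
    next_arrival = random.expovariate(rate)
    while t < horizon:
        while next_arrival <= t:                         # admit requests that have arrived
            pending.append((next_arrival, random.random(), random.random()))
            next_arrival += random.expovariate(rate)
        if not pending:
            t = next_arrival                             # idle until the next request
            continue
        if policy == "FJ":                               # first job first: arrival order
            job = min(pending, key=lambda j: j[0])
        else:                                            # NJ: nearest open request first
            job = min(pending, key=lambda j: math.dist(pos, (j[1], j[2])))
        pending.remove(job)
        t += math.dist(pos, (job[1], job[2]))            # travel time at unit speed
        pos = (job[1], job[2])
        delays.append(t - job[0])                        # request-to-delivery time
    return sum(delays) / len(delays)

if __name__ == "__main__":
    print("FJ mean delivery time:", round(simulate("FJ"), 2))
    print("NJ mean delivery time:", round(simulate("NJ"), 2))
```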

    Infogame: Final report

    Keywords: Management Information Systems; Management Games