
    First-Passage Time and Large-Deviation Analysis for Erasure Channels with Memory

    This article considers the performance of digital communication systems transmitting messages over finite-state erasure channels with memory. Information bits are protected from channel erasures using error-correcting codes; successful receptions of codewords are acknowledged at the source through instantaneous feedback. The primary focus of this research is on delay-sensitive applications, codes with finite block lengths and, necessarily, non-vanishing probabilities of decoding failure. The contribution of this article is twofold. A methodology to compute the distribution of the time required to empty a buffer is introduced. Based on this distribution, the mean hitting time to an empty queue and delay-violation probabilities for specific thresholds can be computed explicitly. The proposed techniques apply to situations where the transmit buffer contains a predetermined number of information bits at the onset of the data transfer. Furthermore, as additional performance criteria, large deviation principles are obtained for the empirical mean service time and the average packet-transmission time associated with the communication process. This rigorous framework yields a pragmatic methodology to select code rate and block length for the communication unit as functions of the service requirements. Examples motivated by practical systems are provided to further illustrate the applicability of these techniques.
    Comment: To appear in IEEE Transactions on Information Theory.
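
    As a minimal illustration of the first-passage computation, the sketch below assumes a two-state Gilbert-Elliott erasure channel with a fixed decoding-failure probability per state and one codeword transmitted per slot; all parameter values and function names are illustrative, not taken from the paper. It tabulates the distribution of the buffer-drain time by dynamic programming over (remaining codewords, channel state), from which the mean hitting time and delay-violation probabilities follow.

        import numpy as np

        # Illustrative two-state (Gilbert-Elliott) channel: state 0 = good, 1 = bad.
        P = np.array([[0.95, 0.05],
                      [0.20, 0.80]])         # state-transition probabilities (assumed)
        fail = np.array([0.02, 0.40])        # decoding-failure probability per state (assumed)

        def hitting_time_pmf(n_codewords, horizon, init_state=0):
            """PMF of the slot at which the buffer first empties, when each slot
            carries one codeword and a decoding failure triggers retransmission."""
            # dist[k, s] = P(k codewords still queued, channel in state s)
            dist = np.zeros((n_codewords + 1, 2))
            dist[n_codewords, init_state] = 1.0
            pmf = np.zeros(horizon + 1)
            for t in range(1, horizon + 1):
                new = np.zeros_like(dist)
                for k in range(1, n_codewords + 1):
                    for s in (0, 1):
                        if dist[k, s] == 0.0:
                            continue
                        for s2 in (0, 1):
                            q = dist[k, s] * P[s, s2]
                            new[k, s2] += q * fail[s]            # erasure: retransmit
                            new[k - 1, s2] += q * (1 - fail[s])  # ACK: one fewer codeword
                pmf[t] = new[0].sum()   # buffer empties exactly at slot t
                new[0] = 0.0            # absorb the emptied state
                dist = new
            return pmf

        pmf = hitting_time_pmf(n_codewords=10, horizon=200)
        slots = np.arange(len(pmf))
        print("mean hitting time ≈", (slots * pmf).sum() / pmf.sum())
        print("P(drain time > 40 slots) ≈", 1.0 - pmf[:41].sum())

    Sweeping the per-state failure probabilities (induced, in the paper's setting, by the chosen code rate and block length) against a delay-violation threshold then gives a concrete rule for selecting the code parameters.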

    Non-Equilibrium Statistical Physics of Currents in Queuing Networks

    We consider a stable open queuing network as a steady non-equilibrium system of interacting particles. The network is completely specified by its underlying graphical structure, the type of interaction at each node, and the Markovian transition rates between nodes. For such systems, we ask the question "What is the most likely way for large currents to accumulate over time in a network?", where time is large compared to the system correlation time scale. We identify two interesting regimes. In the first regime, in which the accumulation of currents over time exceeds the expected value by a small to moderate amount (moderate large deviation), we find that the large-deviation distribution of currents is universal (independent of the interaction details) and that there is no long-time, time-averaged accumulation of particles (condensation) at any node. In the second regime, in which the accumulation of currents over time exceeds the expected value by a large amount (severe large deviation), we find that the large-deviation current distribution is sensitive to interaction details, and there is a long-time accumulation of particles (condensation) at some nodes. The transition between the two regimes can be described as a dynamical second-order phase transition. We illustrate these ideas using the simple, yet non-trivial, example of a single node with feedback.
    Comment: 26 pages, 5 figures.
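
    The paper's closing example, a single node with feedback, can be probed numerically. The sketch below (illustrative parameters throughout) simulates an M/M/1 queue in which a completed job re-enters the queue with probability p, and records the time-averaged current of jobs leaving the network; repeating the experiment over longer windows shows the current's fluctuations tightening, the regime in which the large-deviation analysis applies.

        import numpy as np

        rng = np.random.default_rng(0)

        def empirical_current(T, lam=0.5, mu=1.0, p_feedback=0.3):
            """Time-averaged departure current of an M/M/1 queue with Bernoulli
            feedback, simulated over a window of length T."""
            t, q, departures = 0.0, 0, 0
            while t < T:
                rate = lam + (mu if q > 0 else 0.0)
                t += rng.exponential(1.0 / rate)
                if rng.random() < lam / rate:
                    q += 1                       # external arrival
                elif rng.random() < p_feedback:
                    pass                         # served job loops back into the queue
                else:
                    q -= 1
                    departures += 1              # job leaves the network
            return departures / T

        for T in (500.0, 2000.0, 8000.0):
            samples = np.array([empirical_current(T) for _ in range(200)])
            print(f"T={T:6.0f}: mean current {samples.mean():.4f}, std {samples.std():.4f}")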

    Large deviations analysis for the M/H_2/n+M queue in the Halfin-Whitt regime

    We consider the FCFS M/H_2/n+M queue in the Halfin-Whitt heavy traffic regime. It is known that the normalized sequence of steady-state queue length distributions is tight and converges weakly to a limiting random variable W. However, those works only describe W implicitly as the invariant measure of a complicated diffusion. Although it was proven by Gamarnik and Stolyar that the tail of W is sub-Gaussian, the actual value of \lim_{x \rightarrow \infty} x^{-2}\log P(W > x) was left open. In subsequent work, Dai and He conjectured an explicit form for this exponent, which was insensitive to the higher moments of the service distribution. We explicitly compute the true large deviations exponent for W when the abandonment rate is less than the minimum service rate, the first such result for non-Markovian queues with abandonments. Interestingly, our results resolve the conjecture of Dai and He in the negative. Our main approach is to extend the stochastic comparison framework of Gamarnik and Goldberg to the setting of abandonments, requiring several novel and non-trivial contributions. Our approach sheds light on several novel ways to think about multi-server queues with abandonments in the Halfin-Whitt regime, which should hold in considerable generality and provide new tools for analyzing these systems.
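
    The exponent itself comes out of the paper's analysis, but the tail it governs is easy to probe by simulation. The sketch below (all parameters illustrative) runs a discrete-event simulation of an FCFS M/H_2/n+M queue in the Halfin-Whitt scaling λ = n − β√n with unit mean service time, and an abandonment rate θ = 0.5 below the smaller service branch rate 0.75 to match the regime treated in the paper; it estimates P(Q/√n > x) from queue lengths seen by arrivals (PASTA), which gives a crude numerical check on the sub-Gaussian tail.

        import heapq
        import numpy as np

        rng = np.random.default_rng(1)

        def sample_h2():
            """Hyperexponential (H_2) service time: two-branch mixture of
            exponentials with overall mean 1 (branch weights are assumed)."""
            if rng.random() < 0.4:
                return rng.exponential(0.5)
            return rng.exponential(1.0 / 0.75)   # 0.4*0.5 + 0.6/0.75 = 1

        def queue_tail(n=50, beta=1.0, theta=0.5, T=5000.0):
            lam = n - beta * np.sqrt(n)              # Halfin-Whitt scaling, mu = 1
            t, busy, waiting, obs = 0.0, [], [], []  # busy: heap of completion times
            while t < T:
                t += rng.exponential(1.0 / lam)      # next arrival epoch
                while busy and busy[0] <= t:         # process service completions
                    done = heapq.heappop(busy)
                    while waiting and waiting[0] <= done:
                        waiting.pop(0)               # abandoned before reaching service
                    if waiting:
                        waiting.pop(0)               # head of line starts service
                        heapq.heappush(busy, done + sample_h2())
                waiting = [d for d in waiting if d > t]  # purge expired patience
                obs.append(len(waiting))             # queue length seen by the arrival
                if len(busy) < n:
                    heapq.heappush(busy, t + sample_h2())
                else:
                    waiting.append(t + rng.exponential(1.0 / theta))  # patience deadline
            q = np.array(obs) / np.sqrt(n)
            for x in (0.5, 1.0, 1.5):
                print(f"P(Q/sqrt(n) > {x}) ≈ {(q > x).mean():.4f}")

        queue_tail()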

    Conformance checking and performance improvement in scheduled processes: A queueing-network perspective

    Service processes, for example in transportation, telecommunications or the health sector, are the backbone of today's economies. Conceptual models of service processes enable operational analysis that supports, e.g., resource provisioning or delay prediction. In the presence of event logs containing recorded traces of process execution, such operational models can be mined automatically. In this work, we target the analysis of resource-driven, scheduled processes based on event logs. We focus on processes for which there exists a pre-defined assignment of activity instances to resources that execute activities. Specifically, we approach the questions of conformance checking (how to assess the conformance of the schedule and the actual process execution) and performance improvement (how to improve the operational process performance). The first question is addressed based on a queueing network for both the schedule and the actual process execution. Based on these models, we detect operational deviations and then apply statistical inference and similarity measures to validate the scheduling assumptions, thereby identifying root-causes for these deviations. These results are the starting point for our technique to improve the operational performance. It suggests adaptations of the scheduling policy of the service process to decrease the tardiness (non-punctuality) and lower the flow time. We demonstrate the value of our approach based on a real-world dataset comprising clinical pathways of an outpatient clinic that have been recorded by a real-time location system (RTLS). Our results indicate that the presented technique enables localization of operational bottlenecks along with their root-causes, while our improvement technique yields a decrease in median tardiness and flow time by more than 20%.
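
    As a minimal illustration of the two performance measures targeted here, the sketch below computes tardiness and flow time from a toy event log; the record layout and field names are hypothetical stand-ins for what an RTLS export would provide.

        from dataclasses import dataclass
        from statistics import median

        @dataclass
        class ActivityInstance:
            case_id: str
            scheduled_start: float   # all times in minutes since day start
            scheduled_end: float
            actual_start: float
            actual_end: float

        def performance(log):
            """Median tardiness (non-punctuality) and flow time over a log."""
            tardiness = [max(0.0, a.actual_end - a.scheduled_end) for a in log]
            flow_time = [a.actual_end - a.actual_start for a in log]
            return median(tardiness), median(flow_time)

        log = [
            ActivityInstance("c1", 0, 30, 5, 42),
            ActivityInstance("c2", 30, 60, 44, 75),
            ActivityInstance("c3", 60, 90, 76, 88),
        ]
        med_tard, med_flow = performance(log)
        print(f"median tardiness = {med_tard} min, median flow time = {med_flow} min")

    Conformance checking then amounts to comparing such statistics between the queueing network implied by the schedule and the one mined from the log, and flagging the activities where they diverge.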

    Queuing delays in randomized load balanced networks

    Valiant’s concept of Randomized Load Balancing (RLB), also promoted under the name ‘two-phase routing’, has previously been shown to provide a cost-effective way of implementing overlay networks that are robust to dynamically changing demand patterns. RLB is accomplished in two steps: in the first step, traffic is randomly distributed across the network, and in the second step, traffic is routed to its final destination. One of the benefits of RLB is that packets experience only a single stage of routing, thus reducing the queueing delays associated with multi-hop architectures. In this paper, we study the queueing performance of RLB, both through analytical methods and through packet-level simulations using ns2 on three representative carrier networks. We show that purely random traffic splitting in the randomization step of RLB leads to higher queueing delays than pseudo-random splitting using, e.g., a round-robin schedule. Furthermore, we show that, for pseudo-random scheduling, queueing delays depend significantly on the degree of uniformity of the offered demand patterns, with uniform demand matrices representing a provably worst-case scenario. These results are independent of whether RLB gives traffic from step one priority over traffic from step two. A comparison with multi-hop shortest-path routing reveals that RLB eliminates the occurrence of demand-specific hot spots in the network.
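
    The gap between the two splitting rules is easy to see in a toy model. The sketch below (a hypothetical slotted setup, not the paper's ns2 experiments) assigns a batch of phase-one packets to intermediate nodes either uniformly at random or by a round-robin schedule and compares the peak per-node load; the random split's excess peak is what feeds the extra queueing delay reported above.

        import numpy as np

        rng = np.random.default_rng(2)

        def split_loads(n_packets, n_nodes, mode):
            """Count phase-one packets assigned to each intermediate node."""
            if mode == "random":
                choices = rng.integers(0, n_nodes, n_packets)
                return np.bincount(choices, minlength=n_nodes)
            base, extra = divmod(n_packets, n_nodes)   # round-robin schedule
            counts = np.full(n_nodes, base)
            counts[:extra] += 1
            return counts

        for mode in ("random", "round-robin"):
            peaks = [split_loads(10_000, 16, mode).max() for _ in range(500)]
            print(f"{mode:12s} mean peak load per node: {np.mean(peaks):7.1f}")

    Round-robin pins every node at essentially n_packets/n_nodes, while the random split overshoots that level by a random excess each slot, and it is this excess that queues.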

    Control of the multiclass G/G/1 queue in the moderate deviation regime

    A multi-class single-server system with general service time distributions is studied in a moderate deviation heavy traffic regime. In the scaling limit, an optimal control problem associated with the model is shown to be governed by a differential game that can be explicitly solved. While the characterization of the limit by a differential game is akin to results at the large deviation scale, the analysis of the problem is closely related to the much studied area of control in heavy traffic at the diffusion scale.
    Comment: Published at http://dx.doi.org/10.1214/13-AAP971 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Asymptotic optimality of maximum pressure policies in stochastic processing networks

    We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89–148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5–25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks.
    Comment: Published at http://dx.doi.org/10.1214/08-AAP522 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
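
    For intuition, a maximum pressure (backpressure) policy makes a greedy choice at each scheduling instant: serve the activity whose service rate times queue-length differential is largest. The sketch below encodes that rule for a toy tandem network; the activity encoding, names, and all numbers are illustrative, not the paper's general formulation.

        def max_pressure_activity(queues, activities):
            """Pick the activity with the largest pressure:
            rate * (upstream queue length - downstream queue length),
            with downstream taken as 0 when the job exits the network."""
            def pressure(name):
                rate, src, dst = activities[name]
                downstream = queues[dst] if dst is not None else 0.0
                return rate * (queues[src] - downstream)
            return max(activities, key=pressure)

        queues = {"A": 7, "B": 3, "C": 1}        # hypothetical queue lengths
        activities = {                            # name: (rate, from, to)
            "serve_A_to_B": (1.0, "A", "B"),
            "serve_B_to_C": (1.5, "B", "C"),
            "serve_C_out":  (2.0, "C", None),
        }
        print("activity chosen:", max_pressure_activity(queues, activities))
        # pressures: 1.0*(7-3)=4.0, 1.5*(3-1)=3.0, 2.0*(1-0)=2.0 -> serve_A_to_B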