
    Exact performance analysis of a single-wavelength optical buffer with correlated inter-arrival times

    Providing a photonic alternative to the current electronic switching in the backbone, optical packet switching (OPS) and optical burst switching (OBS) require optical buffering. Optical buffering exploits delays in long optical fibers; an optical buffer is implemented by routing packets through a set of fiber delay lines (FDLs). Previous studies pointed out that, in comparison with electronic buffers, optical buffering suffers from an additional performance degradation. This contribution builds on that observation by studying optical buffer performance under more general traffic assumptions. Features of the optical buffer model under consideration include a Markovian arrival process, general burst sizes, and a finite set of fiber delay lines of arbitrary length. Our algorithmic approach yields instant analytic results for important performance measures such as the burst loss ratio and the mean delay.
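
    The performance penalty comes from the granularity of the delays: an FDL buffer can only realize delays that are integer multiples of a basic fiber length, so voids appear on the output line and bursts are dropped once even the longest line does not suffice. The sketch below is a minimal Monte Carlo illustration of that effect under assumptions not taken from the paper (Poisson arrivals, exponential burst sizes, granularity D, realizable delays 0, D, ..., N_FDL*D); the paper itself gives an exact algorithmic analysis for Markovian arrivals.

        import random, math

        random.seed(1)

        LAMBDA   = 0.4       # burst arrival rate (Poisson arrivals, an assumption)
        MEAN_B   = 1.0       # mean burst length (exponential, an assumption)
        D        = 0.5       # fiber-delay-line granularity
        N_FDL    = 10        # realizable delays: 0, D, 2D, ..., N_FDL*D
        N_BURSTS = 200_000

        t = 0.0              # current time
        horizon = 0.0        # time until which the output wavelength is reserved
        lost = 0
        delays = []

        for _ in range(N_BURSTS):
            t += random.expovariate(LAMBDA)        # next burst arrival
            b = random.expovariate(1.0 / MEAN_B)   # burst length
            needed = max(0.0, horizon - t)         # delay required to avoid collision
            k = math.ceil(needed / D)              # smallest delay line that is long enough
            if k > N_FDL:
                lost += 1                          # no line long enough: burst is dropped
                continue
            delays.append(k * D)                   # granular delay actually experienced
            horizon = t + k * D + b                # output busy until the burst finishes

        print(f"burst loss ratio ~ {lost / N_BURSTS:.4f}")
        print(f"mean delay       ~ {sum(delays) / max(1, len(delays)):.4f}")

    Shrinking D (with N_FDL*D kept fixed) approaches the behaviour of a conventional finite buffer, while coarser granularity exhibits the extra loss and delay the abstract refers to.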

    Mean field convergence of a model of multiple TCP connections through a buffer implementing RED

    RED (Random Early Detection) has been suggested when multiple TCP sessions are multiplexed through a bottleneck buffer. The idea is to detect congestion before the buffer overflows by dropping or marking packets with a probability that increases with the queue length. The objectives are reduced packet loss, higher throughput, and reduced delay and delay variation, achieved through an equitable distribution of packet loss and reduced synchronization. Baccelli, McDonald and Reynier [Performance Evaluation 11 (2002) 77--97] have proposed a fluid model for multiple TCP connections in the congestion-avoidance regime multiplexed through a bottleneck buffer implementing RED. The window size of each TCP session evolves like an independent dynamical system, coupled to the others by the queue length at the buffer. The key idea in [Performance Evaluation 11 (2002) 77--97] is to consider the histogram of window sizes as a random measure coupled with the queue. Here we prove the conjecture made in [Performance Evaluation 11 (2002) 77--97] that, as the number of connections tends to infinity, this system converges to a deterministic mean-field limit comprising the window size density coupled with a deterministic queue. Published at http://dx.doi.org/10.1214/105051605000000700 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
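
    The coupling in the model is simple to state: each window follows additive increase and halves when one of its packets is dropped, and the drop probability is a function of the shared queue only. The toy discrete-time simulation below illustrates that coupling with made-up RED parameters (MIN_TH, MAX_TH, P_MAX) and a made-up link capacity; it is a hedged sketch of the mechanism, not the fluid model or the mean-field limit proved in the paper.

        import random

        random.seed(0)

        N        = 500              # number of TCP connections
        CAPACITY = 20.0 * N         # packets served per round-trip time (assumed)
        MIN_TH, MAX_TH, P_MAX = 0.5 * N, 3.0 * N, 0.1   # RED profile (assumed)

        def red_drop_prob(q):
            # Classic RED profile: 0 below MIN_TH, linear up to P_MAX at MAX_TH.
            if q <= MIN_TH:
                return 0.0
            if q >= MAX_TH:
                return 1.0
            return P_MAX * (q - MIN_TH) / (MAX_TH - MIN_TH)

        windows = [1.0] * N
        queue = 0.0

        for _ in range(2000):                       # one iteration = one round-trip time
            p = red_drop_prob(queue)
            for i, w in enumerate(windows):
                # probability that at least one of the ~w packets sent this round is dropped
                if random.random() < 1.0 - (1.0 - p) ** w:
                    windows[i] = max(1.0, w / 2.0)  # multiplicative decrease on a drop
                else:
                    windows[i] = w + 1.0            # additive increase in congestion avoidance
            queue = max(0.0, queue + sum(windows) - CAPACITY)   # fluid queue at the bottleneck

        print(f"mean window ~ {sum(windows) / N:.1f}, queue ~ {queue:.0f}, "
              f"drop prob ~ {red_drop_prob(queue):.4f}")

    With many connections the empirical distribution of the windows settles down while the queue becomes nearly deterministic, which is the qualitative content of the mean-field limit.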

    On the Performance of Short Block Codes over Finite-State Channels in the Rare-Transition Regime

    As the mobile application landscape expands, wireless networks are tasked with supporting different connection profiles, including real-time traffic and delay-sensitive communications. Among the many ensuing engineering challenges is the need to better understand the fundamental limits of forward error correction in non-asymptotic regimes. This article characterizes the performance of random block codes over finite-state channels and evaluates their queueing performance under maximum-likelihood decoding. In particular, classical results from information theory are revisited in the context of channels with rare transitions, and bounds on the probabilities of decoding failure are derived for random codes. This creates an analysis framework where channel dependencies within and across codewords are preserved. Such results are subsequently integrated into a queueing problem formulation. For instance, it is shown that, for random coding on the Gilbert-Elliott channel, the performance analysis based on upper bounds on error probability provides very good estimates of system performance and optimum code parameters. Overall, this study offers new insights about the impact of channel correlation on the performance of delay-aware, point-to-point communication links. It also provides novel guidelines on how to select code rates and block lengths for real-time traffic over wireless communication infrastructures.
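
    The rare-transition regime is easy to picture with the Gilbert-Elliott channel: a two-state Markov chain that switches slowly between a good and a bad state with very different error probabilities, so channel errors cluster within and across codewords. The sketch below simulates that clustering and scores a block as a failure when its error count exceeds a fixed correction capability; the parameter values and the threshold criterion are illustrative assumptions, not the random-coding bounds derived in the article.

        import random

        random.seed(2)

        # Gilbert-Elliott channel: two states with different bit-error probabilities.
        P_GB, P_BG = 0.001, 0.01        # rare transitions good->bad and bad->good per symbol
        EPS_GOOD, EPS_BAD = 0.01, 0.20  # crossover probability in each state (assumed)
        N_BLOCK = 200                   # block length in channel uses
        T_CORRECT = 10                  # errors a hypothetical code is assumed to correct
        N_BLOCKS = 20_000

        state = "good"
        failures = 0
        for _ in range(N_BLOCKS):
            errors = 0
            for _ in range(N_BLOCK):
                eps = EPS_GOOD if state == "good" else EPS_BAD
                if random.random() < eps:
                    errors += 1
                # The state evolves symbol by symbol and is not reset between blocks,
                # so rare transitions create dependencies within and across codewords.
                if state == "good" and random.random() < P_GB:
                    state = "bad"
                elif state == "bad" and random.random() < P_BG:
                    state = "good"
            if errors > T_CORRECT:
                failures += 1

        # Stationary probability of the bad state, for reference.
        pi_bad = P_GB / (P_GB + P_BG)
        print(f"P(bad state) = {pi_bad:.3f}, empirical block-failure rate = {failures / N_BLOCKS:.4f}")

    Because the state barely changes within a block, the failure rate is governed by the probability of being stuck in the bad state rather than by the average error rate, which is the kind of dependence a delay-aware queueing analysis has to preserve.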

    Performance analysis of networks on chips

    Modules on a chip (such as processors and memories) are traditionally connected through a single link, called a bus. As chips become more complex and the number of modules on a chip increases, this connection method becomes inefficient, because the bus can only be used by one module at a time. Networks on chips are an emerging technology for the connection of on-chip modules. In networks on chips, switches are used to transmit data from one module to another, which means that multiple links can be used simultaneously and communication is more efficient. Switches consist of a number of input ports at which data arrives and output ports from which data leaves. If data at multiple input ports has to be transmitted to the same output port, only one input port may actually transmit its data, which may lead to congestion.

    Queueing theory deals with the analysis of congestion phenomena caused by competition for service facilities with scarce resources. Such phenomena occur, for example, at traffic intersections, in manufacturing systems, and in communication networks like networks on chips. These congestion phenomena are typically analysed using stochastic models, which capture the uncertain and unpredictable nature of the processes leading to congestion (such as irregular car arrivals at a traffic intersection). Stochastic models are useful tools for the analysis of networks on chips as well, due to the complexity of data traffic on these networks. In this thesis, we therefore study queueing models aimed at networks on chips. The thesis is centred around two key models: a model of a switch in isolation, the so-called single-switch model, and a model of a network of switches where all traffic has the same destination, the so-called network of polling stations. For both models we are interested in the throughput (the amount of data transmitted per time unit) and the mean delay (the time it takes data to travel across the network).

    Single-switch models are often studied under the assumption that the number of ports tends to infinity and that traffic is uniform (i.e., on average equally many packets arrive at all buffers, and all possible destinations are equally likely). In networks on chips, however, the number of buffers is typically small. We introduce a new approximation specifically aimed at small switches with (memoryless) Bernoulli arrivals and show that, for such switches, this approximation is more accurate than currently known approximations. As traffic in networks on chips is usually non-uniform, we also extend our approximation to non-uniform switches. The key difference between uniform and non-uniform switches is that in non-uniform switches all queues have a different maximum throughput. We obtain a very accurate approximation of this throughput, which allows us to extend the mean delay approximation. The extended approximation is derived for Bernoulli arrivals and for correlated arrival processes, and its accuracy is verified through a comparison with simulation results.

    The second key model is that of concentrating tree networks of polling stations (polling stations are essentially switches where all traffic has the same output port as destination). Single polling stations have been studied extensively in the literature, but only a few attempts have been made to analyse networks of polling stations. We establish a reduction theorem that states that networks of polling stations can be reduced to single polling stations while preserving some information on mean waiting times. This reduction theorem holds under the assumption that the last node of the network uses a so-called HoL-based service discipline, which means that the choice to transmit data from a certain buffer may only depend on which buffers are empty, but not on the amount of data in the buffers. The reduction theorem is a key tool for the analysis of networks of polling stations. In addition, mean waiting times in single polling stations have to be calculated, either exactly or approximately. To this end, known results can be used, but we also devise a new single-station approximation that can be used for a large subclass of HoL-based service disciplines.

    Finally, networks on chips typically implement flow control, a mechanism that limits the amount of data in the network from one source. We analyse the division of throughput over several sources in a network of polling stations with flow control. Our results indicate that the throughput in such a network is determined by an interaction between buffer sizes, flow control limits, and service disciplines. This interaction is studied in more detail by means of a numerical analysis.
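
    To make the single-switch setting concrete, the sketch below simulates a small input-queued switch with uniform Bernoulli arrivals and head-of-line (HoL) service: in every slot each non-empty input contends for the output requested by its HoL packet, and each output grants one contender uniformly at random. It estimates throughput and mean delay purely by simulation; the switch size, arrival rate, and arbitration rule are illustrative choices, not the approximations developed in the thesis.

        import random
        from collections import deque

        random.seed(3)

        N = 4          # switch size (N inputs, N outputs), deliberately small
        P = 0.5        # Bernoulli arrival probability per input per slot (uniform traffic)
        SLOTS = 200_000

        queues = [deque() for _ in range(N)]   # one FIFO per input port (HoL service)
        departed = delay_sum = 0

        for t in range(SLOTS):
            # Arrivals: each input receives a packet w.p. P, destination uniform over outputs.
            for i in range(N):
                if random.random() < P:
                    queues[i].append((t, random.randrange(N)))
            # Contention: each output serves one of the inputs whose HoL packet requests it.
            contenders = {}
            for i in range(N):
                if queues[i]:
                    contenders.setdefault(queues[i][0][1], []).append(i)
            for out, inputs in contenders.items():
                winner = random.choice(inputs)          # random tie-breaking among HoL packets
                t_arr, _ = queues[winner].popleft()
                departed += 1
                delay_sum += t - t_arr

        print(f"throughput per input ~ {departed / (SLOTS * N):.3f} packets/slot")
        print(f"mean delay ~ {delay_sum / departed:.2f} slots")

    Raising P towards the HoL-blocking saturation point makes the mean delay blow up, which is the regime where accurate small-switch approximations matter.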

    Feedback-based scheduling for load-balanced two-stage switches

    A framework for designing feedback-based scheduling algorithms is proposed for elegantly solving the notorious packet missequencing problem of a load-balanced switch. Unlike existing approaches, we show that the efforts made in load balancing and in keeping packets in order can complement each other. Specifically, at each middle-stage port between the two switch fabrics of a load-balanced switch, only a single-packet buffer per virtual output queue (VOQ) is required. Although packets belonging to the same flow pass through different middle-stage VOQs, the delays they experience at different middle-stage ports will be identical. This is made possible by properly selecting and coordinating the two sequences of switch configurations to form a joint sequence with both a staggered symmetry property and an in-order packet delivery property. Based on the staggered symmetry property, an efficient feedback mechanism is designed to allow the right middle-stage port occupancy vector to be delivered to the right input port at the right time. As a result, the performance of load balancing as well as the switch throughput is significantly improved. We further extend this feedback mechanism to support the multi-cabinet implementation of a load-balanced switch, where the propagation delay between switch linecards and switch fabrics is non-negligible. Compared with existing load-balanced switch architectures and scheduling algorithms, our solutions impose a modest requirement on switch hardware but consistently yield better delay-throughput performance. Last but not least, some extensions and refinements are made to address the scalability, implementation, and fairness issues of our solutions. © 2009 IEEE.
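
    For context, a load-balanced two-stage switch typically drives both fabrics through a fixed periodic sequence of permutations, e.g. cyclic shifts, so that within a frame of N slots every input meets every middle-stage port once and every middle-stage port meets every output once. The snippet below only illustrates this generic connection pattern and checks the once-per-frame property; the staggered-symmetry coordination and the feedback mechanism of the paper are not reproduced here.

        N = 4  # port count of the illustrative switch

        # Generic load-balanced switch pattern: in slot t the first stage connects
        # input i to middle port (i + t) % N, and the second stage connects middle
        # port m to output (m + t) % N. Packets wait in the middle-stage VOQs in
        # between, which is where missequencing can arise.
        first_stage  = [[(i + t) % N for i in range(N)] for t in range(N)]
        second_stage = [[(m + t) % N for m in range(N)] for t in range(N)]

        for name, stage in (("first", first_stage), ("second", second_stage)):
            for src in range(N):
                partners = sorted(stage[t][src] for t in range(N))
                assert partners == list(range(N)), f"{name} stage: port {src} misses a partner"
        print("every port pair is connected exactly once per frame of", N, "slots")

        # Middle-stage ports visited over one frame by traffic entering input 2:
        print("input 2 is spread over middle ports", [first_stage[t][2] for t in range(N)])

    Because consecutive packets of one flow land on different middle-stage ports, their departure order depends on the middle-stage occupancies, which is exactly what the joint configuration sequence and the feedback mechanism in the paper are designed to control.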

    A Network Calculus Approach for the Analysis of Multi-Hop Fading Channels

    A fundamental problem in the delay and backlog analysis across multi-hop paths in wireless networks is how to account for the random properties of the wireless channel. Since the usual statistical models for radio signals in a propagation environment do not lend themselves easily to a description of the available service rate on a wireless link, the performance analysis of wireless networks has resorted to higher-layer abstractions, e.g., using Markov chain models. In this work, we propose a network calculus that can incorporate common statistical models of fading channels and obtain statistical bounds on delay and backlog across multiple nodes. We conduct the analysis in a transfer domain, which we refer to as the 'SNR domain', where the service process at a link is characterized by the instantaneous signal-to-noise ratio at the receiver. We discover that, in the transfer domain, the network model is governed by a dioid algebra, which we refer to as the (min, ×) algebra. Using this algebra we derive the desired delay and backlog bounds. An application of the analysis is demonstrated for a simple multi-hop network with Rayleigh fading channels and for a network with cross traffic.
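
    The algebra mirrors the familiar (min,+) network calculus after an exponential change of variable: in the SNR domain the cumulative service of a link over an interval is a product of per-slot gains, and the end-to-end service of a two-hop path is the (min, ×) convolution of the per-hop processes. The sketch below evaluates that convolution numerically for Rayleigh-fading links, assuming a unit-bandwidth Shannon service of log2(1 + SNR) bits per slot; it illustrates the algebraic operations only, not the paper's probabilistic bounds.

        import math, random

        random.seed(4)

        T = 30             # number of time slots
        MEAN_SNR = 5.0     # average SNR per hop (Rayleigh fading => exponential SNR)

        def snr_domain_service(snrs):
            # Bivariate SNR-domain service S(s, t) = prod_{u=s}^{t-1} (1 + snr_u),
            # i.e. the exponential of the accumulated Shannon service log(1 + snr).
            S = [[1.0] * (T + 1) for _ in range(T + 1)]
            for s in range(T + 1):
                for t in range(s + 1, T + 1):
                    S[s][t] = S[s][t - 1] * (1.0 + snrs[t - 1])
            return S

        def min_x_convolution(A, B):
            # (A ⊗ B)(s, t) = min over s <= u <= t of A(s, u) * B(u, t).
            C = [[1.0] * (T + 1) for _ in range(T + 1)]
            for s in range(T + 1):
                for t in range(s, T + 1):
                    C[s][t] = min(A[s][u] * B[u][t] for u in range(s, t + 1))
            return C

        # Per-slot SNRs for two hops (Rayleigh fading: exponentially distributed power).
        hop1 = [random.expovariate(1.0 / MEAN_SNR) for _ in range(T)]
        hop2 = [random.expovariate(1.0 / MEAN_SNR) for _ in range(T)]

        S1, S2 = snr_domain_service(hop1), snr_domain_service(hop2)
        S_net = min_x_convolution(S1, S2)

        # Back in the bit domain: cumulative end-to-end service over (0, T].
        bits = math.log2(S_net[0][T])
        print(f"two-hop service over {T} slots: at least {bits:.1f} bits")

    Taking log2 of the (min, ×) convolution gives exactly the (min,+) convolution of the per-hop bit-domain services, which is why the two calculi correspond.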