
    Evaluating Error when Estimating the Loss Probability in a Packet Buffer

    In this thesis we explore precision in the measurement of buffer overflow and loss probability. We examine how buffer overflow probability compares with the queuing delay measurements covered in the literature [1]. More specifically, we measure the overflow probability of a packet buffer at various sampling rates to see the effect of sampling rate on the estimation. There are various reasons for measurement in networks; one key context assumed here is Measurement Based Admission Control. We conduct simulation experiments in Matlab with analytically derived VoIP and bursty traffic parameters, treating the buffer under consideration as a two-state Markov chain. We note that estimation error decreases as the sampling gap increases; in other words, precision improves (variance decreases) as the sampling rate decreases. We then perform experiments for VoIP and bursty data using the NS-2 simulator and record the buffer states generated therein. We see a similar trend of increasing precision with increasing sampling gap. In our simulations we have mainly considered static traffic passing through the buffer, and we use elastic traffic (TCP) for comparison. We see from our results that the sampling error becomes constant beyond a certain asymptotic level. We therefore examine the asymptotic error in estimation, for the lowest sampling gap, to establish a lower bound on the estimation error for buffer loss probability measurement. We use formulae given in recent literature [2] to compute the experimental and theoretical asymptotic variance of the buffer state traces in our scenarios. We find that the theoretical and experimental asymptotic variance of the overflow probability match when sampling a trace of buffer states modelled as a two-state Markov chain in Matlab. We claim that this is a new approach to computing the lower bound on the measurement of buffer overflow probability when the buffer states are modelled as a Markov process. Using Markov chain modelling of buffer overflow, we further explore the relationship between sampling rate and accuracy, and find no relationship between sampling gap and estimation bias. Crucially, we go on to show that a more realistic simulation of a packet buffer reveals that the distribution of buffer overflow periods is not always such as to allow simple Markov modelling of the buffer states: while the sojourn periods are exponential for the smaller burst periods, the tail of the distribution does not fit the same exponential. While our work validates the use of a two-state Markov model as a useful approximation for modelling the overflow of a buffer, we have established that earlier work relying on simple Markovian assumptions will thereby underestimate the error in the measured overflow probabilities.
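    The setup can be illustrated with a minimal sketch (not the thesis code): a two-state Markov chain of buffer states (0 = no overflow, 1 = overflow) is sampled at different gaps while the number of samples per estimate is held fixed, showing how the spread of the overflow-probability estimate changes with the gap. The transition probabilities, trace lengths and repetition counts below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

def simulate_buffer_states(n_steps, p01=0.02, p10=0.2, rng=None):
    """Two-state Markov chain: p01 = P(enter overflow), p10 = P(leave overflow)."""
    rng = np.random.default_rng() if rng is None else rng
    states = np.empty(n_steps, dtype=np.int8)
    s = 0
    for t in range(n_steps):
        states[t] = s
        if s == 0:
            s = 1 if rng.random() < p01 else 0
        else:
            s = 0 if rng.random() < p10 else 1
    return states

rng = np.random.default_rng(0)
n_samples = 1000                       # fixed number of samples per estimate
for gap in (1, 10, 100):
    estimates = []
    for _ in range(20):
        trace = simulate_buffer_states(n_samples * gap, rng=rng)
        estimates.append(trace[::gap].mean())   # overflow probability estimate
    print(f"gap={gap:4d}  mean={np.mean(estimates):.4f}  std={np.std(estimates):.4f}")
```

    With a fixed sample count, widely spaced samples are less correlated, so the standard deviation of the estimate shrinks as the gap grows, which is the trend the abstract describes.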

    On the Behavior of the Distributed Coordination Function of IEEE 802.11 with Multirate Capability under General Transmission Conditions

    The aim of this paper is threefold. First, it presents a multi-dimensional Markovian state transition model characterizing the behavior of the IEEE 802.11 protocol at the Medium Access Control layer, which accounts for packet transmission failures due to channel errors and models both saturated and non-saturated traffic conditions. Second, it provides a throughput analysis of the IEEE 802.11 protocol at the data link layer in both saturated and non-saturated traffic conditions, taking into account the impact of both the physical propagation channel and multirate transmission in a Rayleigh fading environment. The general traffic model assumed is M/M/1/K. Finally, it shows that the throughput in non-saturated traffic conditions is a linear combination of two system parameters: the payload size and the packet rate, $\lambda^{(s)}$, of each contending station. The validity interval of the proposed model is also derived. Simulation results closely match the theoretical derivations, confirming the effectiveness of the proposed models. Comment: Submitted to IEEE Transactions on Wireless Communications, October 21, 200
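    For context, a minimal sketch of the per-station traffic model named above (M/M/1/K), not of the paper's 802.11 analysis itself: the stationary queue distribution gives the probability that a station has a packet to contend with (1 - pi_0) and the blocking probability (pi_K). The arrival rate, service rate and buffer size below are illustrative values only.

```python
def mm1k_stationary(lam, mu, K):
    """Stationary distribution pi_0..pi_K of an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (K + 1)] * (K + 1)
    norm = (1 - rho) / (1 - rho ** (K + 1))
    return [norm * rho ** n for n in range(K + 1)]

pi = mm1k_stationary(lam=50.0, mu=120.0, K=10)   # packets/s, buffer of K packets
print("P(queue empty)    =", round(pi[0], 4))
print("P(blocking, pi_K) =", round(pi[-1], 6))
```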

    On the Interaction between TCP and the Wireless Channel in CDMA2000 Networks

    In this work, we conducted extensive active measurements on a large nationwide CDMA2000 1xRTT network in order to characterize the impact of both the Radio Link Protocol and, more importantly, the wireless scheduler on TCP. Our measurements include standard TCP/UDP logs, as well as detailed RF-layer statistics that allow observability into RF dynamics. With the help of a robust correlation measure, normalized mutual information, we were able to quantify the impact of these two RF factors on TCP performance metrics such as the round-trip time, packet loss rate, instantaneous throughput, etc. We show that the variable channel rate has the larger impact on TCP behavior when compared to the Radio Link Protocol. Furthermore, we expose and rank the factors that influence the assigned channel rate itself and, in particular, demonstrate the sensitivity of the wireless scheduler to the data sending rate. Thus, TCP adapts its rate to match the available network capacity, while the rate allocated by the wireless scheduler is influenced by the sender's behavior. Such a system is best described as a closed-loop system with two feedback controllers, the TCP controller and the wireless scheduler, each one affecting the other's decisions. In this work, we take the first steps in characterizing such a system in a realistic environment.
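    A minimal sketch of the kind of correlation measure used here: normalized mutual information between two measurement series (for example, assigned channel rate versus TCP round-trip time), estimated from a 2-D histogram. The normalization I(X;Y)/sqrt(H(X)H(Y)) is one common choice and the data below are synthetic placeholders; the paper's exact estimator and measurement traces may differ.

```python
import numpy as np

def normalized_mutual_information(x, y, bins=16):
    """NMI of two samples via histogram estimates of the joint and marginals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy)

rng = np.random.default_rng(1)
channel_rate = rng.choice([19.2, 38.4, 76.8, 153.6], size=5000)   # kbps, synthetic
rtt = 400.0 / channel_rate + rng.normal(0, 0.5, size=5000)        # synthetic RTT (s-ish scale)
print("NMI(channel rate, RTT) =", round(normalized_mutual_information(channel_rate, rtt), 3))
```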

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, of a variable rate of advancement towards the destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet's travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
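    A toy Monte Carlo sketch of the delivery-time/energy trade-off described above, not the thesis's jump-diffusion model: a packet crosses a number of hops, each hop adding delay and energy and possibly dropping the packet, and sending several independent copies trades extra energy for a higher chance of (earlier) delivery. All parameter values are illustrative.

```python
import random

def one_copy(hops=12, p_loss=0.03, hop_delay=2.0, hop_energy=1.0):
    """Simulate one copy over `hops` hops; return (delivery time or None, energy spent)."""
    t, e = 0.0, 0.0
    for _ in range(hops):
        t += random.expovariate(1.0 / hop_delay)
        e += hop_energy
        if random.random() < p_loss:
            return None, e                       # lost at this hop
    return t, e

def coded_send(n_copies):
    """Send n independent copies; delivery time is the earliest successful copy."""
    results = [one_copy() for _ in range(n_copies)]
    energy = sum(e for _, e in results)
    times = [t for t, _ in results if t is not None]
    return (min(times) if times else None), energy

random.seed(2)
for n in (1, 2, 3):
    runs = [coded_send(n) for _ in range(20000)]
    delivered = [t for t, _ in runs if t is not None]
    print(f"copies={n}  P(delivered)={len(delivered)/len(runs):.3f}  "
          f"mean delay={sum(delivered)/len(delivered):.2f}  "
          f"mean energy={sum(e for _, e in runs)/len(runs):.1f}")
```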

    Energy Optimal Transmission Scheduling in Wireless Sensor Networks

    One of the main issues in the design of sensor networks is energy-efficient communication of time-critical data. Energy wastage can be caused by failed packet transmission attempts at each node due to channel dynamics and interference. Therefore, transmission control techniques that are unaware of the channel dynamics can lead to suboptimal channel use patterns. In this paper we propose a transmission controller that utilizes different "grades" of channel side information to schedule packet transmissions in an optimal way, while meeting a deadline constraint for all packets waiting in the transmission queue. The wireless channel is modeled as a finite-state Markov channel. We are specifically interested in the case where the transmitter has low-grade channel side information that can be obtained based solely on the ACK/NAK sequence for the previous transmissions. Our scheduler is readily implementable and is based on the dynamic programming solution to the finite-horizon transmission control problem. We also calculate the information-theoretic capacity of the finite-state Markov channel with feedback containing different grades of channel side information, including that obtained through the ACK/NAK sequence. We illustrate that our scheduler achieves a given throughput at a power level that is fairly close to the fundamental limit achievable over the channel. Comment: Accepted for publication in the IEEE Transactions on Wireless Communications.
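    A minimal sketch of the ingredient named above, not the paper's controller: finite-horizon dynamic programming for scheduling transmissions over a two-state (Gilbert-Elliott) Markov channel before a deadline. For brevity this version assumes the channel state is observed each slot; the paper's low-grade-CSI case would instead track a belief updated from the ACK/NAK history. All numeric parameters are illustrative.

```python
GOOD, BAD = 0, 1
P_TRANS = {GOOD: {GOOD: 0.9, BAD: 0.1}, BAD: {GOOD: 0.3, BAD: 0.7}}   # channel transitions
P_SUCC = {GOOD: 0.95, BAD: 0.4}     # success probability of a transmission attempt
E_TX = 1.0                          # energy per transmission attempt
PENALTY = 50.0                      # cost per packet missing its deadline

def schedule(horizon, n_packets):
    """Backward induction: V[t][k][s] = min expected cost with t slots left, k packets queued, channel state s."""
    V = [[[0.0, 0.0] for _ in range(n_packets + 1)] for _ in range(horizon + 1)]
    for k in range(n_packets + 1):
        V[0][k] = [PENALTY * k, PENALTY * k]            # deadline reached
    for t in range(1, horizon + 1):
        for k in range(n_packets + 1):
            for s in (GOOD, BAD):
                nxt = lambda kk: sum(P_TRANS[s][s2] * V[t - 1][kk][s2] for s2 in (GOOD, BAD))
                idle = nxt(k)
                if k == 0:
                    V[t][k][s] = idle
                else:
                    tx = E_TX + P_SUCC[s] * nxt(k - 1) + (1 - P_SUCC[s]) * nxt(k)
                    V[t][k][s] = min(idle, tx)          # transmit only if it lowers expected cost
    return V

V = schedule(horizon=10, n_packets=3)
print("Expected cost, 10 slots, 3 packets, channel Good:", round(V[10][3][GOOD], 2))
print("Expected cost, 10 slots, 3 packets, channel Bad: ", round(V[10][3][BAD], 2))
```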

    Monitoring Networked Applications With Incremental Quantile Estimation

    Networked applications have software components that reside on different computers. Email, for example, has database, processing, and user interface components that can be distributed across a network and shared by users in different locations or work groups. End-to-end performance and reliability metrics describe the software quality experienced by these groups of users, taking into account all the software components in the pipeline. Each user produces only some of the data needed to understand the quality of the application for the group, so group performance metrics are obtained by combining summary statistics that each end computer periodically (and automatically) sends to a central server. The group quality metrics usually focus on medians and tail quantiles rather than on averages. Distributed quantile estimation is challenging, though, especially when passing large amounts of data around the network solely to compute quality metrics is undesirable. This paper describes an Incremental Quantile (IQ) estimation method that is designed for performance monitoring at arbitrary levels of network aggregation and time resolution when only a limited amount of data can be transferred. Applications to both real and simulated data are provided. Comment: This paper is commented on in [arXiv:0708.0317], [arXiv:0708.0336], [arXiv:0708.0338]. Rejoinder in [arXiv:0708.0339]. Published at http://dx.doi.org/10.1214/088342306000000583 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
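    A minimal sketch in the same spirit as the problem described above, but not the paper's IQ algorithm: a stochastic-approximation update that tracks a running estimate of the p-th quantile without storing the raw data, so an end host could push a small summary instead of its full measurement stream. The step-size schedule and the synthetic latency data are illustrative choices.

```python
import random

def track_quantile(stream, p, step0=1.0):
    """Streaming estimate of the p-th quantile via a stochastic-approximation update."""
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = x                                   # initialise with the first observation
            continue
        step = step0 / n ** 0.6                     # slowly decaying step size
        q += step * (p - (1.0 if x <= q else 0.0))  # drift is zero only at the p-th quantile
    return q

random.seed(3)
latencies = [random.lognormvariate(3.0, 0.5) for _ in range(100_000)]   # fake per-request latencies (ms)
est = track_quantile(latencies, p=0.95, step0=50.0)
exact = sorted(latencies)[int(0.95 * len(latencies))]
print(f"streaming 95th percentile ~ {est:.1f} ms, batch estimate = {exact:.1f} ms")
```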