21 research outputs found

    Performance Modeling and Analysis of Wireless Local Area Networks with Bursty Traffic

    The explosive increase in the use of mobile digital devices has posed great challenges for the design and implementation of Wireless Local Area Networks (WLANs). Ever-increasing demand for high-speed and ubiquitous digital communication has made WLANs an essential feature of everyday life. With audio and video forming the highest percentage of traffic generated by multimedia applications, there is huge demand for high-speed WLANs that provide high Quality-of-Service (QoS) and satisfy end users' needs at a relatively low cost. Delivering video and audio content to end users at a satisfactory level under varying channel quality and battery capacity requires thorough study of the properties of such traffic. In this regard, the Medium Access Control (MAC) protocol of the 802.11 standard plays a vital role in the management and coordination of shared channel access and data transmission. Therefore, this research focuses on developing new, efficient analytical models that evaluate the performance of WLANs and the MAC protocol in the presence of bursty, correlated and heterogeneous multimedia traffic, using the Batch Markovian Arrival Process (BMAP). A BMAP can model the correlation between different packet size distributions and traffic rates while accurately modelling aggregated traffic, which often possesses statistical properties that degrade performance. The research starts with developing an accurate traffic generator using a BMAP to capture the correlations present in multimedia traffic. For validation, the developed traffic generator is used as the arrival process to a queueing model and is analyzed in terms of average queue length and mean waiting time. The performance of the BMAP/M/1 queue is studied under various numbers of states and maximum batch sizes of the BMAP.
The results clearly indicate that any increase in the number of states of the underlying Markov chain of the BMAP, or in the maximum batch size, leads to higher burstiness and correlation in the arrival process, driving the queue towards saturation more quickly. The developed traffic generator is then used to model traffic sources in IEEE 802.11 WLANs, measuring the important QoS metrics of throughput, end-to-end delay, frame loss probability and energy consumption. Performance comparisons are conducted on WLANs under the influence of multimedia traffic modelled as a BMAP, a Markov Modulated Poisson Process and a Poisson Process. The results clearly indicate that bursty traffic generated by a BMAP degrades network performance faster than the other traffic sources under moderate to high loads. The model is also used to study WLANs with unsaturated, heterogeneous and bursty traffic sources. The effects of traffic load and network size on the performance of WLANs are investigated to demonstrate the importance of burstiness and heterogeneity of traffic for accurate evaluation of the MAC protocol in wireless multimedia networks. The results of the thesis highlight the importance of taking the true characteristics of multimedia traffic into account for accurate evaluation of the MAC protocol in the design and analysis of wireless multimedia networks and technologies.
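The effect the abstract describes can be illustrated with a small simulation: a minimal sketch of a batch-arrival queue with a two-state modulating chain and exponential service, using a Lindley-style workload recursion. All parameters (rates, batch caps, switch probability) are hypothetical and not drawn from the thesis.

```python
import random

def simulate_bmap_m1(n_events=50_000, batch_caps=(1, 3), seed=1):
    """Toy BMAP/M/1 sketch: a 2-state modulating chain, each state with
    its own arrival rate and maximum batch size (parameters hypothetical)."""
    rng = random.Random(seed)
    rates = (1.0, 4.0)    # arrival-event rate in each modulating state
    switch = 0.1          # chance of a state switch after each event
    mu = 6.0              # exponential service rate
    state, workload, waits = 0, 0.0, []
    for _ in range(n_events):
        gap = rng.expovariate(rates[state])
        workload = max(0.0, workload - gap)      # Lindley recursion
        for _ in range(rng.randint(1, batch_caps[state])):
            waits.append(workload)               # delay seen by this packet
            workload += rng.expovariate(mu)      # plus its service time
        if rng.random() < switch:
            state = 1 - state
    return sum(waits) / len(waits)               # mean waiting time
```

Raising the batch cap in the bursty state increases both the offered load and its correlation, so the mean wait grows sharply, consistent with the observation above that larger batch sizes push the queue towards saturation.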

    A Hardware Testbed for Measuring IEEE 802.11g DCF Performance

    The Distributed Coordination Function (DCF) is the oldest and most widely-used IEEE 802.11 contention-based channel access control protocol. DCF adds a significant amount of overhead in the form of preambles, frame headers, randomised binary exponential back-off and inter-frame spaces. Having accurate and verified performance models for DCF is thus integral to understanding the performance of IEEE 802.11 as a whole. In this document DCF performance is measured subject to two different workload models using an IEEE 802.11g test bed. Bianchi proposed the first accurate analytic model for measuring the performance of DCF. The model calculates normalised aggregate throughput as a function of the number of stations contending for channel access. The model also makes a number of assumptions about the system, including saturation conditions (all stations have a fixed-length packet to send at all times), full-connectivity between stations, constant collision probability and perfect channel conditions. Many authors have extended Bianchi's machine model to correct certain inconsistencies with the standard, while very few have considered alternative workload models. Owing to the complexities associated with prototyping, most models are verified against simulations and not experimentally using a test bed. In addition to a saturation model we considered a more realistic workload model representing wireless Internet traffic. Producing a stochastic model for such a workload was a challenging task, as usage patterns change significantly between users and over time. We implemented and compared two Markov Arrival Processes (MAPs) for packet arrivals at each client - a Discrete-time Batch Markovian Arrival Process (D-BMAP) and a modified Hierarchical Markov Modulated Poisson Process (H-MMPP). Both models had parameters drawn from the same wireless trace data. 
It was found that, while the latter model exhibits better long-range dependence at the network level, the former represented traces more accurately at the client level, which made it more appropriate for the test bed experiments. A nine-station IEEE 802.11 test bed was constructed to measure the real-world performance of the DCF protocol experimentally. The stations used IEEE 802.11g cards based on the Atheros AR5212 chipset and ran a custom Linux distribution. The test bed was moved to a remote location where there was no measured risk of interference from neighbouring radio transmitters in the same band. The DCF machine model was fixed and normalised aggregate throughput was measured for one through to eight contending stations, subject to (i) saturation with fixed packet length equal to 1000 bytes, and (ii) the D-BMAP workload model for wireless Internet traffic. Control messages were forwarded on a separate wired backbone network so that they did not interfere with the experiments. Analytic solver software was written to calculate numerical solutions for three popular analytic models for DCF, and the solutions were compared to the saturation test bed experiments. Although the normalised aggregate throughput trends were the same, it was found that as the number of contending stations increased, the measured aggregate DCF performance diverged from all three analytic models' predictions; for every station added to the network, measured normalised aggregate throughput fell further below the analytical prediction. We conclude that some property of the test bed was not captured by the simulation software used to verify the analytic models. The D-BMAP experiments yielded a significantly lower normalised aggregate throughput than the saturation experiments, which is a clear result of channel underutilisation. Although this is a simple result, it highlights the importance of the traffic model to network performance.
Normalised aggregate throughput appeared to scale more linearly than under the RTS/CTS access mechanism, but no firm conclusion could be drawn at 95% confidence. We conclude further that, although normalised aggregate throughput is appropriate for describing overall channel utilisation in the steady state, jitter, response time and error rate are more important performance metrics in the case of bursty traffic.

    Shipment Consolidation in Discrete Time and Discrete Quantity: Matrix-Analytic Methods

    Shipment consolidation is a logistics strategy whereby many small shipments are combined into a few larger loads. The economies of scale achieved by shipment consolidation help reduce transportation costs and improve the utilization of logistics resources. The fundamental questions about shipment consolidation are: i) how large should the consolidated loads be allowed to grow, and ii) when is the best time to dispatch them? The answers to these questions lie in the set of decision rules known as shipment consolidation policies. A number of studies have attempted to find the optimal consolidation policy. However, these studies are restricted to only a few types of consolidation policies and are constrained by the input parameters, mainly the order arrival process and the order weight distribution. Some results on optimal policy parameters have been obtained, but they are limited to a couple of specific types of policies. No comprehensive method has yet been developed that allows the evaluation of different types of consolidation policies in general and permits a comparison of their performance levels. Our goal in this thesis is to develop such a method and use it to evaluate a variety of instances of the shipment consolidation problem and its policies. To achieve that goal, we use matrix-analytic methods to model and solve the shipment consolidation problem. The main advantage of applying such methods is that they help us create a more versatile and accurate model while keeping the difficulty of the computational procedures in check. More specifically, we employ a discrete batch Markovian arrival process (BMAP) to model the weight-arrival process, and for some special cases we use phase-type (PH) distributions to represent order weights. We then model a dispatch policy by a discrete monotonic function and construct a discrete-time Markov chain for the shipment consolidation process.
Borrowing an idea from matrix-analytic methods, we develop an efficient algorithm for computing the steady-state distribution of the Markov chain and various performance measures, such as i) the mean accumulated weight per load, ii) the average dispatch interval and iii) the average delay per order. Lastly, after specifying the cost structures, we compute the expected long-run cost per unit time for both the private carriage and common carriage cases.
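The construction above can be made concrete with a minimal sketch, assuming a simple quantity-based dispatch policy and i.i.d. per-period weight arrivals (a special case of the BMAP input; thresholds and probabilities below are illustrative, not from the thesis). The state is the accumulated weight, and the stationary distribution is found by power iteration.

```python
def consolidation_chain(threshold=5, arrival_probs=(0.5, 0.3, 0.2)):
    """Toy quantity policy: each period w = 0, 1 or 2 weight units arrive
    with the given probabilities; once accumulated weight would reach
    `threshold`, the load is dispatched and the accumulator resets.
    States are accumulated weights 0..threshold-1."""
    P = [[0.0] * threshold for _ in range(threshold)]
    for s in range(threshold):
        for w, q in enumerate(arrival_probs):
            nxt = s + w
            P[s][0 if nxt >= threshold else nxt] += q
    return P

def steady_state(P, iters=5000):
    """Stationary distribution pi of a finite chain via power iteration
    (pi P = pi, entries sum to one)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[s] * P[s][j] for s in range(n)) for j in range(n)]
    return pi
```

From pi, the per-period dispatch probability is the pi-weighted chance that an arrival tips the accumulator over the threshold, and its reciprocal gives the average dispatch interval, one of the performance measures listed above.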

    Error analysis of structured Markov chains


    Control and inference of structured Markov models


    Uplink multiple access techniques for satellite communication systems

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (leaves 90-92). By Christopher J. Karpinsky.

    Inferring Queueing Network Models from High-precision Location Tracking Data

    Stochastic performance models are widely used to analyse the performance and reliability of systems that involve the flow and processing of customers. However, traditional methods of constructing a performance model are typically manual, time-consuming, intrusive and labour-intensive. The limited amount and low quality of manually-collected data often lead to an inaccurate picture of customer flows and poor estimates of model parameters. Driven by advances in wireless sensor technologies, recent real-time location systems (RTLSs) enable the automatic, continuous and unintrusive collection of high-precision location tracking data in both indoor and outdoor environments. This high-quality data provides an ideal basis for the construction of high-fidelity performance models. This thesis presents a four-stage data processing pipeline that takes as input high-precision location tracking data and automatically constructs a queueing network performance model approximating the underlying system. The first two stages transform raw location traces into high-level “event logs” recording when, and for how long, a customer entity requests service from a server entity. The third stage infers the customer flow structure and extracts samples of the time delays involved in the system, including service time, customer interarrival time and customer travelling time. The fourth stage parameterises the service process and customer arrival process of the final output queueing network model. Collecting location traces large enough for inference by conducting physical experiments is expensive, labour-intensive and time-consuming. We therefore developed LocTrackJINQS, an open-source simulation library for constructing simulations with location awareness and generating synthetic location tracking data. Finally, we examine the effectiveness of the data processing pipeline through four case studies based on both synthetic and real location tracking data.
The results show that the methodology performs with moderate success in inferring multi-class queueing networks composed of single-server queues with FIFO, LIFO and priority-based service disciplines; it is also capable of inferring different routing policies, including simple probabilistic routing, class-based routing and shortest-queue routing.
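The third and fourth pipeline stages can be sketched in miniature: given an event log, pull out per-server waiting-time and service-time samples, then fit a rate by maximum likelihood. The record layout `(customer, server, t_arrive, t_start, t_end)` and the exponential service assumption are simplifications for illustration, not the thesis's exact schema or fitted distributions.

```python
from collections import defaultdict

def fit_service_processes(event_log):
    """Extract per-server delay samples from an 'event log' of
    (customer, server, t_arrive, t_start, t_end) records and fit an
    exponential service rate per server by MLE (rate = n / sum of samples).
    Record layout is an assumption for illustration."""
    service = defaultdict(list)
    waits = defaultdict(list)
    for cust, srv, t_arr, t_start, t_end in event_log:
        waits[srv].append(t_start - t_arr)       # queueing-delay sample
        service[srv].append(t_end - t_start)     # service-time sample
    rates = {srv: len(x) / sum(x) for srv, x in service.items()}
    return rates, service, waits
```

The fitted rates (and, analogously, interarrival-time fits) would then parameterise the service and arrival processes of the output queueing network model.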