
    Methods of Congestion Control for Adaptive Continuous Media

    Since the first exchange of data between machines in different locations in the early 1960s, computer networks have grown exponentially, with millions of people now using the Internet. With this growth there has also been a rapid increase in the kinds of services offered over the World Wide Web, from simple e-mail to streaming video. It is generally accepted that the commonly used TCP/IP protocol suite alone is not adequate for a number of modern applications with high bandwidth and minimal delay requirements. Emerging technologies such as IPv6, DiffServ and IntServ aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that networks will have to be multi-service capable and will have to isolate different classes of traffic through bandwidth partitioning so that, for example, low-priority best-effort traffic does not cause delay for high-priority video traffic. However, this research identifies that even within a class there may be delays or losses due to congestion, and the problem will require different solutions in different classes. The focus of this research is on the requirements of the adaptive continuous media class: traffic flows that require good Quality of Service (QoS) but are also able to adapt to network conditions by accepting some degradation in quality. It is potentially the most flexible traffic class and therefore one of the most useful for an increasing number of applications. This thesis discusses the QoS requirements of adaptive continuous media and identifies an ideal feedback-based control system that would be suitable for this class. A number of current methods of congestion control have been investigated, and two methods that have been shown to be successful with data traffic have been evaluated to ascertain whether they could be adapted for adaptive continuous media.
A novel method of control based on percentile monitoring of the queue occupancy is then proposed and developed. Simulation results demonstrate that the percentile-monitoring-based method is more appropriate for this type of flow. The problem of congestion control at aggregating nodes of the network hierarchy, where thousands of adaptive flows may be aggregated into a single flow, is then considered. A unique method of pricing mean and variance is developed such that each individual flow is charged fairly for its contribution to the congestion.
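The percentile-monitoring idea can be illustrated with a minimal sketch: a node samples its queue occupancy into a sliding window and signals congestion when a chosen high percentile crosses a threshold. The class name, window size, percentile and threshold below are hypothetical illustration choices; the thesis's actual control law is not detailed in the abstract.

```python
from collections import deque


def percentile(samples, p):
    """Return the p-th percentile (nearest-rank method) of the samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100.0 * len(ordered)))
    return ordered[rank - 1]


class PercentileMonitor:
    """Track queue occupancy (as a fraction of the buffer) over a sliding
    window and flag congestion when a high percentile exceeds a threshold.
    Parameter values are assumptions for illustration only."""

    def __init__(self, window=100, p=95, threshold=0.8):
        self.samples = deque(maxlen=window)
        self.p = p
        self.threshold = threshold

    def observe(self, occupancy):
        self.samples.append(occupancy)

    def congested(self):
        if not self.samples:
            return False
        return percentile(self.samples, self.p) > self.threshold
```

A source receiving the congestion signal would then reduce its rate, in the spirit of the feedback-based control system the thesis identifies.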

    Delay Bound: Fractal Traffic Passes through Network Servers

    Delay analysis plays a key role in real-time systems in computer communication networks. This paper presents our results on the delay analysis of fractal traffic passing through servers. It makes three contributions. First, we explain why the conventional theory of queuing systems ceases to hold in the general sense when arrival traffic is fractal. Then, we propose a concise method of delay computation for hard real-time systems. Finally, we present the delay computation for fractal traffic passing through servers.
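The abstract does not spell out the proposed computation, but delay bounds of this kind are commonly obtained as the horizontal deviation between a cumulative arrival curve and a cumulative service curve: the worst case, over all times t, of the extra time d needed until the server has handled everything that arrived by t. The discrete-time sketch below is an illustrative assumption in that style, not the paper's method.

```python
def delay_bound(arrivals, service):
    """Worst-case delay as the horizontal deviation between a cumulative
    arrival curve A and a cumulative service curve S: for each slot t,
    find the smallest d with S(t + d) >= A(t), and take the maximum."""
    worst = 0
    for t, a in enumerate(arrivals):
        d = 0
        while t + d < len(service) and service[t + d] < a:
            d += 1
        worst = max(worst, d)
    return worst
```

For a flow that arrives faster than it is served, the bound grows with the backlog, which is exactly why heavy-tailed (fractal) arrivals make steady-state queuing results unreliable.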

    Cross-layer optimisation of quality of experience for video traffic

    Real-time video traffic is currently the dominant network traffic and is set to increase in volume for the foreseeable future. As this traffic is bursty, providing perceptually good video quality is a challenging task. Bursty traffic refers to the inconsistency of the traffic level: it is high at some times and low at others. Many video traffic measurement algorithms have been proposed for measurement-based admission control. Despite all of this effort, there is no entirely satisfactory admission algorithm for variable-rate flows. Furthermore, video frames are subject to loss and delay, which cause quality degradation when video is sent without reacting to network congestion. The trade-off between perceived Quality of Experience (QoE) and the number of sessions can be optimised by exploiting the bursty nature of video traffic. This study introduces a cross-layer QoE-aware optimisation architecture for video traffic. QoE is a measure of the user's perception of the quality of a network service. The architecture addresses the problem of QoE degradation in a bottleneck network. It proposes that video sources at the application layer adapt their rate to the network environment by dynamically controlling their transmitted bit rate, while the edge of the network protects the quality of active video sessions by controlling the acceptance of new sessions through QoE-aware admission control. In particular, it seeks the most efficient way of accepting new video sessions and adapts sending rates to free up resources for more sessions whilst maintaining the QoE of the current sessions. As a pathway to this objective, the performance of video flows that react to the network load by adapting the sending rate was investigated. Although dynamic rate adaptation enhances video quality, accepting more sessions than a link can accommodate will degrade the QoE.
The video's instantaneous aggregate rate was compared to the average aggregate rate, a rate calculated over a measurement time window. It was found that there is no substantial difference between the two rates except for a small number of video flows, a long measurement window, or fast-moving content (such as sport), in which case the average is smaller than the instantaneous rate. These scenarios do not always represent reality. This finding was the main motivation for proposing a novel QoE-aware video traffic measurement algorithm. The algorithm finds the upper limit of the total video rate that can exceed a specific link capacity without degrading the QoE of ongoing video sessions. When implemented in a QoE-aware admission control, the algorithm maintained the QoE for a higher number of video sessions than calculated-rate-based admission controls such as the Internet Engineering Task Force (IETF) standard Pre-Congestion Notification (PCN)-based admission control. Subjective tests were conducted in which human subjects rated the quality of videos delivered with the proposed measurement algorithm. Mechanisms proposed for optimising the QoE of video traffic are surveyed in detail in this dissertation, and the challenges of achieving this objective are discussed. Finally, the current rate adaptation capability of video applications was combined with the proposed QoE-aware admission control in a QoE-aware cross-layer architecture. The performance of the proposed architecture was evaluated against an architecture in which video applications perform rate adaptation without being managed by the admission control component. The results showed that our architecture optimises the mean Mean Opinion Score (MOS) and the number of successfully decoded video sessions without compromising delay.
The algorithms proposed in this study were implemented and evaluated using Network Simulator version 2 (NS-2), MATLAB, EvalVid and EvalVid-RA. These software tools were selected based on their use in similar studies and their availability at the university. Data obtained from the simulations was analysed with analysis of variance (ANOVA), and Cumulative Distribution Functions (CDFs) were calculated for the performance metrics. The proposed architecture will contribute to preparations for the massive growth of video traffic, and the mathematical models of the proposed algorithms contribute to the research community.
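The core ingredients of measurement-based admission control as described here, a windowed average of the aggregate rate and an admit/reject decision against the link capacity, can be sketched as follows. This is a simplified stand-in for the thesis's QoE-aware measurement algorithm: the admission rule, names and units are assumptions for illustration.

```python
def windowed_average(rate_samples, window):
    """Average aggregate rate (e.g. in Mbit/s) over the most recent
    `window` measurement samples."""
    recent = rate_samples[-window:]
    return sum(recent) / len(recent)


def admit(rate_samples, window, new_session_rate, capacity):
    """Admit a new video session only if the measured aggregate rate plus
    the new session's rate still fits within the link capacity.
    (Hypothetical rule; the thesis instead derives a QoE-aware upper
    limit that may exceed the raw capacity without degrading QoE.)"""
    measured = windowed_average(rate_samples, window)
    return measured + new_session_rate <= capacity
```

The abstract's observation that the windowed average underestimates the instantaneous rate for long windows or fast-moving content corresponds to `windowed_average` smoothing out bursts, which is precisely why a purely calculated-rate-based rule can over-admit.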

    Understanding Fairness and its Impact on Quality of Service in IEEE 802.11

    The Distributed Coordination Function (DCF) aims at fair and efficient medium access in IEEE 802.11. In view of its success, it is remarkable that there is little consensus on the actual degree of fairness achieved, particularly bearing in mind its impact on quality of service. In this paper we provide an accurate model for the fairness of the DCF. Given M greedy stations, we assume fairness if a tagged station contributes a share of 1/M to the overall number of packets transmitted. We derive the probability distribution of fairness deviations and support our analytical results with an extensive set of measurements. We find a closed-form expression for the improvement of long-term over short-term fairness. Regarding the random countdown values, we quantify the significance of their distribution and discover that fairness is largely insensitive to the distribution parameters. Based on our findings, we view the DCF as emulating an ideal fair queuing system to quantify the deviations from a fair rate allocation. We deduce a stochastic service curve model for the DCF to predict packet delays in IEEE 802.11, and we show how a station can estimate its fair bandwidth share from passive measurements of its traffic arrivals and departures.
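The 1/M fairness notion can be made concrete with a toy contention model: in each slot every greedy station draws a random countdown and the smallest value wins the medium. This is a deliberate simplification (ties are resolved towards the lowest station index, and collisions, retransmissions and the binary exponential backoff of real DCF are ignored), not the paper's analytical model; it only illustrates how a tagged station's long-term share concentrates near 1/M while short-term shares fluctuate.

```python
import random


def simulate_share(m, slots, window=32, seed=0):
    """Toy DCF-like contention among m greedy stations: each slot, every
    station draws a uniform countdown from [0, window) and the smallest
    value transmits (ties go to the lowest index, a simplification).
    Returns the tagged station 0's share of transmitted packets."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(slots):
        counts = [rng.randrange(window) for _ in range(m)]
        if counts.index(min(counts)) == 0:
            wins += 1
    return wins / slots
```

Over many slots the tagged station's share approaches 1/M (with a small bias from the tie-breaking rule), whereas over short horizons the share deviates noticeably, which is the long-term versus short-term fairness gap the paper quantifies in closed form.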

    A review of connection admission control algorithms for ATM networks

    The emergence of high-speed networks such as ATM integrates large numbers of services with a wide range of characteristics. Admission control is a prime instrument for controlling congestion in the network. As part of the connection services of an ATM system, the Connection Admission Control (CAC) algorithm decides whether another call or connection can be admitted to the broadband network. The main task of the CAC is to ensure that broadband resources do not saturate or overflow, except with a very small probability. It limits the number of connections and guarantees Quality of Service for each new connection. The admission control algorithm is crucial in determining bandwidth utilisation efficiency. With statistical multiplexing, more calls can be allocated on a network link while still maintaining the Quality of Service specified by the connection's traffic parameters and type of service. A number of admission control algorithms for broadband services in ATM networks are described and compared for performance under different traffic loads. A general description of the ATM network serves as an introduction. Issues relating to source distributions and traffic models are explored in Chapter 2. Chapter 3 provides an extensive presentation of CAC algorithms for ATM broadband networks. Ideas about effective bandwidth are reviewed in Chapter 4, and a different approach to admission control using online measurement is presented in Chapter 5. Chapter 6 contains a numerical evaluation of four of the key algorithms, with simulations. Finally, Chapter 7 presents conclusions and explores some possibilities for further work.
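The effective-bandwidth idea reviewed in Chapter 4 assigns each connection a rate between its mean and its peak, so that statistical multiplexing gains can be captured by a simple additive admission test. The interpolation below is a deliberately simplified placeholder: real effective-bandwidth formulas are derived from buffer size and cell-loss targets, and the `qos_factor` here is an assumed illustrative parameter, not one of the surveyed algorithms.

```python
def effective_bandwidth(peak, mean, qos_factor):
    """Simplified effective bandwidth: a rate between the mean and peak,
    weighted by a QoS strictness factor in [0, 1]. Illustration only;
    actual CAC schemes derive this from buffer size and loss targets."""
    return mean + qos_factor * (peak - mean)


def admit_call(active, new_call, capacity, qos_factor=0.5):
    """Admit the new (peak, mean) connection if the total effective
    bandwidth of all connections fits within the link capacity."""
    total = sum(effective_bandwidth(p, m, qos_factor)
                for p, m in active + [new_call])
    return total <= capacity
```

Setting `qos_factor = 1` reduces to conservative peak-rate allocation, while `qos_factor = 0` allocates on mean rates only; the surveyed algorithms differ essentially in how they place each source between these two extremes.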

    Resource management research in ethernet passive optical networks

    Over the last decades, we have witnessed several phenomena in the telecommunications sector. One of them is the widespread use of the Internet, which has brought a sharp increase in traffic, forcing suppliers to continuously expand the capacity of their networks. In the near future, the Internet will be composed of long-range high-speed optical networks; a number of wireless networks at the edge; and, in between, several access technologies. Today, one of the main problems of the Internet is the bottleneck in the access segment. To address this issue, Passive Optical Networks (PONs) are very likely to succeed, due to their simplicity, low cost and increased bandwidth. A PON is made up of fiber-optic cabling and passive splitters and couplers that distribute an optical signal to connectors that terminate each fiber segment. Among the different PON technologies, the Ethernet PON (EPON) is a great alternative for satisfying operator and user needs, due to its cost, flexibility and interoperability with other technologies. One of the most interesting challenges in such technologies relates to the scheduling and allocation of resources in the upstream (shared) channel, i.e., resource management. The aim of this thesis is to study and evaluate current contributions and to propose new, efficient solutions to address resource management issues, mainly in EPON. Key issues in this context are future end-user needs, quality of service (QoS) support, energy saving and optimised service provisioning for real-time and elastic flows. This thesis also identifies research opportunities, issues recommendations and proposes novel mechanisms associated with access networks based on optical fiber technologies.
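Upstream scheduling in EPON is the domain of dynamic bandwidth allocation (DBA) algorithms, of which the limited-service discipline of IPACT is a classic example: each ONU reports its queue length and is granted its request, capped at a maximum transmission window so that one heavily loaded ONU cannot starve the others. The sketch below shows that discipline in its simplest form; the grant cap value is an arbitrary example, and the thesis's own proposed mechanisms are not reproduced here.

```python
def limited_service_grants(requests, max_grant):
    """Limited-service DBA in the style of IPACT: grant each ONU its
    requested upstream window (e.g. in bytes), capped at max_grant."""
    return [min(r, max_grant) for r in requests]
```

Lightly loaded ONUs get exactly what they asked for, while saturated ONUs are throttled to the cap, which bounds the polling-cycle length and hence the worst-case upstream delay.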

    Non-stationary service curves : model and estimation method with application to cellular sleep scheduling

    In today’s computer networks, short-lived flows are predominant. Consequently, transient start-up effects such as connection establishment in cellular networks have a significant impact on performance. Although various solutions have been derived in the fields of queuing theory, available bandwidth and network calculus, the focus is, for example, on mean wake-up times, on estimates of the available bandwidth that consist of either a single value or a stationary function, and on steady-state solutions for backlog and delay. In contrast, the analysis of transient phases presents fundamental challenges that have only been partially solved and is therefore understood to a much lesser extent. To better comprehend systems with transient characteristics and to explain their behaviour, this thesis contributes a concept of non-stationary service curves that belongs to the framework of stochastic network calculus. Thereby, we derive models of sleep scheduling, including time-variant performance bounds for backlog and delay. We investigate the impact of arrival rates and different durations of wake-up times, where the metrics of interest are the transient overshoot and the relaxation time. We compare a time-variant and a time-invariant description of the service with an exact solution. To avoid probabilistic and possibly unpredictable effects from random services, we first choose a deterministic description of the service and present results illustrating that only the time-variant service curve can follow the progression of the exact solution, whereas the time-invariant service curve remains at the worst-case value. Since it is well known that in real cellular networks the service and sleep scheduling procedure is random, we extend the theory to the stochastic case and derive a model with a non-stationary service curve based on regenerative processes.
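The transient overshoot and relaxation time can be seen in a toy deterministic model: a server sleeps for a wake-up period while traffic accumulates, then serves at a fixed rate, so the backlog first overshoots and then relaxes back to its steady level. This sketch is an illustrative assumption in the spirit of the thesis's deterministic case, not its non-stationary service curve model.

```python
def backlog_trace(arrival_rate, service_rate, wakeup, horizon):
    """Per-slot backlog of a constant-rate flow through a server that
    sleeps for `wakeup` slots and afterwards serves `service_rate` units
    per slot. The trace exhibits a transient overshoot during the sleep
    phase and a relaxation back towards zero once service begins."""
    backlog, trace = 0.0, []
    for t in range(horizon):
        backlog += arrival_rate
        if t >= wakeup:
            backlog = max(0.0, backlog - service_rate)
        trace.append(backlog)
    return trace
```

With `arrival_rate=1`, `service_rate=2` and `wakeup=5`, the backlog peaks at 5 during the sleep phase and drains to zero a few slots after wake-up; a time-invariant (stationary) service description would instead pin the prediction at that worst-case peak for all time.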
Further, the estimation of cellular networks’ capacity or available bandwidth from measurements is an important topic that attracts research, and several works exist that obtain such an estimate. Assuming a system without any knowledge of its internals, we investigate existing measurement methods such as the prevalent rate scanning and the burst response method. We find fundamental limitations to estimating the service accurately in a time-variant way, which can be explained by the non-convexity of transient services and their super-additive network processes. In order to overcome these limitations, we derive a novel two-phase probing technique. In the first step, the shape of a minimal probe is identified, which we then use to obtain an accurate estimate of the unknown service. To demonstrate the minimal probing method’s applicability, we perform a comprehensive measurement campaign in cellular networks with sleep scheduling (2G, 3G and 4G). By sending constant-bit-rate traffic, we observe significant transient backlogs and delay overshoots that persist for long relaxation times, which matches the findings from our theoretical model. In contrast, the minimal probing method shows another strength: sending the minimal probe eliminates the transient overshoots and relaxation times.