    Quality of Service over Specific Link Layers: state of the art report

    The Integrated Services concept is proposed as an enhancement to the current Internet architecture, providing a better Quality of Service (QoS) than the traditional Best-Effort service. This report explains the features of Integrated Services. To support Integrated Services, certain requirements are imposed on the underlying link layer; these requirements are studied by the Integrated Services over Specific Link Layers (ISSLL) IETF working group. This document reports the status of that ongoing research. More specifically, the solutions for providing Integrated Services over ATM, IEEE 802 LAN technologies, and low-bitrate links are evaluated in detail. The ISSLL working group has not yet studied the requirements imposed on the underlying link layer when that link layer is wireless. This state of the art report is therefore extended with an identification of the requirements imposed on the underlying wireless link to provide differentiated Quality of Service.

    Simplified methods for next generation IP access networks planning

    The scope of this paper is to derive a set of simple formulas describing traffic aggregation at important points of an Internet access network. The paper shows that the resources associated with the access network depend on user-type, technology, and service parameters. Existing calculation methodologies apply individual approximations, whereas this proposal exposes the combined application of these individual, well-known approximations, providing a scheme of generic dimensioning formulas. The dimensioning formulas for generic applications are derived for the three main levels: connection, session, and burst level. Traffic aggregation is considered through three different, combined variables describing users, accesses, and services, forming a cube with three axes. Adapting the corresponding parameters along the different axes allows the calculation of complete access network traffic scenarios, grouped under the so-called CASUAL concept: Cube of Accesses / Services / Users. A set of CASUAL-based tools allows an estimation of the aggregated traffic at different access points such as multiplexers, IP points of presence, or edge routers.
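
    As a rough illustration of the cube idea, the sketch below aggregates a mean busy-hour load over the three CASUAL axes. All class names, traffic figures, and the Erlang-style mean-load formula are illustrative assumptions, not the paper's actual dimensioning formulas.

```python
# Hypothetical sketch of the CASUAL cube: traffic is described along three
# axes (users, accesses, services) and aggregated at an access point.
# Every number below is illustrative, not taken from the paper.

# (user_type, access_type, service) ->
#     (subscribers, sessions_per_hour, mean_session_s, mean_rate_bps)
cube = {
    ("residential", "adsl",  "web"):   (2000, 1.5, 600,  64_000),
    ("residential", "adsl",  "video"): (2000, 0.2, 1800, 2_000_000),
    ("business",    "fiber", "web"):   (300,  4.0, 900,  128_000),
    ("business",    "fiber", "voip"):  (300,  2.0, 180,  80_000),
}

def offered_load_bps(cube):
    """Mean aggregate busy-hour load at one aggregation point."""
    total = 0.0
    for subs, sessions_per_h, hold_s, rate_bps in cube.values():
        erlangs = subs * sessions_per_h * hold_s / 3600.0  # mean concurrent sessions
        total += erlangs * rate_bps
    return total

print(f"mean aggregate load: {offered_load_bps(cube) / 1e6:.1f} Mbit/s")
```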

    Approximation to a behavioral model for estimating traffic aggregation scenarios

    This article provides a comparison among different methods for estimating the aggregation of Internet traffic resulting from different users, network-access types, and corresponding services. Several approximate models usually used in isolation are combined with a temporally scaled ON-OFF model with binomial approximations. The aggregation problem is solved using a new form of parameterization based on composing the source traffic according to the concrete characteristics of the users, the accesses, and the services. This new concept, called CASUAL, is included within an overall network planning methodology for the design and dimensioning of the Next Generation Internet.
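
    To make the binomial approximation concrete: if n homogeneous ON-OFF sources are independently ON with probability p, the number simultaneously ON is Binomial(n, p), and a link can be dimensioned to a quantile of that distribution. The sketch below shows only this textbook approximation with made-up parameters; it is not the article's combined model.

```python
from math import comb

def binom_cdf(k, n, p):
    """P[X <= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def capacity_for_overflow(n, p, peak_bps, eps=1e-3):
    """Smallest capacity such that the probability of more sources being
    ON than the link can carry is at most eps (n ON-OFF sources, activity
    factor p, peak rate peak_bps each; all parameters illustrative)."""
    for k in range(n + 1):
        if 1.0 - binom_cdf(k, n, p) <= eps:
            return k * peak_bps
    return n * peak_bps

# 100 sources at 10% activity and 1 Mbit/s peak each need far less
# capacity than the 100 Mbit/s peak sum:
print(capacity_for_overflow(100, 0.1, 1_000_000))
```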

    Nested QoS: Providing flexible SLAs in shared storage systems

    The increasing popularity of storage and server consolidation introduces new challenges for resource management, capacity provisioning, and guaranteeing application performance. In addition, the bursty nature of storage workloads results in a large gap between the peak and the average capacity required to meet response time bounds, leading to low overall server utilization and high cost. This situation is driving the development of elastic QoS models that allow clients greater flexibility in adopting SLAs tailored to their workload characteristics and performance requirements, while giving the service provider opportunities to optimize provisioning and scheduling decisions. This thesis presents a novel service model, called the Nested QoS model, for multiplexing concurrent bursty workloads in shared storage systems. The solution employs two strategies together: systematically classifying requests with a graduated QoS and flexibly scheduling the classified portions. The results show that the Nested QoS model provides (1) performance isolation and strong performance guarantees for both well-behaved and misbehaving workloads; (2) a flexible and auditable elastic SLA definition; and (3) improved server utilization.
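
    The abstract does not spell out the classification mechanism, but one plausible reading of "graduated QoS" is a set of nested token-bucket envelopes, with stricter envelopes mapping to stronger latency guarantees. The sketch below is that reading only; the class layout, parameters, and token accounting are assumptions, not the thesis' actual mechanism.

```python
import time

class TokenBucket:
    """(sigma, rho) envelope: burst of sigma tokens, refill rho tokens/second."""
    def __init__(self, sigma, rho):
        self.sigma, self.rho = float(sigma), float(rho)
        self.tokens = float(sigma)
        self.last = time.monotonic()

    def refill(self, now):
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now

def classify(request_size, buckets):
    """Tag a request with the innermost (strictest) envelope it fits.

    `buckets` is ordered innermost to outermost. A request inside
    envelope i is also inside every outer envelope, so tokens are drawn
    from all of them; requests outside every envelope fall through to
    best effort (class None)."""
    now = time.monotonic()
    for b in buckets:
        b.refill(now)
    for i in range(len(buckets)):
        if all(b.tokens >= request_size for b in buckets[i:]):
            for b in buckets[i:]:
                b.tokens -= request_size
            return i + 1    # class 1 carries the strongest latency bound
    return None

# Example: three nested envelopes for one client's workload.
envelopes = [TokenBucket(8, 100), TokenBucket(32, 120), TokenBucket(128, 150)]
print(classify(1, envelopes))   # -> 1 while the burst stays small
```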

    An IEEE 802.11e HCCA Scheduler with a Reclaiming Mechanism for Multimedia Applications

    The QoS offered by the IEEE 802.11e reference scheduler is satisfactory for Constant Bit Rate traffic streams, but not yet for Variable Bit Rate traffic streams, whose variations stress its scheduling behavior. Despite the numerous alternative schedulers proposed with QoS support, multimedia applications still need refined methods to ensure service differentiation and dynamic updating of protocol parameters. In this paper a scheduling algorithm, the Unused Time Shifting Scheduler (UTSS), is analyzed in depth. It is designed to cooperate with an HCCA centralized real-time scheduler by integrating a bandwidth reclaiming scheme that recovers unexhausted transmission time and assigns it to the next polled stations. UTSS dynamically computes transmission time with O(1) complexity, providing an instantaneous resource over-provisioning. The theoretical analysis and the simulation results highlight that this injection of resources affects neither the admission control nor the centralized scheduler, while improving the performance of the centralized scheduler in terms of mean access delay, transmission queue length, management of traffic bursts, and packet drop rate. These positive effects are most relevant for highly variable bit rate traffic.
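
    The reclaiming idea can be pictured with a toy polling loop: whatever transmission time one station leaves unused is shifted, in O(1), into the budget of the next polled station. This is a sketch in the spirit of the description above, not the paper's UTSS algorithm; the station list, TXOP values, and serve callback are invented for illustration.

```python
def polling_round(stations, serve):
    """One HCCA polling round with unused-time shifting (illustrative).

    `stations` is a list of (name, assigned_txop_s); `serve(name, budget)`
    transmits for up to `budget` seconds and returns the time actually used.
    """
    spare = 0.0                    # O(1) state carried between polls
    for name, txop in stations:
        budget = txop + spare      # instantaneous over-provisioning
        used = serve(name, budget)
        spare = max(0.0, budget - used)

# Toy usage: the second station inherits what the first one left unused.
demo = iter([0.8, 2.4])            # seconds each station would like to use
polling_round([("STA1", 1.5), ("STA2", 1.5)],
              lambda name, budget: min(budget, next(demo)))
```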

    Network Factors Influencing Packet Loss in Online Games

    In real-time communications it is often vital that data arrive at its destination in a timely fashion. Whether it is the user experience of online games or the reliability of tele-surgery, a reliable, consistent, and predictable communications channel between source and destination is important. However, the Internet as we know it was designed to ensure that data arrives at the desired destination, not for predictable, low-latency communication. Data traveling from point to point on the Internet is divided into smaller packages known as packets. As these packets traverse the Internet, they encounter routers or similar devices that will often queue the packets before sending them toward their destination. Queuing packets introduces a delay that depends greatly on the router configuration and the number of other packets on the network. In times of high demand, packets may be discarded by the router or even lost in transmission. Protocols exist that retransmit lost packets, but they introduce additional overhead and delays - costs that may be prohibitive in some applications. Being able to predict when packets may be delayed or lost could allow applications to compensate for unreliable data channels. In this thesis I investigate the effects of cross traffic and router configuration on a low-bandwidth traffic stream such as that which is common in games. The experiments investigate the effects of cross-traffic packet size, bit rate, inter-packet timing, and protocol, as well as router configurations including the queue management type and the number of queues. These experiments are compared to real-world data, and a mitigation strategy, where the n previous packets are bundled with each new packet, is applied to both the simulated data and the real-world captures. The experiments indicate that most of the parameters explored had an impact on packet loss. However, the real-world data and simulated data differ, and additional work would be required to apply the lessons learned to real-world applications. The mitigation strategy appeared to work well, allowing 90% of all runs to complete without data loss; however, it was implemented analytically, and the actual implementation and testing are left for future work.
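
    A minimal sketch of the described mitigation strategy: each outgoing packet carries the previous n payloads, so a dropped packet can be recovered from any of the next n that arrive, at the cost of larger packets. The helper names and toy payloads are assumptions for illustration.

```python
from collections import deque

def bundle_stream(payloads, n):
    """Yield (seq, bundle) where each bundle holds the new payload plus
    the previous n payloads, newest first."""
    history = deque(maxlen=n)
    for seq, p in enumerate(payloads):
        yield seq, [(seq, p)] + list(history)
        history.appendleft((seq, p))

def receive(bundles, dropped):
    """Reassemble the stream while the packets in `dropped` are lost;
    lost payloads are recovered from later bundles that still carry them."""
    got = {}
    for seq, bundle in bundles:
        if seq in dropped:
            continue
        for s, p in bundle:
            got.setdefault(s, p)
    return [got[s] for s in sorted(got)]

data = [f"state{i}" for i in range(6)]
print(receive(bundle_stream(data, n=2), dropped={2}))  # 'state2' is recovered
```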

    Decentralising resource management in operating systems

    This dissertation explores operating system mechanisms that allow resource-aware applications to be involved in the process of managing resources, under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities, with the operating system providing only mechanisms. Furthermore, in contrast to centralised approaches, application-specific features can be more easily exploited. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals which they use to adjust their resource consumption. It can be shown theoretically that such a system, with consumers merely acting in their own self-interest, will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management in this way. In particular it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are generally required for the accurate management of resources, pricing & charging and credit accounting are novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
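
    A toy version of such a shadow-pricing loop: the resource raises its price while demand exceeds capacity, and each self-interested consumer i picks the demand maximizing w_i log d - p*d, i.e. d = w_i / p. Under these assumed, illustrative logarithmic utilities the system settles at the weighted proportionally fair allocation; the dissertation's actual mechanisms are not reproduced here.

```python
def shadow_price_allocation(weights, capacity, gamma=0.001, iters=5000):
    """Iterate a congestion-price update against self-interested demands.

    The resource nudges its price p with the excess demand; consumer i
    responds with its utility-maximizing demand w_i / p. Utilities, step
    size, and iteration count are illustrative assumptions."""
    p = 1.0
    for _ in range(iters):
        demands = [w / p for w in weights]             # selfish best responses
        p = max(1e-6, p + gamma * (sum(demands) - capacity))
    return demands

# Three applications with weights 1, 2, 3 sharing 60 units of a resource:
print([round(d, 1) for d in shadow_price_allocation([1, 2, 3], 60)])
# -> roughly [10.0, 20.0, 30.0], the weighted proportionally fair split
```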

    Distributed Policing with Full Utilization and Rate Guarantees

    A network service provider typically sells service at a fixed traffic rate to customers. This rate is enforced by allowing or dropping packets that pass through, in a process called policing. Distributed policing is a version of the problem in which a number of policers must limit their combined traffic allowance to the specified rate. The policers must coordinate their behaviour so that customers are fully allowed the rate they pay for, without receiving much more, while maintaining some semblance of fairness between packets arriving at one policer versus another. A review of prior solutions shows that most use predictions or estimations to heuristically allocate rates, and thus cannot provide error bounds or guarantees on the achieved rate under all scenarios; other solutions may suffer from starvation or unfairness under certain traffic demand patterns. We present a new global "leaky bucket" approach that provably prevents starvation, guarantees full utilization, and provides a simple upper bound on the rate allowed under any incoming traffic pattern. We find that the algorithm guarantees each policer a minimum 1/n share of the rate and achieves close to max-min fairness in many, but not all, cases. We also suggest some experimental modifications that could improve fairness in practice.
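
    The guarantees claimed above (a 1/n floor per policer, full utilization, near max-min fairness) can be illustrated with a centralized progressive-filling sketch; the paper's actual distributed leaky-bucket protocol is not reproduced here, and the demands below are invented.

```python
def max_min_allocation(demands, rate):
    """Progressive filling: repeatedly split the remaining rate equally
    among unsatisfied policers. In the first pass every policer is offered
    rate/n, so each gets at least min(demand, rate/n), and no rate is left
    unused while unmet demand remains."""
    grants = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = float(rate)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, demands[i] - grants[i])
            grants[i] += give
            remaining -= give
            if grants[i] >= demands[i]:
                active.discard(i)
    return grants

# Policer 0 barely sends; its unused share flows to the busy policers.
print(max_min_allocation([2, 40, 40], rate=30))  # -> [2.0, 14.0, 14.0]
```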