DiffServ resource management in IP-based radio access networks
The increasing popularity of the Internet, the flexibility of IP, and the wide deployment of IP technologies, as well as the growth of mobile communications, have driven the development of IP-based solutions for wireless networking. The introduction of IP-based transport in Radio Access Networks (RANs) is one of these networking solutions. Compared to traditional IP networks, an IP-based RAN has specific characteristics that impose strict requirements on resource management schemes if transport is to function satisfactorily. In this paper we present the Resource Management in DiffServ (RMD) framework, which extends the DiffServ architecture with new admission control and resource reservation concepts such that the resource management requirements of an IP-based RAN are met. This framework aims at simplicity, low cost, and easy implementation, along with good scaling properties. The RMD framework defines two architectural concepts: the Per Hop Reservation (PHR) and the Per Domain Reservation (PDR). As part of the RMD framework, a new protocol, the RMD On DemAnd (RODA) Per Hop Reservation (PHR) protocol, is introduced. A key characteristic of the RODA PHR protocol is that it maintains only a single reservation state per PHB in the interior routers of a DiffServ domain, regardless of the number of flows passing through the domain
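The single-reservation-state-per-PHB idea can be illustrated with a minimal sketch. This is not the RODA PHR protocol itself; the class and unit granularity are illustrative assumptions, meant only to show why interior-router state stays constant as the number of flows grows.

```python
class PHBReservation:
    """Aggregate reservation state for one Per Hop Behavior (PHB).

    One counter per PHB, regardless of how many flows pass through:
    the interior router keeps no per-flow state."""

    def __init__(self, capacity_units):
        self.capacity = capacity_units  # total units this PHB may admit
        self.reserved = 0               # single aggregate counter

    def reserve(self, units):
        """Admit a request only if the aggregate still fits."""
        if self.reserved + units <= self.capacity:
            self.reserved += units
            return True
        return False

    def release(self, units):
        self.reserved = max(0, self.reserved - units)

phb = PHBReservation(capacity_units=100)
admitted_first = phb.reserve(60)    # fits: aggregate becomes 60
admitted_second = phb.reserve(50)   # would exceed 100: rejected
```

Whether sixty flows or six thousand share the PHB, the router stores one integer, which is the scaling property the abstract highlights.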
RMD-QOSM - The Resource Management in Diffserv QoS model
This document describes an NSIS QoS Model for networks that use the Resource Management in Diffserv (RMD) concept. RMD is a technique for adding admission control and preemption functions to Differentiated Services (Diffserv) networks. The RMD QoS Model allows devices external to the RMD network to signal reservation requests to edge nodes in the RMD network. The RMD Ingress edge nodes classify the incoming flows into traffic classes and signal resource requests for the corresponding traffic class along the data path to the Egress edge nodes for each flow. Egress nodes reconstitute the original requests and continue forwarding them along the data path towards the final destination. In addition, RMD defines notification functions to indicate overload situations within the domain to the edge nodes
Quality of Service over Specific Link Layers: state of the art report
The Integrated Services concept is proposed as an enhancement to the current Internet architecture, to provide a better Quality of Service (QoS) than that provided by the traditional Best-Effort service. The features of Integrated Services are explained in this report. To support Integrated Services, certain requirements are posed on the underlying link layer. These requirements are studied by the Integrated Services over Specific Link Layers (ISSLL) IETF working group. The status of this ongoing research is reported in this document. More specifically, the solutions to provide Integrated Services over ATM, IEEE 802 LAN technologies, and low-bitrate links are evaluated in detail. The ISSLL working group has not yet studied the requirements that are posed on the underlying link layer when this link layer is wireless. Therefore, this state-of-the-art report is extended with an identification of the requirements posed on the underlying wireless link to provide differentiated Quality of Service
Dynamic Traffic Scheduling and Resource Reservation Algorithms for Output-Buffered Switches
Scheduling algorithms implemented in Internet switches have been dominated by the best-effort and guaranteed service models. These models occupy the extreme ends of the trade-off spectrum between service guarantees and resource utilisation. Recent advancements in adaptive applications have motivated active research in predictive service models and dynamic resource reservation algorithms. The OCcuPancy_Adjusting (OCP_A) scheduling algorithm is focused on these research areas. Previously, this algorithm has been analysed as a unified resource reservation and scheduling algorithm implementing a tail-discarding strategy. However, the differentiated services provided by the OCP_A algorithm can be further enhanced. In this dissertation, four new algorithms are proposed. Three are extensions of the OCP_A. The fourth algorithm is an enhanced version of the Virtual Clock (VC) algorithm, denoted the ACcelErated (ACE) scheduler. The first algorithm is a priority scheduling algorithm (known as the M-Tier algorithm) incorporating a multi-tier dynamic resource reservation algorithm. Periodic resource reallocations are implemented, enabling each tier's resource utilisation to converge to its desired Quality of Service (QoS) operating point. In addition, the algorithm integrates a cross-sharing concept for unused resources between the hierarchical levels to reflect their respective QoS sensitivity. In the second algorithm, a control parameter is integrated into the M-Tier algorithm to reduce delay segregation effects on packet-loss-sensitive traffic. The third algorithm introduces a delay approximation algorithm to justify packet admission. The fourth algorithm enhances the VC scheduling algorithm through the incorporation of dynamic features in the computation of the VC scheduling tag; this eliminates the delay bound limitation of that parameter
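Since the fourth algorithm builds on the VC scheduling tag, a sketch of the classic Virtual Clock computation may help. This shows the baseline tag rule (each flow's tag advances by the packet's transmission share at its reserved rate), not the dissertation's enhanced ACE version; flow rates and packet sizes are illustrative.

```python
import heapq

class VCFlow:
    def __init__(self, rate_bytes_per_s):
        self.rate = rate_bytes_per_s   # the flow's reserved rate
        self.tag = 0.0                 # last assigned virtual clock tag

def vc_tag(flow, arrival_time, pkt_bytes):
    """Classic Virtual Clock stamp: advance the flow's tag by the
    packet's transmission time at the reserved rate, never falling
    behind real time."""
    flow.tag = max(flow.tag, arrival_time) + pkt_bytes / flow.rate
    return flow.tag

# Two flows with different reserved rates; the switch serves packets
# in increasing tag order (the heap models the output buffer).
f1, f2 = VCFlow(1000.0), VCFlow(500.0)
queue = []
heapq.heappush(queue, (vc_tag(f1, 0.0, 500), "f1-p1"))  # tag 0.5
heapq.heappush(queue, (vc_tag(f2, 0.0, 500), "f2-p1"))  # tag 1.0
heapq.heappush(queue, (vc_tag(f1, 0.0, 600), "f1-p2"))  # tag 1.1
service_order = [heapq.heappop(queue)[1] for _ in range(3)]
```

Because f1 reserved twice the rate of f2, its tags grow half as fast, so it receives proportionally more service; the static reserved rate in the denominator is the parameter the ACE scheduler replaces with dynamic features.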
LC-PCN: The Load Control PCN Solution
There is increased interest in a simple and scalable resource provisioning solution for Diffserv networks. The Load Control PCN (LC-PCN) solution addresses the following issues:
o Admission Control for real-time data flows in stateless Diffserv domains
o Flow Termination: termination of flows in case of exceptional events, such as severe congestion after re-routing.
Admission control in a stateless Diffserv domain is a combination of:
o Probing, whereby a probe packet is sent along the forwarding path in the network to determine whether a flow can be admitted based upon the current congestion state of the network
o Admission control based on data marking, whereby in congestion situations data packets are marked to notify the PCN-egress-node that congestion occurred on a particular PCN-ingress-node to PCN-egress-node path.

The scheme provides the capability of controlling the traffic load in the network without requiring signaling or any per-flow processing in the PCN-interior-nodes. The complexity of Load Control is kept to a minimum to make implementation simple
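The marking-based admission branch above can be sketched in a few lines. This is an illustrative PCN-style decision loop, not the LC-PCN wire format: the 5% threshold, path identifiers, and function names are all assumptions.

```python
MARK_THRESHOLD = 0.05   # assumed policy: block admissions above 5% marked

def egress_marked_fraction(packets):
    """PCN-egress-node view: packets is a list of (path_id, marked_flag)
    observed on one ingress-to-egress path; return the marked fraction."""
    total = marked = 0
    for _path, flag in packets:
        total += 1
        marked += flag
    return marked / total if total else 0.0

def ingress_admit(marked_fraction):
    """PCN-ingress-node decision for a new flow on this path, based on
    the congestion level the egress reported back."""
    return marked_fraction <= MARK_THRESHOLD

# Interior nodes mark packets only when congested, so the marked
# fraction is a congestion signal that needs no per-flow state.
light = [("A->B", False)] * 98 + [("A->B", True)] * 2    # 2% marked
heavy = [("A->B", False)] * 80 + [("A->B", True)] * 20   # 20% marked
```

The key property matches the abstract: interior nodes only set a bit in passing data packets, while all per-path bookkeeping lives at the edges.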
Resource management in IP-based radio access networks
IP is being considered for use in the Radio Access Network (RAN) of UMTS. It is of paramount importance to be able to provide good QoS guarantees to real-time services in such an IP-based RAN. QoS in IP networks is most efficiently provided with Differentiated Services (Diffserv). However, Diffserv currently specifies mainly Per Hop Behaviors (PHBs); proper mechanisms for admission control and resource reservation have not yet been defined. A new resource management concept in the IP-based RAN is needed to offer QoS guarantees to real-time services. We investigate the current Diffserv mechanisms and contribute to the development of a new resource management protocol. We focus on the load control algorithm [9], which is an attempt to solve the problem of admission control and resource reservation in IP-based networks. In this document we present some load control issues and propose to enhance the load control protocol with the Measurement-Based Admission Control (MBAC) concept. With this enhancement the traffic load in the IP-based RAN can be estimated, since the ingress router in the network path can be notified of the resource state via marked packets. With this knowledge, the ingress router can perform admission control to keep the IP-based RAN stable with high utilization even in overload situations
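A generic MBAC decision of the kind proposed here can be sketched as follows. This is a textbook measured-sum style sketch under assumed parameters (EWMA smoothing, a 0.9 utilisation target), not the paper's specific algorithm.

```python
class MBAC:
    """Measurement-Based Admission Control at an ingress router (sketch)."""

    def __init__(self, capacity_bps, target_util=0.9, alpha=0.3):
        self.capacity = capacity_bps
        self.target = target_util   # keep estimated load below this fraction
        self.alpha = alpha          # EWMA weight for new measurements
        self.load = 0.0             # current load estimate

    def observe(self, sample_bps):
        """Fold a load measurement (e.g. derived from resource-state
        markings reported back to the ingress) into an exponentially
        weighted moving average."""
        self.load = (1 - self.alpha) * self.load + self.alpha * sample_bps

    def admit(self, requested_bps):
        """Admit a new flow only if the load estimate plus its rate
        stays within the utilisation target."""
        return self.load + requested_bps <= self.target * self.capacity

mbac = MBAC(capacity_bps=10_000_000)   # 10 Mbit/s link
mbac.observe(8_000_000)
mbac.observe(8_000_000)                # load estimate: 4.08 Mbit/s
```

Because admission is driven by measurements rather than per-flow reservations, the interior of the RAN remains stateless, which is what makes the combination with Diffserv attractive.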
RMD (Resource Management in Diffserv) QoS-NSLP model
This draft describes a local QoS model, denoted as Resource Management in Diffserv (RMD) QoS model, for NSIS that extends the IETF Differentiated Services (Diffserv) architecture with a scalable admission control and resource reservation concept. The specification of this QoS model includes a description of its QoS parameter information, as well as how that information should be treated or interpreted in the network
Towards scalable end-to-end QoS provision for VoIP applications
The growth of the Internet and the development of its new applications have increased the demand for providing a certain level of resource assurance and service support. The concept of ensuring quality of service (QoS) has been introduced in order to provide the support and assurance for these services. Different QoS mechanisms, such as Integrated Services (IntServ) and Differentiated Services (DiffServ), have been developed and introduced to provide different levels of QoS provision. However, IntServ can suffer from scalability issues that make it infeasible for large-scale network implementations. On the other hand, the aggregate-based approach of DiffServ, which forgoes per-flow state, does not provide such an end-to-end QoS guarantee. Recently, the IETF has proposed a new QoS architecture that implements IntServ over DiffServ in order to provide end-to-end QoS for scalable networks. Hence, it became possible to provide and support a certain level of QoS for delay-sensitive and bandwidth-demanding applications such as voice over Internet Protocol (VoIP). With regard to VoIP applications, delay, jitter, and packet loss are crucial issues that have to be taken into consideration for any VoIP system design, and such parameters need a distinct level of QoS support
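Of the three VoIP metrics named above, jitter is the least obvious to compute, so a sketch of the standard RTP interarrival jitter estimator from RFC 3550 may be useful; the packet timestamps below are made up for illustration.

```python
def update_jitter(jitter, transit_prev, transit_curr):
    """RFC 3550 interarrival jitter: transit = receive_time - send_time
    for a packet (any fixed clock offset cancels in the difference);
    the estimate is smoothed with gain 1/16."""
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0

# Packets sent every 20 ms, received with slightly varying delay.
send = [0.000, 0.020, 0.040, 0.060]
recv = [0.050, 0.072, 0.091, 0.115]
jitter = 0.0
for i in range(1, len(send)):
    jitter = update_jitter(jitter,
                           recv[i - 1] - send[i - 1],
                           recv[i] - send[i])
```

A receiver tracks this running estimate per stream and reports it in RTCP receiver reports, which is how end systems can monitor whether the network is meeting the jitter component of the QoS requirement.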
RMD-QOSM: The NSIS Quality-of-Service Model for Resource Management in Diffserv
This document describes a Next Steps in Signaling (NSIS) Quality-of-Service (QoS) Model for networks that use the Resource Management in Diffserv (RMD) concept. RMD is a technique for adding admission control and preemption functions to Differentiated Services (Diffserv) networks. The RMD QoS Model allows devices external to the RMD network to signal reservation requests to Edge nodes in the RMD network. The RMD Ingress Edge nodes classify the incoming flows into traffic classes and signal resource requests for the corresponding traffic class along the data path to the Egress Edge nodes for each flow. Egress nodes reconstitute the original requests and continue forwarding them along the data path towards the final destination. In addition, RMD defines notification functions to indicate overload situations within the domain to the Edge nodes