    Providing guaranteed QoS in the hose-modeled VPN

    With the development of the Internet, Internet service providers (ISPs) are required to offer revenue-generating and value-added services instead of only providing bandwidth and access services. The Virtual Private Network (VPN) is one of the most important value-added services for ISPs. The classical VPN service is provided by implementing layer 2 technologies, either Frame Relay (FR) or Asynchronous Transfer Mode (ATM). With FR or ATM, virtual circuits are created before data delivery. Since bandwidth and buffers are reserved, the QoS requirements can be naturally guaranteed. In the past few years, layer 3 VPN technologies have been widely deployed due to their desirable flexibility, scalability and simplicity. Layer 3 VPNs are built upon IP tunnels, e.g., by using PPTP, L2TP or IPSec. Since IP is best-effort in nature, QoS requirements cannot be guaranteed in layer 3 VPNs. In practice, layer 3 VPN services provide only secure connectivity, i.e., protecting and authenticating IP packets between gateways or hosts in a VPN. Without doubt, as more voice, audio and video applications are used on the Internet, the provision of QoS is one of the most important parts of the emerging services provided by ISPs. An intriguing question is: is it possible to obtain the best of both layer 2 and layer 3 VPNs, that is, to provide guaranteed or predictable QoS, as in layer 2 VPNs, while maintaining the flexibility and simplicity of layer 3 VPNs? This question is the starting point of this study. The recently proposed hose model for VPNs possesses desirable properties in terms of flexibility, scalability and multiplexing gain. However, the classic fair bandwidth allocation schemes and weighted fair queuing schemes suffer from low overall utilization in this model. A new fluid model for the provider-provisioned virtual private network (PPVPN) is proposed in this dissertation. Based on the proposed model, an idealized fluid bandwidth allocation scheme is developed. This scheme is proven, analytically, to have the following properties: 1) it maximizes the overall throughput of the VPN without compromising fairness; 2) it provides a mechanism that enables VPN customers to allocate bandwidth according to their requirements by assigning different weights to different hose flows, and thus obtain predictable QoS performance; and 3) it improves the overall throughput of the ISPs' network. To approximate the idealized fluid scheme in the real world, the two-dimensional deficit round robin (2-D DRR and 2-D DRR+) schemes are proposed. The integration of the proposed schemes with best-effort traffic within the framework of virtual-router-based VPNs is also investigated. The 2-D DRR and 2-D DRR+ schemes can be extended to multi-dimensional schemes for applications that require a hierarchical scheduling architecture. To enhance scalability, a more scalable non-per-flow-based scheme for output-queued switches is developed as well, and the integration of this scheme within the framework of MPLS VPNs and applications to multicast traffic is discussed. The performance and properties of these schemes are analyzed.
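
    As background for the 2-D DRR and 2-D DRR+ schemes above, the sketch below shows classic single-level deficit round robin in Python; the 2-D variants apply the same deficit idea along both the hose and flow dimensions. The class names, quanta and byte-sized packets are illustrative assumptions, not code from the dissertation.

```python
from collections import deque

class Flow:
    """A flow with a FIFO packet queue (packet sizes in bytes) and a weight-derived quantum."""
    def __init__(self, name, quantum):
        self.name = name
        self.quantum = quantum   # bytes credited per round, proportional to the flow's weight
        self.deficit = 0         # unused credit carried over while the flow stays backlogged
        self.queue = deque()     # packets, represented here just by their size in bytes

class DRRScheduler:
    """Single-level deficit round robin; 2-D DRR applies the same deficit idea per hose and per flow."""
    def __init__(self, flows):
        self.flows = list(flows)

    def dequeue_round(self):
        """Serve one round-robin pass; returns the (flow name, packet size) transmissions made."""
        sent = []
        for flow in self.flows:
            if not flow.queue:
                flow.deficit = 0          # idle flows do not bank credit
                continue
            flow.deficit += flow.quantum
            while flow.queue and flow.queue[0] <= flow.deficit:
                pkt = flow.queue.popleft()
                flow.deficit -= pkt
                sent.append((flow.name, pkt))
        return sent

# Example: two hose flows with a 2:1 weight ratio (quanta of 1000 and 500 bytes).
a, b = Flow("hoseA", 1000), Flow("hoseB", 500)
a.queue.extend([600, 600, 600])
b.queue.extend([600, 600])
scheduler = DRRScheduler([a, b])
print(scheduler.dequeue_round())   # [('hoseA', 600)]
print(scheduler.dequeue_round())   # [('hoseA', 600), ('hoseA', 600), ('hoseB', 600)]
```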

    IP-based virtual private networks and proportional quality of service differentiation

    IP-based virtual private networks (VPNs) have the potential of delivering cost-effective, secure, and private network-like services. Having surveyed the current enabling techniques, an overall picture of IP VPN implementations is presented. In order to provision QoS equivalent to that of legacy connection-oriented layer 2 VPNs (e.g., Frame Relay and ATM), IP VPNs have to overcome the intrinsically best-effort characteristics of the Internet. Subsequently, a hierarchical QoS guarantee framework for IP VPNs is proposed, stitching together developments from recent research and engineering work. To differentiate IP VPN QoS, the proportional QoS differentiation model, whose QoS specification granularity is a compromise between those of IntServ and DiffServ, emerges as a potential solution. An investigation of its claimed capability of providing predictable and controllable QoS differentiation is then conducted. With respect to loss rate differentiation, the packet shortage phenomenon exhibited by two classical proportional loss rate (PLR) dropping schemes is studied. In the pursuit of a feasible solution, the possibility of compromising the system resource, that is, the buffer, is ruled out; instead, an enhanced debt-aware mechanism is suggested to relieve the negative effects of packet shortage. Simulation results show that the debt-aware mechanism partially curbs the biased loss rate ratios and improves the queueing delay performance as well. With respect to delay differentiation, the dynamic behavior of the average delay difference between successive classes is first analyzed, aiming to gain insight into the system dynamics. Then, two classical delay differentiation mechanisms, namely proportional average delay (PAD) and waiting time priority (WTP), are simulated and discussed. Based on observations of their differentiation performance over both short and long time periods, a combined delay differentiation (CDD) scheme is introduced. Simulations are used to validate this method. Both loss and delay differentiation are based on a set of differentiation parameters. Although previous work on the selection of delay differentiation parameters has been presented, the selection of loss differentiation parameters has mostly relied on network operators' experience. A quantitative guideline, based on the principles of queueing and optimization, is therefore proposed to compute loss differentiation parameters. Aside from analysis, the new approach is substantiated by numerical results.
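
    To make the waiting time priority (WTP) mechanism discussed above concrete, here is a minimal sketch of a WTP scheduler: at each service instant it serves the head-of-line packet with the largest weighted waiting time, so long-run average delays tend toward the inverse ratio of the class weights. The class structure and weights are illustrative assumptions, not code from the dissertation.

```python
from collections import deque

class WTPScheduler:
    """Waiting time priority: serve the head-of-line packet with the largest
    weighted waiting time; higher-weight classes then see proportionally lower delay."""
    def __init__(self, weights):
        self.weights = weights                       # class id -> differentiation weight
        self.queues = {c: deque() for c in weights}  # per-class FIFO of (arrival time, packet)

    def enqueue(self, cls, arrival_time, packet):
        self.queues[cls].append((arrival_time, packet))

    def dequeue(self, now):
        best_cls, best_priority = None, -1.0
        for cls, queue in self.queues.items():
            if queue:
                waiting = now - queue[0][0]
                priority = waiting * self.weights[cls]   # WTP priority of the head packet
                if priority > best_priority:
                    best_cls, best_priority = cls, priority
        return self.queues[best_cls].popleft()[1] if best_cls is not None else None

# Example: class 1 is weighted twice as heavily as class 0.
scheduler = WTPScheduler({0: 1.0, 1: 2.0})
scheduler.enqueue(0, arrival_time=0.0, packet="p0")   # waited 1.0 s by t=1.0, priority 1.0
scheduler.enqueue(1, arrival_time=0.4, packet="p1")   # waited 0.6 s by t=1.0, priority 1.2
print(scheduler.dequeue(now=1.0))                     # 'p1'
```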

    Networking Mechanisms for Delay-Sensitive Applications

    The diversity of applications served by the explosively growing Internet is increasing. In particular, applications that are sensitive to end-to-end packet delays are becoming more common and include telephony, video conferencing, and networked games. While the single best-effort service of the current Internet favors throughput-greedy traffic by equipping congested links with large buffers, long queueing at the congested links hurts delay-sensitive applications. Furthermore, while numerous alternative architectures have been proposed to offer diverse network services, these innovative alternatives have failed to gain widespread end-to-end deployment. This dissertation explores different networking mechanisms for supporting the low queueing delay required by delay-sensitive applications. In particular, it considers two different approaches. The first assumes employing congestion control protocols for the traffic generated by the considered class of applications. The second relies on router operation only and does not require support from end hosts.

    Uncertain Bandwidth Calculation in Networks with Non-Linear Services

    ABSTRACT: Bandwidth is the maximum data transfer rate of a network; it indicates how much data can be sent over a connection in a given time. Most networks fail to estimate bandwidth availability accurately in wireless settings, so advance bandwidth reservation becomes a critical task for improving the utilization of network resources. The high variability of wireless channel conditions makes such bandwidth calculation difficult. To overcome these problems we introduce a scheme called "Bandwidth Recycling", which recycles unused bandwidth without changing the existing bandwidth reservation. The bandwidth of each node in the network is calculated from queries collected over long time periods. For small-scale networks we use an optimal algorithm with exponential time complexity; for large-scale networks we develop heuristics with polynomial time complexity; and we use the token bucket algorithm to avoid packet loss while traffic travels through the network. Using this bandwidth calculation, good accuracy can be achieved for each node in the network.
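
    The token bucket algorithm mentioned above can be sketched in a few lines: tokens accumulate at the reserved rate up to a burst allowance, and a packet is sent only if enough tokens are available. The rate and burst values below are arbitrary examples, and the class is a generic illustration of the algorithm rather than the scheme proposed in the paper.

```python
import time

class TokenBucket:
    """Token bucket policer: tokens accrue at `rate` per second up to `capacity`;
    a packet of `size` tokens conforms only if enough tokens are available."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, size):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True          # packet conforms: send it now
        return False             # exceeds the reservation: queue or drop per policy

# Example: a 1 Mbit/s reservation (125,000 bytes/s) with an 8 kB burst allowance.
bucket = TokenBucket(rate=125_000, capacity=8_192)
print(bucket.allow(1_500))       # True: fits within the initial burst
```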

    Service management for multi-domain Active Networks

    The Internet is an example of a multi-agent system. In our context, an agent is synonymous with network operators, Internet service providers (ISPs) and content providers. ISPs mutually interact for connectivity's sake, but the fact remains that two peering agents are inevitably self-interested. Egoistic behaviour manifests itself in two ways. Firstly, ISPs act in an environment where different ISPs have different spheres of influence, in the sense that they have control and management responsibilities over different parts of the environment. Secondly, contention occurs when an ISP intends to sell resources to another, which gives rise to at least two of its customers sharing (hence contending for) a common transport medium. The multi-agent interaction was analysed by simulating a game-theoretic approach, and the alignment of dominant strategies adopted by agents with evolving traits was abstracted. In particular, the contention for network resources is arbitrated such that a self-policing environment may emerge from a congested bottleneck. Over the past five years, larger ISPs have simply pedalled as fast as they could to meet the growing demand for bandwidth by throwing bandwidth at congestion problems. Today, the dire financial positions of Worldcom and Global Crossing illustrate, to a certain degree, the fallacies of over-provisioning network resources. The framework proposed in this thesis enables subscribers of an ISP to monitor and police each other's traffic in order to establish a well-behaved norm in utilising limited resources. This framework can be expanded to other inter-domain bottlenecks within the Internet. One of the main objectives of this thesis is also to investigate the impact on multi-domain service management in the future Internet, where active nodes could potentially be located amongst traditional passive routers. The advent of Active Networking technology necessitates node-level computational resource allocations, in addition to prevailing resource reservation approaches for communication bandwidth. Our motivation is to ensure that a service negotiation protocol takes account of these resources so that the response to a specific service deployment request from the end-user is consistent and predictable. To promote the acceleration of service deployment by means of Active Networking technology, a pricing model is also evaluated for computational resources (e.g., CPU time and memory). Previous work in these areas of research concentrates only on bandwidth-related (i.e., communication) resources. Our pricing approach takes account of both guaranteed and best-effort service by adapting the arbitrage theorem from financial theory. The central tenet of our approach is to synthesise insights from different disciplines to address problems in data networks. The greater part of the research experience was obtained during direct and indirect participation in the IST-10561 project known as FAIN (Future Active IP Networks) and the ACTS-AC338 project called MIAMI (Mobile Intelligent Agent for Managing the Information Infrastructure). The Inter-domain Manager (IDM) component was integrated as an integral part of the FAIN policy-based network management system (PBNM). Its monitoring component (developed during the MIAMI project) learns about routing changes that occur within a domain so that the management system and the managed nodes have the same topological view of the network. This enabled our reservation mechanism to reserve resources along the existing route set up by whichever underlying routing protocol is in place.

    Design of Overlay Networks for Internet Multicast - Doctoral Dissertation, August 2002

    Multicast is an efficient transmission scheme for supporting group communication in networks. Contrasted with unicast, where multiple point-to-point connections must be used to support communication among a group of users, multicast is more efficient because each data packet is replicated in the network at the branching points leading to distinct destinations, thus reducing the transmission load on the data sources and the traffic load on the network links. To implement multicast, networks need to incorporate new routing and forwarding mechanisms in addition to the existing unicast ones, and such mechanisms are not adequately supported in current networks. The IP multicast solution has serious scaling and deployment limitations, and cannot be easily extended to provide more enhanced data services. Furthermore, and perhaps most importantly, IP multicast has ignored the economic nature of the problem, lacking incentives for service providers to deploy the service in wide area networks. Overlay multicast holds promise for the realization of large-scale Internet multicast services. An overlay network is a virtual topology constructed on top of the Internet infrastructure. The concept of overlay networks enables multicast to be deployed as a service network rather than a network primitive mechanism, allowing deployment over heterogeneous networks without the need for universal network support. This dissertation addresses the network design aspects of overlay networks for providing scalable multicast services in the Internet. The resources and the network cost in the context of overlay networks are different from those in conventional networks, presenting new challenges and new problems to solve. Our design goals are the maximization of network utility and improved service quality. As the overall network design problem is extremely complex, we divide it into three components: the efficient management of session traffic (multicast routing), the provisioning of overlay network resources (bandwidth dimensioning) and overlay topology optimization (service placement). The combined solution provides a comprehensive procedure for planning and managing an overlay multicast network. We also consider a complementary form of overlay multicast called application-level multicast (ALMI). ALMI allows end systems to directly create an overlay multicast session among themselves. This gives applications the flexibility to communicate without relying on service providers. The tradeoff is that users do not have direct control over the topology and data paths taken by the session flows and will typically get lower quality of service due to the best-effort nature of the Internet environment. ALMI is therefore suitable for sessions of small size or sessions where all members are well connected to the network. Furthermore, the ALMI framework allows us to experiment with application-specific components, such as data reliability, in order to identify a useful set of communication semantics for enhanced data services.
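
    A basic building block of the multicast routing component described above is computing a distribution tree over the virtual topology. The sketch below builds a source-rooted shortest-path tree over an overlay graph with Dijkstra's algorithm; the graph representation, link costs and function name are illustrative assumptions, not the dissertation's actual routing formulation.

```python
import heapq

def shortest_path_tree(overlay, source):
    """Dijkstra over an overlay graph {node: {neighbour: link cost}}.
    Returns parent pointers describing a source-rooted distribution tree."""
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                               # stale heap entry
        for v, cost in overlay.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent

# Example: a four-node overlay; each member learns its upstream parent in the tree.
overlay = {
    "src": {"a": 1.0, "b": 4.0},
    "a":   {"src": 1.0, "b": 2.0, "c": 5.0},
    "b":   {"src": 4.0, "a": 2.0, "c": 1.0},
    "c":   {"a": 5.0, "b": 1.0},
}
print(shortest_path_tree(overlay, "src"))   # {'src': None, 'a': 'src', 'b': 'a', 'c': 'b'}
```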

    A cross-layer middleware architecture for time and safety critical applications in MANETs

    Mobile Ad hoc Networks (MANETs) can be deployed instantaneously and adaptively, making them highly suitable to military, medical and disaster-response scenarios. Using real-time applications for the provision of instantaneous and dependable communications, media streaming, and device control in these scenarios is a growing research field. Realising timing requirements in packet delivery is essential to safety-critical real-time applications that are both delay- and loss-sensitive. The safety of these applications is compromised by packet loss, both in the network and by the applications themselves, which drop packets exceeding delay bounds. However, the provision of this required Quality of Service (QoS) must overcome issues relating to the lack of reliable existing infrastructure and the preservation of safety-certified functionality. It must also overcome issues relating to layer-2 dynamics, with causal factors including hidden transmitters and fading channels. This thesis proposes that bounded maximum delay and safety-critical application support can be achieved by using cross-layer middleware. Such an approach benefits from the use of established protocols without requiring modifications to safety-certified ones. This research proposes ROAM: a novel, adaptive and scalable cross-layer Real-time Optimising Ad hoc Middleware framework for the provision and maintenance of performance guarantees in self-configuring MANETs. The ROAM framework is designed to be scalable to new optimisers and MANET protocols and requires no modification of protocol functionality. Four original contributions are proposed: (1) ROAM, a middleware entity that abstracts information from the protocol stack using application programming interfaces (APIs) and implements optimisers to monitor and autonomously tune conditions at protocol layers in response to dynamic network conditions; the cross-layer approach is MANET-protocol generic, imposes minimally on the protocol stack, and requires no protocol modification. (2) A horizontal handoff optimiser that responds to time-varying link quality to ensure optimal and most robust channel usage. (3) A distributed contention reduction optimiser that reduces channel contention and related delay in response to detection of the presence of a hidden transmitter. (4) A feasibility evaluation of the ROAM architecture to bound maximum delay and jitter in a comprehensive range of ns2-MIRACLE simulation scenarios that demonstrate independence from the key causes of network dynamics: application setting and MANET configuration, including mobility and topology. Experimental results show that ROAM can constrain end-to-end delay, jitter and packet loss to support real-time applications with critical timing requirements.
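
    To illustrate the kind of plug-in optimiser interface a cross-layer middleware such as ROAM implies, here is a hypothetical sketch of a monitoring cycle that reads abstracted stack metrics and returns tuning actions; all names, metrics and thresholds below are invented for illustration and are not ROAM's actual API.

```python
from abc import ABC, abstractmethod

class Optimiser(ABC):
    """Hypothetical optimiser plug-in: reads cross-layer metrics, returns tuning actions."""
    @abstractmethod
    def decide(self, metrics: dict) -> dict:
        ...

class ContentionReducer(Optimiser):
    """Toy analogue of a contention-reduction optimiser: scale back the offered rate
    when the observed MAC retry ratio suggests a hidden transmitter."""
    def decide(self, metrics):
        if metrics.get("mac_retry_ratio", 0.0) > 0.3:     # invented threshold
            return {"app_send_rate_scale": 0.8}
        return {}

def middleware_tick(optimisers, metrics):
    """One monitoring cycle: metrics gathered via (assumed) stack APIs, actions merged."""
    actions = {}
    for optimiser in optimisers:
        actions.update(optimiser.decide(metrics))
    return actions

# Example with made-up metric values.
print(middleware_tick([ContentionReducer()], {"mac_retry_ratio": 0.45}))
# {'app_send_rate_scale': 0.8}
```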

    A Quality of Service framework for upstream traffic in LTE across an XG-PON backhaul

    Passive Optical Networks (PONs) are promising as a transport network technology due to the high network capacity, long reach and strong QoS support in the latest PON standards. Long Term Evolution (LTE) is a popular wireless technology for its large data rates in the last mile. The natural integration of LTE and XG-PON, one of the latest PON standards, presents several challenges for XG-PON in satisfying the backhaul QoS requirements of aggregated upstream LTE applications. This thesis proves that a dedicated XG-PON-based backhaul is capable of ensuring the QoS treatment required by different upstream application types in LTE, by means of standard-compliant Dynamic Bandwidth Allocation (DBA) mechanisms. First, the design and evaluation of a standard-compliant, robust and fast XG-PON simulation module developed for the state-of-the-art ns-3 network simulator is presented. This XG-PON simulation module forms a trustworthy and large-scale simulation platform for the evaluations in the rest of the thesis, and has been released for use by the scientific community. The design and implementation details of the XGIANT DBA, which provides standard-compliant QoS treatment in an isolated XG-PON network, are then presented, along with comparative evaluations against the recently published EBU DBA. The evaluations explore the ability of both XGIANT and EBU to provide queuing-delay and throughput assurances for different classes of simplified (deterministic) traffic models over a range of upstream loads in XG-PON. The evaluation of the XGIANT and EBU DBAs is then presented in the context of a dedicated XG-PON backhaul for LTE, with regard to the influence of standard-compliant and QoS-aware DBAs on the performance of large-scale, UDP-based applications. These evaluations show that both the XGIANT and EBU DBAs fail to provide prioritised queuing-delay performance for three upstream application types (conversational voice, peer-to-peer video and best-effort Internet) in LTE; they also indicate the need for more dynamic and efficient QoS policies, along with an improved fairness policy, in a DBA used in the dedicated XG-PON backhaul to ensure the QoS requirements of the upstream LTE applications. Finally, the design and implementation details of two standard-compliant DBAs, namely Deficit XGIANT (XGIANT-D) and Proportional XGIANT (XGIANT-P), which provide the required QoS treatment in the dedicated XG-PON backhaul for all three application types in the LTE upstream, are presented. Evaluations of the XGIANT-D and XGIANT-P DBAs prove the ability of the fine-tuned QoS and fairness policies in these DBAs to ensure prioritised and fair queuing delay and throughput efficiency for UDP- and TCP-based applications, generated and aggregated under realistic conditions in the LTE upstream.
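
    As an illustration of the weighted, QoS-aware grant allocation that a DBA performs each upstream cycle, the sketch below distributes a byte budget across Alloc-IDs in proportion to per-class weights, capped at the reported requests; it is a generic example under assumed names and numbers, not the XGIANT, XGIANT-D or XGIANT-P algorithms.

```python
def allocate_grants(requests, weights, budget):
    """One upstream cycle of a generic weighted allocator (illustrative only, not
    the XGIANT-D or XGIANT-P algorithms): give each backlogged Alloc-ID a
    weight-proportional share capped at its request, then hand leftover bytes to
    the highest-weight Alloc-IDs that remain backlogged."""
    total_weight = sum(weights[a] for a, req in requests.items() if req > 0) or 1
    grants = {}
    for alloc_id, req in requests.items():
        share = budget * weights[alloc_id] // total_weight if req > 0 else 0
        grants[alloc_id] = min(req, share)
    leftover = budget - sum(grants.values())
    for alloc_id in sorted(requests, key=lambda a: -weights[a]):
        if leftover <= 0:
            break
        extra = min(requests[alloc_id] - grants[alloc_id], leftover)
        grants[alloc_id] += extra
        leftover -= extra
    return grants

# Example: voice is satisfied first, then video, with best effort taking the remainder.
print(allocate_grants({"voice": 3000, "video": 8000, "best_effort": 20000},
                      {"voice": 4, "video": 2, "best_effort": 1},
                      budget=16000))
# {'voice': 3000, 'video': 8000, 'best_effort': 5000}
```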