29 research outputs found

    Topological Design of Multiple Virtual Private Networks Utilizing Sink-Tree Paths

    With the deployment of MultiProtocol Label Switching (MPLS) over core backbone networks, it is possible for a service provider to build Virtual Private Networks (VPNs) supporting various classes of service with QoS guarantees. Efficiently mapping the logical layout of multiple VPNs over a service provider network is a challenging traffic engineering problem. The use of sink-tree (multipoint-to-point) routing paths in an MPLS network makes the VPN design problem different from traditional design approaches, where a full mesh of point-to-point paths is often the choice. The clear benefits of using sink-tree paths are a reduction in the number of label switched paths and bandwidth savings due to larger granularities of bandwidth aggregation within the network. In this thesis, the design of multiple VPNs over an MPLS-like infrastructure network using sink-tree routing is formulated as a mixed integer programming problem to simultaneously find a set of VPN logical topologies and their dimensions to carry multi-service, multi-hour traffic from various customers. Such a problem formulation is NP-hard. A heuristic path selection algorithm is proposed here to scale the VPN design problem by choosing a small-but-good candidate set of feasible sink-tree paths over which the optimal routes and capacity assignments are determined. The proposed heuristic is clearly shown to speed up the optimization process, and a solution can be obtained within a reasonable time for a realistic-size network. Nevertheless, when a large number of VPNs are laid out simultaneously, a standard optimization approach has limited scalability. Here, a heuristic termed the Minimum-Capacity Sink-Tree Assignment (MCSTA) algorithm is proposed to approximate the optimal bandwidth and sink-tree route assignment for multiple VPNs in polynomial computational time. Numerical results demonstrate that the MCSTA algorithm yields a good solution within a small error and sometimes yields the exact solution. Lastly, the proposed VPN design models and solution algorithms are extended to multipoint traffic demands, including multipoint-to-point and broadcast connections.
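    To make the sink-tree idea concrete, the following is a minimal Python sketch (not the thesis's MCSTA algorithm) of how multipoint-to-point routing aggregates bandwidth: a shortest-path tree is rooted at a VPN egress node and every source's demand is routed along it, so shared links carry aggregated demand and only one tree-shaped label switched path is needed per egress. The toy graph, node names and demand values are illustrative assumptions.

```python
import heapq
from collections import defaultdict

def shortest_path_tree(graph, root):
    """Dijkstra from `root`; returns parent pointers forming a sink tree.
    `graph[u]` maps neighbour v -> link weight (assumed symmetric)."""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

def sink_tree_loads(graph, egress, demands):
    """Route every (source -> egress) demand along the sink tree and
    accumulate the aggregated bandwidth carried by each link."""
    parent = shortest_path_tree(graph, egress)
    load = defaultdict(float)
    for src, bw in demands.items():
        node = src
        while parent.get(node) is not None:   # walk towards the egress
            load[(node, parent[node])] += bw
            node = parent[node]
    return dict(load)

# Toy VPN: three customer sites sending traffic to one egress node "E".
graph = {
    "A": {"B": 1, "C": 2}, "B": {"A": 1, "C": 1, "E": 2},
    "C": {"A": 2, "B": 1, "E": 1}, "E": {"B": 2, "C": 1},
}
print(sink_tree_loads(graph, "E", {"A": 5.0, "B": 3.0, "C": 4.0}))
# Link ("C", "E") carries the aggregated demand of sites A and C.
```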

    Bandwidth scheduling and its application in ATM networks


    Improved learning automata applied to routing in multi-service networks

    Multi-service communications networks are generally designed, provisioned and configured based on source-destination user demands expected to occur over a recurring time period. However, because network users' actions are non-deterministic, actual user demands will vary from those expected, potentially causing some network resources to be under-provisioned and others over-provisioned. As actual user demands vary over the recurring time period from those expected, so the status of the various shared network resources may also vary. This high degree of uncertainty necessitates adaptive resource allocation mechanisms to share the finite network resources more efficiently, so that more of the actual user demands may be accommodated on the network. The overhead of these adaptive resource allocation mechanisms must be low in order to scale for use in large networks carrying many source-destination user demands. This thesis examines the use of stochastic learning automata for the adaptive routing problem (these being adaptive, distributed and simple in implementation and operation) and seeks to improve their weakness of slow convergence whilst maintaining their strength of subsequent near-optimal performance. Firstly, current reinforcement algorithms (the part causing the automaton to learn) are examined for applicability and, contrary to the literature, the discretised schemes are found in general to be unsuitable. Two algorithms are chosen (one with fast convergence, the other with good subsequent performance) and are improved by automatically adapting the learning rates and automatically switching between the two algorithms. Both novel methods use the local entropy of the action probabilities to determine the convergence state. However, when convergence speed and blocking probability are compared to a bandwidth-based dynamic link-state shortest-path algorithm, the latter is found to be superior. A novel re-application of learning automata to the routing problem is therefore proposed, using link utilisation levels instead of call acceptance or packet delay. Learning automata now return a lower blocking probability than the dynamic shortest-path based scheme under realistic loading levels, but still suffer from a significant number of convergence iterations. Therefore the final improvement is to combine both learning automata and shortest-path concepts to form a hybrid algorithm. The resulting blocking probability of this novel routing algorithm is superior to either algorithm alone, even when using trend user demands.
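    As an illustration of the reinforcement schemes discussed above, the following is a minimal Python sketch of a linear reward-inaction (L_RI) automaton choosing among candidate routes. The route names, reward probabilities and learning rate are illustrative assumptions; this is not the thesis's improved or hybrid algorithm, only the basic update rule such schemes start from.

```python
import random

class LinearRewardInaction:
    """L_RI automaton: action probabilities move toward the chosen action
    only when the environment signals success (reward); nothing changes
    on failure (inaction)."""
    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.lr = learning_rate
        self.p = {a: 1.0 / len(self.actions) for a in self.actions}

    def choose(self):
        r, acc = random.random(), 0.0
        for a in self.actions:
            acc += self.p[a]
            if r <= acc:
                return a
        return self.actions[-1]

    def update(self, chosen, rewarded):
        if not rewarded:                      # inaction on penalty
            return
        for a in self.actions:
            if a == chosen:
                self.p[a] += self.lr * (1.0 - self.p[a])
            else:
                self.p[a] -= self.lr * self.p[a]

# Illustrative use: pick among three candidate routes; an accepted call
# (or, in the re-application above, a lightly utilised link) is a reward.
automaton = LinearRewardInaction(["route1", "route2", "route3"], 0.05)
for _ in range(1000):
    route = automaton.choose()
    accepted = random.random() < {"route1": 0.9, "route2": 0.6, "route3": 0.4}[route]
    automaton.update(route, accepted)
print(max(automaton.p, key=automaton.p.get))  # typically "route1"
```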

    Renegotiable VBR service

    In this work we address the problem of supporting the QoS requirements of applications while efficiently allocating network resources. We analyse this problem at the source node, where the traffic profile is negotiated with the network and the traffic is shaped according to the contract. We advocate VBR renegotiation as an efficient mechanism to accommodate traffic fluctuations over the burst time-scale. This is in line with the Integrated Services framework of the IETF and the Resource reSerVation Protocol (RSVP), where the negotiated contract may be modified periodically. In this thesis, we analyse the fundamental elements needed to solve the VBR renegotiation problem. A source periodically estimates its needs based on: (1) its future traffic, (2) its cost objective, and (3) information from the past. The issues in this estimation are twofold: predicting the future traffic and, given a prediction, determining the optimal change. In the case of a CBR specification the optimisation problem is trivial, but with a VBR specification it is complex because of the multidimensionality of the VBR traffic descriptor and the non-zero state of the system at the times when the parameter set is changed. We therefore focus on the problem of finding the optimal change for sources with pre-recorded or classified traffic; the prediction of future traffic is out of the scope of this thesis. Traditional models are not suitable for modelling this dynamic situation because they do not take into account the non-zero conditions at the transient moments. To address the shortfalls of the traditional approaches, a new class of shapers, the time-varying leaky bucket shaper class, is introduced and characterised using network calculus. To our knowledge, this is the first model that takes into account non-zero conditions at the transient time. This result forms the basis of the Renegotiable VBR Service (RVBR). Applying our RVBR mathematical model to the initial problem of supporting applications' QoS requirements while efficiently allocating network resources results in simple, efficient algorithms. Through simulation, we first compare RVBR service with VBR service and with renegotiable CBR service, and show that RVBR service provides significant advantages in terms of resource costs and resource utilisation. Then we illustrate that when the service assumes zero conditions at the transient time, the source could experience losses under policing because of the mismatch between the assumed bucket and buffer levels and the policed bucket and buffer levels. As an example of RVBR service usage, we describe the simulation of RVBR service in a scenario where a sender transmits MPEG-2 video over a network using the RSVP reservation protocol with Controlled-Load service. We also describe the implementation design of a Video on Demand application, which is the first example of an RVBR-enabled application. The simulation and experimentation results lead us to believe that RVBR service provides an adequate service (in terms of QoS guarantees and efficient resource allocation) to sources with pre-recorded or classified traffic.
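    The following is a minimal Python sketch, under assumed parameter values, of the mechanism RVBR revolves around: a token-bucket (leaky-bucket) shaper whose (rate, burst) contract can be renegotiated at interval boundaries while the non-zero bucket level is carried over rather than reset. It is not the thesis's network-calculus model of time-varying leaky bucket shapers, only an illustration of why the non-zero state at the transient matters.

```python
class RenegotiableTokenBucket:
    """Token-bucket shaper whose (rate, bucket_size) pair can be changed
    at renegotiation instants without resetting the bucket level, i.e.
    the non-zero state at the transient is preserved."""
    def __init__(self, rate, bucket_size):
        self.rate = rate                  # tokens added per unit time
        self.bucket_size = bucket_size    # maximum burst
        self.tokens = bucket_size
        self.last = 0.0

    def _refill(self, now):
        self.tokens = min(self.bucket_size,
                          self.tokens + self.rate * (now - self.last))
        self.last = now

    def conforms(self, now, amount=1.0):
        """True if `amount` units may be sent at time `now`."""
        self._refill(now)
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False

    def renegotiate(self, now, new_rate, new_bucket_size):
        """Switch to a new (rate, burst) contract; the current token level
        is clipped to the new bucket size rather than reset to full."""
        self._refill(now)
        self.rate = new_rate
        self.bucket_size = new_bucket_size
        self.tokens = min(self.tokens, new_bucket_size)

# Illustrative renegotiation: the carried-over level, not a fresh bucket,
# decides whether the next burst conforms.
shaper = RenegotiableTokenBucket(rate=10.0, bucket_size=5.0)
print(shaper.conforms(0.3, 4.0))     # True: burst allowance is consumed
shaper.renegotiate(0.5, new_rate=2.0, new_bucket_size=3.0)
print(shaper.conforms(0.6, 3.0))     # outcome depends on carried-over tokens
```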

    Integration of the cloud computing paradigm with the operator's network infrastructure

    The proliferation of Internet access allows users to consume services made available directly through the Internet, which translates into a change in the paradigm of using applications and in the way of communicating, popularizing the so-called cloud computing paradigm. Cloud computing brings with it requirements at two different levels: at the cloud level, usually relying on centralized data centers, where information technology and network resources must be able to meet the demand of such services; and at the access level, i.e., depending on the service being consumed, different qualities of service are required in the access network, which is a Network Operator (NO) domain. In summary, there is an obvious dependency on the network. However, the network has been playing a relatively minor role, mostly as a provider of (best-effort) connectivity within the cloud and in the access network. The work developed in this Thesis enables the effective integration of the cloud and NO domains, providing the required network support for the cloud. We propose a framework and a set of associated mechanisms for the integrated management and control of cloud computing and NO domains to provide end-to-end services. Moreover, we elaborate a thorough study on the embedding of virtual resources in this integrated environment. The study focuses on maximizing the hosting of virtual resources on the physical infrastructure through optimal embedding strategies (considering the initial allocation of resources as well as adaptations through time), while at the same time minimizing the costs associated with energy consumption, in single and multiple domains. Furthermore, we explore how the NO can take advantage of the integrated environment to host traditional network functions. In this sense, we study how virtual network Service Functions (SFs) should be modelled and managed in a cloud environment and enhance the framework accordingly. A thorough evaluation of the proposed solutions was performed in the scope of this Thesis, assessing their benefits. We implemented proofs of concept to demonstrate the added value, feasibility and ease of deployment of the proposed framework. Furthermore, the evaluation of the embedding strategies was performed using simulation and Integer Linear Programming (ILP) solving tools, and it showed that it is possible to reduce the energy consumption of the physical infrastructure without jeopardizing the acceptance of virtual resources. This reduction can be further increased by allowing virtual resource adaptation through time; however, one should keep in mind the costs associated with the adaptation processes. These costs can be minimized, but the acceptance of virtual resources may also be reduced. This tradeoff has also been a subject of the work in this Thesis.
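    The ILP formulations themselves are not reproduced in the abstract; as a simple stand-in for the energy-aware embedding strategies it refers to, the following Python sketch uses a greedy first-fit-decreasing heuristic to consolidate virtual resources onto as few physical nodes as possible, so that unused nodes can be powered down. Node capacities, resource names and demands are illustrative assumptions.

```python
def consolidate(virtual_demands, node_capacity):
    """First-fit-decreasing placement: pack virtual resources onto as few
    physical nodes as possible so idle nodes can be powered down.
    `virtual_demands` maps virtual resource -> CPU demand (arbitrary units)."""
    nodes = []  # each entry: {"free": remaining capacity, "hosted": [...]}
    for vr, demand in sorted(virtual_demands.items(), key=lambda kv: -kv[1]):
        if demand > node_capacity:
            raise ValueError(f"{vr} does not fit on any single node")
        for node in nodes:
            if node["free"] >= demand:          # reuse an already-on node
                node["free"] -= demand
                node["hosted"].append(vr)
                break
        else:                                   # no node fits: power one on
            nodes.append({"free": node_capacity - demand, "hosted": [vr]})
    return nodes

# Illustrative mix of virtual machines and virtualised Service Functions.
demands = {"vm1": 40, "vm2": 30, "vm3": 30, "sf_firewall": 20, "sf_nat": 10}
placement = consolidate(demands, node_capacity=70)
print(len(placement), "physical nodes powered on")
for node in placement:
    print(node["hosted"], "remaining capacity:", node["free"])
```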

    Design of traffic shaper / scheduler for packet switches and DiffServ networks : algorithms and architectures

    The convergence of communications, information, commerce and computing is creating a significant demand and opportunity for multimedia and multi-class communication services. In such environments, controlling the network behavior and guaranteeing the user's quality of service is required. A flexible hierarchical sorting architecture, which can function either as a traffic shaper or as a scheduler according to the requirements of the traffic load, is presented to meet this requirement. The core structure can be implemented as a hierarchical traffic shaper that can support a large number of connections with a wide variety of rates and burstiness without loss of granularity in the cells' conforming departure times. The hierarchical traffic shaper implements an exact sorting scheme with a substantially reduced memory size by using two stages of timing queues, and with a substantial reduction in complexity, without introducing any sorting inaccuracy. By setting a suitable threshold on the length of the departure queue and using a lookahead algorithm, the core structure can be converted into a hierarchical rate-adaptive scheduler. Based on the traffic load, it can work as an exact-sorting traffic shaper or as a Generic Cell Rate Algorithm (GCRA) scheduler. Such a rate-adaptive scheduler greatly reduces the cell transfer delay and the maximum memory occupancy while keeping the fairness in bandwidth assignment that is an inherent characteristic of GCRA. By introducing a best-effort queue to accommodate best-effort traffic, the hierarchical sorting architecture can be changed into a near work-conserving scheduler. It assigns the remaining bandwidth to best-effort traffic, improving the utilization of the outgoing link while guaranteeing the quality-of-service requirements of those services that require such guarantees. The inherent flexibility of the hierarchical sorting architecture, combined with intelligent algorithms, determines its multiple functions. Its implementation not only manages buffer and bandwidth resources effectively, but also requires no more than off-the-shelf hardware technology. The correlation between the extra shaping delay and the rate of the connections is revealed, and an improved fair traffic shaping algorithm, the Departure Event Driven plus Completing Service Time Resorting algorithm, is presented. The proposed algorithm introduces a resorting process into the Departure Event Driven Traffic Shaping Algorithm to resolve the contention of multiple cells that are all eligible for transmission in the traffic shaper. By using a resorting process based on each connection's rate, better fairness and flexibility in bandwidth assignment can be provided for connections with a wide range of rates. A Dual Level Leaky Bucket Traffic Shaper (DLLBTS) architecture is proposed for implementation at the edge nodes of Differentiated Services networks in order to facilitate the quality-of-service management process. The proposed architecture can guarantee not only the class-based Service Level Agreement but also fair resource sharing among flows belonging to the same class. A simplified DLLBTS architecture is also given, which achieves the goals of DLLBTS while maintaining a very low implementation complexity, so that it can be realized with current VLSI technology. In summary, the shaping and scheduling algorithms in high-speed packet switches and DiffServ networks are studied, and intelligent implementation schemes are proposed for them.
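    Since the rate-adaptive scheduler above falls back to a Generic Cell Rate Algorithm (GCRA) scheduler, the following is a minimal Python sketch of the GCRA conformance test in its virtual-scheduling form. The increment (emission interval) and limit (tolerance) values are illustrative assumptions, and the sketch is not the hierarchical architecture proposed in the thesis.

```python
class GCRA:
    """Virtual-scheduling form of the Generic Cell Rate Algorithm.
    T   = increment (emission interval, i.e. 1/rate)
    tau = limit (tolerance, e.g. cell delay variation tolerance)"""
    def __init__(self, increment, limit):
        self.T = increment
        self.tau = limit
        self.tat = None                 # theoretical arrival time

    def conforming(self, arrival):
        """Return True if a cell arriving at `arrival` conforms; the TAT
        is advanced only for conforming cells."""
        if self.tat is None:            # first cell always conforms
            self.tat = arrival + self.T
            return True
        if arrival < self.tat - self.tau:
            return False                # too early: non-conforming, TAT unchanged
        self.tat = max(arrival, self.tat) + self.T
        return True

gcra = GCRA(increment=10.0, limit=2.0)  # illustrative parameters
print([gcra.conforming(t) for t in (0.0, 9.0, 18.0, 20.0, 45.0)])
# -> [True, True, True, False, True]
```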

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular the efficient provision of Internet services. The expected challenges of multimedia communication, together with the increasing, massive utilization of IP-based applications, urgently require a redesign of networking solutions in terms of both new functionality and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial topics and issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering the Quality of Service (QoS) requirements of multi-service applications, exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking on internetworking IP and ATM. These technologies, architectures, models and implementations are reviewed in greater detail in addressing possible issues in integrating these architectures in a multi-service enterprise network. The objective is to make recommendations as to the best means of interworking the two, exploiting the salient features of each, to provide a faster, reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM once a commercial worldwide ATM network is deployed is not addressed, as the details of such a network remain in too much of a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.

    Quality of service support for multimedia applications in mobile ad hoc networks

    EThOS - Electronic Theses Online Service, United Kingdom