
    Survivable Virtual Network Embedding in Transport Networks

    Network Virtualization (NV) is perceived as an enabling technology for the future Internet and the 5th Generation (5G) of mobile networks. It is becoming increasingly difficult to keep up with emerging applications’ Quality of Service (QoS) requirements in an ossified Internet. NV addresses the current Internet’s ossification problem by allowing the co-existence of multiple Virtual Networks (VNs), each customized to a specific purpose, on the shared Internet. NV also facilitates a new business model, namely Network-as-a-Service (NaaS), which separates applications and services from the networks supporting them. 5G mobile network operators have adopted the NaaS model to partition their physical network resources into multiple VNs (also called network slices) and lease them to service providers. Service providers use the leased VNs to offer customized services satisfying specific QoS requirements without any investment in deploying and managing a physical network infrastructure. The benefits of NV come with additional resource management challenges. A fundamental problem in NV is to efficiently map the virtual nodes and virtual links of a VN to physical nodes and paths, respectively, known as the Virtual Network Embedding (VNE) problem. A VNE that can survive physical resource failures is known as the survivable VNE (SVNE) problem, which has received significant attention recently. In this thesis, we address variants of the SVNE problem with different bandwidth and reliability requirements for transport networks. Specifically, the thesis makes four main contributions. First, a connectivity-aware VNE approach that ensures VN connectivity without bandwidth guarantees in the face of multiple link failures. Second, a joint spare capacity allocation and VNE scheme that provides a bandwidth guarantee against link failures by augmenting VNs with the necessary spare capacity. Third, a generalized recovery mechanism to re-embed the VNs impacted by a physical node failure. Fourth, a reliable VNE scheme with dedicated protection that allows tuning the available bandwidth of a VN during a physical link failure. We show the effectiveness of the proposed SVNE schemes through extensive simulations. We believe that this thesis sets the stage for further research, especially in the area of automated failure management for next-generation networks.
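
    For readers unfamiliar with the embedding step, the following is a minimal greedy sketch of the basic (non-survivable) VNE decision the abstract refers to. The attribute names ("cpu", "bw") and the greedy ordering are illustrative assumptions, not the thesis's algorithms.

```python
# Greedy VNE sketch: map virtual nodes to substrate nodes with enough CPU,
# then map virtual links to shortest substrate paths with enough bandwidth.
import networkx as nx

def greedy_embed(virtual: nx.Graph, substrate: nx.Graph):
    node_map, used = {}, set()
    # Place the most CPU-demanding virtual nodes first, each on the
    # least-loaded substrate node that still fits (illustrative policy).
    for v, vd in sorted(virtual.nodes(data=True), key=lambda x: -x[1]["cpu"]):
        candidates = [s for s, sd in substrate.nodes(data=True)
                      if sd["cpu"] >= vd["cpu"] and s not in used]
        if not candidates:
            return None                      # embedding rejected
        s = max(candidates, key=lambda n: substrate.nodes[n]["cpu"])
        node_map[v], _ = s, used.add(s)
        substrate.nodes[s]["cpu"] -= vd["cpu"]
    link_map = {}
    # Route each virtual link over a shortest path with enough bandwidth.
    for a, b, vd in virtual.edges(data=True):
        feasible = nx.subgraph_view(
            substrate,
            filter_edge=lambda u, w: substrate[u][w]["bw"] >= vd["bw"])
        try:
            path = nx.shortest_path(feasible, node_map[a], node_map[b])
        except nx.NetworkXNoPath:
            return None
        for u, w in zip(path, path[1:]):
            substrate[u][w]["bw"] -= vd["bw"]
        link_map[(a, b)] = path
    return node_map, link_map
```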

    Survivability schemes for metro ethernet networks

    Ph.D. (Doctor of Philosophy)

    Scalable Column Generation Models and Algorithms for Optical Network Planning Problems

    The Column Generation method has proven to be a powerful tool for modeling and solving large-scale optimization problems in practical domains such as operations management, logistics, and computer design. This decomposition approach has also been applied with great success to several classes of classical network design and planning problems in telecommunications. In this thesis, we confirm that the Column Generation methodology is also a powerful tool for solving several contemporary network design problems driven by rising worldwide demand for heavy traffic (100 Gbps, 400 Gbps, and 1 Tbps), with an emphasis on cost-effective and resilient networks. Such problems are very challenging in terms of complexity as well as solution quality. The research in this thesis attacks four challenging design problems in optical networks: design of p-cycles subject to wavelength continuity, design of dependent and independent p-cycles against multiple failures, design of survivable virtual topologies against multiple failures, and design of a multirate optical network architecture. For each design problem, we develop a new mathematical model based on a Column Generation decomposition scheme. Numerical results show that the Column Generation methodology is the right choice for hard network design problems, since it allows us to efficiently solve large-scale network instances that have puzzled the current state of the art. Additionally, the thesis reveals the great flexibility of Column Generation in formulating design problems with quite different natures and requirements. The results obtained in this thesis show, firstly, that p-cycles should be designed under a wavelength continuity assumption in order to save converter cost, since the difference in capacity requirement under wavelength conversion versus wavelength continuity is insignificant. Secondly, results from our new general design model for failure-dependent p-cycles show that failure-dependent p-cycles save significantly more spare capacity than failure-independent p-cycles. Thirdly, large instances of the survivable topology design problem can be solved quasi-optimally thanks to our new path-formulation model with online generation of augmenting paths. Lastly, our new hierarchical Column Generation model reveals the importance of high-capacity devices such as 100 Gbps transceivers and the impact of restricting the number of regeneration sites on the provisioning cost of multirate WDM networks.
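
    The thesis's p-cycle models are far richer than anything that fits here, but the column generation loop itself can be illustrated on the classic cutting-stock problem: solve a restricted master LP, use its duals to price a new column, and stop when no column has negative reduced cost. A sketch with illustrative data, assuming PuLP with its bundled CBC solver:

```python
# Column generation for cutting stock: the master LP chooses how often to
# use each cutting pattern; the pricing problem is an integer knapsack.
import pulp

ROLL, sizes, demand = 100, [45, 36, 31, 14], [97, 610, 395, 211]
patterns = [[ROLL // s if i == j else 0 for i in range(4)]
            for j, s in enumerate(sizes)]              # trivial initial columns

while True:
    master = pulp.LpProblem("master", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{p}", lowBound=0) for p in range(len(patterns))]
    master += pulp.lpSum(x)                            # minimise rolls used
    for i in range(4):
        master += (pulp.lpSum(patterns[p][i] * x[p]
                              for p in range(len(patterns))) >= demand[i],
                   f"dem{i}")
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    duals = [master.constraints[f"dem{i}"].pi for i in range(4)]

    # Pricing: pack the most dual value into one roll (integer knapsack).
    price = pulp.LpProblem("pricing", pulp.LpMaximize)
    a = [pulp.LpVariable(f"a{i}", lowBound=0, cat="Integer") for i in range(4)]
    price += pulp.lpSum(duals[i] * a[i] for i in range(4))
    price += pulp.lpSum(sizes[i] * a[i] for i in range(4)) <= ROLL
    price.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.value(price.objective) <= 1 + 1e-6:        # reduced cost = 1 - value
        break                                          # no improving column left
    patterns.append([int(a[i].value()) for i in range(4)])

print(len(patterns), "columns; LP bound:", pulp.value(master.objective))
```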

    Network coding-based survivability techniques for multi-hop wireless networks

    Multi-hop Wireless Networks (MWNs) have drawn a lot of attention in the last decade and will continue to be an active research area in the future. MWNs are attractive because they require much less effort to install and operate than wired networks, and they provide network users with flexibility and convenience. With these advantages, however, come many challenges. In this work, we focus on one important challenge: network survivability, that is, the network's ability to sustain failures and recover from service interruption in a timely manner. Survivability mechanisms fall into two main categories: protection and restoration. Protection is usually favored over restoration because it typically provides faster recovery. The problem with traditional protection schemes, however, is that they are very demanding and consume a lot of network resources: at least 50% of the resources used in a communication session are spent providing the destination with redundant information, which is useful only when a network failure or information loss occurs. To overcome this problem and make protection more practical, we need to reduce the network resources used for proactive protection without compromising recovery speed. To achieve this goal, we propose to use network coding. Network coding allows intermediate network nodes to combine data packets instead of simply forwarding them as is, which minimizes the network resources consumed for protection purposes. In this work we give special attention to the survivability of many-to-one wireless flows, where a set of N sources send data units to a common destination T. Examples of such many-to-one flows are found in Wireless Mesh Networks (WMNs) and Wireless Sensor Networks (WSNs). We present two techniques to provide proactive protection of the information flow in such networks. First, we present a centralized approach, for which we derive and prove the sufficient and necessary conditions that allow us to protect the many-to-one information flow against a single link failure using only one additional path. We provide a detailed study of this technique, covering extensions to more general cases; a complexity analysis that proves the NP-completeness of the problem for networks with limited min-cuts; and a performance evaluation showing that, in the worst case, our coding-based protection scheme reduces the useful information rate by 50% (i.e., it becomes equivalent to traditional protection schemes). Next, we study the implementation of this approach when all network nodes have single transceivers. In this part of our work we first present a greedy scheduling algorithm for the sources' transmissions based on digital network coding, and then show how analog network coding can further enhance the performance of the scheduling algorithm. Our second protection scheme uses deterministic binary network coding in a distributed manner to enhance the resiliency of the sensors-to-base information flow against packet loss. We study the coding efficiency issue and introduce the idea of relative indexing to reduce the overhead of the coding coefficients. Moreover, we show through a simulation study that our approach is highly scalable and performs better as the network size and/or number of sources increases.
The final part of this work deals with unicast communication sessions, where a single source node S transmits data to a single destination node T over multiple hops. We present a different way to handle the survivability vs. bandwidth tradeoff, showing how to enhance the survivability of the S-T information flow without reducing the maximum achievable S-T information rate. The basic idea is not to protect the bottleneck links in the network, but to protect all other links where possible. We divide this problem into two subproblems: 1) pre-cut protection, which we prove to be NP-hard and for which we present an ILP formulation and a heuristic approach; and 2) post-cut protection, where we prove that all data units not delivered to T directly after the min-cut can be protected against a single link failure. Using network coding in this problem allows us to maximize the number of protected data units before and after the min-cut.
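
    As a toy illustration of the coding idea underlying these schemes (not the thesis's constructions): one extra path carrying the XOR of the N data units lets the destination recover any single lost unit, instead of duplicating every unit as traditional 1+1 protection would. Packet contents below are illustrative.

```python
# Single-failure protection of N data units with one XOR-coded extra path.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

sources = [b"unitA", b"unitB", b"unitC"]      # N = 3 equal-size data units
parity = reduce(xor, sources)                 # sent on one additional path

# Suppose the path carrying sources[1] fails: T still holds the rest.
received = [sources[0], None, sources[2]]
recovered = reduce(xor, [p for p in received if p is not None] + [parity])
assert recovered == sources[1]
print("recovered:", recovered)
```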

    Finding and Mitigating Geographic Vulnerabilities in Mission Critical Multi-Layer Networks

    Thesis (Ph.D.), School of Computing and Engineering, University of Missouri--Kansas City, 2016. Dissertation advisor: Cory Beard. Includes bibliographical references (pages 232-257).
    In Air Traffic Control (ATC), communications outages may lead to immediate loss of communications or radar contact with aircraft. In the short term, there may be safety-related issues, as important services including power systems, ATC, or communications for first responders during a disaster may be out of service. In the long term, significant financial damage from airline delays and cancellations may occur. This highlights the different types of impact that may follow a disaster or other geographic event. The question is: how do we evaluate and improve the ability of a mission-critical network to perform its mission during geographically correlated failures? To answer this question, we consider several large and small networks, including a multi-layer ATC Service Oriented Architecture (SOA) network known as SWIM. This research presents a number of tools to analyze and mitigate both long- and short-term geographic vulnerabilities in mission-critical networks. To provide context for the tools, a disaster planning approach is presented that focuses on resiliency evaluation, provisioning of demands, topology design, and mitigation of vulnerabilities. For the resilience evaluation, we propose a novel metric known as the Network Impact Resilience (NIR) metric and a reduced-state algorithm to compute the NIR, known as the Self-Pruning Network State Generation (SP-NSG) algorithm. These tools not only evaluate the resiliency of a network under a variety of possible network tests, but also identify geographic vulnerabilities. For demand provisioning and mitigation of vulnerabilities, we present methods that focus on provisioning in preparation for the rerouting of demands immediately following an event, based on Service Level Agreements (SLAs), and on fast rerouting of demands around geographic vulnerabilities using Multi-Topology Routing (MTR). The topology design part focuses on adding nodes to make topologies more resistant to geographic vulnerabilities. Additionally, a set of network performance tools is proposed for mission-critical networks that can model at least up to 2nd-order network delay statistics. The first is an extension of the Queueing Network Analyzer (QNA) to model multi-layer networks (and specifically SOA networks). The second is a network decomposition tool based on Linear Algebraic Queueing Theory (LAQT); this is one of the first extensive uses of LAQT for network modeling. Benefits, results, and limitations of both methods are described.
    Contents: Introduction -- SWIM Network - Air Traffic Control example -- Performance analysis of mission critical multi-layer networks -- Evaluation of geographically correlated failures in multi-layer networks -- Provisioning and restoral of mission critical services for disaster resilience -- Topology improvements to avoid high impact geographic events -- Routing of mission critical services during disasters -- Conclusions and future research -- Appendix A. Pub/Sub simulation model description -- Appendix B. ME Random Number Generation
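
    As a minimal sketch of what a geographic-failure evaluation involves (the NIR metric and the SP-NSG algorithm themselves are more involved than this): remove every node inside a disaster disk and measure the fraction of demand pairs that still have a route. Coordinates, the radius, and the impact measure are illustrative assumptions.

```python
# Evaluate a single geographically correlated failure: nodes within
# `radius` of `center` fail together; count surviving routable demands.
import math
import networkx as nx

def surviving_demand_fraction(g: nx.Graph, demands, center, radius):
    failed = {n for n, d in g.nodes(data=True)
              if math.dist(d["pos"], center) <= radius}
    survivors = g.subgraph(set(g) - failed)
    ok = sum(1 for s, t in demands
             if s in survivors and t in survivors
             and nx.has_path(survivors, s, t))
    return ok / len(demands)

g = nx.Graph()
g.add_node(0, pos=(0, 0)); g.add_node(1, pos=(1, 0)); g.add_node(2, pos=(5, 5))
g.add_edges_from([(0, 1), (1, 2)])
print(surviving_demand_fraction(g, [(0, 2)], center=(1, 0), radius=0.5))  # 0.0
```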

    Grooming and Protection of Dynamic Traffic in WDM Networks

    With new technologies in optical networking, an increasing quantity of data can be carried by a single wavelength, reaching up to 40 gigabits per second (Gbps). Meanwhile, individual data flows require much less bandwidth. Traffic grooming is a technique that allows the efficient use of the bandwidth offered by a wavelength: it consists of assembling several low-speed data streams into a single data entity that can be carried on one wavelength. The Wavelength Division Multiplexing (WDM) technique allows carrying multiple wavelengths on a single fiber. Together, WDM and traffic grooming allow carrying data on the order of terabits per second (Tbps) over a single optical fiber. Traffic protection in optical networks thus becomes a vital operation, since a single failure can disrupt thousands of users and may cost the operator and the network users several millions of dollars in lost revenue. Protection techniques involve reserving additional capacity to carry traffic in case of a failure in the network. This thesis studies techniques for grooming and protecting traffic using p-cycles in optical networks in a dynamic traffic context. Most existing work considers static traffic, where the network status and the traffic are given at the outset and do not change; in addition, most of this work relies on heuristics or on methods that suffer from a critical lack of scalability. In the dynamic traffic context, two major difficulties are added to the studied problems because of the continuous change in network traffic. The first is that the solution proposed in the previous period, even if optimal, does not necessarily remain optimal in the current period, so a re-optimization of the solution is required. The second is that solving the problem for a given period differs from solving it for the initial period, because the ongoing connections in the network should not be disturbed too much at each time period. Our study of traffic grooming in a dynamic traffic context proposes different scenarios for dealing with this type of traffic, with the objective of maximizing the bandwidth of the newly granted connections at each time period. Mathematical formulations of the different scenarios considered for the grooming problem are proposed. Our work on the protection problem considers two types of p-cycles: those protecting links (basic p-cycles) and FIPP p-cycles (p-cycles protecting paths). This work consisted primarily of proposing different scenarios for managing protection p-cycles in a dynamic traffic context.
We then studied the stability of p-cycles in a dynamic traffic context. Formulations of different scenarios were proposed, and the proposed solution methods allow us to address larger problem instances than those reported in the literature. We rely on column generation to implicitly enumerate the most promising cycles. In the study of path-protecting p-cycles, or FIPP p-cycles, we proposed mathematical formulations for the master and pricing problems, and used a hierarchical decomposition of the problem that allows us to obtain better results in a reasonable time. As for basic p-cycles, we studied the stability of FIPP p-cycles in a dynamic traffic context. The work shows that, depending on the optimization criterion, both basic p-cycles (protecting links) and FIPP p-cycles (protecting paths) can be very stable.
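
    For context, the appeal of p-cycles is that a ring of spare capacity protects both its own (on-cycle) links and any straddling link whose endpoints both lie on the cycle. A small illustrative sketch (the graph data is made up):

```python
# Compute which links a given p-cycle protects: its own on-cycle links
# plus straddling links with both endpoints on the cycle.
import networkx as nx

def links_protected_by(cycle, g: nx.Graph):
    on_cycle = {frozenset(e) for e in zip(cycle, cycle[1:] + cycle[:1])}
    straddling = {frozenset((u, v)) for u, v in g.edges()
                  if u in cycle and v in cycle
                  and frozenset((u, v)) not in on_cycle}
    return on_cycle, straddling

g = nx.cycle_graph(5)      # nodes 0..4 form a ring
g.add_edge(0, 2)           # a straddling link, protected without being on-cycle
print(links_protected_by([0, 1, 2, 3, 4], g))
```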

    Resource Allocation, and Survivability in Network Virtualization Environments

    Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem, which deals with the efficient mapping of virtual resources onto InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem under the assumption that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms the baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
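
    A minimal sketch of the fast re-routing idea, assuming each physical link carries a pre-reserved "backup_quota" attribute; the attribute name and the shortest-path policy are illustrative assumptions, not the thesis's hybrid heuristic.

```python
# After a substrate link fails, re-route each affected virtual link over
# substrate links that still have backup quota left.
import networkx as nx

def reroute_after_failure(substrate: nx.Graph, failed_edge, affected):
    u0, v0 = failed_edge
    substrate.remove_edge(u0, v0)
    restored = {}
    # affected: {virtual_link_id: (src_substrate_node, dst, bandwidth)}
    for vlink, (src, dst, bw) in affected.items():
        ok = nx.subgraph_view(
            substrate,
            filter_edge=lambda u, v: substrate[u][v]["backup_quota"] >= bw)
        try:
            path = nx.shortest_path(ok, src, dst)
        except nx.NetworkXNoPath:
            continue                      # this virtual link stays down
        for a, b in zip(path, path[1:]):
            substrate[a][b]["backup_quota"] -= bw
        restored[vlink] = path
    return restored
```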

    Secure and dependable virtual network embedding

    Master's thesis, Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2016.
    Network virtualization is emerging as a powerful technique to allow multiple virtual networks (VNs), eventually specified by different tenants, to run on a shared infrastructure. With the recent advances in Software Defined Networking (SDN), network virtualization, traditionally limited to Virtual Local Area Networks (VLANs), has gained new traction. A major challenge in network virtualization is how to make efficient use of the shared resources. Virtual network embedding (VNE) addresses this problem by finding an effective mapping of the virtual nodes and links onto the substrate network (SN). VNE has been studied in the network virtualization literature, and several different algorithms have been proposed to solve the problem. Typically, these algorithms address various requirements, such as quality of service (QoS), economic costs, or dependability. A mostly unexplored perspective on this problem is providing security assurances, a gap increasingly relevant to organizations as they move their critical services to the cloud. Recently proposed virtualization platforms give tenants the freedom to specify their network topologies and addressing schemes. These platforms have targeted only the datacenter of a single cloud provider, forcing complete trust in that provider to run the workloads correctly and limiting dependability. Unfortunately, there is increasing evidence that problems do occur at cloud scale, of both malicious and benign natures. Thus, in this thesis we argue that security and dependability are becoming critical factors that should be considered by VNE algorithms. Motivated by this, we define the secure and dependable VNE problem and design an algorithm that addresses it in multi-cloud environments. By not relying on a single cloud, we avoid internet-scale single points of failure, ensuring recovery from cloud outages by replicating workloads across providers. Our solution can also enhance security by placing sensitive workloads in more secure clouds: for instance, in private clouds under the control of the user, or in facilities that employ the required security features. The results of our experiments show that there is a cost to providing security and availability that may reduce the provider's profit. However, a relatively small increase in the price of the richer features of our solution (e.g., security resources) enables the provider to offer secure and dependable network services at a profit.
Our experiments also show that our algorithm behaves similarly to D-ViNE, the most commonly used VNE algorithm, when security and dependability are not requested by the VNs.
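
    To make the role of security levels concrete, here is a minimal sketch of how such a requirement can enter a MILP embedding model: binary variables assign virtual nodes to substrate nodes, and an assignment is allowed only if the substrate node's security level meets the virtual node's demand. The levels, costs, and capacities are illustrative; the thesis model also covers links, clouds, and redundancy. PuLP with its bundled CBC solver is assumed.

```python
# Tiny security-constrained node-embedding MILP.
import pulp

vnodes = {"v1": {"cpu": 2, "sec": 2}, "v2": {"cpu": 1, "sec": 1}}
snodes = {"s1": {"cpu": 4, "sec": 3, "cost": 5},
          "s2": {"cpu": 4, "sec": 1, "cost": 2}}

m = pulp.LpProblem("secure_vne", pulp.LpMinimize)
# Only create an assignment variable when the security level is sufficient.
x = {(v, s): pulp.LpVariable(f"x_{v}_{s}", cat="Binary")
     for v in vnodes for s in snodes
     if snodes[s]["sec"] >= vnodes[v]["sec"]}
m += pulp.lpSum(snodes[s]["cost"] * vnodes[v]["cpu"] * x[v, s] for v, s in x)
for v in vnodes:                              # each virtual node placed exactly once
    m += pulp.lpSum(x[v, s] for s in snodes if (v, s) in x) == 1
for s in snodes:                              # substrate CPU capacity
    m += pulp.lpSum(vnodes[v]["cpu"] * x[v, s]
                    for v in vnodes if (v, s) in x) <= snodes[s]["cpu"]
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: var.value() for k, var in x.items()})
```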

    Risk-based Survivable Network Design

    Communication networks are part of the critical infrastructure upon which society and the economy depend; it is therefore crucial for communication networks to survive failures and physical attacks in order to provide critical services. Survivability techniques are deployed to ensure the functionality of communication networks in the face of failures. The basic approach to designing survivable networks is that, given a survivability technique (e.g., link protection or path protection), the network is designed to survive a set of predefined failures (e.g., all single-link failures) at minimum cost. A hidden assumption in this design approach, however, is that sufficient monetary funds are available to protect against all predefined failures, which might not be the case in practice, as network operators may have a limited budget for improving network survivability. To overcome this limitation, this dissertation proposes a new approach to designing survivable networks, namely risk-based survivable network design, which integrates risk analysis techniques into an incremental network design procedure with budget constraints. In the risk-based design approach, the basic design problem is the following: given a working network and a fixed budget, how best to allocate the budget for deploying a survivability technique in different parts of the network based on risk. The term risk measures two related quantities: the likelihood of a failure or attack, and the amount of damage caused by the failure or attack. Various designs with different risk-based objectives are considered, for example, minimizing the expected damage, minimizing the maximum damage, and minimizing a measure of the variability of the damage that could occur in the network. This dissertation presents a design methodology for the proposed risk-based survivable network design approach. The design problems are formulated as Integer Programming (IP) models, and, to scale the solution of the models, greedy heuristic algorithms are developed. Numerical results and analysis illustrating the different risk-based designs are presented.
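
    A minimal sketch of the greedy flavour of heuristic mentioned above, assuming risk = failure probability x damage and a fixed protection cost per link; all numbers and the linear risk model are illustrative assumptions, not the dissertation's algorithms.

```python
# Spend a fixed budget on the links whose protection removes the most
# expected damage per unit cost; report what remains unprotected.
def allocate_budget(links, budget):
    # links: {name: (p_fail, damage, protection_cost)}
    ranked = sorted(links.items(),
                    key=lambda kv: kv[1][0] * kv[1][1] / kv[1][2],
                    reverse=True)
    protected, expected_damage = [], 0.0
    for name, (p, dmg, cost) in ranked:
        if cost <= budget:
            budget -= cost
            protected.append(name)        # risk assumed removed once protected
        else:
            expected_damage += p * dmg    # residual expected damage
    return protected, expected_damage

links = {"e1": (0.02, 500.0, 3.0), "e2": (0.01, 900.0, 4.0),
         "e3": (0.05, 100.0, 2.0)}
print(allocate_budget(links, budget=5.0))   # (['e1', 'e3'], 9.0)
```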

    Fault Localization in All-Optical Mesh Networks

    Fault management is a challenging task in all-optical wavelength division multiplexing (WDM) networks, and fast fault localization for shared risk link groups (SRLGs) with multiple links is essential for building a fully survivable and functional transparent all-optical mesh network. Monitoring trail (m-trail) technology is an effective approach to this goal, whereby a set of m-trails is derived for unambiguous fault localization (UFL). However, an m-trail traverses a link by occupying a dedicated wavelength channel (WL), causing a significant amount of resource consumption, and existing m-trail methods incur long and variable alarm dissemination delays. We introduce a novel framework for real-time fault localization in all-optical WDM mesh networks, called monitoring-burst (m-burst), which aims at striking a balanced trade-off between consumed monitoring resources and fault localization latency. The m-burst framework has a single monitoring node (MN) and requires one WL in each unidirectional link if the link is traversed by any m-trail. The MN launches short-duration optical bursts periodically along each m-trail to probe the links of the m-trail. Bursts along different m-trails are kept non-overlapping through each unidirectional link by scheduling burst launching times from the MN and multiplexing the bursts, if any, traversing the link. Thus, the MN can unambiguously localize failed links by identifying the lost bursts, without incurring any alarm dissemination delay. We propose several novel schemes for m-trail allocation, burst launching time scheduling, and node switch fabric configuration. Numerical results show that these schemes, when deployed in the m-burst framework, are able to localize single-link and multi-link SRLG faults unambiguously, with reasonable fault localization latency, using at most one WL in each unidirectional link. To further reduce the fault localization latency, we also introduce a novel methodology called nested m-trails. First, mesh networks are decomposed into cycles and trails. Each cycle (trail) is realized as an independent virtual ring (linear) network using a separate pair of WLs (one WL in each direction) on each undirected link traversed by the cycle (trail). Then, the sets of m-trails (nested m-trails) derived in each virtual network are deployed independently in the m-burst framework for ring (linear) networks. As a result, the fault localization latency is reduced significantly. Moreover, applying nested m-trails to adaptive probing also significantly reduces the number of sequential probes, making practical deployment of adaptive probing possible. However, the WL consumption of the nested m-trail technique is not limited to one WL per unidirectional link, so further investigation is needed to reduce its WL consumption.
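
    The property UFL rests on can be stated compactly: localization is unambiguous when every link (or SRLG) is traversed by a distinct, non-empty subset of m-trails, so the set of lost bursts identifies the failure. A small illustrative check (the trails and links below are made up):

```python
# Verify unambiguous fault localization: each link's "alarm code" is the
# set of m-trails crossing it; codes must be non-empty and pairwise distinct.
def is_unambiguous(links, trails):
    codes = {link: frozenset(i for i, t in enumerate(trails) if link in t)
             for link in links}
    return (all(codes[l] for l in links)
            and len(set(codes.values())) == len(links))

links = [("a", "b"), ("b", "c"), ("c", "a")]
trails = [[("a", "b"), ("b", "c")],      # m-trail 0
          [("b", "c"), ("c", "a")]]      # m-trail 1
print(is_unambiguous(links, trails))     # True: codes {0}, {0,1}, {1}
```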