147 research outputs found

    Survivable Virtual Network Embedding in Transport Networks

    Network Virtualization (NV) is perceived as an enabling technology for the future Internet and the 5th Generation (5G) of mobile networks. It is becoming increasingly difficult to keep up with emerging applications’ Quality of Service (QoS) requirements in an ossified Internet. NV addresses the current Internet’s ossification problem by allowing the co-existence of multiple Virtual Networks (VNs), each customized to a specific purpose, on the shared Internet. NV also facilitates a new business model, namely Network-as-a-Service (NaaS), which provides a separation between applications and services and the networks supporting them. 5G mobile network operators have adopted the NaaS model to partition their physical network resources into multiple VNs (also called network slices) and lease them to service providers. Service providers use the leased VNs to offer customized services satisfying specific QoS requirements without any investment in deploying and managing a physical network infrastructure. The benefits of NV come with additional resource management challenges. A fundamental problem in NV is to efficiently map the virtual nodes and virtual links of a VN to physical nodes and paths, respectively, known as the Virtual Network Embedding (VNE) problem. Computing a VNE that can survive physical resource failures is known as the survivable VNE (SVNE) problem, which has received significant attention recently. In this thesis, we address variants of the SVNE problem with different bandwidth and reliability requirements for transport networks. Specifically, the thesis includes four main contributions: first, a connectivity-aware VNE approach that ensures VN connectivity, without bandwidth guarantees, in the face of multiple link failures; second, a joint spare capacity allocation and VNE scheme that provides bandwidth guarantees against link failures by augmenting VNs with the necessary spare capacity; third, a generalized recovery mechanism to re-embed the VNs that are impacted by a physical node failure; and fourth, a reliable VNE scheme with dedicated protection that allows tuning of the available bandwidth of a VN during a physical link failure. We show the effectiveness of the proposed SVNE schemes through extensive simulations. We believe that the thesis can set the stage for further research, especially in the area of automated failure management for next-generation networks.
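    As a rough illustration of the node and link mapping at the heart of VNE (and not of the SVNE schemes proposed in this thesis), the sketch below greedily places virtual nodes on substrate nodes with sufficient CPU and routes each virtual link over a shortest substrate path with sufficient bandwidth; the attribute names, the greedy ordering and any data used with it are assumptions made purely for illustration.

        # Minimal greedy VNE sketch (illustrative only, not the thesis's algorithms).
        # Substrate and virtual networks are networkx graphs; the attribute names
        # ('cpu', 'bw') and any numbers used with this sketch are hypothetical.
        import networkx as nx

        def greedy_vne(substrate, virtual):
            """Map each virtual node to a distinct substrate node with enough CPU,
            then route each virtual link on a shortest substrate path with enough
            residual bandwidth. Returns (node_map, link_map) or None on failure.
            Note: residual 'cpu' and 'bw' of `substrate` are updated in place."""
            node_map, used = {}, set()
            for v, vdata in sorted(virtual.nodes(data=True), key=lambda x: -x[1]['cpu']):
                candidates = [s for s, sdata in substrate.nodes(data=True)
                              if s not in used and sdata['cpu'] >= vdata['cpu']]
                if not candidates:
                    return None
                s = max(candidates, key=lambda n: substrate.nodes[n]['cpu'])
                node_map[v] = s
                used.add(s)
                substrate.nodes[s]['cpu'] -= vdata['cpu']
            link_map = {}
            for a, b, vdata in virtual.edges(data=True):
                feasible = nx.Graph()
                feasible.add_edges_from((u, w, d) for u, w, d in substrate.edges(data=True)
                                        if d['bw'] >= vdata['bw'])
                try:
                    path = nx.shortest_path(feasible, node_map[a], node_map[b])
                except (nx.NetworkXNoPath, nx.NodeNotFound):
                    return None
                for u, w in zip(path, path[1:]):
                    substrate[u][w]['bw'] -= vdata['bw']
                link_map[(a, b)] = path
            return node_map, link_map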

    Resource Allocation, and Survivability in Network Virtualization Environments

    Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem, which deals with the efficient mapping of virtual resources onto InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem, assuming that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms the baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
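    A toy sketch of the two ingredients described above (a backup quota pre-reserved on every physical link, and fast re-routing of only the affected virtual links) is given below; the 20% quota, the data structures and the tiny triangle topology are assumptions made for illustration, not the thesis's actual formulation.

        # Toy illustration of a per-link backup quota with fast re-routing on failure.
        # The 20% quota, capacities and the triangle topology are hypothetical.
        BACKUP_SHARE = 0.2

        def split_capacity(links):
            """Partition each physical link's capacity (Gbps) into a working pool
            and a pre-reserved backup pool."""
            return {e: {'working': c * (1 - BACKUP_SHARE), 'backup': c * BACKUP_SHARE}
                    for e, c in links.items()}

        def reroute_on_failure(failed, embeddings, pools, detour_of):
            """Move every virtual link whose primary path uses the failed physical
            link onto a precomputed detour, consuming backup capacity."""
            for vlink, (path, bw) in embeddings.items():
                if failed not in path:
                    continue
                detour = detour_of(vlink, failed)
                if all(pools[e]['backup'] >= bw for e in detour):
                    for e in detour:
                        pools[e]['backup'] -= bw
                    embeddings[vlink] = (detour, bw)
                else:
                    print(f'virtual link {vlink} could not be restored')

        links = {('A', 'B'): 10, ('B', 'C'): 10, ('A', 'C'): 10}    # physical links
        pools = split_capacity(links)
        embeddings = {'v1': ([('A', 'B')], 1.0)}                    # one 1 Gbps virtual link
        bypass = {('v1', ('A', 'B')): [('A', 'C'), ('B', 'C')]}     # its precomputed detour
        reroute_on_failure(('A', 'B'), embeddings, pools, lambda v, e: bypass[(v, e)])
        print(embeddings['v1'])   # now routed over A-C and B-C using backup capacity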

    Mitigating hidden node problem in an IEEE 802.16 failure resilient multi-hop wireless backhaul

    Backhaul networks are used to interconnect access points and further connect them to gateway nodes located in regional or metropolitan centres. Conventionally, these backhaul networks are established using metallic cables, optical fibres, microwave or satellite links. With the proliferation of wireless technologies, multi-hop wireless backhaul networks emerge as a potentially cost-effective and flexible solution to provide extended coverage to areas where the deployment of wired backhaul is difficult or cost-prohibitive, such as difficult-to-access and sparsely populated remote areas, which have little or no existing wired infrastructure.
    Nevertheless, wireless backhaul networks are vulnerable to node or link failures. In order to ensure undisrupted traffic transmission even in the presence of failures, additional nodes and links are introduced to create alternative paths between each source and destination pair. The deployment of such extra links and nodes requires careful planning to ensure that available network resources can be fully utilised, while still achieving the specified failure resilience with minimum infrastructure establishment cost.
    The majority of current research efforts focus on improving the failure resilience of wired backhaul networks, but little has been done on their wireless counterparts. Most of the existing studies on improving the failure resilience of wireless backhaul networks concern energy-constrained networks such as wireless sensor and ad hoc networks. Moreover, they tend to focus on maintaining the connectivity of the networks during failures while neglecting network performance. This calls for a better approach to designing a wireless backhaul network that can meet the specified failure resilience requirement with minimum network cost while achieving the specified quality of service (QoS).
    In this study, a failure-resilient wireless backhaul topology, taking the form of a ladder network, is proposed to connect a remote community to a gateway node located in a regional or metropolitan centre. This topology is designed with a minimum number of nodes and provides at least one backup path between each node pair. With the exception of a few failure scenarios, the proposed ladder network can sustain multiple simultaneous link or node failures. Furthermore, it allows traffic to traverse a minimum number of additional hops to arrive at the destination under failure conditions.
    WiMAX wireless technology, based on the IEEE 802.16 standard, is applied to the proposed ladder network for different hop counts. This wireless technology can operate in either point-to-multipoint single-hop mode or multi-hop mesh mode. For the latter, coordinated distributed scheduling involving a three-way handshake procedure is used for resource allocation. Computer simulations are used to extensively evaluate the performance of the ladder network. It is shown that the three-way handshake suffers from a severe hidden node problem, which prevents nodes from transmitting data for long periods of time. As a result, data packets accumulate in the buffer queues of the affected nodes and are dropped when the buffers overflow. This in turn degrades network throughput and increases the average transmission delay.
    A new scheme called reverse notification (RN) is proposed to overcome the hidden node problem. With this scheme, all nodes are informed of the minislots requested by their neighbours. This prevents nodes from making the same request and increases the chance that they obtain all their requested resources and start transmitting data as soon as the handshake is completed. Computer simulations verify that RN can significantly reduce the hidden node problem, thus increasing network throughput and reducing transmission delay.
    In addition, two new schemes, namely request-resend and dynamic minislot allocation, are proposed to further mitigate the hidden node problem, which deteriorates during failures. The request-resend scheme addresses the case where the RN message fails to arrive in time at the destined node to prevent it from sending a conflicting request. The dynamic minislot allocation scheme allocates minislots to a given node according to the amount of traffic it is currently servicing. It is shown that these two schemes greatly enhance network performance under both normal and failure conditions.
    The performance of the ladder network can be further improved by equipping each node with two transceivers, allowing it to transmit concurrently on two different frequency channels. A two-channel two-transceiver channel assignment (TTDCA) algorithm is proposed to allocate minislots to the nodes. When operating with this algorithm, a node uses only one of its two transceivers to transmit control messages during the control subframe and both transceivers to transmit data packets during the data subframe. The frequency channels of the nodes are pre-assigned to overcome the hidden node problem more effectively. It is shown that the TTDCA algorithm, in conjunction with the request-resend and RN schemes, can double the maximum achievable throughput of the ladder network compared to the single-channel case, and the throughput remains constant regardless of the hop count.
    The TTDCA algorithm is further modified so that the second transceiver at each node also transmits control messages during the control subframe; this approach is referred to as the enhanced TTDCA (ETTDCA) algorithm. It reduces the duration needed to complete the three-way handshake without sacrificing network throughput. It is shown that applying the ETTDCA algorithm to ladder networks of different hop counts greatly reduces the transmission delay, to a value that allows the proposed network to relay not only a large amount of data traffic but also delay-sensitive traffic. This suggests that the proposed ladder network is a cost-effective solution that can provide the necessary failure resilience and specified QoS for delivering broadband multimedia services to remote rural communities.
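    The reverse-notification idea can be pictured with a small schematic model in which each node tracks the minislots its neighbours have announced and excludes them from its own next request; the slot counts, class layout and message handling below are hypothetical and do not follow the actual IEEE 802.16 message formats.

        # Schematic sketch of reverse-notification (RN) style minislot bookkeeping.
        # Slot numbers and message handling are hypothetical, not IEEE 802.16 details.
        class MeshNode:
            def __init__(self, name, total_slots=32):
                self.name = name
                self.total_slots = total_slots
                self.claimed_by_neighbours = set()      # learned from RN messages

            def on_reverse_notification(self, requested_slots):
                """A neighbour announces the minislots it has requested."""
                self.claimed_by_neighbours |= set(requested_slots)

            def build_request(self, demand):
                """Request `demand` minislots while skipping slots known to be
                claimed by neighbours; without RN the node would pick blindly and
                risk a conflicting grant (the hidden node problem)."""
                free = [s for s in range(self.total_slots)
                        if s not in self.claimed_by_neighbours]
                return free[:demand]

        node = MeshNode('A')
        node.on_reverse_notification(range(8))   # a neighbour already asked for slots 0-7
        print(node.build_request(4))             # -> [8, 9, 10, 11], no conflict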

    Optimization Approaches for Improving Mitigation and Response Operations in Disaster Management

    Disasters are calamitous events that severely affect the life conditions of an entire community, whether they are natural (e.g., an earthquake) or man-made (e.g., a terrorist attack). Disaster-related issues are usually dealt with according to the Disaster Operations Management (DOM) framework, which is composed of four phases: mitigation and preparedness, which address pre-disaster issues, and response and recovery, which tackle problems arising after the occurrence of a disaster. The ultimate aim of this dissertation is to present novel optimization models and algorithms for improving operations belonging to the mitigation and response phases of DOM.
    On the mitigation side, this thesis focuses on the protection of Critical Information Infrastructures (CII), which are commonly deemed to include communication and information networks. The majority of all other Critical Infrastructures (CI), such as electricity, fuel and water supply as well as transportation systems, are crucially dependent on CII. Therefore, problems associated with CII that disrupt the services they provide (whether to a single end-user or to another CI) are of increasing interest. This dissertation reviews several issues emerging in the Critical Information Infrastructures Protection (CIIP) field, such as: how to identify the most critical components of a communication network whose disruption would affect the overall system functioning; how to mitigate the consequences of such calamitous events through protection strategies; and how to design a system that is intrinsically able to hedge against disruptions. To this end, this thesis describes the seminal optimization models that have been developed to address the aforementioned issues in the general field of Critical Infrastructures Protection (CIP). Models are grouped into three categories that address the aforementioned issues: survivability-oriented interdiction, resource allocation strategy, and survivable design models; existing models are reviewed and possible extensions are proposed. Some models have already been developed for CII (i.e., survivability-interdiction and design models), while others have been adapted from the literature on other CI (i.e., resource allocation strategy models). The main gap in the CII field is that CII protection has been largely overlooked, which motivates reviewing optimization models developed for the protection of other CI. Hence, this dissertation also contributes to the literature by surveying the multi-level programs that have been developed for protecting supply chains, transportation systems (e.g., railway infrastructures), and utility networks (e.g., power and water supply systems), in order to adapt them for CII protection. Based on the review outcomes, this thesis proposes a novel linear bi-level program for CIIP that mitigates worst-case disruptions through protection investments entailing network design operations, namely the Critical Node Detection Problem with Fortification (CNDPF), which integrates network survivability assessment, resource allocation strategies and design operations. To the best of my knowledge, this is the first bi-level program developed for CIIP. The model is solved through a Super Valid Inequalities (SVI) decomposition approach and a Greedy Constructive and Local Search (GCLS) heuristic. Computational results are reported for real communication networks and for different levels of both disaster magnitude and protection resources.
    On the response side, this thesis identifies the current challenges in devising realistic and applicable optimization models in the shelter location and evacuation routing context and outlines a roadmap for future research in this topical area. A shelter is a facility where people belonging to a community hit by a disaster are provided with different kinds of services (e.g., medical assistance, food). The role of a shelter is fundamental for two categories of people: those who are unable to make arrangements to reach other safe places (e.g., family or friends are too far away), and those who belong to special-needs populations (e.g., the disabled or elderly). People move towards shelter sites, or alternative safe destinations, when they face or are about to face perilous circumstances. The process of leaving their own houses to seek refuge in safe zones is known as evacuation. Two main types of evacuation can be identified: self-evacuation (or car-based evacuation), in which individuals move towards safe sites autonomously, without receiving any kind of assistance from the responder community, and supported evacuation, in which special-needs populations require support from emergency services and public authorities to reach shelter facilities. This dissertation aims at identifying the central issues that should be addressed in a comprehensive shelter location/evacuation routing model. This is achieved by a novel meta-analysis that entails: (1) analysing existing disaster management surveys, (2) reviewing optimization models tackling shelter location and evacuation routing operations, either separately or in an integrated manner, (3) performing a critical analysis of existing papers combining shelter location and evacuation routing, together with the responses of their authors, and (4) comparing the findings of the analysis of the papers with the findings of the existing disaster management surveys. The thesis also discusses the emerging challenges of shelter location and evacuation routing in optimization, such as the need for future optimization models to involve stakeholders, include evacuee as well as system behaviour, be application-oriented rather than theoretical or model-driven, and be interdisciplinary, and finally outlines a roadmap for future research. Based on the identified challenges, this thesis presents a novel scenario-based mixed-integer program that integrates shelter location, self-evacuation and supported-evacuation decisions, namely the Scenario-Indexed Shelter Location and Evacuation Routing (SISLER) problem. To the best of my knowledge, this is the second model to include shelter location, self-evacuation and supported evacuation; however, SISLER addresses them based on the provided meta-analysis. The model is solved through the branch-and-cut algorithm of an off-the-shelf solver, enriched with valid inequalities adapted from the literature. Computational results are reported for both testbed instances and a realistic case study.
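    For intuition only, the sketch below shows a naive greedy "protect the most critical node first" rule on a toy topology; it is not the CNDPF model, the SVI decomposition or the GCLS heuristic of the thesis, and the criticality measure and example graph are assumptions.

        # Minimal greedy illustration of 'protect the most critical node first'.
        # Not the thesis's CNDPF model, SVI decomposition or GCLS heuristic; the
        # criticality measure and the example graph are assumptions.
        import networkx as nx

        def criticality(g, node):
            """Number of surviving nodes left outside the largest connected
            component when `node` fails."""
            h = g.copy()
            h.remove_node(node)
            largest = max((len(c) for c in nx.connected_components(h)), default=0)
            return g.number_of_nodes() - 1 - largest

        def greedy_fortify(g, budget):
            """Spend the protection budget on the currently most critical
            unprotected nodes, one at a time."""
            protected = set()
            for _ in range(budget):
                candidates = [n for n in g.nodes if n not in protected]
                protected.add(max(candidates, key=lambda n: criticality(g, n)))
            return protected

        g = nx.Graph()
        g.add_edges_from([(0, 1), (1, 2), (0, 2),      # left triangle
                          (2, 3), (3, 4),              # node 3 bridges the two sides
                          (4, 5), (5, 6), (4, 6)])     # right triangle
        print(greedy_fortify(g, budget=1))             # -> {3}, the articulation node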

    Survivability schemes for metro ethernet networks

    Ph.D. (Doctor of Philosophy)

    Fiber optical network design problems : case for Turkey

    Ankara: The Department of Industrial Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references (leaves 102-110). The problems within the scope of this thesis are based on an application arising from one of the largest Internet service providers operating in Turkey. There are two main problems: green-field design and copper-field re-design. In the green-field design problem, the aim is to design a least-cost fiber-optic network from scratch that will provide high-bandwidth Internet access from a given central station to a set of aggregated demand nodes. Such access can be provided either directly, by installing fibers, or indirectly, by utilizing passive splitters. Insertion loss, bandwidth levels and distance limitations should be considered simultaneously in order to obtain a least-cost design that enables the required service level. In the copper-field re-design problem, on the other hand, the aim is to improve the current service level by augmenting the network with fiber-optic links. Copper rings in the existing infrastructure are augmented with cabinets, and direct fiber links from the cabinets to demand nodes provide the required coverage for distant nodes. Mathematical models are constructed for both problem specifications. Extensive computational results based on real data from the Kartal (45 points) and Bakırköy (74 points) districts in Istanbul show that the proposed models are viable exact solution methodologies for problems of moderate size. Yazar, Başak (M.S.)
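    The interaction between insertion loss, splitter depth and distance limits mentioned above can be illustrated with a simple optical power-budget feasibility check; the loss figures, budget and margin below are generic textbook values, not the parameters used in this thesis.

        # Illustrative optical power-budget feasibility check for a splitter-based
        # access design. All loss figures are typical textbook values (assumptions),
        # not the parameters used in this thesis.
        import math

        FIBRE_LOSS_DB_PER_KM = 0.35     # around 1310 nm
        SPLIT_LOSS_DB_PER_STAGE = 3.5   # per 1:2 splitting stage, incl. excess loss
        POWER_BUDGET_DB = 28.0          # assumed optical power budget
        SAFETY_MARGIN_DB = 3.0          # connectors, splices, ageing

        def link_feasible(distance_km, split_ratio):
            """Return (feasible, total_loss_dB) for a demand node served through a
            1:`split_ratio` passive splitter over `distance_km` of fibre from the
            central station."""
            stages = math.log2(split_ratio) if split_ratio > 1 else 0
            loss = (distance_km * FIBRE_LOSS_DB_PER_KM
                    + stages * SPLIT_LOSS_DB_PER_STAGE
                    + SAFETY_MARGIN_DB)
            return loss <= POWER_BUDGET_DB, round(loss, 2)

        print(link_feasible(10, 32))   # 10 km through a 1:32 splitter -> feasible
        print(link_feasible(20, 64))   # longer reach and deeper split -> infeasible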

    Self-managed resources in network virtualisation environments

    Network virtualisation is a promising technique for dealing with the resistance of the Internet to architectural changes, enabling a novel business model in which infrastructure management is decoupled from service provision. It allows infrastructure providers (InPs) who own substrate networks (SNs) to lease chunks of them to service providers, who then create virtual networks (VNs), which can in turn be re-leased or used to provide services to end-users. However, the different VNs must first be initialised, in which case virtual nodes and links must be mapped to substrate nodes and paths, respectively. One of the challenges in the initialisation of VNs is the requirement for efficient sharing of SN resources. Since the profitability of InPs depends on how many VNs can be allocated simultaneously onto the SN, the success of network virtualisation will depend, in part, on how efficiently VNs utilise physical network resources. This thesis contributes to efficient resource sharing in network virtualisation by dividing the problem into three sub-problems: (1) mapping virtual nodes and links to substrate nodes and paths, i.e. virtual network embedding (VNE), (2) dynamic management of the resources allocated to VNs throughout their lifetime (DRA), and (3) provisioning of backup resources to ensure survivability of the VNs. The constrained VNE problem is NP-hard. As a result, to simplify the solution, many existing approaches propose heuristics that make assumptions (e.g. an SN with infinite resources), some of which would not apply in practical environments. This thesis improves on such approaches by proposing a one-shot VNE algorithm based on column generation (CG). The CG approach starts by solving a restricted version of the problem and thereafter refines it to obtain a final solution. The objective of a one-shot mapping is to achieve better resource utilisation, while using CG significantly improves the time complexity of the solution. In addition, current approaches are static in the sense that, after the VNE stage, the allocated resources are not altered for the entire lifetime of the VN. The few proposals that do allow for adjustments to the original mappings allocate a fixed amount of node and link resources to VNs throughout their lifetime. Since network load varies with time due to changing user demands, allocating a fixed amount of resources based on peak load can lead to an inefficient utilisation of overall SN resources, whereby, during periods when some virtual nodes and/or links are lightly loaded, SN resources remain reserved for them while new VN requests are possibly rejected. The second contribution of this thesis is a set of proposals that ensure that SN resources are efficiently utilised, while at the same time making sure that the QoS requirements of VNs are met. For this purpose, we propose self-management algorithms in which the SN uses temporal-difference machine learning techniques to make autonomous decisions with respect to resource allocation. Finally, while some scientific research has already studied multi-domain VNE, the available approaches to survivable VNs have focused on the single-InP environment. Since, in the more practical situation, a network virtualisation environment will involve multiple InPs, and because an extension of network survivability approaches from single- to multi-domain environments is not trivial, this thesis proposes a distributed and dynamic approach to survivability in VNs.
This is achieved by using a multi-agent system that employs a multi-attribute negotiation protocol and a dynamic pricing model to form InP coalitions that support SN resource backups. The ultimate objective is to ensure that virtual network operators maximise profitability by minimising penalties resulting from QoS violations.
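    As a purely illustrative aside, the kind of temporal-difference self-management mentioned above can be sketched with a toy Q-learning loop that scales the bandwidth allocated to a single virtual link; the state space, actions, reward shape and synthetic demand are all assumptions and not the thesis's algorithms.

        # Toy temporal-difference (Q-learning) sketch for self-managed bandwidth
        # allocation on one virtual link. States, actions, reward shape and the
        # synthetic demand are assumptions made for illustration only.
        import random

        ACTIONS = (-1, 0, 1)             # release, keep, or add one bandwidth unit
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

        def reward(allocated, demand):
            """Penalise over-provisioning mildly and under-provisioning (a QoS
            violation) heavily."""
            return -(allocated - demand) if allocated >= demand else -5 * (demand - allocated)

        def train(episodes=5000, max_bw=10):
            q = {(s, a): 0.0 for s in range(max_bw + 1) for a in ACTIONS}
            allocated = 5
            for _ in range(episodes):
                demand = random.randint(2, 8)                    # synthetic load
                state = allocated
                if random.random() < EPS:
                    action = random.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: q[(state, a)])
                allocated = min(max_bw, max(0, allocated + action))
                best_next = max(q[(allocated, a)] for a in ACTIONS)
                td_target = reward(allocated, demand) + GAMMA * best_next
                q[(state, action)] += ALPHA * (td_target - q[(state, action)])
            return q

        q = train()
        print(max(ACTIONS, key=lambda a: q[(3, a)]))   # usually 1: scale up towards mean demand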

    Scalable Column Generation Models and Algorithms for Optical Network Planning Problems

    The column generation method has proved to be a powerful tool for modelling and solving large-scale optimization problems in various practical domains such as operations management, logistics and computer design. Such a decomposition approach has also been applied in telecommunications to several classes of classical network design and planning problems, with great success. In this thesis, we confirm that the column generation methodology is also a powerful tool for solving several contemporary network design problems that arise from rising worldwide demand for heavy traffic (100 Gbps, 400 Gbps, and 1 Tbps), with an emphasis on cost-effective and resilient networks. Such problems are very challenging in terms of complexity as well as solution quality. The research in this thesis attacks four challenging design problems in optical networks: design of p-cycles subject to wavelength continuity, design of failure-dependent and failure-independent p-cycles against multiple failures, design of survivable virtual topologies against multiple failures, and design of a multirate optical network architecture. For each design problem, we develop new mathematical models based on a column generation decomposition scheme. Numerical results show that column generation is the right choice for hard network design problems, since it allows us to efficiently solve large-scale network instances that have remained puzzles for the current state of the art. Additionally, the thesis reveals the great flexibility of column generation in formulating design problems with quite different natures and requirements. The results obtained in this thesis show, firstly, that p-cycles should be designed under a wavelength continuity assumption in order to save converter cost, since the difference between the capacity requirements under wavelength conversion and under wavelength continuity is insignificant. Secondly, results from our new general design model for failure-dependent p-cycles confirm that failure-dependent p-cycles save significantly more spare capacity than failure-independent p-cycles. Thirdly, large instances of the survivable topology design problem can be solved quasi-optimally thanks to our new path-formulation model with online generation of augmenting paths. Lastly, the importance of high-capacity devices such as 100 Gbps transceivers, and the impact of restricting the number of regeneration sites on the provisioning cost of multirate WDM networks, are revealed through our new hierarchical column generation model.
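    The restricted-master/pricing loop that underlies these models can be illustrated on the classic cutting-stock problem, chosen here only because its pricing step is a plain knapsack; the sketch below is generic column generation, not any of the thesis's optical design models, and the roll width, item sizes and demands are made up.

        # Generic column generation loop illustrated on the classic cutting-stock
        # problem (its pricing step is a plain knapsack). This is a didactic sketch,
        # not one of the thesis's optical design models; all data are made up.
        import numpy as np
        from scipy.optimize import linprog

        WIDTH = 100                                   # roll width
        sizes = np.array([45, 36, 31, 14])            # item widths
        demand = np.array([48, 35, 24, 19])           # item demands

        def solve_rmp(patterns):
            """LP relaxation of the restricted master problem: minimise the number
            of rolls while covering demand with the current cutting patterns."""
            a = np.column_stack(patterns)
            res = linprog(c=np.ones(a.shape[1]), A_ub=-a, b_ub=-demand,
                          bounds=(0, None), method='highs')
            duals = -res.ineqlin.marginals            # duals of the >= demand rows
            return res.fun, duals

        def price(duals):
            """Pricing problem: an unbounded knapsack searching for a pattern
            whose reduced cost 1 - duals . pattern is negative."""
            best = np.zeros(WIDTH + 1)
            choice = np.full(WIDTH + 1, -1)
            for cap in range(1, WIDTH + 1):
                for i, w in enumerate(sizes):
                    if w <= cap and best[cap - w] + duals[i] > best[cap]:
                        best[cap], choice[cap] = best[cap - w] + duals[i], i
            pattern, cap = np.zeros(len(sizes), dtype=int), WIDTH
            while cap > 0 and choice[cap] >= 0:
                pattern[choice[cap]] += 1
                cap -= sizes[choice[cap]]
            return pattern, best[WIDTH]

        patterns = [np.eye(len(sizes), dtype=int)[i] for i in range(len(sizes))]
        while True:
            rolls, duals = solve_rmp(patterns)        # master: current best LP value
            new_pattern, value = price(duals)         # pricing: most attractive column
            if value <= 1 + 1e-9:                     # no negative reduced cost left
                break
            patterns.append(new_pattern)
        print(f'LP lower bound: {rolls:.2f} rolls using {len(patterns)} patterns')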