
    p-Cycle Based Protection in WDM Mesh Networks

    Abstract. p-Cycle Based Protection in WDM Mesh Networks. Honghui Li, Ph.D., Concordia University, 2012. WDM techniques enable a single fiber to carry huge amounts of data. However, optical WDM networks are prone to failures, so survivability is a very important requirement in the design of optical networks. In the context of network survivability, p-cycle based schemes have attracted extensive research interest because they strike a good balance between recovery speed and capacity efficiency. Towards the design of p-cycle based survivable WDM mesh networks, several issues still need to be addressed. The conventional p-cycle design models and solution methods suffer from scalability issues. Besides, most studies on the design of p-cycle based schemes cope only with single link failures, without any concern for single node failures. Moreover, loop backs may exist in the recovery paths along p-cycles, which leads to unnecessary stretching of the recovery paths. This thesis investigates the scalable and efficient design of segment p-cycles against single link failures. The optimization models and their solutions rely on large-scale optimization techniques, namely Column Generation (CG) modeling and solution, where segment p-cycle candidates are dynamically generated during the optimization process. To ensure full node protection in the context of link p-cycles, we propose an efficient protection scheme, called node p-cycles, and develop a scalable optimization design model. It is shown that, depending on the network topology, node p-cycles sometimes outperform path p-cycles in terms of capacity efficiency. Also, an enhanced segment p-cycle scheme, entitled segment Np-cycles, is proposed for full link and node protection. Again, CG-based optimization models are developed for the design of segment Np-cycles. Two objectives are considered: minimizing the spare capacity usage and minimizing the CAPEX cost. It is shown that segment Np-cycles can ensure full node protection at marginal extra cost in comparison with segment p-cycles for link protection. Segment Np-cycles provide faster recovery than path p-cycles, although they are slightly more costly. Furthermore, we propose the shortcut p-cycle scheme, i.e., p-cycles free of loop backs for full node and link protection, in addition to shortcuts in the protection paths. A CG-based optimization model for the design of shortcut p-cycles is formulated as well. It is shown that, for full node protection, shortcut p-cycles have advantages over path p-cycles with respect to capacity efficiency and recovery speed. We have studied a whole sequence of protection schemes, from link p-cycles to path p-cycles, and conclude that the segment Np-cycle scheme offers the best compromise between capacity efficiency and recovery time for full node protection. Therefore, this thesis offers network operators several interesting alternatives to path p-cycles in the design of survivable WDM mesh networks against any single link/node failure.
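    The scheme rests on the basic p-cycle property that one cycle of spare capacity protects its on-cycle links once and its straddling links twice, which is what makes p-cycles more capacity-efficient than rings. The sketch below is a minimal illustration of that property only, not code from the thesis; it assumes networkx, and the five-node topology and candidate cycle are invented for the example.

```python
import networkx as nx

def protection_offered(graph, cycle_nodes):
    """Classify every link of `graph` against one candidate p-cycle.

    Returns a dict mapping each link to the number of protection paths the
    cycle offers it: 1 for on-cycle links, 2 for straddling links, 0 otherwise.
    """
    cycle_edges = {frozenset(e) for e in zip(cycle_nodes, cycle_nodes[1:] + cycle_nodes[:1])}
    on_cycle = set(cycle_nodes)
    protection = {}
    for u, v in graph.edges():
        if frozenset((u, v)) in cycle_edges:
            protection[(u, v)] = 1   # on-cycle link: one detour, the rest of the cycle
        elif u in on_cycle and v in on_cycle:
            protection[(u, v)] = 2   # straddling link: both halves of the cycle
        else:
            protection[(u, v)] = 0   # not protected by this cycle
    return protection

# Toy five-node mesh and one candidate cycle (made-up example data).
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4), (1, 3)])
print(protection_offered(g, [0, 1, 2, 3, 4]))
```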

    Survivability aspects of future optical backbone networks

    In today's optical fiber networks, a single fiber can carry a gigantic amount of data, roughly the equivalent of 25 million simultaneous telephone calls. As a result, network failures, such as a break in a fiber cable, disrupt the communication of a large number of end users. Network operators therefore choose to build their networks so that such large failures are handled automatically. This dissertation focuses on two aspects of survivability in future optical networks. The first objective is the establishment of robust data connections across multiple networks. By establishing sufficiently reliable connections over an infrastructure that is not managed by a single entity, one can, for example, offer high-quality Internet television worldwide. The studied solution aims not only to compute these highly reliable connections, but also to do so with a minimum of network capacity. The second objective was to answer the question of how applying optical switching systems based on reconfigurable optical multiplexers affects the survivability of an optical network. At lower traffic volumes, optically switched networks gain little from such sophisticated methods. Electronically switched networks show no dependence on the data volume and always benefit from optimization.
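    The first objective essentially asks for a working and a backup path that share no links while consuming as little capacity as possible. The sketch below illustrates that sub-problem only and is not taken from the dissertation; it assumes networkx, models the cheapest pair of edge-disjoint paths as two units of minimum-cost flow over unit-capacity links, and the topology, costs and endpoints are made up.

```python
import networkx as nx

def cheapest_disjoint_pair(links, src, dst):
    """links: iterable of (u, v, cost) describing an undirected network."""
    g = nx.DiGraph()
    for u, v, cost in links:
        g.add_edge(u, v, capacity=1, weight=cost)
        g.add_edge(v, u, capacity=1, weight=cost)
    g.nodes[src]["demand"] = -2    # push two units out of the source ...
    g.nodes[dst]["demand"] = 2     # ... and into the destination
    flow = nx.min_cost_flow(g)
    # Links carrying flow form two edge-disjoint paths (working + backup).
    return [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]

# Made-up five-node topology with per-link costs (capacity units).
links = [("A", "B", 1), ("B", "C", 1), ("A", "D", 2), ("D", "C", 2), ("B", "D", 1)]
print(cheapest_disjoint_pair(links, "A", "C"))
```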

    Scalable Column Generation Models and Algorithms for Optical Network Planning Problems

    Column Generation has proved to be a powerful tool for modeling and solving large-scale optimization problems in various practical domains such as operations management, logistics, and computer design. Such a decomposition approach has also been applied with great success in telecommunications, for several classes of classical network design and planning problems. In this thesis, we confirm that the Column Generation methodology is also a powerful tool for solving several contemporary network design problems that arise from the rising worldwide demand for heavy traffic (100Gbps, 400Gbps, and 1Tbps), with an emphasis on cost-effective and resilient networks. Such problems are very challenging in terms of complexity as well as solution quality. The research in this thesis attacks four challenging design problems in optical networks: design of p-cycles subject to wavelength continuity, design of dependent and independent p-cycles against multiple failures, design of survivable virtual topologies against multiple failures, and design of a multirate optical network architecture. For each design problem, we develop a new mathematical model based on a Column Generation decomposition scheme. Numerical results show that the Column Generation methodology is the right choice for hard network design problems, since it allows us to efficiently solve large-scale network instances that have been puzzles for the current state of the art. Additionally, the thesis reveals the great flexibility of Column Generation in formulating design problems with quite different natures and requirements. The results obtained in this thesis show that, firstly, p-cycles should be designed under a wavelength continuity assumption in order to save converter cost, since the difference between the capacity requirement under wavelength conversion and under wavelength continuity is insignificant. Secondly, the results obtained from our new general design model for failure-dependent p-cycles show that failure-dependent p-cycles save significantly more spare capacity than failure-independent p-cycles. Thirdly, large instances of the survivable topology design problem can be solved quasi-optimally thanks to our new path-formulation model with online generation of augmenting paths. Lastly, the importance of high-capacity devices such as 100Gbps transceivers, and the impact of restricting the number of regeneration sites on the provisioning cost of multirate WDM networks, are revealed through our new hierarchical Column Generation model.
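    To make the decomposition concrete, the sketch below walks through the generic column generation loop in a p-cycle-like setting: a restricted master LP covers every working link with the cycles generated so far, and a pricing step adds the cycle with the most negative reduced cost. This is an illustrative toy rather than any model from the thesis; it assumes pulp and networkx, prices over a small precomputed cycle pool instead of solving a real pricing problem, and uses an invented four-node topology with unit demands.

```python
import networkx as nx
import pulp

def protected_links(graph, cycle):
    """Links a cycle can protect: its on-cycle links plus its straddling links."""
    nodes = set(cycle)
    return {frozenset((u, v)) for u, v in graph.edges() if u in nodes and v in nodes}

def column_generation(graph):
    links = [frozenset(e) for e in graph.edges()]
    pool = [tuple(c) for c in nx.cycle_basis(graph)]   # stand-in candidate generator
    columns = pool[:1]                                  # start with a single column
    while True:
        # Restricted master problem: fractional cover of every link by cycles,
        # with high-cost artificial variables so it always stays feasible.
        rmp = pulp.LpProblem("rmp", pulp.LpMinimize)
        x = [pulp.LpVariable(f"x{i}", lowBound=0) for i in range(len(columns))]
        art = {l: pulp.LpVariable(f"a{k}", lowBound=0) for k, l in enumerate(links)}
        rmp += (pulp.lpSum(len(c) * xi for c, xi in zip(columns, x))   # spare capacity
                + 100 * pulp.lpSum(art.values()))
        cons = {}
        for l in links:
            cons[l] = (pulp.lpSum(xi for c, xi in zip(columns, x)
                                  if l in protected_links(graph, c)) + art[l] >= 1)
            rmp += cons[l]
        rmp.solve(pulp.PULP_CBC_CMD(msg=False))
        duals = {l: cons[l].pi for l in links}
        # Pricing: pick the pool cycle with the most negative reduced cost
        # (the thesis instead solves an optimization problem to generate it).
        def reduced_cost(c):
            return len(c) - sum(duals[l] for l in protected_links(graph, c))
        best = min(pool, key=reduced_cost)
        if reduced_cost(best) >= -1e-6 or best in columns:
            return columns, pulp.value(rmp.objective)
        columns.append(best)

g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])   # toy topology
print(column_generation(g))
```
    In the thesis models the pricing step is itself an optimization problem that generates p-cycle (or path) candidates on the fly, which is what keeps the approach scalable for large instances.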

    Characterization, design and re-optimization on multi-layer optical networks

    The explosion of IP traffic due to the increase of IP-based multimedia services such as HDTV or video conferencing poses new challenges to network operators to provide cost-effective data transmission. Although Dense Wavelength Division Multiplexing (DWDM) meshed transport networks support high-speed optical connections, these networks lack the flexibility to support sub-wavelength traffic, leading to poor bandwidth usage. To cope with the transport of that huge and heterogeneous amount of traffic, multilayer networks represent the most accepted architectural solution. Multilayer optical networks allow optimizing network capacity by packing several low-speed traffic streams into higher-speed optical connections (lightpaths). During this operation, a dynamic virtual topology is created and modified continuously thanks to a control plane responsible for the establishment, maintenance, and release of connections. Because of this dynamicity, a suboptimal allocation of resources may exist at any time. In this context, a periodic resource reallocation could be deployed in the network, thus improving network resource utilization. This thesis is devoted to the characterization, planning, and re-optimization of next-generation multilayer networks from an integral perspective including physical layer, optical layer, virtual layer, and control plane optimization. To this aim, statistical models, mathematical programming models, and meta-heuristics are developed. More specifically, this main objective has been attained through five goals covering different open issues. First, we provide a statistical methodology to improve the computation of the Q-factor for impairment-aware routing and wavelength assignment (IA-RWA) problems. To this aim, we propose two statistical models to compute the Cross-Phase Modulation (XPM) variance (which represents the bottleneck in terms of computation time and complexity) in off-line and on-line IA-RWA problems, proving the accuracy of both models when computing Q-factor values in real traffic scenarios. Second, moving to the optical layer, we present a new wavelength partitioning scheme that maximizes, compared with current solutions, the amount of extra traffic provided in shared path protected environments. Specifically, we define several statistical models to estimate the traffic intensity given a target grade of service, and different network planning problems for maximizing the expected revenues and the net present value (NPV) of the network. After solving these problems for real networks, we conclude that our proposed scheme maximizes both revenues and NPV. Third, we tackle the design of survivable multilayer networks against single failures at the IP/MPLS layer and on WSON links. To efficiently solve this problem, we propose a new approach based on over-dimensioning the IP/MPLS equipment and recovering lightpath connectivity, and we compare it against the conventional solution based on duplicating backbone IP/MPLS nodes. After evaluating both approaches by means of ILP models and heuristic algorithms, we conclude that our proposed approach leads to significant CAPEX savings. Fourth, we introduce an adaptive mechanism to reduce the usage of opto-electronic (O/E) ports of IP/MPLS-over-WSON multilayer networks in dynamic scenarios. An ILP formulation and several heuristics are developed to solve this problem, which allows significantly reducing the usage of O/E ports in very short running times. Finally, we address the design of resilient control plane topologies in GMPLS-enabled transport networks. After proposing a novel analytical model to quantify the resilience of mesh control plane topologies, we use this model to formulate a control plane topology design problem. An iterative linear procedure and a heuristic are proposed and used to solve real instances, concluding that the number of control plane links can be reduced significantly without affecting the quality of service of the network.
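    As a small illustration of the kind of quantity the first goal targets, the sketch below shows how a Q-factor estimate combines independent noise contributions: the variances (for instance ASE noise and the XPM variance that the statistical models estimate) add, and Q = (I1 - I0) / (sigma1 + sigma0), from which a BER follows under the Gaussian approximation. This is textbook material, not the thesis models, and every numeric value is made up.

```python
import math

def q_factor(i_one, i_zero, var_ase_1, var_ase_0, var_xpm_1, var_xpm_0):
    """Q-factor from mean received levels and independent noise variances."""
    sigma_1 = math.sqrt(var_ase_1 + var_xpm_1)   # noise std deviation on the "1" level
    sigma_0 = math.sqrt(var_ase_0 + var_xpm_0)   # noise std deviation on the "0" level
    return (i_one - i_zero) / (sigma_1 + sigma_0)

# Made-up received levels and variances (arbitrary units).
q = q_factor(1.0, 0.05, 0.002, 0.0005, 0.001, 0.0002)
ber = 0.5 * math.erfc(q / math.sqrt(2))          # Gaussian (erfc) approximation
print(f"Q = {q:.2f} ({20 * math.log10(q):.1f} dB), BER ~ {ber:.1e}")
```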

    Architectures and dynamic bandwidth allocation algorithms for next generation optical access networks


    Survivable mesh-network design & optimization to support multiple QoP service classes

    Every second, vast amounts of data are transferred over communication systems around the world, and as a result, the demands on optical infrastructures are extending beyond the traditional ring-based architecture. The range of content and services available from the Internet is increasing, and network operators are constantly under pressure to expand their optical networks in order to keep pace with the ever-increasing demand for higher-speed and more reliable links.

    Optimization of FPSO Glen Lyon Mooring Lines

    During oil and gas inspection and extraction operations in both deep and ultra-deep water, vessel mooring is a very important factor for the development of oil fields. For these depths, standard stand-alone surface facilities, e.g. jack-up rigs or fixed offshore platforms, are not suitable due to the harsh collinear and non-collinear environment in situ (location, waves, surface and underwater currents, tides, ice, etc.). For deep-sea well clusters, it is usual to use a floating production, storage and offloading (FPSO) vessel as the surface platform for long exploitation periods. Subsea expenditure refers to the cost of the subsea project and generally includes capital expenditures (capex) and operational expenditures (opex). In the production of hydrocarbons, capex and opex increase exponentially with depth, resulting in the need for a precise detailed-design phase in which systems are analysed to verify component strength, ductility and fatigue, stiffness, instabilities, corrosion, etc. The design of oilfields is most of the time overestimated (in a very conservative way) due to numerous requirements and complex cost-evaluation models. After the detailed-design phase and the installation of all facilities and components, and throughout the expected design life for hydrocarbon exploitation, the entire anchoring system shall withstand the environmental loads so as not to compromise the operation. Each oilfield has a unique development, since environmental phenomena are unique to each location on Earth. This work addresses the optimization of a deep-water anchoring system in the Schiehallion Field, or in other words, the complete development of the mooring system of an FPSO: positioning in situ under the environmental conditions and vessel characteristics (Orcaflex), optimization of the mooring system through an equivalent system (Matlab), mechanical design of the mooring system (CATIA), and detailed structural analysis (Altair and Nastran) as well as fatigue life analysis. In order to reproduce the whole mooring process, an initial comparison is performed with the former FPSO (Schiehallion FPSO), which operated in situ from 1993 until its replacement by the new vessel (Glen Lyon FPSO). Due to the latest discoveries in the oilfield, the project has to be redesigned around the former wells while taking the recently discovered wells into consideration. Further optimization of the complete fixation system was verified, as well as a final detailed structural analysis of specific components at key locations with a higher margin of failure. Within this work, the entire methodology that led to the optimization of the Glen Lyon mooring lines is fully detailed, from vessel analysis to detailed mechanical design of the mooring; the constraints and requirements applied, and the trade-offs and assumptions made during this critical development phase, are presented and discussed.
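    As a rough illustration of the kind of relations behind mooring-line sizing, the sketch below evaluates the quasi-static catenary of a single line with a horizontal tangent at the touchdown point, the usual first-pass check before tools such as Orcaflex add dynamics, line elasticity and seabed interaction. It is not part of the thesis workflow, and the input values are invented.

```python
import math

def catenary_line(h_tension, unit_weight, depth):
    """Single catenary segment with a horizontal tangent at the touchdown point.

    h_tension   -- horizontal tension component at the fairlead [N]
    unit_weight -- submerged weight per unit length of the line [N/m]
    depth       -- vertical distance from fairlead to seabed [m]
    Returns (suspended length [m], horizontal footprint [m], fairlead tension [N]).
    """
    a = h_tension / unit_weight                       # catenary parameter H / w
    suspended = math.sqrt(depth * (depth + 2.0 * a))  # suspended line length
    footprint = a * math.asinh(suspended / a)         # horizontal span of that length
    top_tension = h_tension + unit_weight * depth     # tension at the fairlead
    return suspended, footprint, top_tension

# Made-up input values for one line.
s, x, t = catenary_line(h_tension=1.5e6, unit_weight=2.0e3, depth=400.0)
print(f"suspended length {s:.0f} m, footprint {x:.0f} m, fairlead tension {t / 1e6:.2f} MN")
```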

    Optimization of BGP Convergence and Prefix Security in IP/MPLS Networks

    Multi-Protocol Label Switching-based networks are the backbone of the operation of the Internet, which communicates through the Border Gateway Protocol (BGP), the protocol that connects distinct networks, referred to as Autonomous Systems, to one another. As the technology matures, so do the challenges caused by the extreme growth rate of the Internet. The number of BGP prefixes required to support such an increase in connectivity introduces several new critical issues, such as the scalability and security of the Border Gateway Protocol. An implementation of an IP/MPLS core transmission network is illustrated through the introduction of the four main pillars of an Autonomous System: Multi-Protocol Label Switching, the Border Gateway Protocol, Open Shortest Path First, and the Resource Reservation Protocol. The symbiosis of these technologies is used to introduce the practicalities of operating an IP/MPLS-based ISP network with traffic engineering and fault resilience at heart. The first research objective of this thesis is to determine whether the deployment of a new BGP feature, referred to as BGP Prefix Independent Convergence (PIC), within AS16086 would be a worthwhile endeavour. This BGP extension aims to reduce the convergence delay of BGP prefixes inside an IP/MPLS core transmission network, thus improving the network's resilience against faults. Simultaneously, the second research objective was to survey the available mechanisms for protecting BGP prefixes, such as the implementation of the Resource Public Key Infrastructure (RPKI) and the Artemis BGP Monitor, for proactive and reactive security of BGP prefixes within AS16086. The prospective future deployment of BGPsec is discussed to form an outlook on the future of IP/MPLS network design, as the trust-based nature of BGP has become a distinct vulnerability, necessitating the use of various technologies to secure communications between the Autonomous Systems that form the network to end all networks, the Internet.
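    As an illustration of the prefix-security side, the sketch below implements the RFC 6811 route origin validation states that RPKI enables: an announcement is valid only if a covering ROA authorizes both its origin AS and its prefix length, invalid if it is covered but not authorized, and not-found otherwise. The ROA set, prefixes and AS numbers are made up; this is not the AS16086 deployment described in the thesis.

```python
import ipaddress

# Made-up ROAs: (authorized prefix, maximum prefix length, authorized origin AS).
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 65001),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 65002),
]

def validate(prefix, origin_as):
    """RFC 6811-style origin validation of one announcement against the ROA set."""
    prefix = ipaddress.ip_network(prefix)
    covering = [roa for roa in ROAS if prefix.subnet_of(roa[0])]
    if not covering:
        return "not-found"          # no ROA covers this prefix
    if any(origin_as == asn and prefix.prefixlen <= max_len
           for _, max_len, asn in covering):
        return "valid"
    return "invalid"                # covered, but origin or length not authorized

print(validate("192.0.2.0/24", 65001))     # valid
print(validate("192.0.2.0/25", 65001))     # invalid: more specific than maxLength
print(validate("198.51.100.0/23", 64512))  # invalid: unauthorized origin AS
print(validate("203.0.113.0/24", 65001))   # not-found
```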