22 research outputs found

    Survivable virtual topology design in optical WDM networks using nature-inspired algorithms

    Thesis (PhD) -- İstanbul Technical University, Institute of Informatics, 2012

    Today, computer networking has become an integral part of our daily life. The steady increase in user demand for high-speed, high-bandwidth networks drives researchers to seek new methods and algorithms to meet it. The transmission speed of a network is directly affected by the transmission medium, and the most effective medium for transmitting data over long distances is fiber. Optical networks are designed to make the best use of the superior properties of fiber, e.g. high speed, high bandwidth, low bit error rate, low attenuation, immunity to electromagnetic interference, physical strength, and low cost. The world's communication network infrastructure, from backbone networks to access networks, is steadily turning into optical networks. One of the most important properties of optical networks is the data transmission rate: a single fiber can theoretically carry up to 50 Tb/s. If leading telecommunication companies can speak today of channels operating at 100 Gb/s or even 1 Tb/s, it is because the physical infrastructure consists of an optical backbone. With wavelength division multiplexing (WDM) technology, hundreds of channels can be built on a single fiber: the optical transmission is split into a number of non-overlapping wavelength bands, with each wavelength supporting a single communication channel, called a lightpath, operating at the desired rate. Thus, even without the far higher rates foreseen for the near future, on the order of a hundred lightpaths, each operating at a few tens of Gb/s, can coexist on a single fiber, and the huge fiber bandwidth can be utilized.

    Transmission at such rates makes one issue critically important, especially in backbone networks where each fiber carries many channels: survivability. Any damage to a physical link (fiber) causes all the lightpaths routed through that link to be broken. The most common failure, a fiber cut (usually caused by construction equipment or natural disasters), means the loss of several terabits of data per second until the fiber is repaired; as an example, a 10 km fiber is cut on average once every 11 years, and backbone networks contain hundreds, sometimes thousands, of kilometers of fiber, so leaving such failures unaddressed is not an option. Since a large amount of traffic (e.g. 40 Gb/s) is carried over each lightpath, such damage results in a serious amount of data loss. Two different approaches can be used to avoid this situation: (1) survivability in the physical layer, and (2) survivability in the virtual layer. The first approach designs a backup link/path for each link/path of the optical layer; by reserving more resources it provides fast, guaranteed protection for time-critical applications (such as IP telephony and telemedicine), but it leaves a large amount of network resources idle and is therefore an expensive solution for applications that do not require that level of protection. The second approach designs the optical layer so that the virtual topology formed by the lightpaths remains connected in the event of a single or multiple link failure, satisfying the minimum conditions needed for the protection mechanisms of the upper (packet) layers to take over instead of assigning a separate backup path to each lightpath. Recovery is slower, but many applications (web browsing, file transfer, messaging, remote access) tolerate the delays, on the order of minutes, of packet-layer rerouting as long as connectivity is not lost, and fewer idle resources mean more economical service. This second approach, survivable virtual topology design, has attracted a lot of attention in recent years and is the approach adopted in this thesis.

    Survivable virtual topology design consists of four subproblems: determining a set of lightpaths (forming the virtual topology); routing these lightpaths on the physical topology so that no single fiber cut disconnects the virtual topology (survivable virtual topology mapping); assigning wavelengths (together with routing, the routing and wavelength assignment (RWA) problem); and routing the packet traffic. Each of these subproblems can be solved separately. However, they are not independent, and solving them one by one may degrade the quality of the final result considerably. Furthermore, survivable virtual topology design is known to be NP-complete. Because of its complexity, it is not possible to solve the problem optimally in an acceptable amount of time using classical optimization techniques for real-life-sized networks. In this thesis, we solve the survivable virtual topology design problem as a whole, where the physical topology and the packet traffic intensities between nodes are given.

    In the first phase, we propose two different nature-inspired heuristics, evolutionary algorithms and ant colony optimization, to find a survivable mapping of a given virtual topology with minimum resource usage. Suitable parameter sets are first determined for each algorithm; then, to assess their performance, the experimental results are compared with those obtained through integer linear programming (ILP). The results show that both of our algorithms can solve the problem even for large-scale network topologies for which a feasible solution cannot be found using ILP, while using much less CPU time and memory. The solution quality and the acceptable running times confirm that both nature-inspired heuristics are usable for real-life-sized networks.

    In the second phase, we propose four different hyper-heuristic approaches to solve the survivable virtual topology design problem as a whole, each based on a different method for selecting the low-level heuristics: evolutionary algorithms, ant colony optimization, simulated annealing, and adaptive iterated constructive search. Experimental results show that all proposed hyper-heuristic approaches are successful in designing survivable virtual topologies, and that the ant colony optimization based hyper-heuristic outperforms the others. To balance the traffic flow over lightpaths, we adapt a flow-deviation method to the ant colony optimization based hyper-heuristic. Whereas previous studies of survivable virtual topology design ignore double fiber cuts, we evaluate our approach for both single- and double-link failures; it can be adapted to multiple-link failure instances by changing only the survivability control routine. The experimental results show that our approach can solve the problem for both single-link and double-link failures in a reasonable amount of time. Finally, to evaluate the quality of the hyper-heuristic solutions, we compare them with results obtained using a tabu search approach proposed for this problem in the literature; the hyper-heuristic approach outperforms tabu search in both solution quality and CPU time.
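    The abstract notes that the hyper-heuristic extends from single to multiple fiber cuts by changing only the survivability control routine. As a rough illustration of what such a routine involves, here is a minimal Python sketch, assuming each lightpath's physical route is known as a set of fibers; the names (`survives_fiber_cuts`, `lightpath_routes`) are illustrative, not taken from the thesis.

```python
from itertools import combinations
import networkx as nx

def survives_fiber_cuts(nodes, lightpath_routes, fibers, max_failures=1):
    """Survivability-check sketch: the virtual topology must remain
    connected under every combination of `max_failures` simultaneous
    fiber cuts (which subsumes smaller failure sets).

    nodes            -- nodes of the virtual topology
    lightpath_routes -- {(u, v): set of fibers the lightpath traverses}
    fibers           -- all physical links that may fail
    """
    for cut in combinations(fibers, max_failures):
        cut = set(cut)
        surviving = nx.Graph()
        surviving.add_nodes_from(nodes)
        # A lightpath survives iff none of the fibers on its route is cut.
        for (u, v), route in lightpath_routes.items():
            if not route & cut:
                surviving.add_edge(u, v)
        if not nx.is_connected(surviving):
            return False
    return True
```

    Under these assumptions, checking double fiber cuts amounts to calling the routine with `max_failures=2`, matching the abstract's claim that only this routine changes.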

    Time-varying Resilient Virtual Networking Mapping for Multi-location Cloud Data Centers

    In the currently dominant cloud computing paradigm, applications are served in data centers (DCs), which are connected to high-capacity optical networks. For bandwidth and consequently cost efficiency reasons, in both the DC and optical network domains, virtualization of the physical hardware is exploited. In a DC, this means that multiple so-called virtual machines (VMs) are hosted on the same physical server. Similarly, the network is partitioned into separate virtual networks, providing isolation between distinct virtual network operators (VNOs). Thus, the problem of virtual network mapping arises: how to decide which physical resources to allocate for a particular virtual network? In this thesis, we study that problem in the context of cloud computing with multiple DC sites. This introduces additional flexibility, due to the anycast routing principle: we have the freedom to decide at which particular DC location to serve a particular application. We can exploit this choice to minimize the required resources when solving the virtual network mapping problem. This thesis solves a resilient virtual network mapping problem that optimally decides on the mapping of both network and data center resources, considering time-varying traffic conditions and protecting against possible failures of both network and DC resources. We consider the so-called VNO resilience scheme: rerouting under failure conditions is provided in the virtual network layer. To minimize physical resource capacity requirements, we allow reuse of both network and DC resources: we can reuse the same resources for rerouting under failure scenarios that are assumed not to occur simultaneously. Since we also protect against DC failures, we allocate backup DC resources and account for synchronization between primary and backup DCs. To deal with the time variations in the volume and geographical pattern of the application traffic, we investigate the potential benefits (in terms of overall bandwidth requirements) of reconfiguring the virtual network mapping from one time period to the next. We provide models with good scalability, and investigate different scenarios to check whether it is worthwhile to change the routing of service requests between time periods. The results of our experiments show that the benefits of rerouting are very limited. Keywords: Cloud Computing, Optical Networks, Virtualization, Anycast, VNO resilience
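    A hedged sketch of the anycast idea described above: for one application source, pick a (primary, backup) pair of DC sites that minimizes the bandwidth reserved on the working and backup paths. The function name and the use of path length as a proxy for per-link bandwidth cost are illustrative assumptions, not the thesis's actual model, which optimizes over all requests and time periods.

```python
import networkx as nx

def pick_dc_pair(g, source, dc_sites, demand):
    """Anycast sketch: choose (primary, backup) data center sites for
    one application source so that the bandwidth reserved on both the
    working and the backup path is minimized. Path length (sum of edge
    'weight') stands in for per-link bandwidth cost.
    """
    dist = nx.single_source_dijkstra_path_length(g, source, weight="weight")
    best = None
    for primary in dc_sites:
        for backup in dc_sites:
            if backup == primary:
                continue  # the backup DC must survive a primary-DC failure
            cost = demand * (dist[primary] + dist[backup])
            if best is None or cost < best[0]:
                best = (cost, primary, backup)
    return best  # (total cost, primary DC, backup DC), or None
```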

    Survivable Virtual Network Embedding in Transport Networks

    Network Virtualization (NV) is perceived as an enabling technology for the future Internet and the 5th Generation (5G) of mobile networks. It is becoming increasingly difficult to keep up with emerging applications’ Quality of Service (QoS) requirements in an ossified Internet. NV addresses the current Internet’s ossification problem by allowing the co-existence of multiple Virtual Networks (VNs), each customized to a specific purpose, on the shared Internet. NV also facilitates a new business model, namely Network-as-a-Service (NaaS), which provides a separation between applications and services and the networks supporting them. 5G mobile network operators have adopted the NaaS model to partition their physical network resources into multiple VNs (also called network slices) and lease them to service providers. Service providers use the leased VNs to offer customized services satisfying specific QoS requirements without any investment in deploying and managing a physical network infrastructure. The benefits of NV come with additional resource-management challenges. A fundamental problem in NV is to efficiently map the virtual nodes and virtual links of a VN to physical nodes and paths, respectively, known as the Virtual Network Embedding (VNE) problem. A VNE that can survive physical resource failures is known as the survivable VNE (SVNE) problem, which has received significant attention recently. In this thesis, we address variants of the SVNE problem with different bandwidth and reliability requirements for transport networks. Specifically, the thesis includes four main contributions. First, a connectivity-aware VNE approach that ensures VN connectivity, without bandwidth guarantees, in the face of multiple link failures. Second, a joint spare capacity allocation and VNE scheme that provides bandwidth guarantees against link failures by augmenting VNs with the necessary spare capacity. Third, a generalized recovery mechanism to re-embed the VNs that are impacted by a physical node failure. Fourth, a reliable VNE scheme with dedicated protection that allows tuning of the available bandwidth of a VN during a physical link failure. We show the effectiveness of the proposed SVNE schemes through extensive simulations. We believe that the thesis can set the stage for further research, especially in the area of automated failure management for next-generation networks.
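    As a concrete, if simplified, picture of the VNE problem named above, here is a minimal greedy embedding sketch, assuming CPU demands on virtual nodes and bandwidth demands on virtual links; it is not one of the thesis's proposed schemes, which additionally handle survivability and spare capacity.

```python
import networkx as nx

def greedy_embed(vn, sn):
    """Greedy VNE sketch. `vn` (virtual) and `sn` (substrate) are
    nx.Graphs with a 'cpu' attribute on nodes and 'bw' on edges.
    Virtual nodes are placed on the highest-CPU substrate nodes;
    virtual links on shortest substrate paths with enough residual
    bandwidth. Returns (node_map, link_map) or None on failure.
    """
    free = dict(sn.nodes(data="cpu"))
    node_map = {}
    # Place the most demanding virtual nodes first.
    for v, need in sorted(vn.nodes(data="cpu"), key=lambda x: -x[1]):
        hosts = [n for n in free
                 if n not in node_map.values() and free[n] >= need]
        if not hosts:
            return None
        host = max(hosts, key=free.get)
        node_map[v] = host
        free[host] -= need

    link_map = {}
    for u, v, need in vn.edges(data="bw"):
        # Restrict to substrate edges with enough residual bandwidth.
        usable = nx.subgraph_view(
            sn, filter_edge=lambda a, b, need=need: sn[a][b]["bw"] >= need)
        try:
            path = nx.shortest_path(usable, node_map[u], node_map[v])
        except nx.NetworkXNoPath:
            return None
        for a, b in zip(path, path[1:]):
            sn[a][b]["bw"] -= need   # reserve capacity in place
        link_map[(u, v)] = path
    return node_map, link_map
```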

    Resource Management in Softwarized Networks

    Communication networks are undergoing a major transformation through softwarization, which is changing the way networks are designed, operated, and managed. Network softwarization is an emerging paradigm in which software controls the treatment of network flows, adds value to these flows through software processing, and orchestrates the on-demand creation of customized networks to meet the needs of customer applications. Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Network Virtualization are three cornerstones of the overall transformation trend toward network softwarization. Together, they empower network operators to accelerate time-to-market for new services, diversify the supply chain for networking hardware and software, and bring the benefits of agility, economies of scale, and the flexibility of cloud computing to networks. The enhanced programmability enabled by softwarization creates unique opportunities for adapting network resources in support of applications and users with diverse requirements. To effectively leverage the flexibility provided by softwarization and realize its full potential, it is of paramount importance to devise proper mechanisms for allocating resources to different applications and users and for monitoring their usage over time. The overarching goal of this dissertation is to advance the state of the art in how resources are allocated and monitored and to build the foundation for effective resource management in softwarized networks. Specifically, we address four resource management challenges in three key enablers of network softwarization, namely SDN, NFV, and network virtualization. First, we challenge the current practice of realizing network services with monolithic software network functions and propose a microservice-based disaggregated architecture enabling finer-grained resource allocation and scaling. Then, we devise optimal solutions and scalable heuristics for establishing virtual networks with guaranteed bandwidth and guaranteed survivability against failures, on multi-layer IP-over-optical and single-layer IP substrate networks, respectively. Finally, we propose adaptive sampling mechanisms for balancing the overhead of softwarized network monitoring against the accuracy of the network view constructed from monitoring data.
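    The adaptive sampling idea in the last sentence can be sketched as a simple control loop: sample faster when the measured rate is changing, back off when it is steady. The thresholds and the halving/backing-off policy below are illustrative assumptions, not the dissertation's mechanism.

```python
def next_interval(prev_rate, new_rate, interval,
                  lo=1.0, hi=60.0, tol=0.1, backoff=1.5):
    """Adaptive-sampling sketch: return the next polling interval (s)
    for one flow counter. A relative rate change above `tol` halves
    the interval (accuracy matters); otherwise the interval grows by
    `backoff` (overhead matters). `lo`/`hi` clamp the interval.
    """
    change = abs(new_rate - prev_rate) / max(abs(prev_rate), 1e-9)
    if change > tol:
        return max(lo, interval / 2)       # traffic volatile: sample faster
    return min(hi, interval * backoff)     # traffic steady: back off
```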

    Architecture and algorithm for reliable 5G network design

    This Ph.D. thesis investigates the resilient and cost-efficient design of both C-RAN and Xhaul architectures. Minimization of network resources, as well as reuse of already deployed infrastructure, whether based on fiber, wavelengths, bandwidth, or Processing Units (PUs), is investigated and shown to be effective in reducing the overall cost. Moreover, the design of a network that survives the failure of a single node (a Baseband Unit (BBU) hotel or a Centralized/Distributed Unit (CU/DU)) or a single link is proposed. A novel function-placement algorithm, which adopts dynamic function chaining in response to the evolution of the traffic estimate, is also proposed; it shows remarkable improvements in bandwidth savings and multiplexing gain with respect to conventional C-RAN. Finally, the adoption of Ethernet-based fronthaul and the introduction of hybrid switches are pursued to further decrease network cost by increasing optical resource usage.
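    To make the resource-minimization theme concrete, here is a hedged, set-cover-style sketch of placing BBU hotels so that every cell site is within fronthaul reach of some hotel, using as few hotels as possible; the greedy policy and the Euclidean-distance stand-in for fronthaul constraints are illustrative assumptions, not the algorithm proposed in the thesis.

```python
def place_hotels(sites, candidates, reach):
    """Set-cover-style sketch: choose BBU-hotel locations so that every
    cell site lies within fronthaul `reach` of some chosen hotel, using
    a greedy minimum-count policy. `sites` and `candidates` map names
    to (x, y) coordinates.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    uncovered, chosen = set(sites), []
    while uncovered:
        # Pick the candidate covering the most still-uncovered sites.
        best = max(candidates, key=lambda c: sum(
            dist(candidates[c], sites[s]) <= reach for s in uncovered))
        covered = {s for s in uncovered
                   if dist(candidates[best], sites[s]) <= reach}
        if not covered:
            raise ValueError("some cell sites are beyond fronthaul reach")
        chosen.append(best)
        uncovered -= covered
    return chosen
```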

    Nodal distribution strategies for designing an overlay network for long-term growth

    Scope and Method of Study: This research looked at nodal distribution design issues associated with building an overlay network on top of an existing legacy network, with overlay network switches and links not necessarily matching the switch and link locations of the underlying network. A mathematical model with two basic components, switch costs and link costs, was developed for defining the total cost of a network overlay. The nature of the underlying legacy topology determines which factor, link or switch costs, dominates the total cost function, as well as the unit cost for switches and links. Findings and Conclusions: Three design heuristics are presented: first, locate overlay switches at nodes in the center of the legacy network rather than at the periphery; second, locate overlay switches at legacy nodes with high connectivity; and third, locate overlay switches at legacy nodes with high traffic-flow demands. These heuristics help keep costs under control when design changes are required. Applying the concept of efficient frontiers to network design and building a suite of best designs gives the network designer greater insight into how to design the best network in the face of changing real-world constraints. For the cost model and the case studies evaluated using the design strategies in this study, distributed approaches generally tend to be a good choice when link costs dominate the total cost function, because total path distances, and therefore link costs, need to be minimized in preference over switch costs. A distributed overlay tends to have lower link costs because there is usually a greater probability that total path distances can be minimized, thanks to greater connectivity. More connections create the potential for more traffic-flow path choices, allowing each traffic flow to be sent along shorter paths. In legacy topologies with many highly connected nodes, the overlay link costs can be relatively similar between designs, and the switch costs can have a large impact on total cost.
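    The two-component cost model described above (switch costs plus link costs) can be written down directly; the function name and unit costs in this sketch are illustrative placeholders, not values from the study.

```python
def overlay_cost(switches, links, switch_cost, link_cost_per_km):
    """Total overlay cost under the abstract's two-component model:
    switch costs plus link costs. `links` is a list of
    (u, v, length_km) tuples; unit costs are illustrative parameters.
    """
    return (switch_cost * len(switches)
            + link_cost_per_km * sum(km for _, _, km in links))

# Example: a 3-switch overlay with two 100 km links and made-up unit costs.
print(overlay_cost(["A", "B", "C"],
                   [("A", "B", 100), ("B", "C", 100)],
                   switch_cost=5000, link_cost_per_km=12))
```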

    Optimised Design and Analysis of All-Optical Networks

    This PhD thesis presents a suite of methods for optimising the design and for analysing the blocking probabilities of all-optical networks. It thus contributes methodical knowledge to the field of computer-assisted planning of optical networks. A two-stage greenfield optical network design optimiser is developed, based on shortest-path algorithms and a comparatively new metaheuristic called simulated allocation. It is able to handle the design of all-optical mesh networks with optical cross-connects, considers duct as well as fibre and node costs, and can also design protected networks. The method is assessed through various experiments and is shown to produce good results and to scale up to networks of realistic sizes. A novel method, subpath wavelength grouping, is presented for routing connections in a multigranular all-optical network where several wavelengths can be grouped and switched at band and fibre level. The method uses an unorthodox routing strategy focusing on common subpaths rather than individual connections, and strives to minimise switch port count as well as fibre usage. It is shown to produce cheaper network designs than previous methods when fibre costs are comparatively high. A new optical network concept, the synchronous optical hierarchy, is proposed, in which wavelengths are subdivided into timeslots to match the traffic granularity. Various theoretical properties of this concept are investigated and compared in simulation studies. An integer linear programming model for optical ring network design is presented. Manually designed real-world ring networks are studied, and it is found that the model can lead to cheaper network designs. Moreover, ring and mesh network architectures are compared using real-world costs, and it is found that optical cros..
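    For the blocking-probability analysis mentioned above, the classical Erlang B recurrence is a standard starting point; the sketch below is illustrative, and the thesis's models may well be more elaborate (e.g. multigranular or wavelength-continuity-aware).

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability via the stable recurrence
    B(0) = 1;  B(n) = a*B(n-1) / (n + a*B(n-1)), with a = offered_load.
    For a WDM link, `servers` plays the role of wavelengths and
    `offered_load` the traffic in Erlangs (an illustrative mapping).
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

    For instance, `erlang_b(32, 20.0)` estimates the blocking probability of a 32-wavelength link offered 20 Erlangs of traffic.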

    Journal of Telecommunications and Information Technology, 2009, nr 1

    Quarterly