
    An ant-based algorithm for distributed routing and wavelength assignment in dynamic optical networks

    Future optical communication networks are expected to change radically during the next decade. To meet the demanded bandwidth requirements, more dynamism, scalability and automation will need to be provided. This will also require addressing issues such as the design of highly distributed control-plane systems and their associated algorithms, which must respond to network changes very rapidly. In this work, we propose the use of an ant colony optimization (ACO) algorithm to solve the routing and wavelength assignment (RWA) problem in optical networks under the wavelength continuity constraint. The main advantage of the protocol is its distributed nature, which provides higher survivability against network failures and traffic congestion. The protocol has been applied to a specific type of future optical network based on the optical switching of bursts. It has been evaluated through extensive simulations with very promising results, particularly in highly congested scenarios where the load-balancing capabilities of the protocol become especially effective. Results on a partially meshed network such as NSFNET show that the ant-based protocol outperforms the other RWA algorithms under test in terms of blocking probability without worsening other metrics such as mean route length.
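
    As a rough illustration of how such ant-based routing typically chooses the next hop, the sketch below implements the generic ACO transition rule, weighting learned pheromone against a static desirability term (for RWA this is often tied to the number of free wavelengths on the candidate link, which is what gives these schemes their load-balancing behaviour). The function, table names and alpha/beta parameters are illustrative assumptions, not the exact protocol evaluated above.

        import random

        def choose_next_hop(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
            # pheromone[(u, v)] and heuristic[(u, v)] are assumed lookup tables;
            # alpha/beta weight learned pheromone against static desirability
            # (e.g. inverse hop count or number of free wavelengths on the link).
            weights = [(pheromone[(current, v)] ** alpha) * (heuristic[(current, v)] ** beta)
                       for v in candidates]
            total = sum(weights)
            if total == 0:
                return random.choice(candidates)
            r, acc = random.uniform(0, total), 0.0
            for v, w in zip(candidates, weights):
                acc += w
                if r <= acc:
                    return v
            return candidates[-1]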

    Distributed Resources Assignment for Optical Burst Switching without Wavelength Conversion (Invited Paper)

    The growing amount of bursty Internet traffic is driving the development of new architectures and technologies, such as Optical Burst Switching (OBS), to efficiently satisfy future bandwidth requirements. Burst loss probability is an important quality-of-service metric for OBS because of its bufferless nature, and it becomes even more critical without wavelength converters. Resource assignment is therefore an important issue in OBS networks. In this paper, two distributed resource assignment schemes without wavelength conversion capability are proposed. The first is applied at the edge nodes to achieve a loss-free core network, while the second is an enhanced routing and wavelength assignment scheme applied at core nodes. Simulation results indicate that the first scheme offers a loss-free solution, with blocking only at ingress nodes and under high traffic load. The second one reduces the network-wide burst loss probability significantly compared with other schemes.
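
    Because the central metric here is burst loss on bufferless links without wavelength conversion, the classical Erlang B formula gives a useful back-of-the-envelope estimate by treating the W wavelengths of a link as W servers. This is a standard queueing result, not one of the schemes proposed in the paper; the function and example figures below are purely illustrative.

        def erlang_b(offered_load_erlangs, wavelengths):
            # Standard recursion: B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1)).
            b = 1.0
            for k in range(1, wavelengths + 1):
                b = offered_load_erlangs * b / (k + offered_load_erlangs * b)
            return b

        # e.g. a link with 32 wavelengths offered 24 Erlangs of burst traffic
        print(round(erlang_b(24.0, 32), 4))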

    Cross-layer modeling and optimization of next-generation internet networks

    Scaling traditional telecommunication networks so that they are able to cope with the volume of future traffic demands and the stringent European Commission (EC) regulations on emissions would entail unaffordable investments. For this very reason, the design of an innovative ultra-high-bandwidth, power-efficient network architecture is nowadays a bold topic within the research community. So far, the independent evolution of network layers has resulted in isolated, and hence far-from-optimal, contributions, which have eventually led to the issues today's networks are facing, such as an inefficient energy strategy, limited network scalability and flexibility, reduced network manageability and increased overall network and customer service costs. Consequently, there is currently broad consensus among network operators and the research community that cross-layer interaction and coordination is fundamental for the proper architectural design of next-generation Internet networks. This thesis actively contributes to this goal by addressing the modeling, optimization and performance analysis of a set of potential technologies to be deployed in future cross-layer network architectures. By applying a transversal design approach (i.e., joint consideration of several network layers), we aim to maximize the integration of the different network layers involved in each specific problem. To this end, Part I provides a comprehensive evaluation of optical transport networks (OTNs) based on layer 2 (L2) sub-wavelength switching (SWS) technologies, also taking into consideration the impact of physical layer impairments (PLIs) (L0 phenomena). Indeed, the recent and relevant advances in optical technologies have dramatically increased the impact that PLIs have on the optical signal quality, particularly in the context of SWS networks. Then, in Part II of the thesis, we present a set of case studies showing that the application of operations research (OR) methodologies in the design/planning stage of future cross-layer Internet network architectures leads to the successful joint optimization of key network performance indicators (KPIs) such as cost (i.e., CAPEX/OPEX), resource usage and energy consumption. OR can definitely play an important role by allowing network designers/architects to obtain good near-optimal solutions to real-sized problems within practical running times.
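
    For readers unfamiliar with the kind of OR formulation alluded to, cross-layer planning models are typically multicommodity-flow ILPs. The sketch below is a generic example with hypothetical symbols, not a formulation taken from the thesis: x_e activates link e at capital cost c_e and energy cost p_e, f_e^{sd} carries demand (s,d) over e with volume h_{sd}, and C_e is the link capacity.

        \min \sum_{e \in E} (c_e + p_e)\, x_e
        \quad \text{s.t.} \quad
        \sum_{e \in \delta^+(v)} f_e^{sd} - \sum_{e \in \delta^-(v)} f_e^{sd} =
          \begin{cases} h_{sd} & v = s \\ -h_{sd} & v = d \\ 0 & \text{otherwise,} \end{cases}
        \qquad
        \sum_{(s,d)} f_e^{sd} \le C_e\, x_e, \qquad x_e \in \{0,1\}.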

    A framework for traffic flow survivability in wireless networks prone to multiple failures and attacks

    Transmitting packets over a wireless network has always been challenging due to failures arising from many types of wireless connectivity issues. These failures have caused significant outages, and the delayed discovery and diagnosis of these failures have exacerbated their impact on service, economic damage, and social factors such as technological trust. There has been research on wireless network failures, but little on multiple failures such as node-node, node-link, and link-link failures. The problem of capacity efficiency and fast recovery from multiple failures has also not received attention. This research develops a capacity-efficient evolutionary swarm survivability framework, which encompasses enhanced genetic algorithm (EGA) and ant colony system (ACS) survivability models to swiftly resolve node-node, node-link, and link-link failures for improved service quality. The capacity-efficient models were tested on such failures at different locations in both small and large wireless networks. The proposed models were able to generate optimal alternative paths and the bandwidth required for fast rerouting, minimize transmission delay, and ensure rerouting-path fitness and good transmission time when rerouting voice, video and multimedia messages. Increasing the number of link failures reveals that as failures increase, the bandwidth used for rerouting and the transmission time also increase. This implies that failures increase bandwidth usage, which leads to transmission delay, which in turn slows down message rerouting. The suggested framework performs better than the popular Dijkstra algorithm and proactive, adaptive and reactive models in terms of throughput, packet delivery ratio (PDR), speed of transmission, transmission delay and running time. According to the simulation results, the capacity-efficient ACS has a PDR of 0.89, the Dijkstra model 0.86, the reactive model 0.83, the proactive model 0.83, and the adaptive model 0.81. A further evaluation compared the proposed model's running time with that of the other evaluated routing models: the capacity-efficient ACS model runs in 169.89 ms on average, while the adaptive model takes 1837 ms and Dijkstra 280.62 ms. With these results, the capacity-efficient ACS outperforms the other evaluated routing algorithms in terms of PDR and running time. In terms of mean throughput, the capacity-efficient EGA achieves 621.6, Dijkstra 619.3, proactive (DSDV) 555.9, and reactive (AODV) 501.0. Since Dijkstra is the closest to the proposed models in performance, the capacity-efficient EGA was compared with Dijkstra directly: Dijkstra has a running time of 3.8908 ms and EGA 3.6968 ms. In terms of running time and mean throughput, the capacity-efficient EGA therefore also outperforms the other evaluated routing algorithms. The alternative paths generated in these investigations demonstrate that the proposed framework works well in preventing data loss in transit and in ameliorating the congestion that results from multiple failures and server overload, which manifests when the process hangs. The optimal solution paths will in turn improve business activities through quality data communications for wireless service providers.
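
    The Dijkstra baseline against which the framework is benchmarked amounts to recomputing a cheapest surviving path once the failed elements are excluded. The sketch below is a minimal version of that baseline only, not the proposed EGA/ACS models; adj, failed_nodes and failed_links are illustrative inputs modelling the node and link failures discussed above.

        import heapq

        def shortest_surviving_path(adj, src, dst, failed_nodes=frozenset(), failed_links=frozenset()):
            # adj maps node -> list of (neighbour, cost); failed_links holds (u, v) pairs.
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue  # stale queue entry
                for v, cost in adj.get(u, []):
                    if v in failed_nodes or (u, v) in failed_links or (v, u) in failed_links:
                        continue  # skip failed elements when rerouting
                    nd = d + cost
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            if dst not in dist:
                return None  # no surviving route
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return list(reversed(path))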

    Particle swarm optimization for routing and wavelength assignment in next generation WDM networks.

    All-optical Wavelength Division Multiplexed (WDM) networking is a promising technology for long-haul backbone and large metropolitan optical networks, needed to meet the non-diminishing bandwidth demands of future applications and services. Examples include archival and recovery of data to/from Storage Area Networks (e.g., for banks), high-bandwidth medical imaging (for remote operations), High Definition (HD) digital broadcast and streaming over the Internet, distributed orchestrated computing, and peak-demand short-term connectivity for access network providers and wireless network operators during backhaul surges. One desirable feature is fast and automatic provisioning. Connection (lightpath) provisioning in optically switched networks requires both route computation and a single wavelength to be assigned to the lightpath. This is called Routing and Wavelength Assignment (RWA). RWA can be classified as static RWA and dynamic RWA. Static RWA is an NP-hard (non-deterministic polynomial-time hard) optimisation task. Dynamic RWA is even more challenging, as connection requests arrive dynamically, on the fly, and have random connection holding times. Traditionally, global-optimum mathematical search schemes such as integer linear programming and graph colouring are used to find an optimal solution to NP-hard problems. However, such schemes become unusable for connection provisioning in a dynamic environment, due to the computational complexity and time required to undertake the search. To perform dynamic provisioning, different heuristic and stochastic techniques are used. Particle Swarm Optimisation (PSO) is a population-based global optimisation scheme that belongs to the class of evolutionary search algorithms and has successfully been used to solve many NP-hard optimisation problems in both static and dynamic environments. In this thesis, a novel PSO-based scheme is proposed to solve the static RWA case, which can achieve optimal/near-optimal solutions. In order to reduce the risk of premature convergence of the swarm and to avoid selecting local optima, a search scheme is proposed for static RWA based on the position of the swarm's global best particle and the personal best position of each particle. To solve the dynamic RWA problem, a PSO-based scheme is proposed which can provision a connection within a fraction of a second. This feature is crucial for provisioning services like bandwidth-on-demand connectivity. To improve the convergence speed of the swarm towards an optimal/near-optimal solution, a novel chaotic factor is introduced into the PSO algorithm (CPSO), which helps the swarm reach a relatively good solution in fewer iterations. Experimental results for the PSO/CPSO-based dynamic RWA algorithms show that the proposed schemes perform better than other evolutionary techniques such as genetic algorithms and ant colony optimization, both in terms of quality of solution and computation time. The proposed schemes also show significant improvements in blocking probability performance compared with traditional dynamic RWA schemes like the SP-FF and SP-MU algorithms.
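
    To make the PSO/CPSO mechanics concrete, the sketch below performs one velocity/position update with a chaotic factor generated by a logistic map and used to modulate the inertia term. Where exactly the chaotic factor enters the update, and how RWA solutions are encoded as particles, are assumptions for illustration only; the thesis's precise CPSO formulation and parameter values are not reproduced here.

        import random

        def chaotic_pso_step(positions, velocities, pbest, gbest, chaos, w=0.7, c1=1.5, c2=1.5):
            # Logistic-map update of the chaotic factor (an illustrative choice).
            chaos = 4.0 * chaos * (1.0 - chaos)
            for i, x in enumerate(positions):
                for d in range(len(x)):
                    r1, r2 = random.random(), random.random()
                    velocities[i][d] = (w * chaos * velocities[i][d]
                                        + c1 * r1 * (pbest[i][d] - x[d])
                                        + c2 * r2 * (gbest[d] - x[d]))
                    x[d] += velocities[i][d]
            return chaos  # feed back into the next iteration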

    Optimization of traffic flows in multiservice telecommunications networks

    This dissertation investigates routing optimization in IP telecommunication networks, under normal working conditions as well as under failure conditions. The main objectives of the optimization procedure are to minimize the maximum link utilization in the network and to provide a configuration that guarantees a 100% survivability degree. Traditionally, two different steps are used to achieve this goal. The first aims to solve the well-known General Routing Problem (GRP) in order to find the optimal routing configuration; successively, a set of "optimal" backup paths is found in order to guarantee network survivability. Furthermore, traditional survivability techniques assume that the planning tasks are performed in a network control center, while restoration schemes are implemented distributively in the network nodes. In this dissertation, innovative linear programming models are presented that, making use of Multi-Protocol Label Switching Traffic Engineering (MPLS-TE) techniques and the IS-IS/OSPF IP routing protocols, merge routing and survivability requirements. The models are extremely flexible, so the objective function can be adapted to new applications and/or traffic typologies. The models presented in this dissertation help Internet Service Providers to optimize their network resources and to guarantee connectivity in case of failure, while still being able to offer a good quality of service.
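
    The "minimization of the maximum link utilization" objective has a compact standard formulation; the sketch below uses hypothetical symbols and is not taken from the dissertation: f_e^{sd} is the flow of demand (s,d) routed on link e and C_e is the link capacity.

        \min\; u_{\max}
        \quad \text{s.t.} \quad
        \sum_{(s,d)} f_e^{sd} \le u_{\max}\, C_e \quad \forall e \in E,
        \qquad f_e^{sd} \ge 0,
        \qquad \text{plus flow-conservation constraints for every demand } (s,d).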

    Survivable virtual topology design in optical WDM networks using nature-inspired algorithms

    Thesis (PhD) -- İstanbul Technical University, Institute of Informatics, 2012. Today, computer networking has become an integral part of our daily life. The steady increase in user demands for high-speed, high-bandwidth networks drives researchers to seek out new methods and algorithms to meet these demands. The transmission speed in the network is directly affected by the transmission medium, and the most effective medium for data transmission is fiber. Optical networks are designed to make the best use of the superior properties of fiber, e.g. high speed, high bandwidth, low bit error rate, low attenuation, physical strength, low cost, etc. The world's communication network infrastructure, from backbone networks to access networks, is steadily turning into optical networks. One of the most important properties of optical networks is the data transmission rate (up to 50 Tb/s on a single fiber). Today, with the help of wavelength division multiplexing (WDM) technology, hundreds of channels can be built on a single fiber. WDM is a technology in which the optical transmission is split into a number of non-overlapping wavelength bands, with each wavelength supporting a single communication channel operating at the desired rate. Since multiple WDM channels, also called lightpaths, can coexist on a single fiber, the huge fiber bandwidth can be utilized. Any damage to a physical link (fiber) in the network causes all the lightpaths routed through this link to be broken. Since huge data transmission (40 Gb/s) over each of these lightpaths is possible, such damage results in a serious amount of data loss. Two different approaches can be used to avoid this situation: (1) survivability at the physical layer, and (2) survivability at the virtual layer. The first approach is the problem of designing a backup link/path for each link/path of the optical layer. The second approach is the problem of designing the optical layer such that the virtual topology remains connected in the event of a single or multiple link failure. While the first approach provides faster protection for time-critical applications (such as IP telephony and telemedicine) by reserving more resources, the second approach, i.e. survivable virtual topology design, which has attracted a lot of attention in recent years, aims to protect connections using fewer resources; many applications (web browsing, file transfer, messaging, remote access) can tolerate the minutes-scale recovery of higher-layer mechanisms as long as connectivity is not lost. The problem studied in this thesis is to develop methods for survivable virtual topology design that enable effective usage of resources. Survivable virtual topology design consists of four subproblems: determining a set of lightpaths (forming the virtual topology); routing these lightpaths on the physical topology so that no single fiber cut disconnects the virtual topology (survivable virtual topology mapping); assigning wavelengths; and routing the packet traffic. Each of these subproblems can be solved separately. However, they are not independent, and solving them one by one may degrade the quality of the final result considerably. Furthermore, survivable virtual topology design is known to be NP-complete. Because of its complexity, it is not possible to solve the problem optimally in an acceptable amount of time using classical optimization techniques for real-life-sized networks. In this thesis, we solve the survivable virtual topology design problem as a whole, where the physical topology and the packet traffic intensities between nodes are given. In the first phase, we propose two different nature-inspired heuristics, evolutionary algorithms and ant colony optimization, to find a survivable mapping of a given virtual topology with minimum resource usage. To assess the performance of the proposed algorithms, we compare the experimental results with those obtained through integer linear programming (ILP). The results show that both of our algorithms can solve the problem even for large-scale network topologies for which a feasible solution cannot be found using ILP, while using much less CPU time and memory. In the second phase, we propose four different hyper-heuristic approaches to solve the survivable virtual topology design problem as a whole. Each hyper-heuristic approach is based on a different category of nature-inspired heuristics: evolutionary algorithms, ant colony optimization, simulated annealing, and adaptive iterated constructive search. Experimental results show that all proposed hyper-heuristic approaches are successful in designing survivable virtual topologies, and that the ant colony optimization based hyper-heuristic outperforms the others. To balance the traffic flow over lightpaths, we add a flow-deviation method to the ant colony optimization based hyper-heuristic. Whereas previous work on survivable virtual topology design has generally ignored double fiber cuts, we explore the performance of our hyper-heuristic approach for both single- and double-link failures; the approach can be applied to multiple-link failure instances by changing only the survivability control routine. The experimental results show that our approach can solve the problem for both single-link and double-link failures in a reasonable amount of time. To evaluate the quality of the hyper-heuristic solutions, we compare them with results obtained using a tabu search approach; the hyper-heuristic approach outperforms tabu search in both solution quality and CPU time.
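
    The "survivability control routine" referred to above boils down to checking, for every candidate fiber failure, whether the virtual topology minus the affected lightpaths is still connected. A minimal single-failure sketch, assuming networkx and illustrative data structures (not the thesis's actual implementation):

        import networkx as nx

        def is_survivable(virtual_edges, mapping, physical_links):
            # virtual_edges: list of (a, b) lightpath endpoints
            # mapping[(a, b)]: set of physical fibers that lightpath (a, b) traverses
            # physical_links: iterable of fibers whose failure is tested one at a time
            nodes = {n for e in virtual_edges for n in e}
            for failed in physical_links:
                survivors = [e for e in virtual_edges if failed not in mapping[e]]
                g = nx.Graph(survivors)
                # fails if some node lost all its lightpaths or the remainder splits
                if set(g.nodes) != nodes or not nx.is_connected(g):
                    return False
            return True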

    Network Virtualization Over Elastic Optical Networks: A Survey of Allocation Algorithms

    Network virtualization has emerged as a paradigm for cloud computing services by providing key functionalities such as abstraction of network resources (kept hidden from the cloud service user), isolation of different cloud computing applications, flexibility in terms of resource granularity, and on-demand setup/teardown of service. In parallel, flex-grid (also known as elastic) optical networks have become an alternative to deal with the constant traffic growth. These advances have triggered research on network virtualization over flex-grid optical networks. Effort has been focused on the design of flexible and virtualized devices, on the definition of network architectures, and on virtual network allocation algorithms. In this chapter, a survey of virtual network allocation algorithms over flex-grid networks is presented. Proposals are classified according to a taxonomy made of three main categories: performance metrics, operation conditions, and the type of service offered to users. Based on this classification, the work also identifies open research areas such as multi-objective optimization approaches, distributed architectures, meta-heuristics, and reconfiguration and protection mechanisms for virtual networks over elastic optical networks.
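
    A recurring building block in the allocation algorithms surveyed is spectrum assignment on flex-grid links, where a demand needs the same contiguous block of frequency slots on every link of its path. The sketch below shows a plain first-fit version of that step; the data structures and function name are illustrative and are not taken from the chapter.

        def first_fit_slots(link_spectra, path, demand_slots):
            # link_spectra[link] is a list of booleans (True = slot already occupied);
            # the same contiguous block must be free on every link of the path
            # (spectrum continuity and contiguity constraints).
            n_slots = len(link_spectra[path[0]])
            for start in range(n_slots - demand_slots + 1):
                block = range(start, start + demand_slots)
                if all(not link_spectra[link][s] for link in path for s in block):
                    for link in path:
                        for s in block:
                            link_spectra[link][s] = True
                    return start
            return None  # request is blocked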

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.