
    Towards a Virtualized Next Generation Internet

    A promising solution to overcome Internet ossification is network virtualization, in which Internet Service Providers (ISPs) are decoupled into two tiers: service providers (SPs) and infrastructure providers (InPs). The former maintain and customize virtual networks to meet the service requirements of end users; these virtual networks are mapped onto the physical network infrastructure, which is managed and deployed by the latter, through the Virtual Network Embedding (VNE) process. VNE consists of two major components, node assignment and link mapping, and can be shown to be NP-complete. In the first part of the dissertation, we present a path-based ILP model for the VNE problem. Our solution employs a branch-and-bound framework to resolve the integrality constraints, while embedding a column generation process to obtain effective lower bounds for branch pruning. Unlike existing approaches, the proposed solution obtains either an optimal solution or a near-optimal solution with a guarantee on solution quality. A common strategy in VNE algorithm design is to decompose the problem into two sequential sub-problems: node assignment (NA) and link mapping (LM). With this approach, solution quality is inevitably sacrificed, since the node assignment is neither holistic nor reversible. In the second part, we are motivated to answer the question: is it possible to maintain the simplicity of the divide-and-conquer strategy while still achieving optimality? Our answer is based on a decomposition framework supported by a primal-dual analysis of the path-based ILP model. This dissertation also addresses issues on two frontiers of network virtualization: survivability and the integration of an optical substrate. In the third part, we address the survivable network embedding (SNE) problem from a network flow perspective, considering both splittable and non-splittable flows. In addition, the explosive growth of Internet traffic calls for a bandwidth-abundant optical substrate, despite the extra dimensions of complexity caused by the heterogeneity of optical resources and the physical characteristics of optical transmission. In the fourth part, we present a holistic view of the motivation, architecture, and challenges on the way towards a virtualized optical substrate that supports network virtualization.
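    The two-stage decomposition mentioned above (node assignment followed by link mapping) can be illustrated with a minimal, self-contained sketch. This is not the dissertation's path-based ILP or its branch-and-bound/column-generation machinery; it is a toy greedy heuristic over made-up dictionaries of CPU capacities and link bandwidths, intended only to show how a fixed, non-reversible node assignment constrains the subsequent link mapping.

```python
# Minimal two-stage VNE heuristic: greedy node assignment (NA) followed by
# shortest-path link mapping (LM). Data structures and names are illustrative.
from collections import deque

def bfs_path(adj, src, dst, bw, demand):
    """Shortest (hop-count) substrate path with enough residual bandwidth."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, v = [], dst
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for v in adj[u]:
            e = tuple(sorted((u, v)))
            if v not in prev and bw[e] >= demand:
                prev[v] = u
                q.append(v)
    return None

def embed(sub_cpu, sub_bw, vn_cpu, vn_links):
    """sub_cpu: {node: cpu}, sub_bw: {(u,v): bw}, vn_cpu: {vnode: cpu},
    vn_links: {(a,b): bandwidth demand}. Returns (node_map, link_map) or None."""
    bw = {tuple(sorted(e)): c for e, c in sub_bw.items()}
    adj = {n: set() for n in sub_cpu}
    for (u, v) in bw:
        adj[u].add(v); adj[v].add(u)
    cpu = dict(sub_cpu)
    node_map, link_map = {}, {}
    # NA: place each virtual node on the substrate node with most residual CPU.
    for vn, need in sorted(vn_cpu.items(), key=lambda x: -x[1]):
        cands = [n for n in cpu if cpu[n] >= need and n not in node_map.values()]
        if not cands:
            return None
        best = max(cands, key=lambda n: cpu[n])
        node_map[vn] = best
        cpu[best] -= need
    # LM: route each virtual link on a shortest substrate path with enough bandwidth.
    for (a, b), demand in vn_links.items():
        path = bfs_path(adj, node_map[a], node_map[b], bw, demand)
        if path is None:
            return None
        for u, v in zip(path, path[1:]):
            bw[tuple(sorted((u, v)))] -= demand
        link_map[(a, b)] = path
    return node_map, link_map

# Example: a 4-node substrate ring hosting a 3-node virtual triangle.
sub_cpu = {"A": 10, "B": 8, "C": 6, "D": 4}
sub_bw = {("A", "B"): 5, ("B", "C"): 5, ("C", "D"): 5, ("A", "D"): 5}
print(embed(sub_cpu, sub_bw, {"x": 4, "y": 3, "z": 2},
            {("x", "y"): 2, ("y", "z"): 2, ("x", "z"): 1}))
# maps x->A, y->B, z->C and routes each virtual link on the ring
```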

    Robustness to failures in two-layer communication networks

    A close look at many existing systems reveals their two- or multi-layer nature, where a number of coexisting networks interact and depend on each other. For instance, in the Internet, any application-level graph (such as a peer-to-peer network) is mapped on the underlying IP network that, in turn, is mapped on a mesh of optical fibers. This layered view sheds new light on the tolerance to errors and attacks of many complex systems. What is observed at a single layer does not necessarily reflect well the state of the entire system. On the contrary, a tiny, seemingly harmless disruption of one layer may destroy a substantial or essential part of another layer, thus making the whole system useless in practice. In this thesis we consider such two-layer systems. We model them by two graphs at two different layers, where the upper-layer (or logical) graph is mapped onto the lower-layer (physical) graph. Our main goals are the following. First, we study the robustness to failures of existing large-scale two-layer systems. This gives us valuable insights into the problem, e.g., by identifying common weak points in such systems. Fortunately, these two-layer problems can often be effectively alleviated by careful system design. Therefore, our second major goal is to propose new designs that increase the robustness of two-layer systems. This thesis is organized in three main parts, where we focus on different examples and aspects of two-layer systems. In the first part, we turn our attention to existing large-scale two-layer systems, such as peer-to-peer networks, railway networks and the human brain. Our main goal is to study the vulnerability of these systems to random errors and targeted attacks. Our simulations show that (i) two-layer systems are much more vulnerable to errors and attacks than they appear from a single-layer perspective, and (ii) attacks are much more harmful than errors, especially when the logical topology is heterogeneous. These results hold across all studied systems. A natural next step consists in improving the failure robustness of two-layer systems. In particular, in the second part of this thesis, we consider IP/WDM optical networks, where an IP backbone network is mapped on a mesh of optical fibers. The problem lies in designing a survivable mapping, such that no single physical failure disconnects the logical topology; this is an NP-complete problem. We introduce the new concept of piecewise survivability, which makes the problem much easier in practice. This leads us to an efficient and scalable algorithm called SMART, which finds a survivable mapping much faster (often by orders of magnitude) than the other approaches proposed to date. Moreover, the formal analysis of SMART allows us to prove whether or not a survivable mapping exists. Finally, this approach helps us find vulnerable areas in the system and effectively reinforce them, e.g., by adding new links. In the third part of this thesis, we shift our attention one layer higher, to the application-over-IP setting. In particular, we consider the design of Application-Level Multicast (ALM) for interactive applications, where a single source sends a delay-constrained data stream to a number of destinations. Interactive ALM should (i) respect stringent delay requirements, (ii) proactively protect the system against overlay node failures, and (iii) protect against packet losses at the IP layer. We propose a two-layer-aware approach to this problem.
First, we prove that the average packet loss rate observed at the destinations can be effectively approximated by a purely topological metric that, in turn, drops with the amount of IP-level and overlay-level path diversity available in the system. Therefore, we propose a framework that accommodates and generalizes various techniques for increasing the path diversity in the system. Within this framework we optimize the structure of the ALM overlay. As a result, we reduce the effective loss rate on real Internet topologies typically by 30-70%, compared to the state of the art. Finally, in addition to the three main parts of the thesis, we also present a set of results inspired by the study of ALM systems but not directly related to the two-layer paradigm (and thus moved to the Appendix). In particular, we consider the transmission of a delay-sensitive data stream from a single source to a single destination, where the data packets are protected by a Forward Error Correction (FEC) code and sent over multiple paths. We show that the performance of such a scheme can often be further improved. Our key observation is that the propagation times on the available paths often differ significantly, typically by 10-100 ms. We propose to exploit these differences by appropriate packet scheduling, which results in a two- to five-fold reduction in the effective loss rate.
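    The survivable-mapping condition at the heart of the second part (no single physical failure may disconnect the logical topology) can be checked directly by exhaustively simulating single failures. The sketch below is that generic brute-force check over illustrative node and link names; it is not the SMART algorithm itself, which avoids recomputing such checks from scratch.

```python
# Survivability check: a mapping of logical links onto physical paths is
# survivable if no single physical link failure disconnects the logical graph.

def connected(nodes, edges):
    """Simple DFS connectivity test on an undirected graph."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] - seen)
    return len(seen) == len(nodes)

def is_survivable(logical_nodes, mapping):
    """mapping: {logical_link: [physical links on its path]}."""
    physical_links = {pl for path in mapping.values() for pl in path}
    for failed in physical_links:
        surviving = [ll for ll, path in mapping.items() if failed not in path]
        if not connected(logical_nodes, surviving):
            return False, failed
    return True, None

# Logical triangle a-b-c mapped onto physical paths; links ("a","b") and ("a","c")
# both traverse physical link "p1", so failing "p1" cuts node "a" off.
mapping = {("a", "b"): ["p1", "p2"], ("b", "c"): ["p3"], ("a", "c"): ["p1", "p4"]}
print(is_survivable({"a", "b", "c"}, mapping))   # (False, 'p1')
```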

    Optimization in Telecommunication Networks

    Network design and network synthesis have been the classical optimization problems in telecommunication for a long time. In the recent past, there have been many technological developments such as the digitization of information, optical networks, the Internet, and wireless networks. These developments have led to a series of new optimization problems. This manuscript gives an overview of the developments in solving both classical and modern telecom optimization problems. We start with a short historical overview of the technological developments. Then the classical (and still relevant) network design and synthesis problems are described, with an emphasis on the latest developments in modelling and solving them. Classical results such as Menger's disjoint paths theorem and the Ford-Fulkerson max-flow min-cut theorem, but also Gomory-Hu trees and the Okamura-Seymour cut condition, will be related to the models described. Finally, we describe recent optimization problems such as routing and wavelength assignment, and grooming in optical networks.
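    As a concrete companion to the max-flow min-cut theorem mentioned above, the following sketch computes a maximum flow with breadth-first augmenting paths (the Edmonds-Karp variant of Ford-Fulkerson) on a small, made-up directed network; the returned value equals the capacity of the minimum s-t cut.

```python
# Edmonds-Karp max-flow on a toy directed network.
from collections import deque

def max_flow(cap, s, t):
    """cap: {(u, v): capacity}. Returns the maximum s-t flow value."""
    # Residual capacities, including reverse edges.
    res = dict(cap)
    for (u, v) in cap:
        res.setdefault((v, u), 0)
    nodes = {u for e in cap for u in e}
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:
            u = q.popleft()
            for v in nodes:
                if v not in prev and res.get((u, v), 0) > 0:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow
        # Find the bottleneck and push flow along the path.
        path, v = [], t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        push = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        flow += push

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1, ("a", "t"): 2, ("b", "t"): 3}
print(max_flow(cap, "s", "t"))   # 5, matching the min cut {("a","t"), ("b","t")}
```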

    Characterization, design and re-optimization on multi-layer optical networks

    The explosion of IP traffic due to the increase of IP-based multimedia services such as HDTV or video conferencing poses new challenges to network operators to provide cost-effective data transmission. Although Dense Wavelength Division Multiplexing (DWDM) meshed transport networks support high-speed optical connections, these networks lack the flexibility to support sub-wavelength traffic, leading to poor bandwidth usage. To cope with the transport of this huge and heterogeneous amount of traffic, multilayer networks represent the most widely accepted architectural solution. Multilayer optical networks allow optimizing network capacity by packing several low-speed traffic streams into higher-speed optical connections (lightpaths). During this operation, a dynamic virtual topology is continuously created and modified by a control plane responsible for the establishment, maintenance, and release of connections. Because of this dynamicity, a suboptimal allocation of resources may exist at any given time. In this context, a periodic resource reallocation could be performed in the network, thus improving network resource utilization. This thesis is devoted to the characterization, planning, and re-optimization of next-generation multilayer networks from an integral perspective that includes physical layer, optical layer, virtual layer, and control plane optimization. To this aim, statistical models, mathematical programming models, and meta-heuristics are developed. More specifically, this main objective has been attained through five goals covering different open issues. First, we provide a statistical methodology to improve the computation of the Q-factor for impairment-aware routing and wavelength assignment (IA-RWA) problems. To this aim, we propose two statistical models to compute the cross-phase modulation variance (which represents the bottleneck in terms of computation time and complexity) in off-line and on-line IA-RWA problems, proving the accuracy of both models when computing Q-factor values in real traffic scenarios. Second, moving to the optical layer, we present a new wavelength partitioning scheme that maximizes the amount of extra traffic provided in shared-path-protected environments compared with current solutions. Specifically, we define several statistical models to estimate the traffic intensity given a target grade of service, and different network planning problems for maximizing the expected revenues and the net present value (NPV). After solving these problems for real networks, we conclude that our proposed scheme maximizes both revenues and NPV. Third, we tackle the design of survivable multilayer networks against single failures at the IP/MPLS layer and on WSON links. To efficiently solve this problem, we propose a new approach based on over-dimensioning IP/MPLS devices and recovering lightpath connectivity, and we compare it against the conventional solution based on duplicating backbone IP/MPLS nodes.
After evaluating both approaches by means of ILP models and heuristic algorithms, we conclude that our proposed approach leads to significant CAPEX savings. Fourth, we introduce an adaptive mechanism to reduce the usage of opto-electronic (O/E) ports in IP/MPLS-over-WSON multilayer networks under dynamic traffic scenarios. An ILP formulation and several heuristics are developed to solve this problem, which allows significantly reducing the usage of O/E ports in very short running times. Finally, we address the design of resilient control plane topologies in GMPLS-enabled transport networks. After proposing a novel analytical model to quantify the resilience of mesh control plane topologies, we use this model to formulate a control plane topology design problem. An iterative model and a heuristic are proposed and used to solve real instances, leading to the conclusion that a significant reduction in the number of control plane links can be achieved without affecting the quality of service of the network.
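    The grooming operation described above, packing several sub-wavelength streams into higher-speed lightpaths, can be pictured with a tiny first-fit sketch. The node names, demand rates and the 100-unit lightpath capacity are illustrative; the thesis itself relies on ILP models and heuristics rather than this toy rule.

```python
# First-fit grooming sketch: pack low-speed demands between the same node pair
# into as few fixed-capacity lightpaths as possible.

def groom(demands, lightpath_capacity):
    """demands: list of (src, dst, rate). Returns {(src, dst): [residual per lightpath]}."""
    lightpaths = {}
    for src, dst, rate in demands:
        paths = lightpaths.setdefault((src, dst), [])
        # First fit: reuse the first lightpath with enough residual capacity.
        for i, residual in enumerate(paths):
            if residual >= rate:
                paths[i] = residual - rate
                break
        else:
            # Otherwise open a new lightpath for this node pair.
            paths.append(lightpath_capacity - rate)
    return lightpaths

demands = [("R1", "R2", 10), ("R1", "R2", 40), ("R1", "R2", 60), ("R2", "R3", 30)]
print(groom(demands, lightpath_capacity=100))
# {('R1', 'R2'): [50, 40], ('R2', 'R3'): [70]} -> two lightpaths R1-R2, one R2-R3
```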

    Using GRASP and GA to design resilient and cost-effective IP/MPLS networks

    The main objective of this thesis is to find good-quality solutions for representative instances of the problem of designing a resilient and low-cost IP/MPLS network to be deployed over an existing optical transport network. This research is motivated by two complementary real-world application cases, which comprise the most important commercial and academic networks of Uruguay. To achieve this goal, we performed an exhaustive analysis of existing models and technologies. From all of them we took elements that were contrasted with the particular requirements of our counterparts. Among these requirements, we highlight the need for solutions that are transparently implementable over a heterogeneous network environment, which limits us to widely standardized features of the related technologies. We decided to create new models better suited to these needs. These models are intrinsically hard to solve (NP-hard), so we developed metaheuristic-based algorithms to find solutions to the real-world instances. Evolutionary Algorithms and Greedy Randomized Adaptive Search Procedures (GRASP) obtained the best results. As usually happens, real-world planning problems are surrounded by uncertainty. Therefore, we worked closely with our counterparts to reduce the uncertainty in the data to a set of representative cases. These cases were combined with different design strategies to obtain scenarios, which were translated into instances of the problems. Finally, the algorithms were fed with this information, and from their outcome we derived our results and conclusions.
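    A bare-bones skeleton of GRASP (greedy randomized construction followed by local search) is sketched below on a generic minimization problem; the cost function, the restricted-candidate-list parameter alpha, and the neighborhood moves are placeholders rather than the network-design formulations used in the thesis.

```python
# Minimal GRASP skeleton: randomized greedy construction + local search.
import random

def grasp(candidates, cost, local_moves, iterations=100, alpha=0.3, seed=1):
    """candidates: elements to order/select; cost: solution -> float;
    local_moves: solution -> iterable of neighbor solutions."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        # Construction: repeatedly pick from a restricted candidate list (RCL)
        # made of the alpha-fraction cheapest additions.
        solution, remaining = [], list(candidates)
        while remaining:
            scored = sorted(remaining, key=lambda c: cost(solution + [c]))
            rcl = scored[:max(1, int(alpha * len(scored)))]
            choice = rng.choice(rcl)
            solution.append(choice)
            remaining.remove(choice)
        # Local search: move to a better neighbor until no improvement.
        improved = True
        while improved:
            improved = False
            for neighbor in local_moves(solution):
                if cost(neighbor) < cost(solution):
                    solution, improved = neighbor, True
                    break
        if cost(solution) < best_cost:
            best, best_cost = solution, cost(solution)
    return best, best_cost

# Toy usage: order items so adjacent values are close (cost = sum of gaps).
items = [7, 1, 4, 9, 2]
cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
swaps = lambda s: [s[:i] + [s[j]] + s[i+1:j] + [s[i]] + s[j+1:]
                   for i in range(len(s)) for j in range(i + 1, len(s))]
print(grasp(items, cost, swaps, iterations=20))  # best ordering found and its cost
```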

    Survivable Cloud Networking Services

    Cloud computing paradigms are seeing very strong traction today, propelled by advances in multi-core processor, storage, and high-bandwidth networking technologies. As this growth unfolds, there is a growing need to distribute cloud services over multiple data-center sites in order to improve speed, responsiveness, and reliability. Overall, this trend is pushing the need for virtual network (VN) embedding support at the underlying network layer. Moreover, as more and more mission-critical end-user applications move to the cloud, VN survivability is also becoming a key requirement in order to guarantee user service level agreements. Several different types of survivable VN embedding schemes have been developed in recent years. Broadly, these schemes offer resiliency guarantees by pre-provisioning backup resources at service setup time. However, most of these solutions are only geared towards handling isolated single-link or single-node failures. As such, these designs are largely ineffective against larger regional stressors that can result in multiple system failures. In particular, many cloud service providers are very concerned about catastrophic disaster events such as earthquakes, floods, hurricanes, cascading power outages, and even malicious weapons-of-mass-destruction attacks. Hence there is a pressing need to develop more robust cloud recovery schemes for disaster recovery that leverage underlying distributed networking capabilities. In light of the above, this dissertation proposes a range of solutions to address the recovery of cloud networking services under multi-failure stressors. First, a novel failure-region-disjoint VN protection scheme is proposed to achieve improved efficiency for pre-provisioned protection. Next, enhanced VN mapping schemes are studied with probabilistic considerations to minimize risk for VN requests under stochastic failure scenarios. Finally, novel post-fault VN restoration schemes are developed to provide viable last-resort recovery mechanisms using partial and full VN remapping strategies. The performance of these solutions is evaluated using discrete-event simulation and compared to existing strategies.
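    The failure-region-disjoint protection idea can be reduced, in its simplest form, to a feasibility check: no failure region may contain both the working and the backup copy of the same virtual node. The sketch below assumes regions given as plain sets of substrate nodes and node-level mappings only; it omits links, bandwidth, and the optimization carried out in the dissertation.

```python
# Region-disjointness check for a working/backup VN node mapping. Region
# definitions are illustrative (e.g., substrate nodes in the same area).

def region_disjoint(working, backup, regions):
    """working/backup: {vnode: substrate node}; regions: {region: set of substrate nodes}.
    Returns the list of (vnode, region) conflicts (empty list means protected)."""
    conflicts = []
    for vnode, w_node in working.items():
        b_node = backup[vnode]
        for region, members in regions.items():
            if w_node in members and b_node in members:
                conflicts.append((vnode, region))
    return conflicts

regions = {"west": {"S1", "S2"}, "east": {"S3", "S4"}}
working = {"v1": "S1", "v2": "S3"}
backup_bad = {"v1": "S2", "v2": "S4"}    # each backup shares a region with its working node
backup_good = {"v1": "S3", "v2": "S1"}
print(region_disjoint(working, backup_bad, regions))    # [('v1', 'west'), ('v2', 'east')]
print(region_disjoint(working, backup_good, regions))   # []
```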

    Resource Allocation and Survivability in Network Virtualization Environments

    Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem, which deals with the efficient mapping of virtual resources onto InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem, assuming that the InP network remains operational at all times. In this thesis, we remove that assumption by formulating the survivable virtual network embedding (SVNE) problem and developing baseline policy heuristics and an efficient hybrid policy heuristic to solve it. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristic for SVNE outperforms the baseline heuristics in terms of long-term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
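    The pre-reserved backup quota and fast re-routing idea can be sketched as follows: a fixed fraction of each physical link's bandwidth is set aside, and when a link fails the affected demand is shifted onto a detour that consumes only that reserve. The 20% fraction, the triangle topology, and the detour choice are invented for the example and are not the thesis's hybrid policy.

```python
# Toy illustration of per-link backup quotas with fast re-routing.
BACKUP_FRACTION = 0.2   # assumed: 20% of every link's capacity reserved for backup

def build_quotas(links, fraction):
    """Split each physical link's capacity into a working pool and a backup quota."""
    working = {e: c * (1 - fraction) for e, c in links.items()}
    backup = {e: c * fraction for e, c in links.items()}
    return working, backup

def fast_reroute(detour, demand, backup_quota):
    """Shift the demand of a failed link onto its detour path, consuming backup
    quota only. Returns True if every detour link can absorb the demand."""
    if any(backup_quota[e] < demand for e in detour):
        return False
    for e in detour:
        backup_quota[e] -= demand
    return True

links = {("A", "B"): 100, ("B", "C"): 100, ("A", "C"): 100}
working, backup = build_quotas(links, BACKUP_FRACTION)
# A virtual link carried on physical link A-B with 15 units of bandwidth.
demand_on_AB = 15
# A-B fails; its traffic detours via A-C and C-B using only the backup quota.
ok = fast_reroute(detour=[("A", "C"), ("B", "C")], demand=demand_on_AB, backup_quota=backup)
print(ok, backup)   # True; the A-C and B-C backup quotas drop from 20.0 to 5.0
```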

    Optimization Methods for Optical Long-Haul and Access Networks

    Optical communications based on fiber optics and the associated technologies have seen remarkable progress over the past two decades. Widespread deployment of optical fiber has been witnessed in backbone and metro networks as well as in access segments connecting to customer premises and homes. Designing and developing reliable, robust and efficient end-to-end optical communication systems has thus emerged as a topic of utmost importance to both researchers and network operators. To fulfill these requirements, various problems have surfaced and received attention, such as network planning, capacity placement, traffic grooming, traffic scheduling, and bandwidth allocation. Optimal network design aims at addressing one or more of these problems based on some optimization objective. In this thesis, we consider two of the most important problems in optical networks: survivability in optical long-haul networks, and bandwidth allocation and scheduling in optical access networks. For the former, we present efficient and accurate models for availability-aware design and service provisioning in p-cycle-based survivable networks. We also derive optimization models for survivable network design based on the p-trail, a more general protection structure, and compare its performance with p-cycles. Major cost savings can be obtained when the optical access and long-haul subnetworks are brought closer to each other through the consolidation of access and metro networks. As the distance between long-haul and access networks shrinks, and the demands on and expectations of passive optical networks (PONs) soar, it becomes crucial to efficiently manage bandwidth in the access segment while providing the desired level of service availability in the long-haul backbone. We therefore also address the problem of bandwidth management and scheduling in passive optical networks: we design efficient joint and non-joint scheduling and bandwidth allocation methods for multichannel PONs as well as next-generation 10 Gbps Ethernet PON (10G-EPON), while addressing the problem of coexistence between 10G-EPONs and multichannel PONs.
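    On the access side, one cycle of a limited-service dynamic bandwidth allocation (DBA) scheme can be sketched in a few lines: each ONU reports its queue occupancy and the OLT grants at most a fixed cap per polling cycle. The ONU names, report values, and the 15000-byte cap are illustrative values, unrelated to the joint and non-joint schedulers developed in the thesis.

```python
# Minimal limited-service DBA grant computation for an EPON-style OLT.
MAX_GRANT_BYTES = 15000   # assumed per-cycle cap per ONU

def limited_service_grants(reports, max_grant=MAX_GRANT_BYTES):
    """reports: {onu: queued bytes}. Returns {onu: granted bytes} for one cycle."""
    return {onu: min(queued, max_grant) for onu, queued in reports.items()}

reports = {"ONU1": 4000, "ONU2": 22000, "ONU3": 0}
print(limited_service_grants(reports))
# {'ONU1': 4000, 'ONU2': 15000, 'ONU3': 0} -- ONU2 is capped and keeps the rest queued
```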

    Traffic and Resource Management in Robust Cloud Data Center Networks

    Cloud computing is becoming the mainstream paradigm as organizations, both large and small, begin to harness its benefits. Cloud computing gained its success by giving IT exactly what it needed: the ability to grow and shrink computing resources on the go, in a cost-effective manner, without the anguish of infrastructure design and setup. The ability to adapt computing demands to market fluctuations is just one of the many benefits that cloud computing has to offer, which is why this new paradigm is rising rapidly. According to a Gartner report, the total sales of the various cloud services will be worth 204 billion dollars worldwide in 2016. With this massive growth, the performance of the underlying infrastructure is crucial to its success and sustainability. Currently, cloud computing depends heavily on data centers for its daily business needs. In fact, it is through the virtualization of data centers that the concept of "computing as a utility" emerged. However, data center virtualization is still in its infancy, and there exists a plethora of open research issues and challenges related to data center virtualization, including but not limited to optimized topologies and protocols, embedding design methods and online algorithms, resource provisioning and allocation, data center energy efficiency, fault tolerance issues and fault-tolerant design, improving service availability under failure conditions, and enabling network programmability. This dissertation elaborates on and addresses key research challenges and problems related to the design and operation of efficient virtualized data centers and data center infrastructure for cloud services. In particular, we investigate the problem of scalable traffic management and traffic engineering in data center networks and present a decomposition method that solves the problem exactly with considerable runtime improvement over purely mathematical formulations. To maximize the network's admissibility and increase its revenue, cloud providers must make efficient use of their network resources. This goal is highly correlated with the employed resource allocation and placement schemes, formally known as the virtual network embedding problem. This thesis looks at multiple facets of this problem; in particular, we study the embedding problem for services with a one-to-many communication mode, which we denote as the multicast virtual network embedding problem. Then, we tackle the survivable virtual network embedding problem by proposing a fault-tolerant design that provides guaranteed service continuity in the event of server failure. Furthermore, we consider the embedding problem for elastic services in the event of heterogeneous node failures. Finally, in an effort to enable and support data center network programmability, we study the placement problem of softwarized network functions (e.g., load balancers, firewalls), formally known as the virtual network function assignment problem. Owing to its combinatorial complexity, we propose a novel decomposition method and numerically show that it is a hundred times faster than mathematical formulations from the recent literature.
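    As a baseline illustration of the virtual network function assignment problem mentioned last, the sketch below places VNFs on servers with a first-fit-decreasing rule over CPU demands. It is a generic heuristic with made-up capacities, not the decomposition method proposed in the dissertation.

```python
# First-fit-decreasing placement of virtual network functions (VNFs) on servers.

def place_vnfs(vnf_cpu, server_cpu):
    """vnf_cpu: {vnf: cpu demand}; server_cpu: {server: cpu capacity}.
    Returns {vnf: server} or None if some VNF cannot be placed."""
    residual = dict(server_cpu)
    placement = {}
    # Place the largest functions first, each on the first server that fits.
    for vnf, demand in sorted(vnf_cpu.items(), key=lambda x: -x[1]):
        for server, free in residual.items():
            if free >= demand:
                placement[vnf] = server
                residual[server] = free - demand
                break
        else:
            return None
    return placement

vnfs = {"firewall": 4, "load_balancer": 2, "ids": 3, "nat": 1}
servers = {"srv1": 8, "srv2": 4}
print(place_vnfs(vnfs, servers))
# {'firewall': 'srv1', 'ids': 'srv1', 'load_balancer': 'srv2', 'nat': 'srv1'}
```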