5 research outputs found

    Virtualisation and resource allocation in MEC-enabled metro optical networks

    The appearance of new network services and the ever-increasing network traffic and number of connected devices will push the evolution of current communication networks towards the Future Internet. In the area of optical networks, wavelength routed optical networks (WRONs) are evolving to elastic optical networks (EONs) in which, thanks to the use of OFDM or Nyquist WDM, it is possible to create super-channels with custom-size bandwidth. The basic element in these networks is the lightpath, i.e., an all-optical circuit between two network nodes. The establishment of lightpaths requires selecting the route that they will follow and the portion of the spectrum to be used in order to carry the requested traffic from the source to the destination node. That problem is known as the routing and spectrum assignment (RSA) problem, and new algorithms must be proposed to address this design problem. Some early studies on elastic optical networks considered gridless scenarios, in which a slice of spectrum of variable size is assigned to a request. However, the most common approach to spectrum allocation is to divide the spectrum into slots of fixed width and allocate multiple, consecutive spectrum slots to each lightpath, depending on the requested bandwidth. Moreover, EONs also allow more flexible routing and spectrum assignment techniques, like the split-spectrum approach, in which a request is divided into multiple "sub-lightpaths". In this thesis, four RSA algorithms are proposed, combining two different levels of flexibility with the well-known k-shortest paths and first fit heuristics. After comparing the performance of those methods, a novel spectrum assignment technique, Best Gap, is proposed to overcome the inefficiencies that emerge when combining the first fit heuristic with highly flexible networks. A simulation study is presented to demonstrate that, thanks to the use of Best Gap, EONs can exploit the network flexibility and reduce the blocking ratio.
    On the other hand, operators must face profound architectural changes to increase the adaptability and flexibility of networks and ease their management. Thanks to the use of network function virtualisation (NFV), the network functions that must be applied to offer a service can be deployed as virtual appliances hosted by commodity servers, which can be located in data centres, network nodes or even end-user premises. The appearance of new computation and networking paradigms, like multi-access edge computing (MEC), may facilitate the adaptation of communication networks to the new demands. Furthermore, the use of MEC technology will enable the possibility of installing those virtual network functions (VNFs) not only at data centres (DCs) and central offices (COs), the traditional hosts of VNFs, but also at the edge nodes of the network. Since data processing is performed closer to the end-user, the latency associated with each service connection request can be reduced. MEC nodes will usually be connected to each other and to the DCs and COs by optical networks. In such a scenario, deploying a network service requires completing two phases: VNF-placement, i.e., deciding the number and location of VNFs, and VNF-chaining, i.e., connecting the VNFs that the traffic associated with a service must traverse in order to establish the connection.
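    Returning to the slot-based RSA problem described above, the following is a minimal sketch of a first fit spectrum assignment over k precomputed candidate paths; the data layout, names and parameters are illustrative assumptions, not taken from the thesis.

```python
from typing import Dict, List, Optional, Tuple

Link = Tuple[str, str]

def first_fit_rsa(
    candidate_paths: List[List[Link]],   # k shortest paths, each a list of links
    spectrum: Dict[Link, List[bool]],    # per-link slot occupancy (True = busy)
    demand_slots: int,                   # number of consecutive slots requested
) -> Optional[Tuple[List[Link], int]]:
    """Return (path, first slot index) for the first feasible assignment, else None."""
    num_slots = len(next(iter(spectrum.values())))
    for path in candidate_paths:                           # paths in k-shortest order
        for start in range(num_slots - demand_slots + 1):  # first fit over slot indices
            block = range(start, start + demand_slots)
            # the same consecutive block must be free on every link of the path
            if all(not spectrum[link][s] for link in path for s in block):
                for link in path:                          # allocate the block
                    for s in block:
                        spectrum[link][s] = True
                return path, start
    return None                                            # blocked request


# Tiny usage example on a 3-node line topology A-B-C with 8 slots per link.
links = [("A", "B"), ("B", "C")]
spectrum = {l: [False] * 8 for l in links}
paths = [[("A", "B"), ("B", "C")]]
print(first_fit_rsa(paths, spectrum, demand_slots=3))  # -> ([("A","B"),("B","C")], 0)
```

    A technique like the Best Gap assignment proposed in the thesis would replace the inner first fit slot search with a different gap-selection rule, while the overall path-then-spectrum loop structure stays the same.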
    In the chaining process, not only the existence of VNFs with available processing capacity, but also the availability of network resources must be taken into account to avoid the rejection of the connection request. Taking into consideration that the backhaul of this scenario will usually be based on WRONs or EONs, it is necessary to design the virtual topology (i.e., the set of lightpaths established in the network) in order to transport the traffic from one node to another. The process of designing the virtual topology includes deciding the number of connections or lightpaths, allocating them a route and spectral resources, and finally grooming the traffic into the created lightpaths. Lastly, a failure in the equipment of a node in an NFV environment can cause the disruption of the service chains (SCs) traversing the node. This can cause the loss of huge amounts of data and affect thousands of end-users. In consequence, it is key to provide the network with fault-management techniques able to guarantee the resilience of the established connections when a node fails.
    For the mentioned reasons, it is necessary to design orchestration algorithms which solve the VNF-placement, chaining and network resource allocation problems in 5G networks with optical backhaul. Moreover, some versions of those algorithms must also implement protection techniques to guarantee the resilience of the system in case of failure. This thesis makes contributions in that line. Firstly, a genetic algorithm is proposed to solve the VNF-placement and VNF-chaining problems in a 5G network with optical backhaul based on a star topology: GASM (genetic algorithm for effective service mapping). Then, we propose a modification of that algorithm so that it can be applied to dynamic scenarios in which the reconfiguration of the planning is allowed. Furthermore, we enhance the modified algorithm to include a learning step, with the objective of improving its performance. In this thesis, we also propose an algorithm to solve not only the VNF-placement and VNF-chaining problems but also the design of the virtual topology, considering that a WRON is deployed as the backhaul network connecting MEC nodes and the CO. Moreover, a version including individual VNF protection against node failure has also been proposed, and the effect of using shared/dedicated and end-to-end SC/individual VNF protection schemes is also analysed. Finally, a new algorithm that solves the VNF-placement and chaining problems and the virtual topology design, implementing a new chaining technique, is also proposed, together with corresponding versions implementing individual VNF protection. Furthermore, since the method works with any type of WDM mesh topology, a techno-economic study is presented to compare the effect of using different network topologies on both network performance and cost.
    Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática. Doctorado en Tecnologías de la Información y las Telecomunicaciones.
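    The following is a minimal sketch of the kind of genetic algorithm described above for VNF-placement on a star-topology backhaul. The chromosome encoding, fitness terms and all problem data below are illustrative assumptions; GASM itself also solves chaining and handles many details omitted here.

```python
import random

# Illustrative problem data: nodes with CPU capacity, and service chains,
# each a sequence of VNF types with CPU demands.
NODES = {"mec1": 8, "mec2": 8, "mec3": 4, "co": 16}        # CPU units per node
CHAINS = [["fw", "nat"], ["fw", "dpi", "cache"], ["nat"]]   # VNF types per chain
CPU = {"fw": 2, "nat": 1, "dpi": 3, "cache": 2}             # CPU units per VNF

def random_individual():
    # One gene per VNF instance: the node hosting it.
    return [[random.choice(list(NODES)) for _ in chain] for chain in CHAINS]

def fitness(ind):
    # Penalise CPU over-subscription and, on a star topology, count 2 hops
    # (via the central node) whenever consecutive VNFs sit on different nodes.
    used = {n: 0 for n in NODES}
    hops = 0
    for chain, placement in zip(CHAINS, ind):
        for vnf, node in zip(chain, placement):
            used[node] += CPU[vnf]
        hops += sum(2 for a, b in zip(placement, placement[1:]) if a != b)
    overload = sum(max(0, used[n] - NODES[n]) for n in NODES)
    return -(10 * overload + hops)          # higher is better

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]   # per-chain uniform crossover

def mutate(ind, rate=0.1):
    return [[random.choice(list(NODES)) if random.random() < rate else node
             for node in placement] for placement in ind]

def evolve(generations=200, pop_size=30, elite=4):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

    A dynamic variant along the lines mentioned in the abstract could seed the initial population with the currently deployed placement and penalise reconfigurations in the fitness function, but that is beyond this sketch.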

    Joint optimization of topology, switching, routing and wavelength assignment

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 279-285).
    To provide end users with economic access to high bandwidth, the architecture of the next generation metropolitan area networks (MANs) needs to be judiciously designed from the cost perspective. In addition to a low initial capital investment, the ultimate goal is to design networks that exhibit excellent scalability: a decreasing cost-per-node-per-unit-traffic as user number and transaction size increase. As an effort to achieve this goal, in this thesis we search for scalable network architectures over the solution space that embodies the key aspects of optical networks: fiber connection topology, switching architecture selection and resource dimensioning, and routing and wavelength assignment (RWA). Due to the inter-related nature of these design elements, we intend to solve the design problem jointly in the optimization process in order to achieve overall good performance. To evaluate how cost drives architectural tradeoffs, an analytical approach is taken in most parts of the thesis by first focusing on networks with symmetric and well-defined structures (i.e., regular networks) and symmetric traffic patterns (i.e., all-to-all uniform traffic), which are fair representations that suggest general trends.
    We start with an examination of various measures of regular topologies. The average minimum hop distance plays a crucial role in evaluating the efficiency of a network architecture. From the perspective of designing optical networks, the amount of switching resources used at nodes is proportional to the average minimum hop distance; thus, a smaller average minimum hop distance translates into a lower fraction of pass-through traffic and less switching resources required. Next, a first-order cost model is set up and an optimization problem is formulated for the purpose of characterizing the tradeoffs between fiber and switching resources. Via convex optimization techniques, the joint optimization problem is solved analytically for (static) uniform traffic and symmetric networks. Two classes of regular graphs, Generalized Moore Graphs and Δ-nearest Neighbors Graphs, are identified to yield lower and upper cost bounds, respectively. The investigation of cost scalability further demonstrates the advantage of the Generalized Moore Graphs as benchmark topologies: with a linear switching cost structure, the minimal normalized cost per unit traffic decreases with increasing network size for the Generalized Moore Graphs and their relatives.
    In comparison, for less efficient fiber topologies (e.g., Δ-nearest Neighbors) and switching cost structures (e.g., quadratic cost), the minimal normalized cost per unit traffic plateaus or even increases with increasing network size. The study also reveals other attractive properties of Generalized Moore Graphs in conjunction with minimum hop routing: the aggregate network load is evenly distributed over each fiber. Thus, Generalized Moore Graphs also require the minimum number of wavelengths to support a given uniform traffic demand. Furthermore, the theoretical work on the Generalized Moore Graphs and their close relatives is extended to study more realistic design scenarios in two aspects.
    One aspect addresses irregular topologies and (static) non-uniform traffic, for which the results for Generalized Moore networks are used to provide useful estimates of network cost, thus offering good references for cost-efficient optical networks. The other aspect deals with network design under random demands, for which two optimization formulations that incorporate traffic variability are presented. The results show that, as physical architectures, Generalized Moore Graphs are most robust (in cost) to demand uncertainties. Analytical results also provide design guidelines on how optimum dimensioning, network connectivity, and network costs vary as functions of risk aversion, service level requirements, and probability distributions of demands.
    by Kyle Chi Guan. Ph.D.
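    As a minimal sketch of the average minimum hop distance bound underlying the Generalized Moore Graph results in this abstract, the following function applies the standard Moore-bound counting argument to a regular graph of degree Δ with N nodes; the function name and interface are illustrative.

```python
def moore_bound_avg_hops(n_nodes: int, degree: int) -> float:
    """Lower bound on the average minimum hop distance of a regular graph.

    From any node, at most `degree` nodes can sit at hop 1, and at most
    degree * (degree - 1) ** (k - 1) nodes at hop k.  A Generalized Moore
    Graph fills these levels completely, so the bound is tight for it.
    """
    remaining = n_nodes - 1            # nodes other than the source
    level_size = degree                # capacity of hop level k = 1
    hop, weighted_sum = 1, 0.0
    while remaining > 0:
        placed = min(level_size, remaining)
        weighted_sum += hop * placed
        remaining -= placed
        level_size *= degree - 1       # capacity of the next hop level
        hop += 1
    return weighted_sum / (n_nodes - 1)

# Example: with N = 50 nodes and degree 3 the bound is about 3.41 hops.
# Pass-through switching per node scales roughly with this figure, which is
# why low-hop topologies need less switching resources.
print(moore_bound_avg_hops(50, 3))
```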

    Nodal distribution strategies for designing an overlay network for long-term growth

    Scope and Method of Study: This research looked at nodal distribution design issues associated with building an overlay network on top of an existing legacy network, with overlay network switches and links not necessarily matching the switch and link locations of the underlying network. A mathematical model with two basic components, switch costs and link costs, was developed for defining the total cost of a network overlay. The nature of the underlying legacy topology determines whether link costs or switch costs dominate the total cost function, as well as the unit cost for switches and links.
    Findings and Conclusions: Three design heuristics are presented: first, locate overlay switches at nodes in the center of the legacy network as opposed to the periphery; second, locate overlay switches at legacy nodes with high connectivity; and third, locate overlay switches at legacy nodes with high traffic flow demands. These heuristics can be used to help keep costs under control when design changes are required. Applying the concept of efficient frontiers to the world of network design and building a suite of best designs gives the network designer greater insight into how to design the best network in the face of changing real-world constraints. For the cost model and the case studies evaluated using the design strategies in this study, distributed approaches generally tend to be a good choice when link costs dominate the total cost function, because total path distances, and therefore link costs, need to be minimized in preference over switch costs. A distributed overlay tends to have lower link costs because there is usually a greater probability that total path distances can be minimized thanks to greater connectivity. More connections create more potential traffic flow path choices, allowing each traffic flow to be sent along shorter paths. In legacy network topology designs that have many nodes with high connectivity, the overlay link costs can be relatively similar between designs and the switch costs can have a large impact on total cost.
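    The following is a minimal sketch of the two-component overlay cost model described above, where each overlay switch carries a fixed switch cost and each overlay link costs its per-hop unit cost times the shortest-path distance it traverses in the legacy network; the topology, unit costs and names are illustrative assumptions.

```python
from collections import deque
from itertools import combinations

# Illustrative legacy topology as an adjacency list.
LEGACY = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b", "e"],
    "d": ["b", "e"], "e": ["c", "d"],
}
SWITCH_COST = 10.0        # cost of placing one overlay switch
LINK_COST_PER_HOP = 3.0   # cost per legacy hop traversed by an overlay link

def hop_distance(graph, src, dst):
    """Shortest-path length in hops via breadth-first search."""
    seen, frontier = {src: 0}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return seen[node]
        for nxt in graph[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + 1
                frontier.append(nxt)
    raise ValueError("disconnected legacy network")

def overlay_cost(overlay_nodes):
    """Total cost of a full-mesh overlay built on the chosen legacy nodes."""
    switch_cost = SWITCH_COST * len(overlay_nodes)
    link_cost = sum(LINK_COST_PER_HOP * hop_distance(LEGACY, u, v)
                    for u, v in combinations(overlay_nodes, 2))
    return switch_cost + link_cost

# Comparing a central placement with a more peripheral one illustrates the
# first heuristic: central, well-connected nodes shorten the overlay links.
print(overlay_cost(["b", "c", "d"]))   # central nodes -> 42.0
print(overlay_cost(["a", "d", "e"]))   # peripheral nodes -> 45.0
```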