Lying Your Way to Better Traffic Engineering
To optimize the flow of traffic in IP networks, operators do traffic
engineering (TE), i.e., tune routing-protocol parameters in response to traffic
demands. TE in IP networks typically involves configuring static link weights
and splitting traffic between the resulting shortest paths via the
Equal-Cost-MultiPath (ECMP) mechanism. Unfortunately, ECMP is a notoriously
cumbersome and indirect means for optimizing traffic flow, often leading to
poor network performance. Also, obtaining accurate knowledge of traffic demands
as the input to TE is elusive, and traffic conditions can be highly variable,
further complicating TE. We leverage recently proposed schemes for increasing
ECMP's expressiveness via carefully disseminated bogus information ("lies") to
design COYOTE, a readily deployable TE scheme for robust and efficient network
utilization. COYOTE leverages new algorithmic ideas to configure (static)
traffic splitting ratios that are optimized with respect to all (even
adversarially chosen) traffic scenarios within the operator's "uncertainty
bounds". Our experimental analyses show that COYOTE significantly outperforms
today's prevalent TE schemes in a manner that is robust to traffic uncertainty
and variation. We discuss experiments with a prototype implementation of
COYOTE.
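As a toy illustration of optimizing static split ratios against demand uncertainty (not COYOTE's actual algorithm, which operates on general topologies via carefully disseminated ECMP "lies"), the following sketch considers two parallel paths and a single demand known only to lie within the operator's uncertainty bounds, and picks the split ratio minimizing worst-case utilization:

```python
# A minimal sketch of the robust splitting idea, not COYOTE's algorithm.
# Two parallel paths with known capacities carry one demand whose volume
# is only known to lie within the uncertainty bounds [d_lo, d_hi].

def worst_case_utilization(split, capacities, d_lo, d_hi):
    """Max link utilization over all demands in [d_lo, d_hi] when a
    fraction `split` of the traffic takes path 1."""
    c1, c2 = capacities
    # Utilization grows linearly in the demand, so the worst case
    # within the interval is always attained at d = d_hi.
    return max(split * d_hi / c1, (1 - split) * d_hi / c2)

def robust_split(capacities, d_lo, d_hi, steps=1000):
    """Grid search for the split minimizing worst-case utilization."""
    return min((worst_case_utilization(s / steps, capacities, d_lo, d_hi),
                s / steps) for s in range(steps + 1))

util, split = robust_split((10.0, 5.0), 4.0, 12.0)
# The robust split sends traffic roughly in proportion to capacity.
```

With capacities 10 and 5, the minimizing split is about 2/3 on the larger path, balancing the two worst-case utilizations at 0.8.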
Comparison of single and multi-objective evolutionary algorithms for robust link-state routing
Traffic Engineering (TE) approaches are increasingly important in network management to allow an optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights to the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single- and two multi-objective EAs, in two tasks related to weight-setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system, the first considering changes in the traffic demand matrices and the latter considering the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs, such as SPEA2 and NSGA-II, came naturally; these were compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
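As a bare-bones illustration of the weight-setting problem these EAs tackle, the sketch below runs a (1+1) evolutionary loop on an invented five-link topology with an invented demand matrix; the paper itself uses SPEA2 and NSGA-II on realistic scenarios, none of which are reproduced here:

```python
import heapq
import random

# Toy (1+1)-EA for link-state weight setting: mutate one weight at a
# time and keep the candidate if congestion does not worsen.
# Topology: a 4-node ring plus a chord, all links with unit capacity.

NODES = range(4)
LINKS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
DEMANDS = {(0, 2): 0.9, (1, 3): 0.9}  # invented demand matrix

def shortest_path(weights, src, dst):
    """Dijkstra; returns the list of links on a shortest src-dst path."""
    adj = {n: [] for n in NODES}
    for (u, v), w in zip(LINKS, weights):
        adj[u].append((v, w, (u, v)))
        adj[v].append((u, w, (u, v)))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w, link in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, (u, link)
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        node, link = prev[node]
        path.append(link)
    return path

def congestion(weights):
    """Highest link utilization when demands follow shortest paths."""
    load = {link: 0.0 for link in LINKS}
    for (s, t), d in DEMANDS.items():
        for link in shortest_path(weights, s, t):
            load[link] += d
    return max(load.values())

random.seed(0)
best = [random.randint(1, 20) for _ in LINKS]
for _ in range(300):  # (1+1)-EA: mutate one weight, keep if not worse
    cand = list(best)
    cand[random.randrange(len(cand))] = random.randint(1, 20)
    if congestion(cand) <= congestion(best):
        best = cand
```

Here good weight sets route the (0, 2) demand over the chord, keeping the two demands on disjoint links; the multi-objective variants in the paper would additionally score each candidate under the altered (failure or demand-shift) condition.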
Aspects of proactive traffic engineering in IP networks
To deliver a reliable communication service over the Internet
it is essential for
the network operator to manage the traffic situation in the network.
The traffic situation is controlled by
the routing function which determines what path traffic follows from source
to destination.
Current practices for setting routing parameters in IP networks are
designed to be simple to manage. This can lead to congestion in
parts of the network while other parts of the network are
far from fully utilized. In this thesis we explore issues related
to optimization of the routing function to balance load in the network
and efficiently deliver a reliable communication service to the users.
The optimization takes into account not only the traffic situation under
normal operational conditions, but also traffic situations that appear
under a wide variety of circumstances deviating from the nominal case.
In order to balance load in the network knowledge of the traffic
situations is needed. Consequently, in this thesis
we investigate methods for efficient derivation of the
traffic situation. The derivation is based on estimation of
traffic demands from link load measurements. The advantage
of using link load measurements is that they are easily obtained and consist
of a limited amount of data that needs to be processed. We evaluate and demonstrate how estimation
based on link counts gives the operator a fast and accurate description
of the traffic demands. For the evaluation we have access to a unique data
set of complete traffic demands from an operational
IP backbone.
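The estimation step described above can be sketched as least squares on the link-count equations y = Ax, where A is the routing matrix mapping origin-destination demands to link loads. The instance below is invented and has full column rank, so plain least squares recovers the demands exactly; real backbones are underdetermined and require additional modeling (e.g. priors on the demands):

```python
# Toy traffic-matrix estimation from link counts. A[l][k] = 1 if OD
# pair k is routed over link l, so observed link loads are y = A x.
# This invented instance is identifiable; real networks are not, and
# need regularization or priors on top of the least-squares fit.

A = [[1, 0],   # link 0 carries OD pair 0
     [1, 1],   # link 1 carries both OD pairs
     [0, 1]]   # link 2 carries OD pair 1
true_x = [3.0, 5.0]
y = [sum(A[l][k] * true_x[k] for k in range(2)) for l in range(3)]

def estimate(A, y, iters=5000, lr=0.05):
    """Minimize ||A x - y||^2 by projected gradient descent, x >= 0."""
    n_links, n_pairs = len(A), len(A[0])
    x = [0.0] * n_pairs
    for _ in range(iters):
        resid = [sum(A[l][k] * x[k] for k in range(n_pairs)) - y[l]
                 for l in range(n_links)]
        for k in range(n_pairs):
            grad = 2 * sum(A[l][k] * resid[l] for l in range(n_links))
            x[k] = max(0.0, x[k] - lr * grad)  # keep demands nonnegative
    return x

x_hat = estimate(A, y)
```

The gradient iteration converges quickly here because the normal matrix is well conditioned; the nonnegativity projection encodes the only prior knowledge (demands cannot be negative).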
However, to honor service level agreements at all times the variability
of the traffic needs to be accounted for in the load balancing.
In addition, optimization techniques are often sensitive to errors and
variations in input data. Hence, when an optimized routing setting is
subjected to real traffic demands in the network, performance often
deviates from what can be anticipated from the optimization. Thus,
we identify and model different traffic uncertainties and describe
how the routing setting can be optimized, not only for a nominal case,
but for a wide range of different traffic situations that might appear
in the network.
Our results can be applied in MPLS enabled networks as well as in
networks using link state routing protocols such as the widely used
OSPF and IS-IS protocols. Only minor changes may be needed in current
networks to implement our algorithms.
The contributions of this thesis are that we: demonstrate that it is
possible to estimate the traffic matrix with acceptable precision, and
we develop methods and models for common traffic uncertainties to
account for these uncertainties in the optimization of the routing
configuration. In addition, we identify important properties in the
structure of the traffic to successfully balance uncertain and
varying traffic demands.
A Link Weight Optimization Model against Link Failures in Internet Protocol Networks
As Internet traffic grows with little revenue for service providers, keeping the same service level agreements (SLAs) with limited capital expenditures (CAPEX) is challenging. Major Internet service providers' backbone traffic grows exponentially while revenue grows logarithmically. Under such a situation, both CAPEX reduction and an improvement of infrastructure utilization efficiency are needed. Link failures are common in Internet Protocol (IP) backbone networks and are an impediment to meeting the required quality of service (QoS). After a failure occurs, affected traffic is rerouted to adjacent links. This increase in network congestion leads to a reduction of addable traffic and sometimes an increase in packet drop rate. In this thesis, network congestion refers to the highest link utilization over all the links in the network. An increase of network congestion may disrupt services with critical SLAs as allowable traffic becomes restricted and the packet drop rate increases. Therefore, from a network operator's point of view, keeping congestion manageable even under failure is desired. A possible approach to deal with congestion increase is to augment link capacity until the manageable congestion threshold is met. However, CAPEX reduction is required; therefore, a minimization of the additional capacity is necessary. In IP networks, where OSPF is widely used as a routing protocol, traffic paths are determined by link weights which are configured in advance. Since traffic paths are decided by link weights, link weights decide which links will get congested; as a result, they determine the network congestion. Link weights can be optimized in order to minimize the additional capacity under the worst-case failure. The worst-case failure is the link failure that generates the highest congestion in the network.
In the basic model of link weight optimization, a preventive start-time optimization (PSO) scheme was presented that determines a link weight set minimizing the worst congestion under any single-link failure. Unfortunately, when there is no link failure, that link weight set leads to congestion that may be higher than the manageable congestion. This penalty is carried on and becomes a burden, especially in networks with few failures. The first part of this thesis proposes a penalty-aware (PA) model that determines a link weight set which reduces that penalty while also reducing the worst congestion, by considering both failure and non-failure scenarios. In our PA model we present two simple and effective schemes: preventive start-time optimization without penalty (PSO-NP) and strengthen preventive start-time optimization (S-PSO). PSO-NP suppresses the penalty for the no-failure case while reducing the worst congestion under failure; S-PSO minimizes the worst congestion under failure and tries to minimize the penalty, compared to PSO, for the no-failure case. Simulation results show that in several networks, PSO-NP and S-PSO achieve substantial penalty reduction while showing congestion close to that of PSO under the worst-case failure. Despite these facts, PSO-NP and S-PSO do not guarantee an improvement of both the penalty and the worst congestion at the same time, as they focus on fixed optimization conditions which restrict the emergence of upgraded solutions for that purpose. A relaxation of these fixed conditions may give us sub-optimal link weight sets that reduce the worst congestion under failure to nearly match that of PSO, with a controlled penalty for the no-failure case. To determine these sub-optimal sets we expand the penalty-aware model of link weight optimization. We design a scheme where the network operator can set a manageable penalty and find the link weight set that most reduces the worst congestion while maintaining that penalty.
This enables network operators to choose more flexible link weight sets according to their requirements under failure and non-failure scenarios. Since setting the penalty to zero would give the same results as PSO-NP, and not setting any penalty condition would give S-PSO, this scheme covers PSO-NP and S-PSO. For this reason we denote it general preventive start-time optimization (GPSO). Simulation results show that GPSO determines link weight sets with worst-congestion reduction equivalent to that of PSO, under reduced penalty for the no-failure case. GPSO is effective in finding a link weight set that reduces the congestion under both failure and non-failure cases. However, it does not guarantee the manageable congestion, as it admits a penalty. In the second part of this thesis we propose a link-duplication (LD) model that aims to suppress link failures in the first place in order to always meet the manageable congestion. For this purpose we consider the duplication, or reinforcement, of links, which is broadly used to make networks reliable. Link duplication provides fast recovery, as simply switching from the failed link to the backup link hides the failure from upper layers. However, due to capital expenditure constraints, not every link can be duplicated; giving priority to some selected links makes sense. As mentioned above, traffic routes are determined by link weights that are configured in advance. Therefore, choosing an appropriate set of link weights may reduce the number of links that actually need to be duplicated in order to keep a manageable congestion under any single-link failure scenario. Moreover, PSO also determines the link failure which creates the worst congestion after failure. Since by duplicating this link we can assume it no longer fails, PSO can be used to find the smallest number of links to protect so as to guarantee a manageable congestion under any single-link failure.
The LD model considers multiple protection scenarios before optimizing link weights for the reduction of the overall number of protected links, under the constraint of keeping the congestion below the manageable threshold. Simulation results show the LD model delivers a link weight set that requires few link protections to keep the manageable congestion under any single-link failure scenario, at the cost of a computation time on the order of L times that of PSO, where L represents the number of links in the network. Since the LD model considers additional resources, a fair comparison with the PA model would require considering additional capacity in the PA model as well. In the third part of this thesis we incorporate additional capacity into the PA model. For the PA model we introduce a mathematical formulation that aims to determine the minimal additional capacity to provide in order to maintain the manageable congestion under any single-link failure scenario. We then compare the LD model to the PA model that incorporates additional capacity features. Evaluation results show that the performance difference between the LD model and the PA model in terms of the required additional capacity depends on the network characteristics. The requirements of latency and continuity for traffic, and geographical restrictions of services, should be taken into consideration when deciding which model to use.
The University of Electro-Communications, 201
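The evaluation step common to PSO and its variants, computing the congestion with no failure and under every single-link failure for a fixed weight set, can be sketched as follows; the weight search itself is omitted, and the topology, weights, capacities, and demands are invented for illustration:

```python
import heapq

# For a fixed link-weight set, compute congestion (highest link
# utilization) with no failure and under each single-link failure,
# and report the worst-case failure. Invented 4-node topology.

NODES = range(4)
CAPACITY = {(0, 1): 2.0, (1, 2): 2.0, (2, 3): 2.0, (3, 0): 2.0, (0, 2): 2.0}
WEIGHTS = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1, (0, 2): 1}
DEMANDS = {(0, 2): 1.0, (1, 3): 1.0}

def shortest_path(weights, src, dst, failed=None):
    """Dijkstra over the surviving links; returns the links on the path."""
    adj = {n: [] for n in NODES}
    for (u, v), w in weights.items():
        if (u, v) == failed:
            continue  # skip the failed link
        adj[u].append((v, w, (u, v)))
        adj[v].append((u, w, (u, v)))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w, link in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, (u, link)
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        node, link = prev[node]
        path.append(link)
    return path

def congestion(weights, failed=None):
    """Highest utilization when demands use shortest surviving paths."""
    load = {l: 0.0 for l in CAPACITY}
    for (s, t), d in DEMANDS.items():
        for link in shortest_path(weights, s, t, failed):
            load[link] += d
    return max(load[l] / CAPACITY[l] for l in CAPACITY)

no_failure = congestion(WEIGHTS)
worst_link, worst = max(((l, congestion(WEIGHTS, failed=l)) for l in CAPACITY),
                        key=lambda p: p[1])
```

In this instance the chord (0, 2) is the worst-case failure: losing it forces both demands onto a shared link and doubles the congestion. A PSO-style search would wrap this evaluator in a loop over candidate weight sets, and GPSO would additionally constrain the no-failure penalty.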
Towards Robust Traffic Engineering in IP Networks
To deliver a reliable communication service it is essential for
the network operator to manage how traffic flows in the network.
The paths taken by the traffic are controlled by the routing function.
Traditional ways of tuning routing in IP networks are designed
to be simple to manage and are not designed to adapt to the
traffic situation in the network. This can lead to congestion in
parts of the network while other parts of the network are
far from fully utilized. In this thesis we explore issues related
to optimization of the routing function to balance load in the network.
We investigate methods for efficient derivation of the
traffic situation using link count measurements. The advantage
of using link counts is that they are easily obtained and yield
a very limited amount of data. We evaluate and show that estimation
based on link counts gives the operator a fast and accurate description
of the traffic demands. For the evaluation we have access to a unique data
set of complete traffic demands from an operational
IP backbone.
Furthermore, we evaluate performance of search heuristics to
set weights in link-state routing protocols. For the evaluation
we have access to complete traffic data from a Tier-1 IP network.
Our findings confirm previous studies that use partial traffic data or
synthetic traffic data. We find that using estimated rather than exact
traffic demands in the optimization has little impact on the performance of
the load balancing.
Finally, we devise an algorithm that finds a routing setting that is
robust to shifts in traffic patterns due to changes in the
interdomain routing. A set of worst case scenarios caused by the interdomain routing changes
is identified and used to solve a robust routing problem. The evaluation
indicates that performance of the robust routing is close to optimal for
a wide variety of traffic scenarios.
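A minimal sketch of this robust routing formulation, under an invented two-path topology and invented scenario set (the thesis solves a much richer problem over real topologies): given a finite set of worst-case demand scenarios, pick the static routing (here a single split ratio) minimizing the maximum link utilization across all scenarios.

```python
# Robust routing over a finite scenario set, as a toy: one demand is
# pinned to path 1 and a second, shiftable demand is split by ratio s.
# Each scenario gives the two demand volumes; we minimize the worst
# utilization over all scenarios. All numbers are invented.

CAPACITIES = (20.0, 10.0)              # two parallel paths
SCENARIOS = [(8.0, 2.0), (2.0, 8.0)]   # (pinned demand, shiftable demand)

def scenario_utilization(s, scenario):
    """Max path utilization in one scenario for split ratio s."""
    pinned, shiftable = scenario
    u1 = (pinned + s * shiftable) / CAPACITIES[0]
    u2 = (1 - s) * shiftable / CAPACITIES[1]
    return max(u1, u2)

def robust_split(steps=1000):
    """Grid search for the split minimizing worst-case utilization."""
    return min((max(scenario_utilization(s / steps, sc) for sc in SCENARIOS),
                s / steps) for s in range(steps + 1))

util, split = robust_split()
```

The robust split balances the binding constraints of the two scenarios rather than optimizing for either one alone, which is exactly the sense in which its performance stays close to optimal across the whole scenario set.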
The main contribution of this thesis is that we demonstrate that it is
possible to estimate the traffic matrix with good accuracy and to develop
methods that optimize the routing settings to give strong and robust network
performance. Only minor changes might be necessary in order to implement our
algorithms in existing networks.