Combined Intra- and Inter-domain Traffic Engineering using Hot-Potato Aware Link Weights Optimization
A well-known approach to intradomain traffic engineering consists in finding
the set of link weights that minimizes a network-wide objective function for a
given intradomain traffic matrix. This approach is inadequate, however, because
it ignores its potential impact on interdomain routing. Indeed, the resulting set of
link weights may trigger BGP to change the BGP next hop for some destination
prefixes, to enforce hot-potato routing policies. In turn, this results in
changes in the intradomain traffic matrix that have not been anticipated by the
link weights optimizer, possibly leading to degraded network performance.
We propose a BGP-aware link weights optimization method that takes these
effects into account, and even turns them into an advantage. This method uses
the interdomain traffic matrix and other available BGP data, to extend the
intradomain topology with external virtual nodes and links, on which all the
well-tuned heuristics of a classical link weights optimizer can be applied. A
key innovative asset of our method is its ability to also optimize the traffic
on the interdomain peering links. We show, using an operational network as a
case study, that our approach does so efficiently at almost no extra
computational cost.

Comment: 12 pages. Short version to be published in ACM SIGMETRICS 2008,
International Conference on Measurement and Modeling of Computer Systems,
June 2-6, 2008, Annapolis, Maryland, US.
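The idea of extending the intradomain topology with virtual external nodes can be sketched with a toy shortest-path computation. The topology, link weights, and egress routers below are hypothetical; a real deployment would feed the extended graph to a full link-weight optimizer rather than a single Dijkstra run:

```python
# Sketch: attach a virtual external node for a destination prefix to the
# intradomain graph, so IGP-cost computations "see" interdomain traffic.
# All node names, weights, and the tiny topology are made up for illustration.
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra over a weighted adjacency dict: graph[u] = {v: weight}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Intradomain topology: routers A, B, C with IGP link weights.
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1},
    "C": {"A": 5, "B": 1},
}

# Virtual external node for a prefix reachable via two egress (peering)
# routers, B and C. Hot-potato routing selects the egress with the lowest
# IGP cost, so changing link weights can shift traffic between peerings.
graph["PREFIX"] = {}
graph["B"]["PREFIX"] = 0   # peering link B -> external prefix
graph["C"]["PREFIX"] = 0   # peering link C -> external prefix

print(shortest_path_cost(graph, "A", "PREFIX"))  # egress chosen via B: cost 1
```

Raising the weight of link A-B above 5 would flip the chosen egress from B to C, which is exactly the hot-potato effect the optimizer must anticipate.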
On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no straightforward yes-or-no answer: the outcome depends on
complex combinations of critical factors, including, e.g., request rates,
overlapping data items across different request flows, and data item
popularities and sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights on
improving the performance of caching systems.
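A minimal simulation makes the pooling-versus-separation question concrete. The flows, item popularities, and cache sizes below are invented for illustration and do not reproduce the paper's asymptotic analysis:

```python
# Sketch: compare pooled vs. separated LRU caches by simulating miss ratios
# for two synthetic request flows with overlapping, Zipf-like item sets.
from collections import OrderedDict
import random

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, key):
        """Return True on a hit, False on a miss; evict the LRU item if full."""
        if key in self.store:
            self.store.move_to_end(key)
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        self.store[key] = True
        return False

def miss_ratio(cache_sizes, flows, pooled):
    if pooled:
        # A single shared cache object serves every flow.
        caches = [LRUCache(sum(cache_sizes))] * len(flows)
    else:
        caches = [LRUCache(c) for c in cache_sizes]
    misses = total = 0
    for reqs in zip(*flows):            # interleave the flows round-robin
        for cache, key in zip(caches, reqs):
            total += 1
            misses += not cache.access(key)
    return misses / total

rng = random.Random(0)
flow1 = [f"item{int(rng.paretovariate(1.2)) % 50}" for _ in range(5000)]
flow2 = [f"item{int(rng.paretovariate(1.2)) % 50}" for _ in range(5000)]

print(miss_ratio([20, 20], [flow1, flow2], pooled=True))
print(miss_ratio([20, 20], [flow1, flow2], pooled=False))
```

With heavily overlapping item sets, pooling tends to win because one shared copy serves both flows; making the flows disjoint or giving them very different rates shifts the balance, mirroring the trade-off the paper quantifies.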
Global attraction of ODE-based mean field models with hyperexponential job sizes
Mean field modeling is a popular approach to assess the performance of large
scale computer systems. The evolution of many mean field models is
characterized by a set of ordinary differential equations that have a unique
fixed point. In order to prove that this unique fixed point corresponds to the
limit of the stationary measures of the finite systems, the unique fixed point
must be a global attractor. While global attraction was established for various
systems in case of exponential job sizes, it is often unclear whether these
proof techniques can be generalized to non-exponential job sizes. In this paper
we show how simple monotonicity arguments can be used to prove global
attraction for a broad class of ordinary differential equations that capture
the evolution of mean field models with hyperexponential job sizes. This class
includes both existing as well as previously unstudied load balancing schemes
and can be used for systems with either finite or infinite buffers. The main
novelty of the approach lies in using a Coxian representation for the
hyperexponential job sizes and a partial order that is stronger than the
componentwise partial order used in the exponential case.

Comment: This paper was accepted at ACM Sigmetrics 201
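The global-attraction property can be illustrated on the classical power-of-two-choices mean-field ODE with exponential job sizes (the paper's contribution is precisely to extend such monotonicity arguments to hyperexponential job sizes via a Coxian representation). Trajectories started from an empty and from a heavily loaded system converge to the same fixed point s_k = λ^(2^k − 1):

```python
# Sketch: Euler integration of the power-of-two-choices mean-field ODE
#   ds_k/dt = λ(s_{k-1}^2 - s_k^2) - (s_k - s_{k+1}),  s_0 = 1,
# truncated at K queue lengths. This toy model only illustrates global
# attraction; it does not implement the paper's hyperexponential analysis.

def step(s, lam, dt):
    """One Euler step; s[k] = fraction of queues with length >= k, s[0] = 1."""
    K = len(s)
    ns = s[:]
    for k in range(1, K):
        nxt = s[k + 1] if k + 1 < K else 0.0   # s_K treated as 0 (truncation)
        ns[k] += dt * (lam * (s[k - 1] ** 2 - s[k] ** 2) - (s[k] - nxt))
    return ns

def integrate(s, lam=0.7, dt=0.01, steps=30000):
    for _ in range(steps):
        s = step(s, lam, dt)
    return s

K = 12
empty = [1.0] + [0.0] * (K - 1)   # start from an empty system
full  = [1.0] + [0.9] * (K - 1)   # start from a heavily loaded system
a, b = integrate(empty), integrate(full)
print(max(abs(x - y) for x, y in zip(a, b)))   # gap between the trajectories
```

Both runs end at the known fixed point (e.g. s_1 = λ = 0.7), which is what global attraction guarantees; the paper's monotonicity argument establishes this for a much broader class of such ODEs.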
Transfer Learning for Improving Model Predictions in Highly Configurable Software
Modern software systems are built to be used in dynamic environments using
configuration capabilities to adapt to changes and external uncertainties. In a
self-adaptation context, we are often interested in reasoning about the
performance of the systems under different configurations. Usually, we learn a
black-box model based on real measurements to predict the performance of the
system given a specific configuration. However, as modern systems become more
complex, there are many configuration parameters that may interact and we end
up learning an exponentially large configuration space. Naturally, this does
not scale when relying on real measurements in the actual changing environment.
We propose a different solution: Instead of taking the measurements from the
real system, we learn the model using samples from other sources, such as
simulators that approximate performance of the real system at low cost. We
define a cost model that transforms the traditional view of model learning into
a multi-objective problem that takes into account not only model accuracy but
also measurement effort. We evaluate our cost-aware transfer learning
solution using real-world configurable software including (i) a robotic system,
(ii) 3 different stream processing applications, and (iii) a NoSQL database
system. The experimental results demonstrate that our approach can achieve (a)
a high prediction accuracy, as well as (b) a high model reliability.

Comment: To be published in the proceedings of the 12th International
Symposium on Software Engineering for Adaptive and Self-Managing Systems
(SEAMS'17)
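The accuracy-versus-effort trade-off behind the cost model can be sketched with a toy regression problem. The "real system", "simulator", noise levels, and cost weights below are all hypothetical stand-ins, not the paper's actual benchmark systems:

```python
# Sketch: learn a performance model mostly from cheap, biased simulator
# samples, then correct it with a few expensive real measurements; the cost
# combines prediction error with a weighted measurement-effort term.
import random

def real_system(x):        # expensive to measure (hypothetical response)
    return 3.0 * x + 2.0 + random.gauss(0, 0.1)

def simulator(x):          # cheap, biased approximation of the real system
    return 3.0 * x + 0.5

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def total_cost(n_real, n_sim=50, effort_weight=0.05):
    """Held-out prediction error plus weighted real-measurement effort."""
    random.seed(0)
    # Learn the trend from cheap simulator samples...
    xs = [i / n_sim for i in range(n_sim)]
    a, b = fit_linear(xs, [simulator(x) for x in xs])
    # ...then correct the systematic offset with a few real measurements.
    if n_real:
        xr = [i / n_real for i in range(n_real)]
        b += sum(real_system(x) - (a * x + b) for x in xr) / n_real
    test = [i / 20 for i in range(20)]
    err = sum((real_system(x) - (a * x + b)) ** 2 for x in test) / 20
    return err + effort_weight * n_real   # accuracy + effort trade-off

# More real samples shrink the error term but raise the effort term.
for n in (0, 2, 8):
    print(n, round(total_cost(n), 3))
```

The minimum of this cost sits at a small number of real samples: the simulator supplies the trend almost for free, and a handful of real measurements fix its bias, which is the intuition behind the multi-objective formulation.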