Fractional diffusion emulates a human mobility network during a simulated disease outbreak
From footpaths to flight routes, human mobility networks facilitate the
spread of communicable diseases. Control and elimination efforts depend on
characterizing these networks in terms of connections and flux rates of
individuals between contact nodes. In some cases, transport can be
parameterized with gravity-type models or approximated by a diffusive random
walk. As an alternative, we have isolated intranational commercial air traffic
as a case study for the utility of non-diffusive, heavy-tailed transport
models. We implemented new stochastic simulations of a prototypical
influenza-like infection, focusing on the dense, highly-connected United States
air travel network. We show that mobility on this network can be described
mainly by a power law, in agreement with previous studies. Remarkably, we find
that the global evolution of an outbreak on this network is accurately
reproduced by a two-parameter space-fractional diffusion equation, such that
those parameters are determined by the air travel network.
Comment: 26 pages, 4 figures
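The contrast drawn above between diffusive and heavy-tailed transport can be sketched with a toy random walk. Everything below (step distributions, parameters) is illustrative and is not the authors' simulation: a walk with power-law step lengths spreads far faster than a Gaussian one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 2_000, 500

# Diffusive benchmark: Gaussian step lengths (finite variance).
gauss = rng.normal(0.0, 1.0, size=(n_walkers, n_steps)).sum(axis=1)

# Heavy-tailed alternative: power-law step lengths with random sign
# (tail ~ x^-alpha; infinite variance for alpha < 2).
alpha = 1.5
lengths = rng.pareto(alpha, size=(n_walkers, n_steps)) + 1.0
signs = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
levy = (lengths * signs).sum(axis=1)

# Heavy-tailed jumps spread walkers far faster than diffusion.
print("Gaussian walk, 99th pct displacement:", np.percentile(np.abs(gauss), 99))
print("Power-law walk, 99th pct displacement:", np.percentile(np.abs(levy), 99))
```

The gap between the two displacement quantiles is the qualitative signature that motivates non-diffusive, space-fractional models.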
On the accuracy of phase-type approximations of heavy-tailed risk models
Numerical evaluation of ruin probabilities in the classical risk model is an
important problem. If claim sizes are heavy-tailed, then such evaluations are
challenging. To overcome this, an attractive way is to approximate the claim
sizes with a phase-type distribution. What is not clear though is how many
phases are enough in order to achieve a specific accuracy in the approximation
of the ruin probability. The goals of this paper are to investigate the number
of phases required so that we can achieve a pre-specified accuracy for the ruin
probability and to provide error bounds. Also, in the special case of a
completely monotone claim size distribution we develop an algorithm to estimate
the ruin probability by approximating the excess claim size distribution with a
hyperexponential one. Finally, we compare our approximation with the heavy
traffic and heavy tail approximations.
Comment: 24 pages, 13 figures, 8 tables, 38 references
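The hyperexponential idea can be sketched for a Pareto-II claim distribution, whose completely monotone survival function is an exact exponential mixture over a Gamma density. The grid and quadrature weights below are an illustrative discretization, not the paper's algorithm or error bounds:

```python
import numpy as np
from math import gamma

a = 1.5  # Pareto-II tail index; survival (1 + x)^(-a) is completely monotone

# Completely monotone survival functions are exponential mixtures:
#   (1 + x)^(-a) = integral of exp(-x t) * t^(a-1) exp(-t) / Gamma(a) dt,
# so discretizing the Gamma(a, 1) mixing density over a rate grid yields
# a hyperexponential (phase-type) approximation of the claim distribution.
rates = np.logspace(-3, 2, 40)                     # exponential phase rates
dens = rates**(a - 1) * np.exp(-rates) / gamma(a)  # Gamma mixing density
w = dens * np.gradient(rates)                      # quadrature weights
w /= w.sum()                                       # normalized mixture weights

def pareto_sf(x):
    return (1.0 + x)**(-a)

def hyperexp_sf(x):
    return float(np.sum(w * np.exp(-x * rates)))

for x in (1.0, 5.0, 20.0):
    print(x, pareto_sf(x), hyperexp_sf(x))
```

With 40 phases the mixture tracks the Pareto tail closely over this range; the paper's question is precisely how the number of phases needed grows with the target accuracy.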
Low latency via redundancy
Low latency is critical for interactive networked applications. But while we
know how to scale systems to increase capacity, reducing latency --- especially
the tail of the latency distribution --- can be much more difficult. In this
paper, we argue that the use of redundancy is an effective way to convert extra
capacity into reduced latency. By initiating redundant operations across
diverse resources and using the first result which completes, redundancy
improves a system's latency even under exceptional conditions. We study the
tradeoff with added system utilization, characterizing the situations in which
replicating all tasks reduces mean latency. We then demonstrate empirically
that replicating all operations can result in significant mean and tail latency
reduction in real-world systems including DNS queries, database servers, and
packet forwarding within networks.
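The first-result-wins effect is easy to illustrate with simulated latencies. The lognormal latency model below is an assumption for illustration, not the paper's measurement data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Assumed latency model: lognormal per-copy latencies, i.i.d. across
# diverse resources (an illustration, not measurements from the paper).
def first_of(k):
    # Issue k redundant copies; the first (minimum) response wins.
    return rng.lognormal(mean=0.0, sigma=1.0, size=(n, k)).min(axis=1)

single = first_of(1)
dup = first_of(2)

for name, lat in (("1 copy  ", single), ("2 copies", dup)):
    print(name, "mean=%.3f  p99=%.3f" % (lat.mean(), np.percentile(lat, 99)))
```

The minimum of two draws improves both the mean and, more dramatically, the 99th percentile, which is why redundancy is especially effective against the tail of the latency distribution.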
A Tandem Fluid Network with L\'evy Input in Heavy Traffic
In this paper we study the stationary workload distribution of a fluid tandem
queue in heavy traffic. We consider different types of L\'evy input, covering
compound Poisson, $\alpha$-stable L\'evy motion (with $\alpha \in (1,2)$), and
Brownian motion. In our analysis we separately deal with L\'evy input processes
with increments that have finite and infinite variance. A distinguishing
feature of this paper is that we do not only consider the usual heavy-traffic
regime, in which the load at one of the nodes goes to unity, but also a regime
in which we simultaneously let the load of both servers tend to one, which, as
it turns out, leads to entirely different heavy-traffic asymptotics. Numerical
experiments indicate that under specific conditions the resulting simultaneous
heavy-traffic approximation significantly outperforms the usual heavy-traffic
approximation.
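A minimal discrete-time sketch of a tandem fluid queue shows the bottleneck workload growing as the load tends to one. The dynamics and the exponential per-slot input below are illustrative assumptions, not the paper's Lévy-input analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_bottleneck_workload(load, c1=1.2, c2=1.0, n=200_000):
    # Discrete-time sketch of a fluid tandem: node 1 drains at rate c1
    # into node 2, which drains at rate c2 < c1 (node 2 is the bottleneck).
    # Per-slot input is exponential with mean load * c2 (an assumption,
    # standing in for the paper's Levy input processes).
    inflow = rng.exponential(load * c2, size=n)
    w1 = w2 = 0.0
    total = 0.0
    for x in inflow:
        out1 = min(w1 + x, c1)         # fluid leaving node 1 this slot
        w1 = max(w1 + x - c1, 0.0)     # Lindley recursion at node 1
        w2 = max(w2 + out1 - c2, 0.0)  # Lindley recursion at node 2
        total += w2
    return total / n

m80 = mean_bottleneck_workload(0.80)
m95 = mean_bottleneck_workload(0.95)
print("mean node-2 workload at load 0.80:", m80)
print("mean node-2 workload at load 0.95:", m95)
```

Heavy-traffic analysis studies exactly this blow-up as the load approaches one, including the regime in which both nodes are pushed to saturation simultaneously.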
Catalog Dynamics: Impact of Content Publishing and Perishing on the Performance of a LRU Cache
The Internet heavily relies on Content Distribution Networks and transparent
caches to cope with the ever-increasing traffic demand of users. Content,
however, is essentially volatile: once published at a given time, its
popularity vanishes over time. All requests for a given document are then
concentrated between the publishing time and an effective perishing time.
In this paper, we propose a new model for the arrival of content requests,
which takes into account the dynamical nature of the content catalog. Based on
two large traffic traces collected on the Orange network, we use the
semi-experimental method and determine invariants of the content request
process. This allows us to define a simple mathematical model for content
requests; by extending the so-called "Che approximation", we then compute the
performance of an LRU cache fed with such a request process, expressed by its
hit ratio. We numerically validate the good accuracy of our model by comparison
to trace-based simulation.
Comment: 13 pages, 9 figures. Full version of the article submitted to the ITC
2014 conference. Small corrections in the appendix from the previous version.
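The classical Che approximation that the paper builds on can be sketched for a static Zipf catalog; the paper's contribution is extending it to a dynamic catalog, and the parameters below are illustrative:

```python
import numpy as np

# Static catalog of N docs with Zipf(s) popularity; cache holds C docs.
# (Illustrative parameters; the paper treats a *dynamic* catalog.)
N, C, s = 100_000, 1_000, 0.8
q = 1.0 / np.arange(1, N + 1)**s
q /= q.sum()                              # per-request popularity

def filled(T):
    # Expected number of distinct docs requested within a window T
    # under independent Poisson request streams with rates q.
    return np.sum(1.0 - np.exp(-q * T))

# Che approximation: the characteristic time T_c solves filled(T_c) = C.
lo, hi = 0.0, 1e9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if filled(mid) < C:
        lo = mid
    else:
        hi = mid
T_c = 0.5 * (lo + hi)

# A request for doc i hits with probability ~ 1 - exp(-q_i * T_c).
hit_ratio = float(np.sum(q * (1.0 - np.exp(-q * T_c))))
print("characteristic time:", T_c, "hit ratio:", round(hit_ratio, 4))
```

The single characteristic time T_c decouples the documents, which is what makes the hit-ratio computation tractable even when the request process is made more realistic.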