Routing and Staffing when Servers are Strategic
Traditionally, research focusing on the design of routing and staffing
policies for service systems has modeled servers as having fixed (possibly
heterogeneous) service rates. However, service systems are generally staffed by
people. Furthermore, people respond to workload incentives; that is, how hard a
person works can depend both on how much work there is, and how the work is
divided among the people responsible for it. In a service system, the routing
and staffing policies control such workload incentives; and so the rate at which
servers work will be impacted by the system's routing and staffing policies. This
observation has consequences when modeling service system performance, and our
objective is to investigate those consequences.
We do this in the context of the M/M/N queue, which is the canonical model
for large service systems. First, we present a model for "strategic" servers
that choose their service rate in order to maximize a trade-off between an
"effort cost", which captures the idea that servers exert more effort when
working at a faster rate, and a "value of idleness", which assumes that servers
value having idle time. Next, we characterize the symmetric Nash equilibrium
service rate under any routing policy that routes based on the server idle
time. We find that the system must operate in a quality-driven regime, in which
servers have idle time, in order for an equilibrium to exist, which implies
that the staffing must have a first-order term that strictly exceeds that of
the common square-root staffing policy. Then, within the class of policies that
admit an equilibrium, we (asymptotically) solve the problem of minimizing the
total cost, when there are linear staffing costs and linear waiting costs.
Finally, we end by exploring the question of whether routing policies that are
based on the service rate, instead of the server idle time, can improve system
performance.
Comment: First submitted for journal publication in 2014; accepted for
publication in Operations Research in 2016. Presented in select conferences
throughout 201
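The staffing regimes contrasted in the abstract above can be sketched numerically. The functions and parameter values below (the quality-of-service parameter `beta`, the slack parameter `delta`, and the example arrival and service rates) are illustrative assumptions, not taken from the paper:

```python
import math

def square_root_staffing(lam, mu, beta=1.0):
    """Common square-root staffing: N = R + beta*sqrt(R),
    where R = lam/mu is the offered load and beta is a
    (hypothetical) quality-of-service parameter."""
    R = lam / mu
    return math.ceil(R + beta * math.sqrt(R))

def quality_driven_staffing(lam, mu, delta=0.2):
    """Staffing whose first-order term strictly exceeds the offered
    load, e.g. N = (1 + delta)*R, so that servers retain idle time;
    the abstract argues an equilibrium requires staffing of this
    order rather than square-root staffing."""
    R = lam / mu
    return math.ceil((1 + delta) * R)

lam, mu = 100.0, 1.0  # example arrival rate and service rate
print(square_root_staffing(lam, mu))     # R + sqrt(R) servers
print(quality_driven_staffing(lam, mu))  # (1 + delta) * R servers
```

For the example load R = 100, square-root staffing adds only sqrt(100) = 10 extra servers, while the quality-driven rule adds a constant fraction (here 20) of the load itself, which is what keeps per-server idle time bounded away from zero as the system grows.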
Addressing the Challenges in Federating Edge Resources
This book chapter considers how Edge deployments can be brought to bear in a
global context by federating them across multiple geographic regions to create
a global Edge-based fabric that decentralizes data center computation. This is
currently impractical, not only because of technical challenges, but is also
shrouded by social, legal and geopolitical issues. In this chapter, we discuss
two key challenges - networking and management in federating Edge deployments.
Additionally, we consider resource and modeling challenges that will need to be
addressed for a federated Edge.
Comment: Book chapter accepted to the Fog and Edge Computing: Principles and
Paradigms; Editors Buyya, Sriram
Large-scale Join-Idle-Queue system with general service times
A parallel server system with $n$ identical servers is considered. The
service time distribution has a finite mean $1/\mu$, but otherwise is
arbitrary. Arriving customers are routed to one of the servers immediately
upon arrival. The Join-Idle-Queue routing algorithm is studied, under which an
arriving customer is sent to an idle server, if such is available, and to a
randomly uniformly chosen server, otherwise. We consider the asymptotic regime
where $n \to \infty$ and the customer input flow rate is $\lambda n$. Under the
condition $\lambda/\mu < 1/2$, we prove that, as $n \to \infty$, the sequence of
(appropriately scaled) stationary distributions concentrates at the natural
equilibrium point, with the fraction of occupied servers being constant equal
to $\lambda/\mu$. In particular, this implies that the steady-state probability
of an arriving customer waiting for service vanishes.
Comment: Revision. 11 pages
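The Join-Idle-Queue routing rule described above can be sketched as a small simulation. This is a minimal illustration, not the paper's model: it uses exponential service times for simplicity (the paper allows a general distribution with finite mean), and the parameter values are arbitrary assumptions:

```python
import random

def simulate_jiq(n, lam_per_server, mean_service, num_arrivals, seed=0):
    """Join-Idle-Queue routing sketch: each arrival is sent to an idle
    server if one exists, otherwise to a uniformly random server, where
    it queues FIFO. Returns the fraction of arrivals that had to wait."""
    rng = random.Random(seed)
    busy_until = [0.0] * n           # time each server next becomes free
    t, waited = 0.0, 0
    total_rate = lam_per_server * n  # aggregate arrival rate lambda * n
    for _ in range(num_arrivals):
        t += rng.expovariate(total_rate)       # Poisson arrivals
        idle = [i for i in range(n) if busy_until[i] <= t]
        if idle:
            server, start = rng.choice(idle), t
        else:
            server = rng.randrange(n)          # uniform random server
            start = busy_until[server]         # customer queues and waits
            waited += 1
        busy_until[server] = start + rng.expovariate(1.0 / mean_service)
    return waited / num_arrivals

# Load per server 0.4 < 1/2, so the waiting fraction should be small
# for large n, consistent with the vanishing-wait result above.
print(simulate_jiq(n=200, lam_per_server=0.4, mean_service=1.0,
                   num_arrivals=20000))
```

With per-server load below 1/2, nearly every arrival finds an idle server, so the estimated waiting fraction stays close to zero, which is the behavior the abstract's asymptotic result formalizes.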