We investigate a computer network consisting of two layers occurring in, for
example, application servers. The first layer captures the arrival of jobs
at a network of multi-server nodes, which we model as a many-server Jackson
network. At the second layer, the active servers at these nodes in turn act as
customers who are served by a common CPU. Our main result shows a separation of
time scales in heavy traffic: the main source of randomness occurs at the
(aggregate) CPU layer; the interactions between the different types of nodes at the
other layer are shown to converge to a fixed point on a faster time scale; this
also yields a state-space collapse property. Apart from these fundamental
insights, we also obtain an explicit approximation for the joint law of the
number of jobs in the system, which is provably accurate for heavily loaded
systems and performs numerically well for moderately loaded systems. The
results obtained for the model under consideration can be applied to
thread-pool dimensioning in application servers, while the technique seems
applicable to other layered systems too.
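
To make the layered structure concrete, the sketch below simulates one possible instantiation of such a two-layer system: jobs arrive at K multi-server nodes (the Jackson-network layer), and every active server receives an equal processor-sharing slice of a common CPU (the second layer), so that completion rates at the nodes are modulated by the total number of active servers. All parameters (lam, mu, N, C, the routing matrix P) and the processor-sharing discipline at the CPU are illustrative assumptions made for this sketch only; they are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (assumptions, not taken from the paper) ---
K = 2                           # number of multi-server nodes (first layer)
lam = np.array([4.0, 3.0])      # external Poisson arrival rates per node
mu = np.array([1.0, 1.5])       # service rate of one server under a full CPU share
N = np.array([8, 6])            # thread-pool size (servers) per node
C = 10.0                        # capacity of the common CPU (second layer)
P = np.array([[0.0, 0.3],       # Jackson routing: finish at i -> node j w.p. P[i, j],
              [0.2, 0.0]])      # leave the network with the remaining probability

def simulate(T=2_000.0):
    """Gillespie-style simulation of the two-layer dynamics sketched above."""
    x = np.zeros(K, dtype=int)   # jobs present at each node
    t, area = 0.0, np.zeros(K)   # area = time integral of x, for mean queue lengths
    while t < T:
        active = np.minimum(x, N)            # busy servers per node
        z = active.sum()                     # total number of "customers" at the CPU
        share = C / z if z > 0 else 0.0      # equal processor-sharing slice per server
        dep = active * mu * share            # effective completion rate per node
        rates = np.concatenate([lam, dep])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        area += x * min(dt, T - t)
        t += dt
        e = rng.choice(2 * K, p=rates / total)
        if e < K:                            # external arrival at node e
            x[e] += 1
        else:                                # completion at node i, then routing
            i = e - K
            x[i] -= 1
            j = np.searchsorted(np.cumsum(P[i]), rng.random(), side="right")
            if j < K:
                x[j] += 1                    # routed onward; otherwise the job leaves
    return area / T

print("simulated mean number of jobs per node:", simulate())

The CPU share C / z is the only coupling between the nodes in this sketch; it is this interaction that, according to the abstract, converges to a fixed point on a faster time scale in heavy traffic, leaving the aggregate CPU layer as the main source of randomness.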