On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no straightforward yes-or-no answer: it depends on complex
combinations of critical factors, including request rates, the overlap of data
items across different request flows, data item popularities, and data item
sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights into
improving the performance of caching systems.
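To make the pooling-versus-separation trade-off concrete, here is a minimal simulation sketch (not from the paper): two request flows with Zipf-like popularities over disjoint item catalogues either share one LRU cache or split the capacity in half, and the empirical miss ratios are compared. All names and parameter values (capacity 1000, Zipf exponents 0.9 and 1.4, equal arrival rates, unit-size items, no item overlap) are hypothetical.

```python
import random
from collections import OrderedDict
from itertools import accumulate

class LRUCache:
    """LRU cache holding unit-size items, up to `capacity` of them."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def request(self, key):
        """Return True on a hit; on a miss insert the item, evicting the LRU one."""
        if key in self.items:
            self.items.move_to_end(key)
            return True
        self.items[key] = None
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        return False

def zipf_sampler(n_items, alpha):
    """Sampler for a truncated Zipf(alpha) popularity distribution."""
    cum = list(accumulate(1.0 / (i + 1) ** alpha for i in range(n_items)))
    return lambda: random.choices(range(n_items), cum_weights=cum, k=1)[0]

def miss_ratio(caches, flows, n_requests=100_000):
    """flows: list of (cache index, item sampler, arrival-rate weight)."""
    rates = [f[2] for f in flows]
    misses = 0
    for _ in range(n_requests):
        i = random.choices(range(len(flows)), weights=rates, k=1)[0]
        cache_i, sample, _ = flows[i]
        key = (i, sample())            # assume disjoint item catalogues per flow
        misses += not caches[cache_i].request(key)
    return misses / n_requests

random.seed(0)
C = 1000                               # total cache capacity (hypothetical)
flow_a = zipf_sampler(5000, 0.9)       # flatter popularity
flow_b = zipf_sampler(5000, 1.4)       # more skewed popularity
# Pooled: both flows share one cache of capacity C.
pooled = miss_ratio([LRUCache(C)], [(0, flow_a, 1.0), (0, flow_b, 1.0)])
# Separated: each flow gets half of the capacity.
split = miss_ratio([LRUCache(C // 2), LRUCache(C // 2)],
                   [(0, flow_a, 1.0), (1, flow_b, 1.0)])
print(f"pooled miss ratio:    {pooled:.3f}")
print(f"separated miss ratio: {split:.3f}")
```

Which of the two numbers comes out lower depends on how the popularity skew, catalogue sizes, and arrival rates are chosen, which is exactly the sensitivity the paper quantifies asymptotically.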
ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads.
Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under temporal dependence, a server should quickly process bursts of requests that may carry large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation policy that selectively delays requests which contribute to the workload's temporal dependence. ASIdE implicitly approximates the shortest-job-first (SJF) scheduling policy, but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE derives good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is greatly increased compared to the first-come first-served (FCFS) scheduling policy and is highly competitive with SJF.
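As a rough illustration of the idea behind ASIdE (a simplified interpretation, not the published algorithm), the toy dispatcher below defers newly arriving jobs whenever the most recently completed job was larger than a running average of observed service times, exploiting temporal dependence to guess that the next job is likely large too. All class and parameter names here are hypothetical.

```python
from collections import deque

class AsideLikeDispatcher:
    """Toy dispatcher loosely inspired by ASIdE (a simplified interpretation,
    not the published algorithm). When the most recently completed job was
    larger than the running average, newly arriving jobs are deferred, since
    temporal dependence makes them likely large as well; deferred jobs are
    served only when no other work is waiting, roughly mimicking SJF without
    prior knowledge of service times."""

    def __init__(self, smoothing=0.05):
        self.main = deque()       # jobs believed to be small: serve first
        self.deferred = deque()   # jobs believed to be large: serve later
        self.smoothing = smoothing
        self.avg = None           # running mean of observed service times
        self.last = None          # service time of the most recent completion

    def on_arrival(self, job):
        likely_large = (self.avg is not None and self.last is not None
                        and self.last > self.avg)
        (self.deferred if likely_large else self.main).append(job)

    def on_completion(self, observed_service_time):
        # Statistics are updated only after a job finishes, so no prior
        # knowledge of service times is required (ASIdE's key premise).
        self.last = observed_service_time
        if self.avg is None:
            self.avg = observed_service_time
        else:
            self.avg += self.smoothing * (observed_service_time - self.avg)

    def next_job(self):
        if self.main:
            return self.main.popleft()
        if self.deferred:
            return self.deferred.popleft()
        return None
```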
Redundancy Scheduling with Locally Stable Compatibility Graphs
Redundancy scheduling is a popular concept to improve performance in
parallel-server systems. In the baseline scenario any job can be handled
equally well by any server, and is replicated to a fixed number of servers
selected uniformly at random. Quite often, however, there may be heterogeneity
in job characteristics or server capabilities, and jobs can only be replicated
to specific servers because of affinity relations or compatibility constraints.
In order to capture such situations, we consider a scenario where jobs of
various types are replicated to different subsets of servers as prescribed by a
general compatibility graph. We exploit a product-form stationary distribution
and weak local stability conditions to establish a state space collapse in
heavy traffic. In this limiting regime, the parallel-server system with
graph-based redundancy scheduling operates as a multi-class single-server
system, achieving full resource pooling and exhibiting strong insensitivity to
the underlying compatibility constraints.
Comment: 28 pages, 4 figures
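The following minimal sketch illustrates the baseline mechanism described above: each job is replicated to every server permitted by a compatibility graph, and the remaining replicas are cancelled once one of them completes. The graph, server names, and the cancel-on-completion variant are assumptions chosen for illustration, not the paper's heavy-traffic model.

```python
# Hypothetical compatibility graph: each job type may only be replicated to
# the listed servers (the affinity/compatibility constraints from the abstract).
COMPATIBILITY = {
    "type_A": ["s1", "s2"],
    "type_B": ["s2", "s3"],
    "type_C": ["s1", "s3", "s4"],
}

class RedundancyScheduler:
    """Minimal redundancy-scheduling sketch under assumed cancel-on-completion
    semantics: replicas go to every compatible server, and all surviving
    replicas are removed when any one of them finishes."""

    def __init__(self, compatibility):
        self.compatibility = compatibility
        self.queues = {s: [] for servers in compatibility.values() for s in servers}

    def dispatch(self, job_id, job_type):
        # Replicate the job to the queue of every compatible server.
        for server in self.compatibility[job_type]:
            self.queues[server].append(job_id)

    def complete(self, job_id):
        # Cancel-on-completion: once any replica finishes, remove the others.
        for queue in self.queues.values():
            if job_id in queue:
                queue.remove(job_id)

sched = RedundancyScheduler(COMPATIBILITY)
sched.dispatch("job1", "type_A")
sched.dispatch("job2", "type_C")
sched.complete("job1")   # replicas of job1 on s1 and s2 are both cleared
print(sched.queues)
```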
Transfer Learning for Improving Model Predictions in Highly Configurable Software
Modern software systems are built to be used in dynamic environments using
configuration capabilities to adapt to changes and external uncertainties. In a
self-adaptation context, we are often interested in reasoning about the
performance of the systems under different configurations. Usually, we learn a
black-box model based on real measurements to predict the performance of the
system given a specific configuration. However, as modern systems become more
complex, there are many configuration parameters that may interact, and the
configuration space to learn grows exponentially. Naturally, this does
not scale when relying on real measurements in the actual changing environment.
We propose a different solution: Instead of taking the measurements from the
real system, we learn the model using samples from other sources, such as
simulators that approximate performance of the real system at low cost. We
define a cost model that transforms the traditional view of model learning into
a multi-objective problem that takes into account not only model accuracy but
also measurement effort. We evaluate our cost-aware transfer learning
solution using real-world configurable software including (i) a robotic system,
(ii) 3 different stream processing applications, and (iii) a NoSQL database
system. The experimental results demonstrate that our approach can achieve (a)
a high prediction accuracy, as well as (b) a high model reliability.
Comment: To be published in the proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'17).
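A small sketch of the cost-aware transfer idea under stated assumptions: a response surface is learned from many cheap but biased simulator samples, and a correction is then fitted from only a few expensive real measurements. The synthetic functions, polynomial models, and three-point measurement budget below are hypothetical stand-ins for the configurable systems studied in the paper.

```python
import numpy as np

# Hypothetical setup: predict throughput of a configurable system from a single
# configuration parameter (e.g. a buffer size). The "simulator" is cheap but
# systematically biased; "real" measurements are accurate but expensive.
rng = np.random.default_rng(0)

def simulator(x):            # cheap, biased source of samples
    return 10 * np.log1p(x) + 5

def real_system(x):          # expensive ground truth (synthetic stand-in here)
    return 12 * np.log1p(x) + rng.normal(0, 0.3, np.shape(x))

# 1. Learn the response surface from many cheap simulator samples.
xs = np.linspace(1, 100, 200)
sim_model = np.polynomial.Polynomial.fit(xs, simulator(xs), deg=3)

# 2. Spend a small measurement budget on the real system and learn a
#    correction from simulator predictions to real observations.
x_real = np.array([5.0, 30.0, 80.0])            # only 3 expensive measurements
residual = real_system(x_real) - sim_model(x_real)
correction = np.polynomial.Polynomial.fit(x_real, residual, deg=1)

def transfer_predict(x):
    """Simulator-trained model plus a correction learned from real data."""
    return sim_model(x) + correction(x)

x_test = np.array([15.0, 50.0, 95.0])
print("transfer prediction:", np.round(transfer_predict(x_test), 2))
print("real response:      ", np.round(real_system(x_test), 2))
```

The trade-off the cost model captures is visible in the measurement budget: more real samples improve the correction but raise the measurement effort, while relying on the simulator alone keeps the cost low at the price of bias.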