On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no simple yes-or-no answer: the right choice depends on complex
combinations of critical factors, including, e.g., request rates, the overlap
of data items across different request flows, data item popularities, and data
item sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights on
improving the performance of caching systems.
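As a rough, self-contained illustration of the pooling-versus-separation question (a toy sketch, not the paper's asymptotic analysis), the simulation below compares one pooled LRU cache against two separated half-size caches for two synthetic Zipf-distributed request flows. All parameters (catalog sizes, skews, cache size) are arbitrary choices for the sketch.

```python
import random
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def access(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)       # mark as most recently used
            self.hits += 1
        else:                                 # miss: insert, evicting if full
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)
            self.store[key] = True

    def miss_probability(self):
        return 1.0 - self.hits / self.requests

def zipf_trace(n_items, alpha, length, seed):
    """Synthetic request flow with Zipf(alpha)-like item popularity."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]
    return rng.choices(range(n_items), weights=weights, k=length)

# Two flows over disjoint catalogs with different popularity skews.
flow_a = zipf_trace(500, 1.2, 20_000, seed=1)
flow_b = [500 + k for k in zipf_trace(500, 0.6, 20_000, seed=2)]

# Pooled: one cache of size 200 serves both flows, interleaved.
pooled = LRUCache(200)
for a, b in zip(flow_a, flow_b):
    pooled.access(a)
    pooled.access(b)

# Separated: each flow gets half the cache space.
sep_a, sep_b = LRUCache(100), LRUCache(100)
for a in flow_a:
    sep_a.access(a)
for b in flow_b:
    sep_b.access(b)

sep_miss = 1.0 - (sep_a.hits + sep_b.hits) / (sep_a.requests + sep_b.requests)
print(f"pooled miss: {pooled.miss_probability():.3f}  separated miss: {sep_miss:.3f}")
```

Varying the skews and rates of the two flows shows how the winner flips between pooling and separation, which is the trade-off the paper characterizes asymptotically.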
Understanding (Un)Written Contracts of NVMe ZNS Devices with zns-tools
Operational and performance characteristics of flash SSDs have long been
associated with a set of Unwritten Contracts due to their hidden, complex
internals and lack of control from the host software stack. These unwritten
contracts govern how data should be stored, accessed, and garbage collected.
The emergence of Zoned Namespace (ZNS) flash devices with their open and
standardized interface allows us to write these unwritten contracts for the
storage stack. However, even with a standardized storage-host interface,
quantifying and reasoning about such contracts remains a challenge due to the
lack of appropriate end-to-end operational data collection tools. In this
paper, we propose zns.tools, an open-source framework for end-to-end event and
metadata collection, analysis, and visualization for ZNS SSD contract
analysis. We showcase how zns.tools can be used to understand how the
combination of RocksDB with the F2FS file system interacts with the underlying
storage. Our tools are openly available at
\url{https://github.com/stonet-research/zns-tools}
Global attraction of ODE-based mean field models with hyperexponential job sizes
Mean field modeling is a popular approach to assess the performance of large
scale computer systems. The evolution of many mean field models is
characterized by a set of ordinary differential equations that have a unique
fixed point. In order to prove that this unique fixed point corresponds to the
limit of the stationary measures of the finite systems, the unique fixed point
must be a global attractor. While global attraction was established for various
systems in case of exponential job sizes, it is often unclear whether these
proof techniques can be generalized to non-exponential job sizes. In this paper
we show how simple monotonicity arguments can be used to prove global
attraction for a broad class of ordinary differential equations that capture
the evolution of mean field models with hyperexponential job sizes. This class
includes both existing as well as previously unstudied load balancing schemes
and can be used for systems with either finite or infinite buffers. The main
novelty of the approach exists in using a Coxian representation for the
hyperexponential job sizes and a partial order that is stronger than the
componentwise partial order used in the exponential case.
Comment: This paper was accepted at ACM Sigmetrics 201
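The hyperexponential job sizes the abstract refers to are probabilistic mixtures of exponentials. A minimal sketch of sampling such a distribution and checking its empirical mean against the closed form (the two-phase parameters here are arbitrary, chosen only to give high variability):

```python
import random

def sample_hyperexponential(probs, rates, rng):
    """Draw one job size: pick phase i with probability probs[i],
    then sample Exp(rates[i])."""
    phase = rng.choices(range(len(probs)), weights=probs)[0]
    return rng.expovariate(rates[phase])

# Two-phase hyperexponential: mostly short jobs, occasionally very long ones.
probs, rates = [0.9, 0.1], [2.0, 0.1]
rng = random.Random(0)

samples = [sample_hyperexponential(probs, rates, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)

# Exact mean: sum_i p_i / rate_i = 0.9/2.0 + 0.1/0.1 = 1.45
exact_mean = sum(p / r for p, r in zip(probs, rates))
print(f"empirical mean: {mean:.3f}, exact mean: {exact_mean:.3f}")
```

Such mixtures can capture the high job-size variability seen in practice while keeping the ODE analysis tractable; the paper's proof works with an equivalent Coxian (sequential-phase) representation of the same distribution.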
A New Stable Peer-to-Peer Protocol with Non-persistent Peers
Recent studies have suggested that the stability of peer-to-peer networks may
rely on persistent peers, who dwell on the network after they obtain the entire
file. In the absence of such peers, one piece becomes extremely rare in the
network, which leads to instability. Technological developments, however, are
poised to reduce the incidence of persistent peers, giving rise to a need for a
protocol that guarantees stability with non-persistent peers. We propose a
novel peer-to-peer protocol, the group suppression protocol, to ensure the
stability of peer-to-peer networks under the scenario that all the peers adopt
non-persistent behavior. Using a suitable Lyapunov potential function, the
group suppression protocol is proven to be stable when the file is broken into
two pieces, and detailed experiments demonstrate the stability of the protocol
for an arbitrary number of pieces. We define and simulate a decentralized version
of this protocol for practical applications. Straightforward incorporation of
the group suppression protocol into BitTorrent while retaining most of
BitTorrent's core mechanisms is also presented. Subsequent simulations show
that under certain assumptions, BitTorrent with the official protocol cannot
escape from the missing piece syndrome, but BitTorrent with group suppression
does.
Comment: There are only a couple of minor changes in this version. The simulation tool is specified this time. Some repetitive figures are removed.