
    On Resource Pooling and Separation for LRU Caching

    Caching systems based on the Least Recently Used (LRU) principle have become ubiquitous. A fundamental question for these systems is whether the cache space should be pooled or divided to serve multiple flows of data item requests so as to minimize the miss probabilities. In this paper, we show that there is no simple yes-or-no answer: the optimal choice depends on complex combinations of critical factors, including request rates, the overlap of data items across different request flows, and data item popularities and sizes. Specifically, we characterize the asymptotic miss probabilities of multiple competing request flows under resource pooling and separation for LRU caching when the cache size is large. Analytically, we show that it is asymptotically optimal to serve multiple flows jointly if their data item sizes and popularity distributions are similar and their arrival rates do not differ significantly; the self-organizing property of LRU caching automatically optimizes the resource allocation among them asymptotically. Otherwise, separating the flows can be better, e.g., when data item sizes vary significantly. We also quantify critical points beyond which resource pooling outperforms separation for each of the flows once the overlapped data items exceed certain levels. Technically, we generalize existing results on the asymptotic miss probability of LRU caching to a broad class of heavy-tailed distributions and extend them to multiple competing flows with varying data item sizes, which also validates the Che approximation under certain conditions. These results provide new insights into improving the performance of caching systems.
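
    The pooling-versus-separation tradeoff described in this abstract is easy to probe empirically. Below is a minimal simulation sketch in Python, not the paper's analytical model: two request flows with Zipf-distributed popularities, assumed equal arrival rates, and disjoint catalogs are served either by one pooled LRU cache or by two separate caches of half the size, and the empirical miss probabilities are compared. All parameters (catalog sizes, Zipf exponents, cache size) are illustrative assumptions.

    import random
    from itertools import accumulate
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()
            self.hits = 0
            self.requests = 0

        def request(self, key):
            self.requests += 1
            if key in self.store:
                self.store.move_to_end(key)          # refresh recency on a hit
                self.hits += 1
            else:
                self.store[key] = True               # insert on a miss
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)   # evict least recently used

        def miss_prob(self):
            return 1.0 - self.hits / self.requests

    def zipf_cum_weights(n, alpha):
        # Truncated Zipf popularity over items 0..n-1, as cumulative weights.
        return list(accumulate(1.0 / (i + 1) ** alpha for i in range(n)))

    def simulate(num_requests=200_000, cache_size=200, seed=1):
        rng = random.Random(seed)
        items = list(range(5_000))
        cum0 = zipf_cum_weights(5_000, alpha=0.8)  # flow 0: mildly skewed
        cum1 = zipf_cum_weights(5_000, alpha=1.2)  # flow 1: more skewed

        pooled = LRUCache(2 * cache_size)
        sep0, sep1 = LRUCache(cache_size), LRUCache(cache_size)

        for _ in range(num_requests):
            if rng.random() < 0.5:                 # equal arrival rates (assumption)
                key = ("f0", rng.choices(items, cum_weights=cum0)[0])
                pooled.request(key); sep0.request(key)
            else:
                key = ("f1", rng.choices(items, cum_weights=cum1)[0])
                pooled.request(key); sep1.request(key)

        print(f"pooled miss prob:  {pooled.miss_prob():.3f}")
        print(f"separate, flow 0:  {sep0.miss_prob():.3f}")
        print(f"separate, flow 1:  {sep1.miss_prob():.3f}")

    if __name__ == "__main__":
        simulate()

    Varying the Zipf exponents or the arrival-rate split in this toy setup shows qualitatively how dissimilar popularity distributions and rates shift the balance between pooling and separation.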

    Revisiting Resource Pooling: The Case for In-Network Resource Sharing

    We question the widely adopted view of in-network caches as temporary storage for the most popular content in Information-Centric Networks (ICN). Instead, we propose that in-network storage be used as a place of temporary custody for incoming content, in a store-and-forward manner. Given this functionality of in-network storage, senders push content into the network in an open-loop manner to take advantage of underutilised links. When content hits a bottleneck link, it is re-routed through alternative uncongested paths. If no alternative path exists, incoming content is temporarily stored in in-network caches, while the system enters a closed-loop, back-pressure mode of operation to avoid congestive collapse. Our proposal follows in spirit the resource pooling principle, which, however, has so far been restricted to end-to-end resources and paths. We extend this principle to also take advantage of in-network resources, both in terms of the multiplicity of available sub-paths (as opposed to multihomed users only) and in terms of in-network cache space. We call the proposed principle the In-Network Resource Pooling Principle (INRPP). Under INRPP, congestion, or increased contention over a link, is dealt with locally in a hop-by-hop manner rather than end-to-end. INRPP utilises resources throughout the network more efficiently and opens up new directions for research in the multipath routing and congestion control areas.
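
    The hop-by-hop decision sequence the abstract describes (forward open-loop, detour, store, then back-pressure) can be sketched as follows. This is a hedged illustration in Python, not the INRPP specification: the node and link abstractions, capacities, and method names are all assumptions made for the example.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Link:
        capacity: int        # items the link can carry this round
        in_flight: int = 0

        def has_headroom(self):
            return self.in_flight < self.capacity

    @dataclass
    class Node:
        name: str
        primary: Link
        alternatives: list = field(default_factory=list)
        cache_size: int = 8
        cache: deque = field(default_factory=deque)
        backpressure: bool = False   # closed-loop mode flag

        def handle(self, item):
            # 1. Open-loop: push over the primary path while it has headroom.
            if self.primary.has_headroom():
                self.primary.in_flight += 1
                return ("forward-primary", item)
            # 2. Detour: re-route through an uncongested alternative sub-path.
            for alt in self.alternatives:
                if alt.has_headroom():
                    alt.in_flight += 1
                    return ("forward-alternative", item)
            # 3. Temporary custody: store-and-forward in the local cache.
            if len(self.cache) < self.cache_size:
                self.cache.append(item)
                return ("store", item)
            # 4. No resources left: signal back-pressure upstream (closed loop).
            self.backpressure = True
            return ("backpressure", item)

    node = Node("r1", primary=Link(capacity=2),
                alternatives=[Link(capacity=1)], cache_size=2)
    for pkt in range(6):
        print(node.handle(pkt))

    With these toy capacities the six packets exercise all four branches in order: primary forwarding, detour, temporary storage, and finally the back-pressure signal.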

    Efficiency evaluation for pooling resources in health care

    Hospitals traditionally segregate resources into centralized functional departments such as diagnostic departments, ambulatory care centers, and nursing wards. In recent years this organizational model has been challenged by the idea that higher quality of care and efficiency in service delivery can be achieved when services are organized around patient groups. Examples include specialized clinics for breast cancer patients and clinical pathways for diabetes patients. Hospitals are struggling with the question of whether to become more centralized to achieve economies of scale or more decentralized to achieve economies of focus. In this paper we examine service and patient group characteristics to study the conditions under which a centralized model is more efficient and, conversely, those under which a decentralized model is more efficient. This relationship is examined analytically with a queueing model to determine the most influential factors, and then with simulation to fine-tune the results. The tradeoffs between economies of scale and economies of focus measured by these models are used to derive general management guidelines.
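
    The scale-versus-focus tradeoff can be illustrated with standard queueing formulas. The sketch below, in Python, compares mean queueing delay for one pooled M/M/2 department against two dedicated M/M/1 queues via the Erlang C formula; the arrival and service rates and the assumed 30% "focus" speedup are illustrative numbers, not the paper's calibrated parameters.

    from math import factorial

    def erlang_c_wait(lam, mu, c):
        """Mean wait in queue Wq for an M/M/c system (requires lam < c*mu)."""
        a = lam / mu                      # offered load
        rho = a / c                       # server utilisation
        assert rho < 1, "unstable queue"
        psum = sum(a**k / factorial(k) for k in range(c))
        p_wait = (a**c / factorial(c)) / ((1 - rho) * psum + a**c / factorial(c))
        return p_wait / (c * mu - lam)    # Erlang C / spare service capacity

    lam, mu = 4.0, 5.0                    # per patient group (assumed)

    # Economies of scale: one pooled department (M/M/2) vs a dedicated M/M/1.
    print("pooled Wq:           ", erlang_c_wait(2 * lam, mu, c=2))
    print("dedicated Wq:        ", erlang_c_wait(lam, mu, c=1))

    # Economies of focus: suppose specialisation speeds service by 30%.
    print("focused dedicated Wq:", erlang_c_wait(lam, 1.3 * mu, c=1))

    With these numbers pooling cuts the wait roughly in half relative to a plain dedicated queue, yet a modest focus speedup makes the dedicated queue faster still, which is exactly the tension between economies of scale and economies of focus that the models in the paper quantify.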
