16 research outputs found

    Stochastic Dynamic Cache Partitioning for Encrypted Content Delivery

    In-network caching is an appealing solution to cope with the increasing bandwidth demand of video, audio and data transfer over the Internet. Nonetheless, an increasing share of content delivery services adopt encryption through HTTPS, which is not compatible with traditional ISP-managed approaches like transparent and proxy caching. This raises the need for solutions involving both Internet Service Providers (ISPs) and Content Providers (CPs): by design, the solution should preserve business-critical CP information (e.g., content popularity, user preferences) on the one hand, while allowing for a deeper integration of caches in the ISP architecture (e.g., in 5G femto-cells) on the other hand. In this paper we address this issue by considering a content-oblivious, ISP-operated cache. The ISP allocates the cache storage to the various content providers so as to maximize the bandwidth savings provided by the cache: the main novelty lies in the fact that, to protect business-critical information, the ISP only needs to measure the aggregate miss rate of each CP and does not need to be aware of which objects are requested, as in classic caching. We propose a cache allocation algorithm based on a perturbed stochastic subgradient method, and prove that the algorithm converges to a neighbourhood of the allocation that maximizes the overall cache hit rate. We use extensive simulations to validate the algorithm and to assess its convergence rate under stationary and non-stationary content popularity. Our results (i) confirm the feasibility of content-oblivious caches and (ii) show that the proposed algorithm achieves a hit rate within 10% of the global optimum in our evaluation.
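    The allocation loop described above can be pictured with a short sketch: the ISP repeatedly perturbs each CP's share, observes only the resulting aggregate miss rates, and takes a projected subgradient step. The Python sketch below is a minimal illustration under assumed names (`measure_miss_rate` is a hypothetical callback standing in for the ISP's per-CP cache counters), not the paper's exact algorithm.

```python
import numpy as np

def project_to_simplex(x, total):
    """Euclidean projection of x onto {y >= 0, sum(y) = total}."""
    n = len(x)
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / np.arange(1, n + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(x - theta, 0.0)

def partition_cache(measure_miss_rate, n_cps, cache_size,
                    step=0.5, perturb=1.0, iters=200):
    """Split `cache_size` slots among `n_cps` content providers.

    `measure_miss_rate(alloc)` is assumed to return the aggregate miss
    rate of each CP under allocation `alloc` (length-n_cps array); in
    practice it would be read from cache counters over a measurement window.
    """
    alloc = np.full(n_cps, cache_size / n_cps)
    for _ in range(iters):
        base = measure_miss_rate(alloc)
        grad = np.zeros(n_cps)
        for i in range(n_cps):
            # One-sided finite difference: perturb CP i's share and observe
            # the change in its own miss rate. (For simplicity the perturbed
            # allocation may briefly exceed the budget by `perturb`.)
            trial = alloc.copy()
            trial[i] += perturb
            grad[i] = (measure_miss_rate(trial)[i] - base[i]) / perturb
        # Step against the miss-rate gradient, then project back onto the
        # feasible set where the shares sum to the cache size.
        alloc = project_to_simplex(alloc - step * grad, cache_size)
    return alloc
```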

    A unified approach to the performance analysis of caching systems

    We propose a unified methodology to analyse the performance of caches (both isolated and interconnected) by extending and generalizing a decoupling technique originally known as Che's approximation, which provides very accurate results at low computational cost. We consider several caching policies, taking into account the effects of temporal locality. In the case of interconnected caches, our approach allows us to do better than the Poisson approximation commonly adopted in prior work. Our results, validated against simulations and trace-driven experiments, provide interesting insights into the performance of caching systems. Published in ACM TOMPECS 2016; a preliminary version appeared at IEEE INFOCOM 2014.
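    Che's approximation, on which the methodology above builds, is simple enough to sketch for a single LRU cache under independent Poisson request streams: solve for the characteristic time T_C such that the expected number of distinct objects requested within T_C equals the cache size C, i.e. sum_i (1 - e^(-lambda_i * T_C)) = C, and take h_i = 1 - e^(-lambda_i * T_C) as the hit probability of object i. A minimal illustration of that baseline (assuming SciPy for the root finding; not the paper's generalized treatment):

```python
import numpy as np
from scipy.optimize import brentq

def che_lru_hit_rates(arrival_rates, cache_size):
    """Per-object LRU hit probabilities under Che's approximation,
    assuming independent Poisson request processes (IRM)."""
    lam = np.asarray(arrival_rates, dtype=float)
    assert cache_size < len(lam), "cache must be smaller than the catalogue"

    def expected_occupancy(t):
        # Expected number of distinct objects requested within time t,
        # minus the cache size; its root is the characteristic time T_C.
        return np.sum(1.0 - np.exp(-lam * t)) - cache_size

    t_hi = cache_size / lam.min() + 1.0
    while expected_occupancy(t_hi) < 0:
        t_hi *= 2.0
    t_c = brentq(expected_occupancy, 0.0, t_hi)
    return 1.0 - np.exp(-lam * t_c)

if __name__ == "__main__":
    # Example: Zipf(0.8) popularity over 10^4 objects, cache of 100 objects.
    ranks = np.arange(1, 10_001)
    popularity = ranks ** -0.8
    rates = popularity / popularity.sum()
    hits = che_lru_hit_rates(rates, cache_size=100)
    print("overall hit ratio:", float(np.sum(rates * hits)))
```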

    Impact of Traffic Characteristics on Request Aggregation in an NDN Router

    The paper revisits the performance evaluation of caching in a Named Data Networking (NDN) router where the content store (CS) is supplemented by a pending interest table (PIT). The PIT aggregates requests for a given content that arrive within the download delay and thus brings an additional reduction in upstream bandwidth usage beyond that due to CS hits. We extend prior work on caching with non-zero download delay (non-ZDD) by proposing a novel mathematical framework that is more easily applicable to general traffic models and by considering alternative cache insertion policies. Specifically, we evaluate the use of an LRU filter to improve CS hit rate performance in this non-ZDD context. We also consider the impact of temporal locality in demand due to finite content lifetimes. The models are used to quantify the impact of the PIT on upstream bandwidth reduction, demonstrating notably that this reduction is significant only for relatively small content catalogues or a high average request rate per content. We further explore how the effectiveness of the filter under finite content lifetimes depends on catalogue size and traffic intensity.
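    The CS/PIT request path analysed above can be outlined in a few lines. The toy sketch below only captures the aggregation mechanism (requests arriving while a download is pending join the existing PIT entry); it omits the download-delay model, the LRU-filter admission policy and the finite content lifetimes studied in the paper.

```python
from collections import OrderedDict

class NdnRouter:
    """Toy NDN forwarding node with a Content Store (CS) and a
    Pending Interest Table (PIT). Illustrative only: faces, timers
    and insertion filters are ignored."""

    def __init__(self, cs_capacity):
        self.cs = OrderedDict()      # LRU-ordered content store
        self.cs_capacity = cs_capacity
        self.pit = {}                # content name -> waiting requesters
        self.upstream_requests = 0   # interests actually forwarded upstream

    def on_interest(self, name, requester):
        if name in self.cs:          # CS hit: serve locally
            self.cs.move_to_end(name)
            return "cs_hit"
        if name in self.pit:         # download in progress: aggregate
            self.pit[name].append(requester)
            return "pit_aggregated"
        self.pit[name] = [requester] # miss: open PIT entry, forward upstream
        self.upstream_requests += 1
        return "forwarded"

    def on_data(self, name):
        waiting = self.pit.pop(name, [])
        self.cs[name] = True         # insert into CS, evicting LRU if full
        self.cs.move_to_end(name)
        if len(self.cs) > self.cs_capacity:
            self.cs.popitem(last=False)
        return waiting               # all aggregated requesters are satisfied
```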

    Timelines are Publisher-Driven Caches: Analyzing and Shaping Timeline Networks

    Cache networks are one of the building blocks of information centric networks (ICNs). Most of the recent work on cache networks has focused on networks of request-driven caches, which are populated based on users' requests for content generated by publishers. However, user-generated content still poses the most pressing challenges, and for such content timelines are the de facto sharing solution. In this paper, we establish a connection between timelines and publisher-driven caches. We propose simple models and metrics to analyze publisher-driven caches, allowing for variable-sized objects. Then, we design two efficient algorithms for timeline workload shaping that leverage admission and price control, for instance to help service providers attain prescribed service level agreements.
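    The notion of a publisher-driven cache can be made concrete with a small sketch: items enter the timeline when they are published rather than when they are requested, and older items are evicted once the variable-size budget is exceeded. The class below is an illustrative model only; the admission and price control algorithms from the paper are not shown.

```python
from collections import OrderedDict

class Timeline:
    """Toy publisher-driven cache: publishers push variable-sized items
    into the timeline, which keeps only the most recent ones that fit
    within `capacity`. Requests never populate it, unlike request-driven
    caches such as LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # item id -> size, newest last
        self.used = 0

    def publish(self, item_id, size):
        if size > self.capacity:
            return                   # item does not fit at all
        if item_id in self.items:    # republished item moves to the head
            self.used -= self.items.pop(item_id)
        self.items[item_id] = size
        self.used += size
        while self.used > self.capacity:          # evict oldest items
            _, old_size = self.items.popitem(last=False)
            self.used -= old_size

    def read(self, item_id):
        return item_id in self.items # hit iff the item is still recent enough
```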

    Implicit Coordination of Caches in Small Cell Networks under Unknown Popularity Profiles

    We focus on a dense cellular network in which a limited-size cache is available at every Base Station (BS). In order to optimize the overall performance of the system in such a scenario, where a significant fraction of the users is covered by several BSs, tight coordination among nearby caches is needed. To this end, this paper introduces a class of simple and fully distributed caching policies, which require neither direct communication among BSs nor a priori knowledge of content popularity. Furthermore, we propose a novel approximate analytical methodology to assess the performance of interacting caches under such policies. Our approach builds upon the well-known characteristic time approximation and provides predictions that are surprisingly accurate (hardly distinguishable from simulations) in most scenarios. Both synthetic and trace-driven results show that our caching policies achieve excellent performance (in some cases provably optimal). They outperform state-of-the-art dynamic policies for interacting caches and, in some cases, also greedy content placement, which is known to be the best-performing polynomial-time algorithm under static and perfectly known content popularity profiles.
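    One plausible instance of such a coordination-free policy family, shown purely for illustration, is a per-BS q-LRU rule: a request is served by any covering cache that holds the item, and on a global miss a single covering BS inserts it with probability q. The tie-breaking rule and names below are assumptions, not the paper's exact policy.

```python
import random
from collections import OrderedDict

class QLruCache:
    """Per-BS q-LRU cache: hits refresh recency, insertions happen only
    with probability q, which implicitly filters out unpopular content."""

    def __init__(self, capacity, q):
        self.capacity, self.q = capacity, q
        self.store = OrderedDict()

    def lookup(self, item):
        if item in self.store:
            self.store.move_to_end(item)
            return True
        return False

    def maybe_insert(self, item):
        if random.random() < self.q:
            self.store[item] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)

def serve_request(item, covering_bss):
    """A user covered by several BSs is served by any covering cache that
    holds the item; on a global miss, one covering BS chosen at random
    inserts it with probability q (hypothetical rule for illustration)."""
    for bs in covering_bss:
        if bs.lookup(item):
            return True                    # served from the cellular edge
    random.choice(covering_bss).maybe_insert(item)
    return False                           # fetched from the core network
```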