An experimental dynamic RAM video cache
As technological advances continue, the demand for more efficient distributed multimedia systems grows with them. Current support for end-to-end QoS is still limited; consequently, mechanisms are required to provide flexibility in resource loading. One such mechanism, caching, may be introduced both in the end-system and in the network to facilitate intelligent load balancing and resource management. We introduce new work at Lancaster University investigating the use of transparent network caches for MPEG-2. A novel architecture is proposed, based on router-oriented caching and the use of large-scale dynamic RAM as the sole caching medium. The architecture also proposes the ISO/IEC-standardised DSM-CC protocol as a basic control infrastructure and the caching of pre-built transport packets (UDP/IP) in the data plane. Finally, the work discussed is in its infancy and consequently focuses on the design and implementation of the caching architecture rather than on an investigation of performance gains, which we intend to undertake in a continuation of the work.
Performance analysis of a caching algorithm for a catch-up television service
The catch-up TV (CUTV) service allows users to watch video content that was previously broadcast live on TV channels and later placed in an on-line video store. Upon a request from a user to watch a recently missed episode of his/her favourite TV series, the content is streamed from the video server to the customer's receiver device. This requires that an individual flow is set up for the duration of the video, and since it is difficult or impossible to employ multicast streaming for this purpose (as users seldom issue a request for the same episode at the same time), these flows are unicast. In this paper, we demonstrate that with the growing popularity of the CUTV service, the number of simultaneously running unicast flows on the aggregation parts of the network threatens to lead to an unwieldy increase in required bandwidth. Anticipating this problem and trying to alleviate it, network operators deploy caches in strategic places in the network. We investigate the performance of such a caching strategy and the impact of cache size and cache update logic. We first analyse and model the evolution of video popularity over time based on traces we collected during 10 months. Through simulations we compare the performance of the traditional least-recently-used and least-frequently-used caching algorithms to our own algorithm. We also compare their performance with a "perfect" caching algorithm, which knows and hence does not have to estimate the video request rates. In the experimental data, we see that the video parameters from the popularity evolution law can be clustered. Therefore, we investigate theoretical models that can capture these clusters, and we study the impact of clustering on caching performance. Finally, some considerations on the optimal cache placement are presented.
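The LRU/LFU comparison described in the abstract can be sketched with a small simulation. The Zipf-like request distribution, cache size, and trace length below are illustrative assumptions; the paper's own algorithm and its "perfect" oracle baseline are not reproduced here:

```python
from collections import OrderedDict, Counter
import random

def simulate(policy, requests, capacity):
    """Replay a request trace against a fixed-capacity cache; return the hit ratio."""
    cache = OrderedDict()   # insertion/access order doubles as the LRU queue
    freq = Counter()        # running request counts, used by LFU eviction
    hits = 0
    for item in requests:
        freq[item] += 1
        if item in cache:
            hits += 1
            cache.move_to_end(item)              # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                if policy == "LRU":
                    cache.popitem(last=False)    # evict least recently used
                else:                            # LFU
                    victim = min(cache, key=lambda k: freq[k])
                    del cache[victim]            # evict least frequently used
            cache[item] = True
    return hits / len(requests)

random.seed(7)
# Zipf-like popularity: low-numbered videos are requested far more often.
weights = [1 / (i + 1) for i in range(100)]
requests = [random.choices(range(100), weights=weights)[0] for _ in range(5000)]

lru = simulate("LRU", requests, capacity=10)
lfu = simulate("LFU", requests, capacity=10)
print(f"LRU hit ratio: {lru:.2f}, LFU hit ratio: {lfu:.2f}")
```

With a static popularity distribution like this one, LFU tends to hold on to the head of the Zipf curve; the paper's point is that real CUTV popularity decays over time, which is exactly where such static policies fall short.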
Cache policies for cloud-based systems: To keep or not to keep
In this paper, we study cache policies for cloud-based caching. Cloud-based caching uses cloud storage services such as Amazon S3 as a cache for data items that would otherwise have to be recomputed. Cloud-based caching departs from classical caching: cloud resources are potentially infinite and are only paid for when used, while classical caching relies on a fixed storage capacity whose main monetary cost comes from the initial investment. To deal with this new context, we design and evaluate a new caching policy that minimizes the overall cost of a cloud-based system. The policy takes into account the frequency of consumption of an item and the cloud cost model. We show that this policy is easier to operate, that it scales with the demand, and that it outperforms classical policies managing a fixed capacity.
Comment: Proceedings of IEEE International Conference on Cloud Computing 2014 (CLOUD '14)
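The keep-or-recompute trade-off this kind of policy exploits can be illustrated with a toy break-even rule. The function name, cost parameters, and numbers below are hypothetical, not the paper's actual cost model:

```python
def keep_item(recompute_cost, storage_cost_per_period, expected_requests_per_period):
    """Keep a cached item only if the recomputation cost it is expected to save
    over one billing period exceeds the cost of storing it for that period.
    This break-even rule is illustrative, not the paper's exact policy."""
    expected_saving = recompute_cost * expected_requests_per_period
    return expected_saving > storage_cost_per_period

# A frequently requested item is worth paying storage for...
print(keep_item(recompute_cost=0.05, storage_cost_per_period=0.01,
                expected_requests_per_period=2.0))   # True
# ...while a rarely requested one should be evicted and recomputed on demand.
print(keep_item(recompute_cost=0.05, storage_cost_per_period=0.01,
                expected_requests_per_period=0.1))   # False
```

The key departure from fixed-capacity policies is visible here: there is no capacity bound at all, so every item is judged purely on whether keeping it is cheaper than regenerating it.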
A Bayesian Poisson-Gaussian Process Model for Popularity Learning in Edge-Caching Networks
Edge-caching is recognized as an efficient technique for future cellular networks to improve network capacity and user-perceived quality of experience. To enhance the performance of caching systems, designing an accurate content request prediction algorithm plays an important role. In this paper, we develop a flexible model, a Poisson regressor based on a Gaussian process, for the content request distribution.

The first important advantage of the proposed model is that it encourages already existing or seen contents with similar features to be correlated in the feature space, and it therefore acts as a regularizer for the estimation. Second, it allows predicting the popularities of newly added or unseen contents whose statistical data is not available in advance. To learn the model parameters, which yield the Poisson arrival rates or, alternatively, the content popularities, we adopt a Bayesian approach, which is robust against over-fitting.

However, the resulting posterior distribution is analytically intractable to compute. To tackle this, we apply a Markov chain Monte Carlo (MCMC) method to approximate this distribution, which is also asymptotically exact. Nevertheless, MCMC is computationally demanding, especially when the number of contents is large. Thus, we employ the variational Bayes (VB) method as a low-complexity alternative. More specifically, the VB method addresses the approximation of the posterior distribution through an optimization problem. Subsequently, we present a fast block-coordinate descent algorithm to solve this optimization problem. Finally, extensive simulation results on both synthetic and real-world datasets are provided to show the accuracy of our prediction algorithm and the cache hit ratio (CHR) gain compared to existing methods from the literature.
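The regularizing idea described in the abstract, correlating the rate estimates of contents with similar features and interpolating a rate for an unseen content, can be sketched with kernel-weighted smoothing of empirical Poisson rates. This is a toy stand-in for the paper's Bayesian Poisson-GP posterior, and all feature values and counts below are made up:

```python
import math

def rbf_kernel(x, y, length_scale=1.0):
    """Squared-exponential kernel: similarity between two content feature vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * length_scale ** 2))

def smoothed_rates(features, counts, horizon, length_scale=1.0):
    """Kernel-smoothed Poisson rate estimates. Each content's request rate is a
    similarity-weighted average of observed empirical rates, so contents with
    similar features get correlated estimates (the regularizing effect), and an
    unseen content (count None) still receives a prediction from its neighbours.
    This is an illustrative simplification, not the paper's inference method."""
    rates = []
    for fi in features:
        num = den = 0.0
        for fj, c in zip(features, counts):
            if c is None:
                continue                     # unseen content contributes no data
            w = rbf_kernel(fi, fj, length_scale)
            num += w * c / horizon           # weight the empirical rate c/horizon
            den += w
        rates.append(num / den if den > 0 else 0.0)
    return rates

# Three contents described by one feature; the third is newly added (no data yet).
features = [(0.0,), (1.0,), (0.5,)]
counts = [50, 10, None]          # requests observed over the horizon
rates = smoothed_rates(features, counts, horizon=10.0)
print([round(r, 2) for r in rates])
```

Note how the unseen third content is assigned a rate between its two neighbours, which is the behaviour that lets a cache pre-place newly added items before any request statistics exist.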