Modeling Data-Plane Power Consumption of Future Internet Architectures
With current efforts to design Future Internet Architectures (FIAs), the
evaluation and comparison of different proposals is an interesting research
challenge. Previously, metrics such as bandwidth or latency have commonly been
used to compare FIAs to IP networks. We suggest the use of power consumption as
a metric to compare FIAs. While low power consumption is an important goal in
its own right (as lower energy use translates to smaller environmental impact
as well as lower operating costs), power consumption can also serve as a proxy
for other metrics such as bandwidth and processor load.
Lacking power consumption statistics about either commodity FIA routers or
widely deployed FIA testbeds, we propose models for power consumption of FIA
routers. Based on our models, we simulate scenarios for measuring power
consumption of content delivery in different FIAs. Specifically, we address two
questions: 1) which of the proposed FIA candidates achieves the lowest energy
footprint; and 2) which set of design choices yields a power-efficient network
architecture? Although the lack of real-world data makes numerous assumptions
necessary for our analysis, we explore the uncertainty of our calculations
through sensitivity analysis of input parameters.
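The abstract does not reproduce the power models themselves. As a minimal sketch of the kind of analysis described, the snippet below assumes a simple linear router power model (idle power plus per-packet and per-byte processing energy; all parameter names and values are illustrative, not figures from the paper) and runs a one-at-a-time sensitivity analysis over its inputs.

```python
# Minimal sketch of a linear router power model with one-at-a-time
# sensitivity analysis. All parameter names and values are illustrative
# assumptions, not figures from the paper.

def router_power(p_idle_w, e_per_pkt_j, e_per_byte_j, pkt_rate, byte_rate):
    """Power draw (W) = idle power + per-packet + per-byte processing energy."""
    return p_idle_w + e_per_pkt_j * pkt_rate + e_per_byte_j * byte_rate

baseline = dict(p_idle_w=150.0, e_per_pkt_j=2e-6,
                e_per_byte_j=4e-9, pkt_rate=1e6, byte_rate=1e9)

p0 = router_power(**baseline)
print(f"baseline power: {p0:.1f} W")

# Perturb each input by +/-20% and report the relative change in power,
# mimicking a simple sensitivity analysis of input parameters.
for name in baseline:
    for factor in (0.8, 1.2):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        p = router_power(**perturbed)
        print(f"{name} x{factor}: {100 * (p - p0) / p0:+.1f}% power change")
```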
Offloading Content with Self-organizing Mobile Fogs
Mobile users in an urban environment access content on the internet from
different locations. It is challenging for the current service providers to
cope with the increasing content demand from a large number of collocated
mobile users. In-network caching that offloads content at nodes closer to users
alleviates the issue, though efficient cache management is required to determine
who should cache what, when, and where in an urban environment, given the nodes'
limited computing, communication, and caching resources. To address this, we
first define a novel relation between content popularity and availability in
the network and investigate a node's eligibility to cache content based on its
urban reachability. We then allow nodes to self-organize into mobile fogs to
increase the distributed cache and maximize content availability in a
cost-effective manner. However, to cater to rational nodes, we propose a
coalition game in which nodes offer a maximal "virtual cache", assuming a
monetary reward is paid to them by the service/content provider. Nodes are allowed to
merge into different spatio-temporal coalitions in order to increase the
distributed cache size at the network edge. Results obtained through
simulations using a realistic urban mobility trace validate the performance of
our caching system, showing a cache hit ratio of 60-85%, compared to the
30-40% obtained by existing schemes and 10% in the case of no coalition.
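The abstract does not specify the coalition game's payoff structure. The sketch below is one speculative reading: the provider pays per megabyte of reachability-weighted "virtual cache", each coalition bears a fixed coordination cost shared by its members, and rational nodes merge greedily only when the per-member payoff improves. All numbers and the utility model are assumptions for illustration.

```python
from itertools import combinations

# node -> (cache size in MB, urban reachability in [0, 1]); values are made up
nodes = {
    "a": (100, 0.9), "b": (80, 0.4), "c": (120, 0.7), "d": (60, 0.2),
}

REWARD_PER_MB = 0.05      # assumed reward per MB of virtual cache from the provider
PROVIDER_LINK_COST = 4.0  # assumed fixed per-coalition cost, shared by members

def coalition_value(members):
    """Reward for the pooled, reachability-weighted 'virtual cache',
    minus the fixed cost each coalition pays to coordinate with the provider."""
    virtual_cache = sum(size * reach for size, reach in (nodes[m] for m in members))
    return REWARD_PER_MB * virtual_cache - PROVIDER_LINK_COST

# Greedy coalition formation: rational nodes accept a merge only if the
# per-member payoff of the union beats what each side earns on its own.
coalitions = [{n} for n in nodes]
merged_something = True
while merged_something:
    merged_something = False
    for c1, c2 in combinations(coalitions, 2):
        union = c1 | c2
        if (coalition_value(union) / len(union) >
                max(coalition_value(c1) / len(c1), coalition_value(c2) / len(c2))):
            coalitions.remove(c1)
            coalitions.remove(c2)
            coalitions.append(union)
            merged_something = True
            break

print(coalitions)  # e.g. [{'b', 'd'}, {'a', 'c'}]
```

Because the per-coalition cost is shared, merging is profitable here even though the cache reward itself is additive; the loop stops once no union raises average payoff.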
Cooperative announcement-based caching for video-on-demand streaming
Recently, video-on-demand (VoD) streaming services like Netflix and Hulu have gained a lot of popularity. This has led to a strong increase in bandwidth capacity requirements in the network. To reduce this network load, the design of appropriate caching strategies is of utmost importance. Based on the fact that, typically, a video stream is temporally segmented into smaller chunks that can be accessed and decoded independently, cache replacement strategies have been developed that take advantage of this temporal structure in the video. In this paper, two caching strategies are proposed that additionally take advantage of the phenomenon of binge watching, where users stream multiple consecutive episodes of the same series, which recent user behavior studies report is becoming everyday behavior. Taking this information into account allows us to predict future segment requests, even before the video playout has started. The two strategies differ in the level of coordination between the caches in the network. Using a VoD request trace based on binge-watching user characteristics, the presented algorithms have been thoroughly evaluated in multiple network topologies with different characteristics, showing their general applicability. In a realistic scenario, the proposed election-based caching strategy was shown to outperform the state of the art by 20% in terms of cache hit ratio while using 4% less network bandwidth.
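As a rough sketch of the binge-watching prediction idea (not the paper's election-based coordination, which involves multiple caches), the single-cache class below treats a request for a segment of episode e as an "announcement" of likely future requests for the remaining segments and for episode e+1, and prefers to evict segments nobody has announced. The eviction rule and class interface are illustrative assumptions.

```python
from collections import OrderedDict

class AnnouncementCache:
    """LRU cache that protects segments predicted by binge-watching behavior."""

    def __init__(self, capacity, segments_per_episode):
        self.capacity = capacity
        self.segs = segments_per_episode
        self.cache = OrderedDict()   # (series, episode, segment) -> cached flag
        self.announced = set()       # segments expected to be requested soon

    def request(self, series, episode, segment):
        key = (series, episode, segment)
        hit = key in self.cache
        self.cache[key] = True
        self.cache.move_to_end(key)
        # Announce the rest of this episode and the start of the next one,
        # predicting the binge watcher's future segment requests.
        for s in range(segment + 1, self.segs):
            self.announced.add((series, episode, s))
        self.announced.update((series, episode + 1, s) for s in range(self.segs))
        self.announced.discard(key)  # the current segment has now been played
        while len(self.cache) > self.capacity:
            self._evict()
        return hit

    def _evict(self):
        # Evict the least recently used *unannounced* segment if any exists,
        # otherwise fall back to plain LRU.
        for key in self.cache:
            if key not in self.announced:
                del self.cache[key]
                return
        self.cache.popitem(last=False)

cache = AnnouncementCache(capacity=2, segments_per_episode=2)
cache.request("show", 0, 0)
cache.request("show", 0, 1)         # finishing episode 0 announces episode 1
cache.request("show", 1, 0)         # evicts (show, 0, 0): watched, unannounced
print(cache.request("show", 1, 0))  # True: cache hit
```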
Cache policies for cloud-based systems: To keep or not to keep
In this paper, we study cache policies for cloud-based caching. Cloud-based
caching uses cloud storage services such as Amazon S3 as a cache for data items
that would have been recomputed otherwise. Cloud-based caching departs from
classical caching: cloud resources are potentially infinite and only paid when
used, while classical caching relies on a fixed storage capacity and its main
monetary cost comes from the initial investment. To deal with this new context,
we design and evaluate a new caching policy that minimizes the overall cost of
a cloud-based system. The policy takes into account the frequency of
consumption of an item and the cloud cost model. We show that this policy is
easier to operate, that it scales with demand, and that it outperforms
classical policies managing a fixed capacity.
Comment: Proceedings of IEEE International Conference on Cloud Computing 2014 (CLOUD 14).
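The abstract gives the intuition of the policy but not its formula. A back-of-the-envelope version of the keep-or-not decision is sketched below: keep an item while the expected storage cost until its next request stays below the cost of recomputing it. The price, the inter-arrival estimator, and the function name are illustrative assumptions, not the paper's model.

```python
def keep_in_cache(size_gb, requests_per_month, recompute_cost_usd,
                  storage_usd_per_gb_month=0.023):  # assumed S3-like price
    """Keep iff storing the item until its expected next request is cheaper
    than recomputing it on a miss (cloud storage is paid only while used)."""
    if requests_per_month == 0:
        return False
    expected_gap_months = 1.0 / requests_per_month
    storage_cost = storage_usd_per_gb_month * size_gb * expected_gap_months
    return storage_cost < recompute_cost_usd

# A rarely used but expensive-to-recompute item is kept; the same item
# with a cheap recomputation is dropped, since capacity itself is free.
print(keep_in_cache(size_gb=50, requests_per_month=0.5, recompute_cost_usd=5.0))
print(keep_in_cache(size_gb=50, requests_per_month=0.5, recompute_cost_usd=0.5))
```

Note how this departs from fixed-capacity policies: the decision depends only on the item's own economics, not on competition for a bounded store.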
Energy Saving Techniques for Phase Change Memory (PCM)
In recent years, the energy consumption of computing systems has increased,
and a large fraction of this energy is consumed in main memory. To address
this, researchers have proposed the use of non-volatile memory, such as phase
change memory (PCM), which has low read latency and power, and nearly zero
leakage power. However, the write latency and power of PCM are very high, and
this, along with PCM's limited write endurance, presents significant
challenges to its widespread adoption. In response, several
architecture-level techniques have been proposed. In this report, we review
several techniques to manage power consumption of PCM. We also classify these
techniques based on their characteristics to provide insights into them. The
aim of this work is to encourage researchers to propose even better techniques
for improving the energy efficiency of PCM-based main memory.
Comment: Survey, phase change RAM (PCRAM).
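One representative architecture-level technique often covered in surveys of this kind is Flip-N-Write (Cho and Lee): if more than half of a word's bits would change on a write, store the complement plus a one-bit flag, so at most half the cells (plus the flag) are rewritten. The sketch below is a simplified single-write view that ignores the stored flag's prior state.

```python
# Simplified sketch of Flip-N-Write, a known technique for cutting PCM
# write energy by bounding the number of bit flips per word write.

def flip_n_write(old_word, new_word, width=32):
    """Return (stored_word, flip_flag, bits_written) for one PCM word write."""
    diff = bin(old_word ^ new_word).count("1")
    if diff > width // 2:
        stored = new_word ^ ((1 << width) - 1)  # write the complement instead
        flipped = bin(old_word ^ stored).count("1")
        return stored, 1, flipped + 1           # +1 for updating the flag cell
    return new_word, 0, diff

old, new = 0x00000000, 0xFFFFFFF0   # 28 of 32 bits differ
stored, flag, cost = flip_n_write(old, new)
print(f"flag={flag}, bits written={cost}")  # complement stored: 4 + 1 = 5 flips
```

Since each bit flip in PCM costs write energy and wears the cell, halving the worst-case flip count reduces both write power and endurance pressure.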
Mobile-Based Video Caching Architecture Based on Billboard Manager
Video streaming services are very popular today. Increasingly, users can now
access multimedia applications and video playback wirelessly on their mobile
devices. However, a significant challenge remains in ensuring smooth and
uninterrupted transmission of video files of almost any size over a 3G network,
as quickly as possible, in order to optimize bandwidth consumption. In this
paper, we propose to position our Billboard Manager to provide an optimal
transmission rate to enable smooth video playback to a mobile device user
connected to a 3G network. Our work focuses on mobile operators serving user
requests from cached resources managed by the Billboard Manager, transmitting
the video files from this pool. The aim is to reduce the load placed on a
mobile operator's bandwidth resources by routing as many user requests as
possible away from the internet, which would otherwise have to be searched for
each video that, if located, would then be transferred back to the user.
Comment: 8 pages, 1 figure, GridCom-201
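The request path described above reduces, at its core, to a cache-first lookup with an internet fallback. The tiny sketch below illustrates that flow; the class and method names are hypothetical, not the paper's API.

```python
class BillboardManager:
    """Illustrative stand-in for the operator-side cached video pool."""

    def __init__(self):
        self.pool = {}  # video_id -> video bytes cached at the operator

    def serve(self, video_id, fetch_from_internet):
        """Serve from the cached pool; fetch over the internet only on a miss."""
        if video_id in self.pool:
            return self.pool[video_id]          # no internet bandwidth used
        video = fetch_from_internet(video_id)   # the costly path the design avoids
        self.pool[video_id] = video
        return video

bm = BillboardManager()
fetch = lambda vid: f"<bytes of {vid}>"
print(bm.serve("trailer.mp4", fetch))  # miss: fetched and cached
print(bm.serve("trailer.mp4", fetch))  # hit: served from the operator's pool
```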
Catalog Dynamics: Impact of Content Publishing and Perishing on the Performance of a LRU Cache
The Internet heavily relies on Content Distribution Networks and transparent
caches to cope with the ever-increasing traffic demand of users. Content,
however, is essentially volatile: once published at a given time, its
popularity vanishes over time. All requests for a given document are then
concentrated between the publishing time and an effective perishing time.
In this paper, we propose a new model for the arrival of content requests,
which takes into account the dynamical nature of the content catalog. Based on
two large traffic traces collected on the Orange network, we use the
semi-experimental method and determine invariants of the content request
process. This allows us to define a simple mathematical model for content
requests; by extending the so-called "Che approximation", we then compute the
performance of an LRU cache fed with such a request process, expressed by its
hit ratio. We numerically validate the good accuracy of our model by comparison
to trace-based simulation.
Comment: 13 pages, 9 figures. Full version of the article submitted to the ITC 2014 conference. Small corrections in the appendix from the previous version.
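For background, the classical Che approximation that the paper extends has a compact numerical form in the static-catalog case: solve sum_i (1 - exp(-lambda_i * T)) = C for the characteristic time T, and the hit probability of item i is 1 - exp(-lambda_i * T). The sketch below computes an LRU hit ratio this way under illustrative Zipf parameters; it does not capture the dynamic catalog that is the paper's contribution.

```python
import math

def che_hit_ratio(rates, capacity):
    """Che approximation for LRU: solve sum_i (1 - exp(-rate_i * T)) = capacity
    for the characteristic time T by bisection, then return the hit ratio."""
    def filled(t):
        return sum(1.0 - math.exp(-r * t) for r in rates)
    lo, hi = 0.0, 1.0
    while filled(hi) < capacity:     # grow the bracket until it contains T
        hi *= 2.0
    for _ in range(100):             # bisection on the monotone filled(t)
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if filled(mid) < capacity else (lo, mid)
    t_c = (lo + hi) / 2.0
    total = sum(rates)
    return sum(r * (1.0 - math.exp(-r * t_c)) for r in rates) / total

# Zipf(0.8) popularity over a 10,000-document catalog, cache of 100 items;
# both values are illustrative, not taken from the Orange traces.
N, alpha = 10_000, 0.8
rates = [1.0 / (i ** alpha) for i in range(1, N + 1)]
print(f"hit ratio: {che_hit_ratio(rates, capacity=100):.3f}")
```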