
    Offloading Content with Self-organizing Mobile Fogs

    Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching, which offloads content to nodes closer to users, alleviates the issue, though efficient cache management is required to determine who should cache what, when, and where in an urban environment, given nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. To cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85%, compared to the 30-40% obtained by existing schemes and 10% in the case of no coalition.
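    The eligibility idea above can be sketched in a few lines: score each candidate node by combining an item's popularity with the node's urban reachability, and cache at the highest-scoring node first. The node names, reachability values, and the multiplicative scoring are illustrative assumptions, not the paper's exact model.

    ```python
    # Hypothetical sketch: rank nodes' eligibility to cache a content item by
    # combining the item's popularity with each node's urban reachability.
    # Names, values, and the scoring function are assumptions for illustration.

    def cache_eligibility(popularity, reachability):
        """Score a (content, node) pair; higher means cache here first."""
        return popularity * reachability

    nodes = {"bus_17": 0.9, "kiosk_3": 0.4, "taxi_8": 0.7}  # reachability in [0, 1]
    item_popularity = 0.8

    ranked = sorted(nodes,
                    key=lambda n: cache_eligibility(item_popularity, nodes[n]),
                    reverse=True)
    print(ranked[0])  # the best-reachable node caches the popular item first
    ```

    In the coalition setting the paper describes, such per-node scores would feed into each coalition's offered "virtual cache" rather than being used in isolation.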

    Reversing The Meaning of Node Connectivity for Content Placement in Networks of Caches

    It is a widely accepted heuristic in content caching to place the most popular content at the best-connected nodes. The other common heuristic is somewhat contradictory, as it places the most popular content at the edge, at the caching nodes nearest the users. We contend that neither policy is best suited for caching content in a network and propose a simple alternative that places the most popular content at the least connected nodes. Namely, we populate content first at the nodes that have the lowest graph centrality over the network topology. We provide an analytical study of this policy over some simple, tractable topologies, namely regular grids and trees. Our mathematical results demonstrate that placing popular content at the least connected nodes outperforms the aforementioned alternatives under typical conditions.
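    The proposed policy can be sketched directly: sort nodes by ascending centrality and assign the most popular content to the least connected node first. The toy graph is illustrative, and degree centrality is used here as a simple stand-in for whichever centrality measure an operator prefers.

    ```python
    # Minimal sketch of the policy: populate the most popular content first at
    # the nodes with the lowest graph centrality. Degree centrality is used as
    # a simple proxy; the graph and content names are illustrative assumptions.

    graph = {                 # adjacency list of a small cache network
        "a": ["b"],
        "b": ["a", "c", "d"],
        "c": ["b"],
        "d": ["b"],
    }
    contents = ["video1", "video2", "video3", "video4"]  # most popular first

    # Least connected nodes first; ties broken by node name for determinism.
    placement_order = sorted(graph, key=lambda n: (len(graph[n]), n))
    placement = dict(zip(placement_order, contents))
    print(placement)  # the hub "b" receives the least popular item
    ```

    On this star-like topology the hub node "b" is filled last, which is exactly the inversion of the "most popular at the best connected" heuristic that the abstract argues against.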

    Flexpop: A popularity-based caching strategy for multimedia applications in information-centric networking

    Information-Centric Networking (ICN) is a prominent candidate architecture for the future Internet. In ICN, content items are stored temporarily in network nodes such as routers. When a router's memory becomes full and there is no room for newly arriving content, stored contents are evicted to cope with the router's limited cache size. It is therefore crucial to develop an effective caching strategy that keeps popular contents for a longer period of time. This study proposes a new caching strategy, named Flexible Popularity-based Caching (FlexPop), for storing popular contents. FlexPop comprises two mechanisms: a Content Placement Mechanism (CPM), which is responsible for content caching, and a Content Eviction Mechanism (CEM), which handles content eviction when the router cache is full and there is no space for new incoming content. Both mechanisms are validated using Fuzzy Set Theory, following the Design Research Methodology (DRM) to ensure the research is rigorous and repeatable under comparable conditions. The performance of FlexPop is evaluated through simulations, and the results are compared with those of the Leave Copy Everywhere (LCE), ProbCache, and Most Popular Content (MPC) strategies. The results show that FlexPop outperforms LCE, ProbCache, and MPC with respect to cache hit rate, redundancy, content retrieval delay, memory utilization, and stretch ratio, metrics that various studies regard as extremely important for evaluating ICN caching. These outcomes make FlexPop attractive to users, who can verify the performance of ICN before selecting the right caching strategy. FlexPop thus has potential in future-Internet uses of ICN, such as deployments of IoT technology.
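    The two-mechanism structure described above, placement on sufficient popularity plus eviction of the least popular item when the cache is full, can be sketched as a small router-cache class. The admission threshold, request counters, and class name are assumptions for illustration, not the paper's exact fuzzy-set mechanisms.

    ```python
    # Hedged sketch of a FlexPop-style router cache: a placement step (CPM-like)
    # that admits content once it has been requested often enough, and an
    # eviction step (CEM-like) that removes the least popular cached item when
    # the cache is full. Thresholds and counters are illustrative assumptions.

    class PopularityCache:
        def __init__(self, capacity, admit_threshold=2):
            self.capacity = capacity
            self.admit_threshold = admit_threshold
            self.requests = {}   # request counts observed by this router
            self.store = set()   # names of cached contents

        def on_request(self, name):
            self.requests[name] = self.requests.get(name, 0) + 1
            # Placement: cache only content requested often enough.
            if name not in self.store and self.requests[name] >= self.admit_threshold:
                if len(self.store) >= self.capacity:
                    # Eviction: drop the least popular cached item.
                    victim = min(self.store, key=lambda n: self.requests[n])
                    self.store.remove(victim)
                self.store.add(name)

    cache = PopularityCache(capacity=2)
    for name in ["a", "a", "a", "b", "b", "c", "c"]:
        cache.on_request(name)
    print(sorted(cache.store))  # → ['a', 'c']: "b" was evicted as least popular
    ```

    Schemes like LCE, by contrast, would cache every passing item unconditionally, which is what produces the redundancy the abstract compares against.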

    Milking the Cache Cow With Fairness in Mind


    Content, Topology and Cooperation in In-network Caching

    In-network caching aims at improving content delivery and alleviating pressure on network bandwidth by leveraging universally networked caches. This thesis studies the design of cooperative in-network caching strategies from three perspectives: content, topology, and cooperation, focusing specifically on the mechanisms of content delivery and cooperation policy and their impact on the performance of cache networks. The main contributions of this thesis are twofold. From a measurement perspective, we show that the conventional hit-rate metric is not sufficient for evaluating a caching strategy on non-trivial topologies, and we therefore introduce footprint reduction and coupling factor, which carry richer information. We show that the cooperation policy is the key to balancing the various tradeoffs in caching strategy design, and we further investigate the performance impact of the content itself via different chunking schemes. From a design perspective, we first show that different caching heuristics and smart routing schemes can significantly improve caching performance and facilitate content delivery. We then incorporate a well-defined fairness metric into the design and derive the unique optimal caching solution on the Pareto boundary within a bargaining game framework. In addition, our study of the functional relationship between cooperation overhead and neighborhood size indicates that collaboration should be constrained to a small neighborhood, since its cost grows exponentially on general network topologies.

    A review on green caching strategies for next generation communication networks

    © 2020 IEEE. In recent years, the ever-increasing demand for networking resources and energy, fueled by the unprecedented upsurge in Internet traffic, has been a cause for concern for many service providers. Content caching, which serves user requests locally, is deemed an enabling technology for addressing the challenges posed by this phenomenal growth in Internet traffic. Conventionally, content caching is considered a viable solution for alleviating backhaul pressure. Recently, however, many studies have reported energy cost reductions contributed by content caching in cache-equipped networks. The hypothesis is that caching shortens the content delivery distance and thereby achieves a significant reduction in transmission energy consumption. This motivated the present study, and in this article a comprehensive survey of state-of-the-art green caching techniques is provided. This review extensively discusses the contributions of existing studies on green caching. In addition, the study explores different cache-equipped network types, solution methods, and application scenarios. We show that the optimal selection of caching nodes, smart resource management, popular content selection, and renewable energy integration can substantially improve the energy efficiency of cache-equipped systems. Finally, based on this comprehensive analysis, we highlight some potential research ideas relevant to green content caching.