11 research outputs found

    Design and Evaluation of the Optimal Cache Allocation for Content-Centric Networking

    Content-centric networking (CCN) is a promising framework to rebuild the Internet's forwarding substrate around the concept of content. CCN advocates ubiquitous in-network caching to enhance content delivery, and thus each router has storage space to cache frequently requested content. In this work, we focus on the cache allocation problem, namely, how to distribute the cache capacity across routers under a constrained total storage budget for the network. We first formulate this problem as a content placement problem and obtain the optimal solution by a two-step method. We then propose a suboptimal heuristic method based on node centrality, which is more practical in dynamic networks with frequent content publishing. We investigate through simulations the factors that affect the optimal cache allocation, and, perhaps more importantly, we use a real-life Internet topology and video access logs from a large-scale Internet video provider to evaluate the performance of various cache allocation methods. We observe that network topology and content popularity are two important factors that affect where exactly cache capacity should be placed. Further, the heuristic method incurs only a very limited performance penalty compared to the optimal allocation. Finally, using our findings, we provide recommendations for network operators on the best deployment of CCN cache capacity over routers.
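    As a rough illustration of the centrality-based heuristic described above, the sketch below splits a fixed cache budget across routers in proportion to betweenness centrality. The proportional rule, the toy topology, and the budget value are assumptions for illustration, not the paper's exact method.

```python
# A minimal sketch, assuming the heuristic splits a fixed budget in proportion to
# betweenness centrality; the exact rule, the toy topology, and the budget value
# are illustrative assumptions, not the paper's specification.
import networkx as nx

def allocate_cache(G, total_budget):
    """Return a per-router cache size (in content slots) summing to total_budget."""
    centrality = nx.betweenness_centrality(G)             # router -> centrality score
    total = sum(centrality.values()) or 1.0               # guard against an all-zero topology
    return {v: total_budget * c / total for v, c in centrality.items()}

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(20, 2, seed=1)            # toy scale-free router topology
    allocation = allocate_cache(G, total_budget=1000)
    for router, slots in sorted(allocation.items(), key=lambda kv: -kv[1])[:5]:
        print(f"router {router}: {slots:.1f} slots")
```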

    A Content-based Centrality Metric for Collaborative Caching in Information-Centric Fogs

    Information-Centric Fog Computing enables a multitude of nodes near the end-users to provide storage, communication, and computing, rather than in the cloud. In a fog network, nodes connect with each other directly to get content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, there has been some work to compute the graph centrality of each node within that network topology. The centrality is then used to distinguish nodes in the fog network, or to prioritize some nodes over others to participate in the caching fog. We argue that, for an Information-Centric Fog Computing approach, graph centrality is not an appropriate metric. Indeed, a node with low connectivity that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a content-based centrality (CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of considering content-based centrality, we use this new metric for a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements CBC on three instances of a large-scale realistic network topology comprising 2,896 nodes with three content replication levels. Results show that CBC outperforms benchmark caching schemes and yields a roughly 3x improvement in the average cache hit rate.
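    To make the idea concrete, here is a minimal sketch of a content-based score in the spirit of CBC: a node is ranked by how close it sits to the content currently cached in the network rather than by its graph position. The inverse-distance form is an assumption; the paper defines its own metric.

```python
# A minimal sketch of a content-based score in the spirit of CBC, assuming an
# inverse-distance form; the paper's actual metric definition may differ.
import networkx as nx

def content_based_centrality(G, replicas):
    """replicas: dict mapping content_id -> set of nodes currently caching it."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    scores = {}
    for v in G.nodes:
        scores[v] = sum(
            1.0 / (1.0 + min(dist[v][h] for h in holders))   # closer replicas count more
            for holders in replicas.values()
        )
    return scores

G = nx.path_graph(6)                                   # a simple line: 0-1-2-3-4-5
replicas = {"videoA": {0}, "videoB": {5}, "videoC": {5}}
print(content_based_centrality(G, replicas))           # node 5 ranks highest despite degree 1
```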

    Offloading Content with Self-organizing Mobile Fogs

    Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching to offload content at nodes closer to users alleviates the issue, though efficient cache management is required to determine who should cache what, when, and where in an urban environment, given the nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. However, to cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85%, compared to 30-40% for existing schemes and 10% in the case of no coalition.
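    The coalition idea can be illustrated with a toy merge rule: two groups of nodes pool their spare storage into one "virtual cache" only if the combined reward is at least what they would earn separately. The linear per-MB reward and the fixed coordination cost below are illustrative assumptions, not the paper's payoff model.

```python
# A minimal sketch of the coalition merge step, assuming a linear per-MB reward and a
# fixed per-coalition coordination cost; the paper's payoff model is richer.
REWARD_PER_MB = 0.05      # assumed payment from the content provider per MB of "virtual cache"
COORD_COST = 1.0          # assumed per-coalition coordination overhead

def payoff(coalition):
    """coalition: dict node_id -> spare cache (MB) that the node contributes."""
    return REWARD_PER_MB * sum(coalition.values()) - COORD_COST

def merge_if_profitable(c1, c2):
    """Merge two coalitions only if the merged group earns at least as much in total."""
    merged = {**c1, **c2}
    return merged if payoff(merged) >= payoff(c1) + payoff(c2) else None

buses = {"bus_17": 64, "taxi_3": 32}
kiosk = {"kiosk_9": 128}
print(merge_if_profitable(buses, kiosk))   # merging saves one coordination cost, so it pays off
```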

    Reversing The Meaning of Node Connectivity for Content Placement in Networks of Caches

    It is a widely accepted heuristic in content caching to place the most popular content at the nodes that are the best connected. The other common heuristic is somewhat contradictory, as it places the most popular content at the edge, at the caching nodes nearest the users. We contend that neither policy is best suited for caching content in a network and propose a simple alternative that places the most popular content at the least connected node. Namely, we populate content first at the nodes that have the lowest graph centrality over the network topology. Here, we provide an analytical study of this policy over some simple topologies that are tractable, namely regular grids and trees. Our mathematical results demonstrate that placing popular content at the least connected nodes outperforms the aforementioned alternatives in typical conditions.
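    A minimal sketch of the placement rule under discussion, assuming degree centrality and one cache slot per node: sort nodes from least to most connected and hand the most popular items to the least connected nodes first.

```python
# A minimal sketch of low-centrality-first placement; the centrality measure (degree)
# and single-slot caches are assumptions, while the paper's analysis covers regular
# grids and trees.
import networkx as nx

def place_low_centrality_first(G, contents_by_popularity, slots_per_node=1):
    """Hand the most popular items to the least connected nodes first."""
    centrality = nx.degree_centrality(G)
    order = sorted(G.nodes, key=centrality.get)                 # least connected first
    items = list(contents_by_popularity)                        # most popular first
    placement = {}
    for v in order:
        placement[v] = [items.pop(0) for _ in range(slots_per_node) if items]
    return placement

G = nx.balanced_tree(r=2, h=2)                                   # a small regular tree
print(place_low_centrality_first(G, ["c1", "c2", "c3", "c4", "c5"]))
```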

    Cost-Aware Optimisation of Cache Allocation for Information-Centric Networking

    Information-centric networking (ICN) is an emerging paradigm that decouples content from the host to achieve fast and cost-efficient communication and content distribution in the future Internet. A key feature of ICN is the deployment of ubiquitous in-network caching to speed up service delivery and improve network resource utilisation. ICN caching has been widely studied in terms of caching strategies and caching performance. However, the economic aspect of ICN has received marginal consideration so far, although it is vital to understand the potential cost-efficiency of ICN before its wide deployment in service provider networks. To address this issue, we propose a cost-aware caching scheme to study the Quality-of-Service (QoS) and cost of ICN and investigate the inner association between them. Two new models are designed to characterise the cost and QoS of ICN with arbitrary topology under heterogeneous bursty content requests. A multi-objective evolutionary algorithm is adopted to find the optimal cache resource allocation. Numerical results show the effectiveness of the proposed scheme in achieving cost-efficiency and QoS guarantees in ICN caching.
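    The cost/QoS tension can be pictured as a two-objective problem in which only Pareto-optimal cache allocations are kept. The sketch below uses toy linear-cost and inverse-delay models with plain random search; the paper relies on a multi-objective evolutionary algorithm and far richer models.

```python
# A minimal sketch of the cost/QoS trade-off as a two-objective problem, assuming toy
# linear cost and inverse-delay models and plain random search; the paper uses a
# multi-objective evolutionary algorithm.
import random

def cost(alloc):
    """Assumed: deployment cost grows linearly with total installed cache."""
    return sum(alloc)

def delay(alloc):
    """Assumed: more cache at a router lowers its expected delay contribution."""
    return sum(1.0 / (1 + c) for c in alloc)

def dominates(b, a):
    return (cost(b) <= cost(a) and delay(b) <= delay(a)
            and (cost(b) < cost(a) or delay(b) < delay(a)))

def pareto_front(candidates):
    return [a for a in candidates if not any(dominates(b, a) for b in candidates)]

random.seed(0)
candidates = [[random.randint(0, 20) for _ in range(5)] for _ in range(200)]   # 5 routers
for alloc in pareto_front(candidates)[:5]:
    print(alloc, "cost =", cost(alloc), "delay =", round(delay(alloc), 2))
```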

    A Popularity-aware Centrality Metric for Content Placement in Information Centric Networks

    Information-centric networks enable a multitude of nodes, in particular near the end-users, to provide storage and communication. At the edge, nodes can connect with each other directly to get content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, there has been some work to compute the graph centrality of each node within the topology of the edge network. The centrality is then used to distinguish nodes at the edge of the network. We argue that, for a network with caches, graph centrality is not an appropriate metric. Indeed, a node with low connectivity (and thereby low centrality) that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a popularity-weighted content-based centrality (P-CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of considering content-based centrality, we use this new metric for a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements P-CBC on three random instances of a large-scale realistic network topology comprising 2,896 nodes with three content replication levels. Results show that P-CBC outperforms benchmark caching schemes and yields a roughly 3x improvement in the average cache hit rate.
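    As a brief illustration of the popularity weighting, the sketch below scores a node by its proximity to cached content, with each item counted in proportion to its request share. The weighting form is an assumption; the exact P-CBC definition is given in the paper.

```python
# A minimal sketch of the popularity weighting, assuming each content item counts in
# proportion to its request rate; the exact P-CBC definition is the paper's.
import networkx as nx

def popularity_weighted_cbc(G, replicas, popularity):
    """replicas: content -> caching nodes; popularity: content -> request share."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return {v: sum(popularity[c] / (1.0 + min(dist[v][h] for h in holders))
                   for c, holders in replicas.items())
            for v in G.nodes}

G = nx.path_graph(6)
replicas = {"videoA": {0}, "videoB": {5}}
popularity = {"videoA": 0.9, "videoB": 0.1}            # videoA is requested far more often
print(popularity_weighted_cbc(G, replicas, popularity))
```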

    Performance Analysis and Optimisation of In-network Caching for Information-Centric Future Internet

    The rapid development in wireless technologies and multimedia services has radically shifted the major function of the current Internet from host-centric communication to service-oriented content dissemination, resulting in a mismatch between the protocol design and current usage patterns. Motivated by this significant change, Information-Centric Networking (ICN), which has been attracting ever-increasing attention from the communication networks research community, has emerged as a new clean-slate networking paradigm for the future Internet. By identifying and routing data through unified names, ICN aims at providing natural support for efficient information retrieval over the Internet. As a crucial characteristic of ICN, in-network caching enables users to efficiently access popular contents from on-path routers equipped with ubiquitous caches, leading to enhanced service quality and reduced network loads. Performance analysis and optimisation have been, and continue to be, key research interests in ICN. This thesis focuses on the development of efficient and accurate analytical models for the performance evaluation of ICN caching and on the design of optimal caching management schemes under practical network configurations. This research starts with the proposition of a new analytical model for caching performance under bursty multimedia traffic. The bursty characteristic is captured and closed-form expressions for the cache hit ratio are derived. To investigate the impact of topology and heterogeneous caching parameters on the performance, a comprehensive analytical model is developed to gain valuable insight into the caching performance with heterogeneous cache sizes, service intensities, and content distributions under arbitrary topologies. The accuracy of the proposed models is validated by comparing the analytical results with those obtained from extensive simulation experiments. The analytical models are then used as cost-efficient tools to investigate the impact of key network and content parameters on the performance of caching in ICN. Bursty traffic and heterogeneous caching features have a significant influence on the performance of ICN. Therefore, in order to obtain optimal performance, a caching resource allocation scheme is proposed that leverages the proposed model and targets minimising the total traffic within the network while improving the hit probability at the nodes. The performance results reveal that this caching allocation scheme achieves better caching performance and network resource utilisation than the default homogeneous and random caching allocation strategies. To attain a thorough understanding of the trade-off between the economic aspect and service quality, a cost-aware Quality-of-Service (QoS) optimisation caching mechanism is further designed, aiming at cost-efficiency and QoS guarantees in ICN. A cost model is proposed to take into account the installation and operation costs of ICN under a realistic ISP network scenario, and a QoS model is presented to formulate the service delay and delay jitter in the presence of heterogeneous service requirements and a general probabilistic caching strategy. Numerical results show the effectiveness of the proposed mechanism in achieving better service quality and lower network cost. In this thesis, the proposed analytical models are used to efficiently and accurately evaluate the performance of ICN and investigate the key performance metrics. Leveraging the insights gained from the analytical models, the proposed caching management schemes are able to optimise and enhance the performance of ICN. Finally, several interesting yet challenging research directions are pointed out to extend the outcomes achieved in this thesis.
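    For readers unfamiliar with closed-form cache analysis, the sketch below shows a classic baseline of this kind, the Che approximation for an LRU cache under independent (Poisson) requests. It is a textbook result, not the thesis's bursty-traffic or heterogeneous-topology model, and the Zipf workload is an assumed example.

```python
# A minimal sketch of a classic closed-form cache analysis (the Che approximation for
# LRU under independent requests); this is a textbook baseline, not the thesis's
# bursty-traffic or heterogeneous-topology models.
import math

def che_hit_ratios(rates, cache_size):
    """rates: per-content request rates; returns per-content hit probabilities."""
    def expected_occupancy(t):
        return sum(1 - math.exp(-r * t) for r in rates)
    lo, hi = 0.0, 1.0
    while expected_occupancy(hi) < cache_size:          # bracket the characteristic time T_C
        hi *= 2
    for _ in range(60):                                 # bisection: occupancy(T_C) = cache_size
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expected_occupancy(mid) < cache_size else (lo, mid)
    t_c = (lo + hi) / 2
    return [1 - math.exp(-r * t_c) for r in rates]

# Assumed workload: Zipf(0.8) popularity over 1000 items, cache holding 50 items.
rates = [1.0 / (i ** 0.8) for i in range(1, 1001)]
hits = che_hit_ratios(rates, cache_size=50)
overall = sum(h * r for h, r in zip(hits, rates)) / sum(rates)
print(f"overall hit ratio ~ {overall:.3f}")
```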

    Paralelizando unidades de cache hierárquicas para roteadores ICN

    A key challenge in Information-Centric Networking (ICN) is to develop Content Stores (i.e., cache units) that meet three requirements: large storage space, fast operation, and affordable cost. The so-called Hierarchical Content Store (HCS) is a promising approach to satisfying these requirements jointly. It exploits the temporal correlation between content requests to predict future demands; for example, a user who requests the first minute of a movie is assumed to also request the second minute. Theoretically, this assumption enables proactive content transfers from a relatively large but slow cache area (Layer 2 - L2) to a faster but smaller cache area (Layer 1 - L1). The hierarchical structure has the potential to increase the throughput and size of the Content Store by one order of magnitude while keeping the cost unchanged. However, the development of HCS introduces several practical challenges. HCS requires a careful coupling of the L2 and L1 memory levels, taking into account their transfer rates and sizes, which depend on both hardware aspects (e.g., L2 read rate, use of multiple physical SSDs in parallel, bus speed, etc.) and software aspects (e.g., the SSD controller, memory management, etc.). In this context, this thesis presents two main contributions. First, we propose an architecture that overcomes the inherent HCS bottlenecks by parallelizing multiple HCS units. In summary, the proposed scheme overcomes concurrency challenges (specifically, synchronisation) through deterministic partitioning of content requests among multiple threads. Second, we propose a methodology to investigate the development of HCS by jointly exploiting emulation techniques and analytical modelling. The proposed methodology offers advantages over prototyping- and simulation-based methods. The L2 is emulated to enable the investigation of a wider variety of boundary scenarios (regarding both hardware and software aspects) than would be possible through prototyping with current technologies. Moreover, the emulation employs real code from a prototype for the other components of the HCS (e.g., L1, layer management, and the API) to provide more realistic results than would be obtained through simulation.
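    The deterministic partitioning idea can be sketched as follows: every content name is hashed to exactly one worker thread, so two requests for the same object can never race on the same cache entry and no locking is needed between workers. The queue-per-worker design, the hash choice, and the worker count are assumptions for illustration, not the thesis's implementation.

```python
# A minimal sketch of deterministic request partitioning, assuming one queue and one
# private L1 slice per worker thread; hash choice and worker count are illustrative.
import hashlib
from queue import Queue
from threading import Thread

NUM_WORKERS = 4
queues = [Queue() for _ in range(NUM_WORKERS)]
caches = [dict() for _ in range(NUM_WORKERS)]          # each worker owns its own L1 slice

def shard(content_name: str) -> int:
    """Stable mapping from content name to worker, so equal names never race."""
    digest = hashlib.sha1(content_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

def worker(q, local_cache):
    while True:
        name = q.get()
        if name is None:                               # shutdown signal
            break
        local_cache.setdefault(name, f"chunk:{name}")  # admit/serve without any locking

threads = [Thread(target=worker, args=(queues[i], caches[i])) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for request in ["/video/1/seg0", "/video/1/seg1", "/video/2/seg0"]:
    queues[shard(request)].put(request)                # deterministic dispatch
for q in queues:
    q.put(None)
for t in threads:
    t.join()
print([len(c) for c in caches])
```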

    Optimizing Resource Allocation with Energy Efficiency and Backhaul Challenges

    To meet the requirements of future wireless mobile communication, which aims to increase data rates, coverage, and reliability while reducing energy consumption and latency, and to cope with the explosive mobile traffic growth that imposes high demands on the backhaul for massive content delivery, developing green communication and reducing backhaul requirements have become two significant trends. One of the promising techniques for green communication is wireless power transfer (WPT), which facilitates energy-efficient architectures such as simultaneous wireless information and power transfer (SWIPT). Edge caching, on the other hand, brings content closer to the users by storing popular content in caches installed at the network edge to reduce peak-time traffic, backhaul cost, and latency. In this thesis, we focus on resource allocation techniques for emerging network architectures, i.e. SWIPT-enabled multiple-antenna systems and cache-enabled cellular systems, to tackle the challenges of limited resources such as insufficient energy supply and backhaul capacity. We start with the joint design of beamforming and power transfer ratios for SWIPT in MISO broadcast channels and MIMO relay systems, respectively, aiming to maximize energy efficiency subject to both Quality-of-Service (QoS) constraints and energy harvesting constraints. We then move to content placement optimization for cache-enabled heterogeneous small cell networks so as to minimize the backhaul requirements. In particular, we enable multicast content delivery and cooperative content sharing utilizing maximum distance separable (MDS) codes to provide further caching gains. Both analysis and simulation results are provided throughout the thesis to demonstrate the benefits of the proposed algorithms over state-of-the-art methods.
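    The MDS-coded caching gain can be illustrated with a toy fragment count: a file is encoded into N coded fragments such that any K of them rebuild it, each small cell stores distinct fragments, and the backhaul only supplies what is still missing. The fragment counts and the coverage model below are assumptions, and no actual erasure code is implemented.

```python
# A minimal sketch of the MDS-coded caching idea, assuming an (N, K) code where any K
# coded fragments rebuild a file; fragment counts and the coverage model are toy
# assumptions, and no actual erasure code is implemented here.
K = 4          # fragments needed to reconstruct the file
N = 8          # coded fragments produced by the assumed MDS code

def backhaul_fragments(fragments_per_cell, reachable_cells):
    """Fragments that must still come over the backhaul for one user request."""
    cached = sum(fragments_per_cell[c] for c in reachable_cells)
    return max(0, K - cached)

# Assumed placement: three small cells each store distinct coded fragments.
fragments_per_cell = {"cell_a": 2, "cell_b": 1, "cell_c": 1}
print(backhaul_fragments(fragments_per_cell, {"cell_a"}))                       # 2 from backhaul
print(backhaul_fragments(fragments_per_cell, {"cell_a", "cell_b", "cell_c"}))   # 0: cache suffices
```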