
    Joint Planning of Network Slicing and Mobile Edge Computing in 5G Networks

    Multi-access Edge Computing (MEC) facilitates the deployment of critical applications with stringent QoS requirements, latency in particular. Our paper considers the problem of jointly planning the availability of computational resources at the edge, the slicing of the mobile network and edge computation resources, and the routing of heterogeneous traffic types to the various slices. These aspects are intertwined and must be addressed together to provide the desired QoS to all mobile users and traffic types while keeping costs under control. We formulate our problem as a mixed-integer nonlinear program (MINLP) and define a heuristic, named Neighbor Exploration and Sequential Fixing (NESF), to facilitate its solution. The approach allows network operators to fine-tune the network operation cost and the total latency experienced by users. We evaluate the performance of the proposed model and heuristic against two natural greedy approaches, and show the impact of all the considered parameters (viz., traffic types, tolerable latency, network topology and bandwidth, computation and link capacity) on the defined model. Numerical results demonstrate that NESF is very effective, achieving near-optimal planning and resource allocation solutions in very short computing time, even for large-scale network scenarios.
    Comment: Submitted to IEEE Transactions on Cloud Computing
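The abstract does not detail NESF, but the "sequential fixing" idea it names is a classical pattern: repeatedly solve a relaxation of the integer program and fix, one at a time, the variable whose relaxed value is already closest to an integer. A minimal sketch on a toy 0/1 knapsack (whose LP relaxation is solvable greedily) illustrates the pattern; the instance and all names are assumptions for illustration, not the paper's MINLP or algorithm.

```python
def frac_knapsack(items, cap, fixed):
    """LP relaxation of 0/1 knapsack (items = [(value, weight), ...])
    with some variables pre-fixed to 0 or 1."""
    x = dict(fixed)
    rem = cap - sum(items[i][1] for i, v in fixed.items() if v == 1)
    free = sorted((i for i in range(len(items)) if i not in fixed),
                  key=lambda i: items[i][0] / items[i][1], reverse=True)
    for i in free:  # greedy by value/weight ratio; at most one fractional var
        take = min(1.0, max(0.0, rem / items[i][1]))
        x[i] = take
        rem -= take * items[i][1]
    return x

def sequential_fixing(items, cap):
    """Re-solve the relaxation and fix the variable whose relaxed value
    is closest to an integer (falling back to 0 when rounding up would
    violate the capacity), until all variables are integral."""
    fixed = {}
    while len(fixed) < len(items):
        x = frac_knapsack(items, cap, fixed)
        frac = {i: v for i, v in x.items() if i not in fixed}
        i = min(frac, key=lambda j: min(frac[j], 1 - frac[j]))
        v = round(frac[i])
        used = sum(items[j][1] for j, w in fixed.items() if w == 1)
        if v == 1 and used + items[i][1] > cap:
            v = 0
        fixed[i] = v
    return fixed

print(sequential_fixing([(60, 10), (100, 20), (120, 30)], 50))
# {0: 1, 1: 1, 2: 0}
```

As with any such heuristic, the result is feasible but not necessarily optimal; the appeal is that each step only requires solving the (cheap) relaxation.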

    Enhanced VIP Algorithms for Forwarding, Caching, and Congestion Control in Named Data Networks

    Emerging Information-Centric Networking (ICN) architectures seek to optimally utilize both bandwidth and storage for efficient content distribution over the network. The Virtual Interest Packet (VIP) framework has been proposed to enable joint design of forwarding, caching, and congestion control strategies within the Named Data Networking (NDN) architecture. While the existing VIP algorithms exhibit good performance, they are primarily focused on maximizing network throughput and utility, and do not explicitly consider user delay. In this paper, we develop a new class of enhanced algorithms for joint dynamic forwarding, caching, and congestion control within the VIP framework. These enhanced VIP algorithms adaptively stabilize the network and maximize network utility, while improving delay performance by intelligently making use of VIP information beyond one hop. Generalizing Lyapunov drift techniques, we prove the throughput optimality and characterize the utility-delay tradeoff of the enhanced VIP algorithms. Numerical experiments demonstrate the superior performance of the resulting enhanced algorithms for handling Interest Packets and Data Packets within the actual plane, in terms of low network delay and high network utility.
    Comment: 11 pages, 4 figures, to appear in IEEE GLOBECOM 2016. arXiv admin note: text overlap with arXiv:1310.556
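The baseline VIP forwarding rule (which the enhanced algorithms extend with information beyond one hop) is backpressure-style: for each content, forward virtual interests toward the neighbor with the largest positive VIP-count differential. A minimal one-hop sketch, with a hypothetical toy instance (function and node names are illustrative assumptions, not the paper's notation):

```python
def vip_forward(node, neighbors, vip_counts):
    """For each content, pick the neighbor maximizing the VIP backlog
    differential; contents with no positive differential are held back.
    Returns {content: chosen_neighbor}."""
    decisions = {}
    for k in vip_counts[node]:
        best, best_diff = None, 0
        for n in neighbors:
            diff = vip_counts[node][k] - vip_counts[n][k]
            if diff > best_diff:  # strictly positive backlog gradient
                best, best_diff = n, diff
        if best is not None:
            decisions[k] = best
    return decisions

# Toy instance: node 'a' with neighbors 'b' and 'c', two contents.
counts = {
    'a': {'k1': 10, 'k2': 3},
    'b': {'k1': 4,  'k2': 5},
    'c': {'k1': 7,  'k2': 1},
}
print(vip_forward('a', ['b', 'c'], counts))  # {'k1': 'b', 'k2': 'c'}
```

The enhanced algorithms in the paper improve delay precisely by replacing this one-hop differential with VIP information gathered over multiple hops.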

    Selfish Caching Games on Directed Graphs

    Caching networks can reduce the routing costs of accessing contents by caching contents closer to users. However, cache nodes may belong to different entities and behave selfishly to maximize their own benefits, which often leads to performance degradation for the overall network. While there has been extensive literature on allocating contents to caches to maximize the social welfare, the analysis of selfish caching behaviors remains largely unexplored. In this paper, we model the selfish behaviors of cache nodes as selfish caching games on arbitrary directed graphs with heterogeneous content popularity. We study the existence of a pure strategy Nash equilibrium (PSNE) in selfish caching games, and analyze its efficiency in terms of social welfare. We show that a PSNE does not always exist in arbitrary-topology caching networks. However, if the network does not have a mixed request loop, i.e., a directed loop in which each edge is traversed by at least one content request, we show that a PSNE always exists and can be found in polynomial time. Furthermore, we can avoid mixed request loops by properly choosing request forwarding paths. We then show that the efficiency of Nash equilibria, captured by the price of anarchy (PoA), can be arbitrarily poor if we allow arbitrary content request patterns, and that adding extra cache nodes can make the PoA worse, i.e., a cache paradox occurs. However, when cache nodes have homogeneous request patterns, we show that the PoA is bounded even when allowing arbitrary topologies. We further analyze selfish caching games for cache nodes with limited computational capabilities, and show that an approximate PSNE exists with bounded PoA in certain cases of interest. Simulation results show that increasing the cache capacity in the network improves the efficiency of Nash equilibria, while adding extra cache nodes can degrade the efficiency of Nash equilibria.
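The standard way to search for a PSNE in such games is best-response dynamics: nodes repeatedly switch to their individually cheapest caching choice until no node wants to deviate. A minimal sketch on a toy instance (each node caches one content; access costs 0 locally, 1 at an out-neighbor, 2 from the origin server; all numbers and names are illustrative assumptions, not the paper's model):

```python
def node_cost(i, choice, profile, out_nbrs, demand):
    """Cost to node i of caching `choice`, given everyone else's profile."""
    cost = 0.0
    for k, w in demand[i].items():
        if choice == k:
            c = 0                                   # cached locally
        elif any(profile[j] == k for j in out_nbrs[i]):
            c = 1                                   # at an out-neighbor
        else:
            c = 2                                   # fetch from server
        cost += w * c
    return cost

def best_response_dynamics(out_nbrs, demand, contents, max_rounds=100):
    profile = {i: contents[0] for i in out_nbrs}
    for _ in range(max_rounds):
        changed = False
        for i in out_nbrs:
            best = min(contents,
                       key=lambda k: node_cost(i, k, profile, out_nbrs, demand))
            if node_cost(i, best, profile, out_nbrs, demand) < \
               node_cost(i, profile[i], profile, out_nbrs, demand):
                profile[i], changed = best, True
        if not changed:
            return profile          # no profitable deviation: a PSNE
    return None                     # no convergence (a PSNE may not exist)

# Toy DAG a -> b (no request loop, so a PSNE exists per the paper).
out_nbrs = {'a': ['b'], 'b': []}
demand = {'a': {'x': 3, 'y': 1}, 'b': {'x': 1, 'y': 3}}
print(best_response_dynamics(out_nbrs, demand, ['x', 'y']))
# {'a': 'x', 'b': 'y'}
```

On graphs with mixed request loops the same dynamics can cycle forever, which is exactly the non-existence phenomenon the abstract describes.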

    Rate Allocation and Content Placement in Cache Networks

    We introduce the problem of optimal congestion control in cache networks, whereby both rate allocations and content placements are optimized jointly. We formulate this as a maximization problem with non-convex constraints, and propose solving it via (a) a Lagrangian barrier algorithm and (b) a convex relaxation. We prove different optimality guarantees for each of these two algorithms; our proofs exploit the fact that the non-convex constraints of our problem involve DR-submodular functions.

    Terra: Scalable Cross-Layer GDA Optimizations

    Geo-distributed analytics (GDA) frameworks transfer large datasets over the wide-area network (WAN). Yet existing frameworks often ignore the WAN topology. This disconnect between WAN-bound applications and the WAN itself results in missed opportunities for cross-layer optimizations. In this paper, we present Terra to bridge this gap. Instead of decoupled WAN routing and GDA transfer scheduling, Terra applies scalable cross-layer optimizations to minimize WAN transfer times for GDA jobs. We present a two-pronged approach: (i) a scalable algorithm for joint routing and scheduling that makes fast decisions; and (ii) a scalable, overlay-based enforcement mechanism that avoids expensive switch rule updates in the WAN. Together, they enable Terra to quickly react to WAN uncertainties such as large bandwidth fluctuations and failures in an application-aware manner. Integration with the FloodLight SDN controller and Apache YARN, and evaluation on 4 workloads and 3 WAN topologies, show that Terra improves the average completion times of GDA jobs by 1.55x-3.43x. GDA jobs running with Terra meet 2.82x-4.29x more deadlines and quickly react to WAN-level events.

    How Much Cache is Needed to Achieve Linear Capacity Scaling in Backhaul-Limited Dense Wireless Networks?

    Dense wireless networks are a promising solution to meet the huge capacity demand in 5G wireless systems. However, there are two implementation issues, namely the interference and backhaul issues. To resolve these issues, we propose a novel network architecture called the backhaul-limited cached dense wireless network (C-DWN), where a physical layer (PHY) caching scheme is employed at the base stations (BSs) but only a fraction of the BSs have wired payload backhauls. PHY caching can replace the role of wired backhauls to achieve both a cache-induced MIMO cooperation gain and a cache-assisted multihopping gain. Two fundamental questions are addressed. Can we exploit PHY caching to achieve linear capacity scaling with limited payload backhauls? If so, how much cache is needed? We show that the capacity of the backhaul-limited C-DWN indeed scales linearly with the number of BSs if the BS cache size is larger than a threshold that depends on the content popularity. We also quantify the throughput gain of cache-induced MIMO cooperation over conventional caching schemes (which exploit purely cache-assisted multihopping). Interestingly, the minimum BS cache size needed to achieve a significant cache-induced MIMO cooperation gain is the same as that needed to achieve the linear capacity scaling.
    Comment: 14 pages, 8 figures, accepted by IEEE/ACM Transactions on Networking
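The paper's threshold depends on content popularity; as a hedged back-of-the-envelope illustration of that dependence (not the paper's actual expression), one can compute, under a Zipf(alpha) popularity over N contents, the smallest cache size whose most popular items cover a target fraction of requests:

```python
def min_cache_size(n_contents, alpha, target_hit):
    """Smallest M such that the M most popular contents under a
    Zipf(alpha) law cover at least `target_hit` of all requests.
    Illustrative model, not the paper's threshold formula."""
    weights = [1.0 / (r ** alpha) for r in range(1, n_contents + 1)]
    total = sum(weights)
    covered = 0.0
    for m, w in enumerate(weights, start=1):
        covered += w
        if covered / total >= target_hit:
            return m
    return n_contents

print(min_cache_size(4, 0.0, 0.5))  # 2  (uniform popularity: half the catalog)
```

The qualitative takeaway matches the abstract: the more skewed the popularity (larger alpha), the smaller the cache needed for the same coverage.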

    Study and analysis of mobility, security, and caching issues in CCN

    The existing Internet architecture is IP-centric and has long coped with the needs of Internet users. With recent advancements and emerging technologies, however, ubiquitous connectivity has become the primary focus, and the growing demand for location-independent content calls for a new architecture, making it an active research challenge. The Content Centric Networking (CCN) paradigm has emerged as an alternative to the IP-centric model and is based on name-based forwarding and in-network data caching. It is likely to address certain challenges that IP-based protocols have not solved in wireless networks. Three factors that require significant research in CCN are mobility, security, and caching. While a number of studies have been conducted on CCN and its proposed technologies, to the best of our knowledge none targets all three of these research directions in a single article. This paper is an attempt to discuss the three factors together, within the context of one another. We discuss and analyze the basics of CCN principles along with the distributed properties of caching, mobility, and secure access control, and make comparisons to examine the strengths and weaknesses of each aspect in detail. The final discussion identifies open research challenges and future trends for CCN deployment on a large scale.

    Content placement in networks of similarity caches


    Parallel Simulation of Very Large-Scale General Cache Networks

    In this paper we propose a methodology for the study of general cache networks that is intrinsically scalable and amenable to parallel execution. We contrast two techniques: one that slices the network, and another that slices the content catalog. In the former, each core simulates requests for the whole catalog on a subgraph of the original topology, whereas in the latter each core simulates requests for a portion of the original catalog on a replica of the whole network. Interestingly, we find that when the number of cores increases (and with it the split ratio of the network topology), the overhead of the message passing required to keep nodes consistent actually offsets any benefit of the parallelization: this is strictly due to the correlation among neighboring caches, meaning that requests arriving at a cache allocated to one core may depend on the status of one or more caches allocated to different cores. Even more interestingly, we find that the newly proposed catalog slicing, on the contrary, achieves an ideal speedup in the number of cores. Overall, our system, which we make available as open source software, enables performance assessment of large-scale general cache networks, i.e., comprising hundreds of nodes, trillions of contents, and complex routing and caching algorithms, in minutes of CPU time and with exiguous amounts of memory.
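The catalog-slicing idea can be sketched in a few lines: requests are partitioned by content id, each slice is simulated independently (here serially, with a single LRU cache standing in for each slice's replica of the network), and per-slice results are summed. The proportional capacity split and the single-cache stand-in are illustrative assumptions; the partitioning logic is the point.

```python
from collections import OrderedDict

class LRU:
    """Minimal LRU cache keyed by content id."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()

    def request(self, k):
        if k in self.store:
            self.store.move_to_end(k)     # refresh recency
            return True                   # hit
        self.store[k] = None              # miss: insert
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return False

def simulate_sliced(requests, n_slices, capacity):
    """Partition integer content ids across n_slices independent
    simulations and aggregate the total hit count."""
    caches = [LRU(capacity // n_slices) for _ in range(n_slices)]
    return sum(caches[k % n_slices].request(k) for k in requests)

print(simulate_sliced([1, 2, 1, 2], 2, 2))  # 2
```

Because no state is shared across slices, each slice could be handed to a separate core with zero message passing, which is why this decomposition scales where network slicing does not.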

    Optimal and quasi-optimal energy-efficient storage sharing for opportunistic sensor networks

    This paper investigates optimum distributed storage techniques for data preservation, and eventual dissemination, in opportunistic heterogeneous wireless sensor networks where data collection is intermittent and exhibits spatio-temporal randomness. The proposed techniques involve optimally sharing the sensor nodes' storage and properly handling the storage traffic such that the buffering capacity of the network approaches its total storage capacity with minimum energy. The paper develops an integer linear programming (ILP) model, analyses the emergence of storage traffic in the network, provides performance bounds, assesses performance sensitivities, and develops quasi-optimal decentralized heuristics that can reasonably handle the problem in a practical implementation. These include the Closest Availability (CA) and Storage Gradient (SG) heuristics, whose performance is shown to be within only 10% and 6% of the dynamic optimum allocation, respectively.
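The core of a "closest availability" rule can be sketched as a breadth-first search: when a node cannot store newly collected data locally, it forwards it to the nearest node (in hops) with spare storage. The graph, capacities, and tie-breaking below are illustrative assumptions, not the paper's exact CA heuristic.

```python
from collections import deque

def closest_available(adj, free, src):
    """Return the nearest node (BFS order from src) with free storage,
    or None if the whole reachable network is full."""
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if free[u] > 0:
            return u
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return None

# Toy line topology a - b - c; only c has spare storage.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
free = {'a': 0, 'b': 0, 'c': 2}
print(closest_available(adj, free, 'a'))  # c
```

Minimizing the hop count of each storage transfer is what keeps the energy cost of redistributing data low, which is the objective the CA heuristic approximates.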