
    Edge-Caching Wireless Networks: Performance Analysis and Optimization

    Edge caching has received much attention as an efficient technique to reduce delivery latency and network congestion during peak-traffic times by bringing data closer to end users. Existing works usually design caching algorithms separately from the physical layer. In this paper, we analyse edge-caching wireless networks by taking the caching capability into account when designing the signal transmission. In particular, we investigate multi-layer caching, where both the base station (BS) and the users can store content in their local caches, and we analyse the performance of edge-caching wireless networks under two notable strategies: uncoded and coded caching. Firstly, we propose a coded caching strategy that applies to arbitrary cache sizes. The required backhaul and access rates are derived as functions of the BS and user cache sizes. Secondly, closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Based on the derived formulas, the system EE is maximized via precoding vector design and optimization while satisfying a predefined user request rate. Thirdly, two optimization problems are formulated to minimize the content delivery time for the two caching strategies. Finally, numerical results are presented to verify the effectiveness of the two caching methods. Comment: to appear in IEEE Trans. Wireless Commun.
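    The abstract's closed-form rate and EE expressions are not reproduced here; as a rough, hedged illustration of how a coded strategy trades cache size against delivery load, the sketch below compares the classic single-layer uncoded and Maddah-Ali-Niesen coded delivery rates (the parameters K, N, M and the single-layer model are assumptions; this is not the paper's multi-layer, arbitrary-cache-size analysis).

```python
# Illustrative only: classic single-layer caching rates, not the multi-layer
# BS/user-cache expressions derived in the paper.
def uncoded_rate(K, N, M):
    """Delivery load (in files) when each of K users caches an M/N fraction
    of every one of N files and the missing parts are sent separately."""
    return K * (1.0 - M / N)

def coded_rate(K, N, M):
    """Maddah-Ali--Niesen coded delivery load (exact when K*M/N is an integer):
    coded multicast chunks add the global gain factor 1/(1 + K*M/N)."""
    return K * (1.0 - M / N) / (1.0 + K * M / N)

if __name__ == "__main__":
    K, N = 10, 100                       # assumed: 10 users, 100-file library
    for M in (0, 10, 20, 50, 100):       # user cache size in files
        print(f"M={M:3d}  uncoded={uncoded_rate(K, N, M):6.2f}  "
              f"coded={coded_rate(K, N, M):6.2f}")
```

    In this toy comparison, both loads fall as the cache grows, but the coded rate falls much faster thanks to the multicast gain, which is the kind of cache-size/rate trade-off the paper characterizes in closed form.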

    On distributed mobile edge computing

    Mobile Cloud Computing (MCC) has been proposed to offload the workloads of mobile applications from mobile devices to the cloud, in order not only to reduce the energy consumption of mobile devices but also to accelerate the execution of mobile applications. Owing to the long End-to-End (E2E) delay between mobile devices and the cloud, however, offloading the workloads of many interactive mobile applications to the cloud is not suitable: these applications require both a large amount of computing resources to process their workloads and a low E2E delay between mobile devices and those resources, which current MCC technology cannot provide. In order to reduce the E2E delay, a novel cloudlet network architecture is proposed to bring computing and storage resources from the remote cloud to the mobile edge. In the cloudlet network, each mobile user is associated with a specific Avatar (i.e., a dedicated Virtual Machine (VM) providing computing and storage resources to its mobile user) in a nearby cloudlet via its associated Base Station (BS). Thus, mobile users can offload their workloads to their Avatars with low E2E delay (i.e., one wireless hop). However, mobile users may roam among BSs, and so the E2E delay between mobile users and their Avatars may degrade if the Avatars remain in their original cloudlets. Thus, Avatar handoff is proposed to migrate an Avatar from one cloudlet to another in order to reduce the E2E delay between the Avatar and its mobile user. The LatEncy aware Avatar handDoff (LEAD) algorithm is designed to determine the location of each mobile user's Avatar in each time slot so as to minimize the average E2E delay between all mobile users and their Avatars. The performance of LEAD is demonstrated via extensive simulations.

    The cloudlet network architecture not only facilitates mobile users in offloading their computational tasks but also empowers the Internet of Things (IoT). Popular IoT resources are proposed to be cached in nearby brokers, which are application-layer middleware nodes hosted by cloudlets in the cloudlet network, in order to reduce the energy consumption of servers. In addition, an Energy Aware and latency guaranteed dynamic reSourcE caching (EASE) strategy is proposed to enable each broker to cache suitable popular resources such that the energy consumption of the servers is minimized while the average delay of delivering the contents of the resources to the corresponding clients is guaranteed. The performance of EASE is demonstrated via extensive simulations.

    The future work comprises two parts. First, caching popular IoT resources in nearby brokers may incur unbalanced traffic loads among brokers, thus increasing the average delay of delivering the contents of the resources; how to balance the traffic loads among brokers to speed up the IoT content delivery process therefore requires further investigation. Second, a drone-assisted mobile access network architecture will be briefly investigated to accelerate communications between mobile users and their Avatars.
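    The LEAD handoff policy itself is not spelled out in the abstract; purely as a hypothetical sketch of the kind of per-time-slot decision such a policy has to make, the snippet below greedily places one user's Avatar in the cloudlet that minimizes E2E delay plus a one-off migration penalty. The function name, the cost model, and all values are illustrative assumptions, not the actual LEAD algorithm.

```python
# Hypothetical greedy Avatar placement for one user in one time slot.
# delay[c]       : E2E delay (ms) from the user's current BS to cloudlet c
# migration_cost : penalty (ms-equivalent) charged if the Avatar must be
#                  migrated away from the cloudlet it currently occupies
def choose_cloudlet(delay, current_cloudlet, migration_cost):
    best_c, best_cost = current_cloudlet, delay[current_cloudlet]
    for c, d in delay.items():
        cost = d + (migration_cost if c != current_cloudlet else 0.0)
        if cost < best_cost:
            best_c, best_cost = c, cost
    return best_c

# Example: the user has roamed to a BS that is close to cloudlet "B".
delay = {"A": 42.0, "B": 6.0, "C": 18.0}
print(choose_cloudlet(delay, current_cloudlet="A", migration_cost=10.0))  # -> B
```

    This greedy rule is purely myopic; LEAD's actual formulation, evaluated in the thesis via simulation, decides placements slot by slot with the goal of minimizing the average E2E delay across all users.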

    Pricing and Resource Allocation via Game Theory for a Small-Cell Video Caching System

    Evidence indicates that downloading on-demand videos accounts for a dramatic increase in data traffic over cellular networks. Caching popular videos in the storage of small-cell base stations (SBS), namely small-cell caching, is an efficient technology for reducing the transmission latency whilst mitigating redundant transmissions of popular videos over backhaul channels. In this paper, we consider a commercialized small-cell caching system consisting of a network service provider (NSP), several video retailers (VR), and mobile users (MU). The NSP leases its SBSs to the VRs for the purpose of making profits, and the VRs, after storing popular videos in the rented SBSs, can provide faster local video transmissions to the MUs, thereby gaining more profits. We conceive this system within the framework of a Stackelberg game by treating the SBSs as a specific type of resource. We first model the MUs and SBSs as two independent Poisson point processes and, via stochastic geometry, derive the probability of the specific event that an MU obtains the video of its choice directly from the memory of an SBS. Then, based on the derived probability, we formulate a Stackelberg game to jointly maximize the average profit of both the NSP and the VRs. We also investigate the Stackelberg equilibrium by solving a non-convex optimization problem. With the aid of this game-theoretic framework, we shed light on the relationship between four important factors: the optimal price of leasing an SBS, the allocation of SBSs among the VRs, the storage size of the SBSs, and the popularity distribution of the VRs. Monte-Carlo simulations show that our stochastic-geometry-based analytical results closely match the empirical ones. Numerical results are also provided to quantify the proposed game-theoretic framework by showing its efficiency in pricing and resource allocation. Comment: Accepted to appear in IEEE Journal on Selected Areas in Communications, special issue on Video Distribution over Future Internet
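    The cache-hit probability in the paper is derived with stochastic geometry over the two Poisson point processes; as a loose, assumed companion to that derivation, the snippet below computes a much simpler toy quantity: the chance that a requested video (drawn from a Zipf popularity profile) is both among the top titles cached at every SBS and reachable from at least one SBS within a fixed radius (PPP void probability). None of the parameters or modelling choices come from the paper.

```python
import math

def zipf_pmf(n_videos, s):
    """Zipf popularity over n_videos titles with exponent s (assumed model)."""
    w = [1.0 / r ** s for r in range(1, n_videos + 1)]
    z = sum(w)
    return [x / z for x in w]

def local_hit_probability(lam_sbs, radius, n_videos, cache_size, s):
    """Toy estimate of P(an MU is served a requested video by a nearby SBS):
    every SBS caches the cache_size most popular titles, SBS locations form a
    PPP of density lam_sbs per square metre, and the video can be fetched
    locally iff at least one SBS lies within `radius` metres of the MU."""
    p_cached = sum(zipf_pmf(n_videos, s)[:cache_size])             # request falls in the cache
    p_covered = 1.0 - math.exp(-lam_sbs * math.pi * radius ** 2)   # >= 1 SBS in range
    return p_cached * p_covered

# e.g. 20 SBSs per km^2, 200 m range, 1000 videos, 50 cached, Zipf exponent 0.8
print(local_hit_probability(20e-6, 200.0, 1000, 50, 0.8))
```

    In the paper, the derived probability feeds the profit functions of the Stackelberg game; the toy version here is only meant to make the "popular-content placement plus coverage" intuition concrete.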

    Non Parametric Distributed Inference in Sensor Networks Using Box Particles Messages

    This paper deals with the problem of inference in distributed systems where the probability model is stored in a distributed fashion. Graphical models provide powerful tools for modeling this kind of problem. Inspired by the box particle filter, which combines interval analysis with particle filtering to solve temporal inference problems, this paper introduces a belief-propagation-like message-passing algorithm that uses bounded-error methods to solve the inference problem defined on an arbitrary graphical model. We present the theoretical derivation of the novel algorithm and test its performance on the problem of calibration in wireless sensor networks, that is, the positioning of a number of randomly deployed sensors with respect to a reference defined by a set of anchor nodes whose positions are known a priori. While achieving comparable or better accuracy, the new algorithm offers a substantial reduction in the amount of information circulating in the network and in the required computation time.
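    The abstract does not detail the interval machinery behind the box messages; as a minimal sketch of the bounded-error idea only (not the proposed message-passing algorithm), the snippet below shrinks a 2-D position box so that it stays consistent with interval range measurements to known anchors, the basic operation a box-particle calibration scheme builds on. All names and values are illustrative assumptions.

```python
# Crude bounded-error contraction: intersect an axis-aligned box with the
# outer bound of the constraint  r_lo <= dist(position, anchor) <= r_hi.
# Along a single axis only the upper bound r_hi is informative, so the box
# is intersected with [a - r_hi, a + r_hi] on each coordinate.
def contract_box(box, anchor, r_lo, r_hi):
    (x_lo, x_hi), (y_lo, y_hi) = box
    ax, ay = anchor
    x_lo, x_hi = max(x_lo, ax - r_hi), min(x_hi, ax + r_hi)
    y_lo, y_hi = max(y_lo, ay - r_hi), min(y_hi, ay + r_hi)
    if x_lo > x_hi or y_lo > y_hi:
        return None                      # measurements are inconsistent
    return (x_lo, x_hi), (y_lo, y_hi)

# One box "particle" refined by two anchors with +/- 1 m of ranging error.
box = ((0.0, 100.0), (0.0, 100.0))
for anchor, r in [((10.0, 10.0), 30.0), ((80.0, 20.0), 50.0)]:
    box = contract_box(box, anchor, r - 1.0, r + 1.0)
print(box)   # -> a smaller box consistent with both range intervals
```

    Because a box-particle representation uses a small set of such boxes rather than a large cloud of point particles, far less data needs to circulate between neighbouring nodes, which is the efficiency argument the abstract makes.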