17 research outputs found

    Dynamic Offloading Design in Time-Varying Mobile Edge Networks with Deep Reinforcement Learning Approach

    Mobile edge computing (MEC) is regarded as a promising wireless access architecture for alleviating the intensive computation burden at resource-limited mobile terminals (MTs). Allowing the MTs to offload partial tasks to MEC servers can significantly decrease the task processing delay. In this study, to minimize the processing delay for a multi-user MEC system, we jointly optimize the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection in a dynamic environment with time-varying task arrivals and wireless channels. The reinforcement learning (RL) technique is utilized to deal with the considered problem. Two deep RL strategies, namely deep Q-learning network (DQN) and deep deterministic policy gradient (DDPG), are proposed to learn the offloading policies adaptively and efficiently. The proposed DQN strategy takes the MEC server selection as its sole action and obtains the remaining variables with a convex optimization approach, whereas the DDPG strategy takes all dynamic variables as actions. Numerical results demonstrate that both proposed strategies outperform existing schemes, and that the DDPG strategy is superior to the DQN strategy because it can learn all variables online, although at relatively high computational complexity.
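The DQN strategy described above treats MEC server selection as the single discrete action. As a minimal illustrative sketch of that idea, the toy below uses tabular Q-learning (not a deep network) to pick a server under a randomly varying channel state; all sizes, the delay model, and hyperparameters are assumptions, not the paper's.

```python
import random

random.seed(0)

# Illustrative sketch: tabular Q-learning that selects an MEC server
# (the paper's DQN uses a neural Q-function and solves the remaining
# variables via convex optimization; everything here is a stand-in).
N_CHANNEL_STATES = 4   # quantized channel-quality levels (assumed)
N_SERVERS = 3          # candidate MEC servers (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_SERVERS for _ in range(N_CHANNEL_STATES)]

def delay(state, server):
    # Stand-in for the task-processing delay returned by the environment.
    return abs(state - server) + random.random()

def step(state):
    # Epsilon-greedy selection of the MEC server (the "unique action").
    if random.random() < EPS:
        a = random.randrange(N_SERVERS)
    else:
        a = max(range(N_SERVERS), key=lambda s: Q[state][s])
    r = -delay(state, a)                      # reward = negative delay
    nxt = random.randrange(N_CHANNEL_STATES)  # time-varying channel
    Q[state][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][a])
    return nxt

s = 0
for _ in range(2000):
    s = step(s)
```

A DDPG variant would instead emit all continuous variables (splitting ratio, power allocation) directly from an actor network, which is what lets it learn every variable online.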

    A Decision Framework for Allocation of Constellation-Scale Mission Compute Functionality to Ground and Edge Computing

    This paper explores constellation-scale architectural trades, highlights dominant factors, and presents a decision framework for migrating or sharing mission compute functionality between ground and space segments. Over recent decades, sophisticated logic has been developed for scheduling and tasking of space assets, as well as processing and exploitation of satellite data, and this software has traditionally been hosted in ground computing. Current efforts exist to migrate this software to ground cloud-based services. The option and motivation to host some of this logic "at the edge" within the space segment have arisen as space assets are proliferated, are interlinked via transport networks, and are networked with multi-domain assets. Examples include edge-based Battle Management, Command, Control, and Communications (BMC3) being developed by the Space Development Agency and future onboard computing for commercial constellations. Edge computing pushes workload, computation, and storage closer to data sources and onto devices at the edge of the network. Potential benefits of edge computing include increased speed of response, system reliability, robustness to disrupted networks, and data security. Yet space-based edge nodes have disadvantages, including power and mass limitations, constant physical motion, difficulty of physical access, and potential vulnerability to attacks. This paper presents a structured decision framework with justifying rationale to provide insights and begin to address a key question: what mission compute functionality should be allocated to the space-based edge, and under what mission or architectural conditions, versus to conventional ground-based systems? The challenge is to identify the Pareto-dominant trades and impacts to mission success.
    This framework will not exhaustively address all missions, architectures, and CONOPs; however, it is intended to provide generalized guidelines and heuristics to support architectural decision-making. Via effects-based simulation and analysis, a set of hypotheses about ground- and edge-based architectures is evaluated and summarized along with prior research. Results for a set of key metrics and decision drivers show that edge computing for specific functionality is quantitatively valuable, especially for interoperable, multi-domain, collaborative assets.
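One common way to operationalize a decision framework like the one described is a weighted-criteria comparison of the candidate segments. The sketch below is purely illustrative: the criteria echo the benefits and disadvantages named in the abstract, but the weights and 0-10 scores are hypothetical placeholders, not values from the paper.

```python
# Illustrative weighted-score comparison for hosting a mission-compute
# function on the ground segment vs. a space-based edge node.
# Weights and scores are hypothetical (0-10 scale, higher is better).
CRITERIA = {
    # name:               (weight, ground score, edge score)
    "response_latency":    (0.30, 4, 9),  # edge: faster response
    "network_robustness":  (0.20, 5, 8),  # edge: survives disruption
    "power_mass_budget":   (0.20, 9, 3),  # ground: no SWaP limits
    "physical_access":     (0.15, 9, 2),  # ground: easy maintenance
    "data_security":       (0.15, 6, 8),
}

def score(segment_index):
    """Weighted total for a segment: 1 = ground, 2 = edge."""
    return sum(w * vals[segment_index - 1]
               for w, *vals in CRITERIA.values())

ground_score, edge_score = score(1), score(2)
```

In practice the weights themselves would shift with mission conditions (e.g., a latency-critical, disrupted-network CONOP inflates the first two rows), which is exactly the conditional allocation question the framework addresses.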

    Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing

    Scavenging the idle computation resources of the enormous number of mobile devices can provide a powerful platform for local mobile cloud computing. This vision can be realized by peer-to-peer cooperative computing between edge devices, referred to as co-computing. This paper considers a co-computing system in which a user offloads the computation of input data to a helper. The helper controls the offloading process with the objective of minimizing the user's energy consumption, based on a predicted CPU-idling profile that specifies the helper's computation resources available for co-computing. Considering the scenario where the user has a one-shot input-data arrival and the helper buffers offloaded bits, the problem of energy-efficient co-computing is decomposed into two sub-problems: a slave problem corresponding to adaptive offloading and a master problem corresponding to data partitioning. Given a fixed offloaded data size, adaptive offloading aims at minimizing the offloading energy consumption by controlling the offloading rate under deadline and buffer constraints. By deriving necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policies and propose algorithms for computing them. Furthermore, we show that the problem of optimally partitioning data between offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Last, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user, accounting for data-causality constraints. Simulation results verify the effectiveness of the proposed algorithms.
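The master (data-partitioning) problem described above is convex in the offloaded data size, so a projected (sub)gradient method converges to the optimum. The sketch below uses illustrative quadratic stand-ins for the two energy terms (not the paper's models); since these stand-ins are smooth, the subgradient reduces to the ordinary gradient.

```python
# Illustrative sketch of the data-partitioning step: choose how many of
# the L input bits to offload (l) vs. compute locally (L - l) so that
# total user energy is minimized. Energy models and all constants below
# are assumed convex stand-ins, not the paper's expressions.
L = 1000.0          # total input-data size in bits (assumed)
A, B = 2e-6, 5e-6   # offload / local energy coefficients (assumed)

def total_energy(l):
    # Convex in l: offloading energy + local-computing energy.
    return A * l ** 2 + B * (L - l) ** 2

def projected_subgradient(l0=0.0, step=1e5, iters=200):
    """Projected (sub)gradient descent on [0, L]."""
    l = l0
    for _ in range(iters):
        g = 2 * A * l - 2 * B * (L - l)     # gradient of total_energy
        l = min(max(l - step * g, 0.0), L)  # project onto [0, L]
    return l

l_star = projected_subgradient()  # optimum is B*L/(A+B) for this model
```

The closed-form optimum for these quadratic stand-ins is l* = B·L/(A+B), which the iteration should match; the paper's actual models additionally carry deadline, buffer, and (in the bursty case) data-causality constraints.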