Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing
With the rapid development of smartphones, enormous amounts of data are generated that usually require intensive, real-time computation. Nevertheless, quality of service (QoS) can hardly be met due to the tension between resource-limited devices (battery, CPU power) and computation-intensive applications. Mobile-edge computing (MEC), emerging as a promising technique, can be used to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to edge servers and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, thereby greatly improving QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme for a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. Execution time, energy consumption, execution cost, and bonus score against both task data size and latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed herein; the results show that dynamic threshold setting achieves optimal task scheduling. We believe this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-edge applications.
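The priority-based scheduling with a dynamic threshold described above can be sketched as follows. The scoring function, task fields, and threshold value are illustrative assumptions, not taken from the paper; the idea is that tasks with tight latency requirements are boosted past a tunable urgency threshold, and the remainder are ordered by a score combining deadline and data size.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float
    name: str = field(compare=False)
    data_kb: float = field(compare=False)      # task input data size
    deadline_ms: float = field(compare=False)  # latency requirement

def make_task(name, data_kb, deadline_ms, urgency_threshold_ms=50.0):
    # Hypothetical scoring: tighter deadlines and smaller payloads run first;
    # tasks under the (dynamically tunable) threshold get a priority boost.
    score = deadline_ms + 0.1 * data_kb
    if deadline_ms < urgency_threshold_ms:
        score -= 100.0
    return Task(priority=score, name=name, data_kb=data_kb, deadline_ms=deadline_ms)

def schedule(tasks):
    """Serve tasks in ascending priority score (lower score = served earlier)."""
    heap = list(tasks)
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).name)
    return order

tasks = [make_task("video", 500, 40), make_task("sensor", 5, 200), make_task("ocr", 50, 120)]
print(schedule(tasks))  # ['video', 'ocr', 'sensor']
```

Lowering or raising `urgency_threshold_ms` changes which tasks jump the queue, which is the knob the paper's dynamic threshold discussion tunes.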
Multi-user Resource Control with Deep Reinforcement Learning in IoT Edge Computing
By leveraging the concept of mobile edge computing (MEC), the massive amount of data generated by a large number of Internet of Things (IoT) devices can be offloaded to an MEC server at the edge of the wireless network for further computationally intensive processing. However, due to the resource constraints of IoT devices and the wireless network, both the communication and computation resources need to be allocated and scheduled efficiently for better system performance. In this paper, we propose a joint computation offloading and multi-user scheduling algorithm for an IoT edge computing system that minimizes the long-term average weighted sum of delay and power consumption under stochastic traffic arrivals. We formulate the dynamic optimization problem as an infinite-horizon average-reward continuous-time Markov decision process (CTMDP) model. One critical challenge in solving this MDP problem for multi-user resource control is the curse of dimensionality: the state space of the MDP model and the computational complexity increase exponentially with the growing number of users or IoT devices. To overcome this challenge, we use deep reinforcement learning (RL) techniques and propose a neural network architecture to approximate the value functions of the post-decision system states. The designed algorithm for solving the CTMDP problem supports a semi-distributed, auction-based implementation, in which the IoT devices submit bids to the base station (BS), which makes the resource control decisions centrally. Simulation results show that the proposed algorithm provides significant performance improvement over the baseline algorithms, and also outperforms RL algorithms based on other neural network architectures.
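The central allocation step of the semi-distributed auction can be sketched as below. The bid values, device names, and single-resource assumption are hypothetical; in the paper, a device's bid would come from its learned value-function estimate rather than a fixed number.

```python
def allocate(bids, num_channels=2):
    """Central BS step of a hypothetical auction: each IoT device submits a bid
    (its estimated value of being scheduled, e.g. from a learned value function),
    and the BS grants the available channels to the highest bidders."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    return [device for device, _ in ranked[:num_channels]]

bids = {"dev1": 0.7, "dev2": 1.3, "dev3": 0.2}
print(allocate(bids))  # ['dev2', 'dev1']
```

This keeps the per-device computation (forming a bid) local while the conflict resolution stays centralized at the BS, which is what makes the scheme semi-distributed.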
Multi-Objective Computation Sharing in Energy and Delay Constrained Mobile Edge Computing Environments
In a mobile edge computing (MEC) network, mobile devices, also called edge clients, offload their computations to multiple edge servers that provide additional computing resources. Since the edge servers are placed at the network edge, transmission delays between edge servers and clients are shorter than those of cloud computing. In addition, edge clients can offload their tasks to other nearby edge clients with available computing resources by exploiting the Fog Computing (FC) paradigm. A major challenge in MEC and FC networks is to assign the tasks from edge clients to edge servers, as well as to other edge clients, so that the tasks are completed with minimum energy consumption and processing delay. In this paper, we model task offloading in MEC as a constrained multi-objective optimization problem (CMOP) that minimizes both the energy consumption and the task processing delay of the mobile devices. To solve the CMOP, we design an evolutionary algorithm that can efficiently find a representative sample of the best trade-offs between energy consumption and task processing delay, i.e., the Pareto-optimal front. Compared to existing approaches for task offloading in MEC, our approach finds offloading decisions with lower energy consumption and task processing delay.
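The Pareto-optimal front the evolutionary algorithm searches for can be illustrated with a minimal non-dominated filter over candidate (energy, delay) pairs; the sample points are made up, and a real solver would evolve the candidate set rather than enumerate it.

```python
def pareto_front(points):
    """Return the non-dominated (energy, delay) pairs: a point is dominated if
    some other point is no worse in both objectives and differs from it."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate offloading decisions as (energy, delay) costs.
candidates = [(1, 9), (2, 7), (3, 8), (4, 3), (5, 4)]
print(pareto_front(candidates))  # [(1, 9), (2, 7), (4, 3)]
```

Every point on the returned front is a distinct trade-off: improving one objective from there necessarily worsens the other, which is exactly the set of "best trade-offs" the abstract refers to.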
QoE-aware Computation Offloading Scheduling to Capture Energy-Latency Tradeoff in Mobile Clouds
Computation offloading is a promising application of mobile clouds that can save energy of mobile devices via optimal transmission scheduling of mobile-to-cloud task offloading. Existing approaches to computation offloading have addressed various aspects of the tradeoff between energy consumption and application latency, but none of them explicitly considered the dependency of the optimization on the mobile user's context, e.g., user tendency or the remaining battery level. This paper captures such a user-centric perspective in the energy-latency tradeoff via a quality-of-experience (QoE) based cost function, and formulates the problem of data offloading scheduling as dynamic programming (DP). To derive the optimal schedule, we first introduce a database-assisted optimal DP algorithm and then propose a suboptimal but computationally efficient approximate DP (ADP) algorithm based on the limited lookahead technique. An extensive numerical analysis has revealed that the ADP algorithm achieves near-optimal performance, incurring only 2.27% extra cost on average over the optimum, and enhances QoE by up to 4.46 times compared to energy-only scheduling.
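The limited-lookahead idea can be sketched with a toy send-or-wait decision. The cost model below is an illustrative assumption (a per-slot energy price standing in for channel quality, plus a fixed delay penalty per waited slot), not the paper's QoE cost function; the point is that the policy evaluates only `depth` future slots instead of solving the full DP.

```python
def lookahead_cost(energy, delay_penalty, t, depth):
    """Hypothetical cost-to-go: at slot t, either transmit (pay the time-varying
    energy price) or wait one slot (pay a delay penalty), looking only `depth`
    slots ahead instead of recursing to the end of the horizon."""
    if depth == 0 or t == len(energy) - 1:
        return energy[t]  # truncate the lookahead: transmit at this point
    send = energy[t]
    wait = delay_penalty + lookahead_cost(energy, delay_penalty, t + 1, depth - 1)
    return min(send, wait)

def decide(energy, delay_penalty, t, depth=2):
    """Limited-lookahead policy: transmit now iff that is no worse than waiting."""
    send = energy[t]
    wait = delay_penalty + lookahead_cost(energy, delay_penalty, t + 1, depth - 1)
    return "send" if send <= wait else "wait"

# Energy price per slot (e.g. predicted channel quality) and a per-slot delay cost.
prices = [5.0, 2.0, 4.0, 1.0]
print(decide(prices, delay_penalty=1.0, t=0))  # 'wait' (slot 1 is much cheaper)
```

Increasing `depth` trades computation for decision quality, which is the suboptimal-but-efficient tradeoff the ADP algorithm exploits.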
Towards More Efficient 5G Networks via Dynamic Traffic Scheduling
Department of Electrical Engineering
5G communications adopt various advanced technologies, such as mobile edge computing and unlicensed-band operations, to meet the goals of 5G services such as enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC). Specifically, by placing cloud resources at the edge of the radio access network (the so-called mobile edge cloud), mobile devices can be served with lower latency compared to traditional remote-cloud-based services. In addition, by utilizing unlicensed spectrum, 5G can mitigate the problem of scarce spectrum resources, enabling higher-throughput services.
To enhance user-experienced service quality, however, the aforementioned approaches should be fine-tuned further by considering various network performance metrics together. For instance, the mechanisms for mobile edge computing, e.g., computation offloading to the edge cloud, should not be optimized from the perspective of a single metric such as latency, since actual user satisfaction comes from multi-domain factors including latency, throughput, monetary cost, etc. Moreover, blindly combining unlicensed spectrum resources with licensed ones does not always guarantee performance enhancement, since it is crucial for unlicensed-band operations to achieve peaceful yet efficient coexistence with other competing technologies (e.g., Wi-Fi).
This dissertation proposes a focused resource management framework for more efficient 5G network operation, as follows. First, Quality of Experience (QoE) is adopted to quantify user satisfaction in mobile edge computing, and an optimal transmission scheduling algorithm is derived to maximize user QoE in computation offloading scenarios. Next, regarding unlicensed-band operations, two efficient mechanisms are introduced to improve the coexistence performance between LTE-LAA and Wi-Fi networks. In particular, we develop a dynamic energy-detection thresholding algorithm for LTE-LAA so that LTE-LAA devices can detect Wi-Fi frames in a lightweight way. In addition, we propose an AI-based network configuration for an LTE-LAA network with which an LTE-LAA operator can fine-tune its coexistence parameters (e.g., the CCA threshold) to better protect coexisting Wi-Fi while achieving better performance than the legacy LTE-LAA in the standards. Via extensive evaluations using computer simulations and a USRP-based testbed, we have verified that the proposed framework can enhance the efficiency of 5G.