Task-Oriented Delay-Aware Multi-Tier Computing in Cell-free Massive MIMO Systems
Multi-tier computing can enhance task computation by distributing tasks across multi-tier computing nodes. In this paper, we propose a cell-free massive multiple-input multiple-output (MIMO) aided computing system that deploys multi-tier computing nodes to improve computation performance. First, we investigate the computational latency and the total energy consumption for task computation, regarded together as the total cost. We then formulate a total cost minimization problem to design the bandwidth allocation and task allocation, while considering realistic heterogeneous delay requirements of the computational tasks. Due to the binary task allocation variable, the formulated optimization problem is non-convex. Therefore, we decouple the original problem into bandwidth allocation and task allocation subproblems. As the bandwidth allocation subproblem is convex for a given task allocation strategy, we solve it first using standard convex optimization techniques. Based on the asymptotic property of the received signal-to-interference-plus-noise ratio (SINR) in the cell-free massive MIMO setting and the bandwidth allocation solution, we then formulate a dual problem to solve the task allocation subproblem by relaxing the binary constraint with a Lagrange partial relaxation that accounts for the heterogeneous task delay requirements. Finally, simulation results demonstrate that the proposed task offloading scheme outperforms the benchmark schemes, and that the minimum-cost offloading strategy under heterogeneous delay requirements may be governed by the asymptotic property of the received SINR in the proposed cell-free massive MIMO-aided multi-tier computing system. This work was supported by the National Key Project under Grant 2020YFB1807700
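A minimal sketch of the decoupled solution structure described above, under heavily simplified and hypothetical assumptions: per-(task, tier) costs and delays are given as plain arrays, the convex bandwidth subproblem is replaced by a toy proportional split, and the binary task allocation is handled by a subgradient-updated Lagrangian relaxation of the per-task delay constraint. The function names and cost model are illustrative, not the paper's.

```python
import numpy as np

def allocate_bandwidth(gains, total_bw):
    """Toy convex bandwidth split, proportional to the square root of the
    channel gain (a stand-in for the paper's convex-solver step)."""
    w = np.sqrt(gains)
    return total_bw * w / w.sum()

def assign_tasks(cost, delay, delay_max, steps=50, lr=0.1):
    """Lagrangian relaxation of the binary task-to-tier assignment.
    cost, delay: (num_tasks, num_tiers) arrays; delay_max: (num_tasks,) limits."""
    lam = np.ones(cost.shape[0])              # one multiplier per delay constraint
    for _ in range(steps):
        # For fixed multipliers the relaxed problem decouples per task:
        # pick the tier minimizing cost plus the weighted delay penalty.
        choice = np.argmin(cost + lam[:, None] * delay, axis=1)
        violation = delay[np.arange(len(choice)), choice] - delay_max
        lam = np.maximum(0.0, lam + lr * violation)   # subgradient update
    return choice

rng = np.random.default_rng(0)
bw = allocate_bandwidth(gains=rng.uniform(0.5, 2.0, size=4), total_bw=20e6)
tiers = assign_tasks(cost=rng.uniform(1, 5, (6, 3)),
                     delay=rng.uniform(0.1, 1.0, (6, 3)),
                     delay_max=rng.uniform(0.4, 0.8, 6))
print(bw, tiers)
```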
A joint scheduling and content caching scheme for energy harvesting access points with multicast
© 2017 IEEE. In this work, we investigate a system where users are served by an access point equipped with energy harvesting and caching mechanisms. Focusing on the design of efficient content delivery scheduling, we propose a joint scheduling and caching scheme. The scheduling problem is formulated as a Markov decision process and solved by an online learning algorithm. To deal with the large state space, we apply a linear approximation method to the state-action value functions, which significantly reduces the memory required to store the function values. In addition, preference learning is incorporated to speed up convergence when dealing with requests from users that have obvious content preferences. Simulation results confirm that the proposed scheme outperforms the baseline scheme in terms of convergence and system throughput, especially when personal preferences are concentrated on one or two contents.
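A minimal sketch of Q-learning with linear function approximation, the technique the abstract uses to keep the state space tractable; the feature map, action set, and reward are left abstract and would come from the scheduling/caching model. The class and parameter names are hypothetical.

```python
import numpy as np

class LinearQ:
    def __init__(self, n_features, n_actions, alpha=0.01, gamma=0.95, eps=0.1):
        self.w = np.zeros((n_actions, n_features))   # one weight vector per action
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def q(self, phi):
        return self.w @ phi                           # Q(s, a) = w_a . phi(s)

    def act(self, phi):
        if np.random.rand() < self.eps:               # epsilon-greedy exploration
            return int(np.random.randint(self.w.shape[0]))
        return int(np.argmax(self.q(phi)))

    def update(self, phi, a, reward, phi_next):
        td_target = reward + self.gamma * np.max(self.q(phi_next))
        td_error = td_target - self.q(phi)[a]
        self.w[a] += self.alpha * td_error * phi       # semi-gradient TD update
```

Only one weight vector per action is stored, so memory grows with the feature dimension rather than with the number of states, which is the memory saving the abstract refers to.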
Cyberattack detection in mobile cloud computing: A deep learning approach
© 2018 IEEE. With the rapid growth of mobile applications and cloud computing, mobile cloud computing has attracted great interest from both academia and industry. However, mobile cloud applications face security issues such as data integrity, user confidentiality, and service availability. A preventive approach to such problems is to detect and isolate cyber threats before they can cause serious impact on the mobile cloud computing system. In this paper, we propose a novel framework that leverages a deep learning approach to detect cyberattacks in mobile cloud environments. Through experimental results, we show that our proposed framework not only recognizes diverse cyberattacks but also achieves high accuracy (up to 97.11%) in detecting them. Furthermore, we present comparisons with current machine learning-based approaches to demonstrate the effectiveness of our proposed solution.
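A minimal sketch of the kind of feed-forward detector the abstract suggests, assuming pre-extracted numeric traffic features and binary benign/attack labels; the feature dimension, architecture, and hyperparameters are illustrative placeholders rather than the paper's actual network.

```python
import torch
import torch.nn as nn

N_FEATURES = 40   # hypothetical width of a pre-extracted traffic-feature vector

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),            # logits for benign / attack
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(features, labels):
    """One supervised update on a batch of labelled traffic records."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```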
Joint Trajectory and Beamforming Optimization for Secure UAV Transmission Aided by IRS
Unmanned aerial vehicle (UAV) communications are susceptible to eavesdropping, and an intelligent reflecting surface (IRS) is capable of reconfiguring the propagation environment, thereby improving the security of UAV networks. In this paper, we aim to maximize the average secrecy rate of an IRS-assisted UAV network by jointly optimizing the UAV trajectory, the transmit beamforming, and the IRS phase shifts. The resulting problem is decomposed into three subproblems and solved via an iterative algorithm. First, a closed-form solution to the active beamforming is derived. Then, the passive beamforming problem is converted, via fractional programming, into corresponding parametric subproblems. Finally, the trajectory problem is reformulated as a convex one using successive convex approximation. Simulation results are provided to validate the effectiveness of the proposed scheme.
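A minimal sketch of one standard sub-step in such alternating designs: choosing each IRS phase shift to co-phase the reflected path with the direct path toward the legitimate receiver. The channels here are synthetic scalars, and the secrecy (eavesdropper) and trajectory parts of the paper's algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                                  # number of IRS elements
h_d = rng.normal() + 1j * rng.normal()                  # direct UAV -> user channel
h_ru = rng.normal(size=N) + 1j * rng.normal(size=N)     # IRS -> user channels
g = rng.normal(size=N) + 1j * rng.normal(size=N)        # UAV -> IRS channels

# Phase shifts that make every reflected term co-phased with the direct term.
theta = np.angle(h_d) - np.angle(h_ru) - np.angle(g)
effective = h_d + np.sum(h_ru * np.exp(1j * theta) * g)

print(abs(h_d) ** 2, abs(effective) ** 2)   # channel gain before vs. after the IRS
```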
The tradeoff analysis in RF-powered backscatter cognitive radio networks
© 2016 IEEE. In this paper, we introduce a new model for RF-powered cognitive radio networks with the aim of improving the performance of secondary systems. In the proposed model, when the primary channel is busy, the secondary transmitter can either backscatter the primary signals to transmit data to the secondary receiver or harvest RF energy from the channel. The harvested energy is then used to transmit data to the receiver when the channel becomes idle. We first analyze the tradeoff between backscatter communication and the harvest-then-transmit protocol in the network. To maximize the overall transmission rate of the secondary network, we formulate an optimization problem to find the time ratio between the backscatter and harvest-then-transmit modes. Through numerical results, we show that the proposed model can achieve a higher overall transmission rate than using either backscatter communication or the harvest-then-transmit protocol alone.
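A minimal sketch of the time-ratio tradeoff the abstract analyzes: during the busy period the secondary transmitter splits its time between backscattering at a fixed rate and harvesting energy that later powers active transmission in the idle period, and the overall rate is maximized over the split by a simple grid search. All constants are illustrative.

```python
import numpy as np

def total_rate(alpha, t_busy=0.6, t_idle=0.4, r_backscatter=0.3,
               harvest_power=0.05, noise=1e-3, channel_gain=0.8):
    """Throughput as a function of the backscatter time ratio alpha in [0, 1]."""
    rate_bs = alpha * t_busy * r_backscatter             # backscatter contribution
    energy = (1 - alpha) * t_busy * harvest_power        # harvested energy
    tx_power = energy / t_idle                           # spent over the idle period
    rate_htt = t_idle * np.log2(1 + channel_gain * tx_power / noise)
    return rate_bs + rate_htt

alphas = np.linspace(0, 1, 1001)
rates = [total_rate(a) for a in alphas]
best = alphas[int(np.argmax(rates))]
print(f"best backscatter time ratio ~ {best:.3f}, rate ~ {max(rates):.3f}")
```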
Optimal Data Scheduling and Admission Control for Backscatter Sensor Networks
© 2017 IEEE. This paper studies the data scheduling and admission control problem for a backscatter sensor network (BSN). In the network, instead of initiating their own transmissions, the sensors can send their data to the gateway simply by switching their antenna impedance and reflecting the received RF signals. As such, the complexity, power consumption, and implementation cost of sensor nodes can be reduced remarkably. Different sensors may have different functions, and the data collected from each sensor may also have a different status, e.g., urgent or normal, so these factors need to be taken into account. Therefore, in this paper, we first introduce a system model together with a mechanism to address the data collection and scheduling problem in the BSN. We then propose an optimization solution using the Markov decision process framework and a reinforcement learning algorithm based on the linear function approximation method, with the aim of finding the optimal data collection policy for the gateway. Through simulation results, we not only show the efficiency of the proposed solution compared with other baseline policies, but also present an analysis of the data admission control policy under different classes of sensors as well as different types of data.
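A minimal sketch of an MDP-style admission rule for urgent and normal data classes competing for a small gateway buffer, solved here by value iteration; the state, rewards, arrival probabilities, and dynamics are hypothetical stand-ins for the paper's full model, which is solved with reinforcement learning and function approximation rather than exact iteration.

```python
import numpy as np

BUFFER = 5                                             # gateway queue capacity
GAMMA = 0.9
CLASSES = {"urgent": (0.3, 2.0), "normal": (0.7, 1.0)} # (arrival prob, reward)

def value_iteration(sweeps=300):
    V = np.zeros(BUFFER + 1)                 # value indexed by queue length
    for _ in range(sweeps):
        V_new = np.zeros_like(V)
        for s in range(BUFFER + 1):
            s_served = max(s - 1, 0)         # one queued packet served per slot
            expected = 0.0
            for prob, reward in CLASSES.values():
                reject = GAMMA * V[s_served]
                admit = (reward + GAMMA * V[s_served + 1]
                         if s_served < BUFFER else -np.inf)
                expected += prob * max(admit, reject)   # admit/reject per class
            V_new[s] = expected
        V = V_new
    return V

print(np.round(value_iteration(), 2))
```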
Concurrent bandits and cognitive radio networks
We consider the problem of multiple users targeting the arms of a single multi-armed stochastic bandit. The motivation for this problem comes from cognitive radio networks, where selfish users need to coexist without any side communication between them, implicit cooperation, or common control. Even the number of users may be unknown and can vary as users join or leave the network. We propose an algorithm that combines an ε-greedy learning rule with a collision avoidance mechanism. We analyze its regret with respect to the system-wide optimum and show that sub-linear regret can be obtained in this setting. Experiments show dramatic improvement compared to other algorithms for this setting.
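A minimal sketch of the flavour of algorithm described above: each user runs an ε-greedy rule over the arms and, on a collision (another user chose the same arm), randomly re-ranks itself among its empirically best arms. The collision handling and parameters here are simplified placeholders, not the analyzed algorithm.

```python
import numpy as np

class EpsGreedyUser:
    def __init__(self, n_arms, n_users_guess, eps=0.05, rng=None):
        self.rng = rng or np.random.default_rng()
        self.counts = np.zeros(n_arms)
        self.means = np.zeros(n_arms)
        self.eps = eps
        self.k = n_users_guess        # how many "good" arms to spread users over
        self.slot = 0                 # which of the top-k arms this user targets

    def choose(self):
        if self.rng.random() < self.eps:                      # explore
            return int(self.rng.integers(len(self.means)))
        top_k = np.argsort(self.means)[::-1][:self.k]         # exploit top-k arms
        return int(top_k[self.slot % self.k])

    def observe(self, arm, reward, collided):
        if collided:
            self.slot = int(self.rng.integers(self.k))  # re-rank to avoid repeats
            return                                      # no reward on a collision
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```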
Optimal Cross Slice Orchestration for 5G Mobile Services
© 2018 IEEE. 5G mobile networks are capable of hosting a variety of services such as mobile social networks, multimedia delivery, healthcare, transportation, and public safety. Therefore, a major challenge in designing 5G networks is how to support different types of users and applications with different quality-of-service requirements on a single physical network infrastructure. Recently, network slicing has been introduced as a promising solution to address this challenge. Network slicing allows programmable network instances that match service requirements by using network virtualization technologies. However, how to efficiently allocate resources across network slices has not been well studied in the literature. Therefore, in this paper, we first introduce a model for orchestrating network slices based on service requirements and available resources. Then, we propose a Markov decision process framework to formulate and determine the optimal policy that manages cross-slice admission control and resource allocation for 5G networks. Through simulation results, we show that the proposed solution is efficient not only in providing slice-as-a-service based on service requirements, but also in maximizing the provider's revenue.
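A minimal sketch of one ingredient of cross-slice orchestration: deciding which slice requests to admit when their combined resource demands exceed capacity, here greedily ranked by revenue per unit of resource. This is a simplified myopic rule; the MDP-based policy in the paper also accounts for future arrivals and departures.

```python
from dataclasses import dataclass

@dataclass
class SliceRequest:
    name: str
    resource_demand: float    # e.g. normalized CPU/bandwidth units
    revenue: float

def admit(requests, capacity):
    """Greedy admission by revenue density under a single capacity constraint."""
    admitted, used = [], 0.0
    for req in sorted(requests, key=lambda r: r.revenue / r.resource_demand,
                      reverse=True):
        if used + req.resource_demand <= capacity:
            admitted.append(req.name)
            used += req.resource_demand
    return admitted

requests = [SliceRequest("eMBB-video", 4.0, 10.0),
            SliceRequest("URLLC-safety", 1.0, 6.0),
            SliceRequest("mMTC-sensors", 2.0, 3.0)]
print(admit(requests, capacity=5.0))
```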
A dynamic edge caching framework for mobile 5G networks
© 2002-2012 IEEE. Mobile edge caching has emerged as a new paradigm that provides computing, networking, and storage resources for a variety of mobile applications. This helps achieve low latency and high reliability, and improves efficiency in handling the very large numbers of smart devices and emerging services (e.g., IoT, industrial automation, virtual reality) in mobile 5G networks. Nonetheless, the development of mobile edge caching is challenged by the decentralized nature of edge nodes, their small coverage, and their limited computing and storage resources. In this article, we first give an overview of mobile edge caching in 5G networks. After that, its key challenges and current approaches are discussed. We then propose a novel caching framework that allows an edge node to authorize legitimate users and to dynamically predict and update their content demands using the matrix factorization technique. Based on this prediction, the edge node can adopt advanced optimization methods to determine the optimal content to store so as to maximize its revenue and minimize the average delay of its mobile users. Through numerical results, we demonstrate that our proposed framework provides not only an effective caching approach, but also an efficient economic solution for the mobile service provider.
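A minimal sketch of the prediction step the framework relies on: factorizing a sparse user-content demand matrix with gradient descent and caching the contents with the highest predicted aggregate demand. The dimensions, learning rate, regularization, and cache size are illustrative, and the revenue/delay optimization layer is omitted.

```python
import numpy as np

def factorize(D, mask, rank=4, lr=0.01, reg=0.05, epochs=200, seed=0):
    """D: users x contents demand matrix; mask: 1 where demand was observed."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(D.shape[0], rank))
    V = rng.normal(scale=0.1, size=(D.shape[1], rank))
    for _ in range(epochs):
        err = mask * (D - U @ V.T)              # error only on observed entries
        U += lr * (err @ V - reg * U)           # gradient step on user factors
        V += lr * (err.T @ U - reg * V)         # gradient step on content factors
    return U @ V.T                              # completed demand estimate

def contents_to_cache(D_hat, cache_size):
    """Cache the contents with the highest predicted aggregate demand."""
    demand_per_content = D_hat.sum(axis=0)
    return np.argsort(demand_per_content)[::-1][:cache_size]
```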
- …