652 research outputs found
Computation Rate Maximization for Wireless Powered Mobile-Edge Computing with Binary Computation Offloading
In this paper, we consider a multi-user mobile edge computing (MEC) network
powered by wireless power transfer (WPT), where each energy-harvesting wireless device (WD)
follows a binary computation offloading policy, i.e., the data set of a task has to
be executed as a whole either locally or remotely at the MEC server via task
offloading. In particular, we are interested in maximizing the (weighted) sum
computation rate of all the WDs in the network by jointly optimizing the
individual computing mode selection (i.e., local computing or offloading) and
the system transmission time allocation (on WPT and task offloading). The major
difficulty lies in the combinatorial nature of multi-user computing mode
selection and its strong coupling with transmission time allocation. To tackle
this problem, we first consider a decoupled optimization, where we assume that
the mode selection is given and propose a simple bi-section search algorithm to
obtain the conditional optimal time allocation. On top of that, a coordinate
descent method is devised to optimize the mode selection. The method is simple
in implementation but may suffer from high computational complexity in a
large-size network. To address this problem, we further propose a joint
optimization method based on the ADMM (alternating direction method of
multipliers) decomposition technique, whose computational complexity grows much
more slowly as the network size increases. Extensive simulations
show that both the proposed methods can efficiently achieve near-optimal
performance under various network setups, and significantly outperform the
other representative benchmark methods considered.
Comment: This paper has been accepted for publication in IEEE Transactions on Wireless Communications.
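The decoupled time-allocation step described in this abstract lends itself to a small illustration. The sketch below runs a bisection search on the derivative of a hypothetical concave sum computation rate as a function of the WPT time fraction; the rate model, the WD gains `g`, and the equal sharing of the offloading time are stand-in assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def sum_rate(a, g=np.array([0.8, 1.2, 0.5])):
    """Hypothetical sum computation rate vs. the WPT time fraction a:
    WDs harvest energy for a, then split the remaining 1 - a equally
    for offloading (toy surrogate of the paper's objective)."""
    return np.sum((1 - a) / len(g) * np.log2(1 + g * a / (1 - a)))

def bisect_time_allocation(f, lo=1e-6, hi=1 - 1e-6, tol=1e-6):
    """Bisection on the derivative of a concave objective over (0, 1):
    the sign of f'(a) tells on which side of the maximizer we are."""
    def df(a, h=1e-7):
        return (f(a + h) - f(a - h)) / (2 * h)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if df(mid) > 0:
            lo = mid          # still climbing: maximizer is to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star = bisect_time_allocation(sum_rate)
```

Because the objective is concave in the time fraction, the sign of the derivative alone suffices to halve the search interval each step, which is what makes the conditional time allocation cheap for a fixed mode selection.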
DRAG: Deep Reinforcement Learning Based Base Station Activation in Heterogeneous Networks
Heterogeneous Network (HetNet), where Small cell Base Stations (SBSs) are
densely deployed to offload traffic from macro Base Stations (BSs), is
identified as a key solution to meet the unprecedented mobile traffic demand.
The high density of SBSs is designed for peak traffic hours and consumes an
unnecessarily large amount of energy during off-peak time. In this paper, we
propose a deep reinforcement-learning based SBS activation strategy that
activates the optimal subset of SBSs to significantly lower the energy
consumption without compromising the quality of service. In particular, we
formulate the SBS on/off switching problem into a Markov Decision Process that
can be solved by Actor Critic (AC) reinforcement learning methods. To avoid
prohibitively high computational and storage costs of conventional
tabular-based approaches, we propose to use deep neural networks to approximate
the policy and value functions in the AC approach. Moreover, to expedite the
training process, we adopt a Deep Deterministic Policy Gradient (DDPG) approach
together with a novel action refinement scheme. Through extensive numerical
simulations, we show that the proposed scheme greatly outperforms the existing
methods in terms of both energy efficiency and computational efficiency. We
also show that the proposed scheme can scale to large systems with polynomial
complexity in both storage and computation.
Comment: 12 pages, 13 figures.
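The action-refinement idea can be sketched as follows: the DDPG actor emits a continuous vector in [0,1]^N, which must be mapped to a binary SBS on/off decision. The refinement rule and the toy reward below are illustrative stand-ins (the paper's scheme differs in its details):

```python
import numpy as np

def refine_action(a_cont, reward_fn, k=3):
    """Map a continuous actor output in [0,1]^N to a binary on/off vector.
    Besides plain rounding, flip the k most ambiguous entries (closest to
    the 0.5 decision boundary) one at a time and keep the candidate with
    the best estimated reward -- an illustrative refinement scheme."""
    base = (a_cont >= 0.5).astype(int)
    candidates = [base]
    for i in np.argsort(np.abs(a_cont - 0.5))[:k]:
        c = base.copy()
        c[i] ^= 1                     # flip one ambiguous SBS decision
        candidates.append(c)
    return max(candidates, key=reward_fn)

# Toy reward (hypothetical numbers): traffic demand of 5 units; each
# active SBS serves up to 2 units but costs 1 unit of energy.
demand = 5.0
def reward(s):
    return min(2.0 * s.sum(), demand) - 1.0 * s.sum()

a = np.array([0.9, 0.55, 0.48, 0.1])
best = refine_action(a, reward)
```

In a full DDPG loop, `reward_fn` would be replaced by the learned critic's value estimate, so the refinement costs only a few extra critic evaluations per step.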
User-Centric Joint Transmission in Virtual-Cell-Based Ultra-Dense Networks
In ultra-dense networks (UDNs), distributed radio access points (RAPs) are
configured into small virtual cells around mobile users for fair and
high-throughput services. In this correspondence, we evaluate the performance
of user-centric joint transmission (JT) in a UDN with a number of virtual
cells. In contrast to existing cooperation schemes, which assume constant RAP
transmit power, we consider a total transmit power constraint for each user,
and assume that the total power is optimally allocated to the RAPs in each
virtual cell using maximum ratio transmission (MRT). Based on stochastic
geometry models of the RAP and user locations, we resolve the correlation of
transmit powers introduced by MRT and derive the average user throughput.
Numerical results show that user-centric JT with MRT provides a high
signal-to-noise ratio (SNR) without generating severe interference to other
co-channel users. Moreover, we show that MRT precoding, while requiring
channel state information (CSI), is essential for the success of JT.
Comment: Submitted to IEEE TVT correspondence.
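The MRT power split under a per-user total power constraint has a simple closed form: RAP n transmits with power proportional to its channel gain |h_n|^2. A minimal sketch (the 4-RAP Rayleigh channel and the power budget are toy assumptions):

```python
import numpy as np

def mrt_weights(h, p_total=1.0):
    """Maximum ratio transmission across the RAPs of one virtual cell:
    w = sqrt(p_total) * conj(h) / ||h||, so RAP n gets the share
    p_total * |h_n|^2 / ||h||^2 of the user's total power budget."""
    return np.sqrt(p_total) * np.conj(h) / np.linalg.norm(h)

rng = np.random.default_rng(0)
# Rayleigh-faded links from 4 cooperating RAPs to one user (toy setup)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
w = mrt_weights(h, p_total=2.0)

power = np.sum(np.abs(w) ** 2)   # total transmit power meets the budget
gain = np.abs(h @ w) ** 2        # coherent gain equals p_total * ||h||^2
```

The coherent (matched-filter) gain `p_total * ||h||^2` is what produces the high SNR noted in the abstract, while the fixed per-user budget keeps the interference to co-channel users bounded.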
Joint Spectrum Reservation and On-demand Request for Mobile Virtual Network Operators
With wireless network virtualization, Mobile Virtual Network Operators
(MVNOs) can develop new services on a low-cost platform by leasing virtual
resources from mobile network owners. In this paper, we investigate a two-stage
spectrum leasing framework, where an MVNO acquires radio spectrum through both
advance reservation and on-demand request. To maximize its surplus, the MVNO
jointly optimizes the amount of spectrum to lease in the two stages by taking
into account the traffic distribution, random user locations, wireless channel
statistics, Quality of Service (QoS) requirements, and the price differences.
Meanwhile, the acquired spectrum resources are dynamically allocated to the
MVNO's mobile subscribers (users) according to fast channel fading in order to
maximize the utilization of the resources. The MVNO's surplus maximization
problem is naturally formulated as a tri-level nested optimization problem that
consists of Dynamic Resource Allocation (DRA), on-demand request, and advance
reservation subproblems. To solve the problem efficiently, we rigorously
analyze the structure of the optimal solution in the DRA problem, and the
optimal value is used to find the optimal leasing decisions in the two stages.
In particular, we derive closed-form expressions of the optimal advance
reservation and on-demand requests when the proportional fair utility function
is adopted. We further extend the analysis to general utility functions and
derive a Stochastic Gradient Descent (SGD) algorithm to find the optimal leasing
decisions. Simulation results show that the two-stage spectrum leasing strategy
can take advantage of both the price discount of advance reservation and the
flexibility of on-demand request to deal with traffic variations.
Comment: corrected typos; re-organised the presentation of the analytical results.
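The SGD approach to the two-stage leasing decision can be illustrated with a newsvendor-style surrogate: reserve r units of spectrum in advance at a low unit price, and buy any shortfall on demand at a higher price. The cost model, prices, and uniform demand below are stand-in assumptions, not the paper's formulation:

```python
import numpy as np

def sgd_reservation(sample_demand, c_adv=1.0, c_dem=2.0,
                    steps=20000, lr=0.1, seed=0):
    """SGD on the advance-reservation amount r for the surrogate cost
    Cost(r) = c_adv*r + c_dem*E[(D - r)+], using the stochastic
    subgradient c_adv - c_dem * 1{D > r} with a 1/sqrt(t) step size."""
    rng = np.random.default_rng(seed)
    r = 0.0
    for t in range(1, steps + 1):
        d = sample_demand(rng)
        grad = c_adv - c_dem * (d > r)      # stochastic subgradient
        r = max(0.0, r - (lr / np.sqrt(t)) * grad)
    return r

# Uniform demand on [0, 10]: the optimality condition P(D > r) =
# c_adv/c_dem gives r* = 5 for the prices above.
r_star = sgd_reservation(lambda rng: rng.uniform(0.0, 10.0))
```

The fixed point balances the advance-reservation discount against the expected on-demand surcharge, which mirrors the trade-off the abstract describes.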
Machine Learning for Heterogeneous Ultra-Dense Networks with Graphical Representations
Heterogeneous ultra-dense network (H-UDN) is envisioned as a promising
solution to sustain the explosive mobile traffic demand through network
densification. By placing access points, processors, and storage units as close
as possible to mobile users, H-UDNs bring forth a number of advantages,
including high spectral efficiency, high energy efficiency, and low latency.
Nonetheless, the high density and diversity of network entities in H-UDNs
introduce formidable design challenges in collaborative signal processing and
resource management. This article illustrates the great potential of machine
learning techniques in solving these challenges. In particular, we show how to
utilize graphical representations of H-UDNs to design efficient machine
learning algorithms.
Super-Resolution Blind Channel-and-Signal Estimation for Massive MIMO with One-Dimensional Antenna Array
In this paper, we study blind channel-and-signal estimation by exploiting the
burst-sparse structure of angular-domain propagation channels in massive MIMO
systems. The state-of-the-art approach utilizes the structured channel sparsity
by sampling the angular-domain channel representation with a uniform
angle-sampling grid, a.k.a. virtual channel representation. However, this
approach is only applicable to uniform linear arrays and may cause a
substantial performance loss due to the mismatch between the virtual
representation and the true angle information. To tackle these challenges, we
propose a sparse channel representation with a super-resolution sampling grid
and a hidden Markovian support. Based on this, we develop a novel approximate
inference based blind estimation algorithm to estimate the channel and the user
signals simultaneously, with emphasis on the adoption of the
expectation-maximization method to learn the angle information. Furthermore, we
demonstrate the low-complexity implementation of our algorithm, making use of
factor graph and message passing principles to compute the marginal posteriors.
Numerical results show that our proposed method significantly reduces the
estimation error compared to the state-of-the-art approach under various
settings, which verifies the efficiency and robustness of our method.
Comment: 16 pages, 10 figures.
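The grid-mismatch issue and the super-resolution remedy can be illustrated on a single-path toy model: a coarse uniform grid in the sine (virtual-representation) domain locates the angle only to within the grid spacing, and a continuous local refinement removes the residual mismatch. Golden-section search below stands in for the paper's EM-based angle learning; the array size and angle are arbitrary:

```python
import numpy as np

def steer(theta, m):
    # ULA steering vector with half-wavelength antenna spacing
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

def estimate_angle(y, m, coarse=64, refine_iters=20):
    def power(th):
        return np.abs(np.vdot(steer(th, m), y)) ** 2
    # coarse stage: uniform grid in the sine domain (virtual representation)
    grid = np.arcsin(np.linspace(-1.0, 1.0 - 2.0 / coarse, coarse))
    k = int(np.argmax([power(th) for th in grid]))
    # super-resolution stage: golden-section search between the neighbors
    lo, hi = grid[max(k - 1, 0)], grid[min(k + 1, coarse - 1)]
    g = (np.sqrt(5) - 1) / 2
    for _ in range(refine_iters):
        a, b = hi - g * (hi - lo), lo + g * (hi - lo)
        if power(a) > power(b):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

m, theta_true = 32, 0.31
y = steer(theta_true, m)     # noiseless single-path snapshot (toy data)
theta_hat = estimate_angle(y, m)
```

The refined estimate lands between grid points, which is precisely the performance loss that a fixed virtual channel representation cannot avoid.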
Matrix-Calibration-Based Cascaded Channel Estimation for Reconfigurable Intelligent Surface Assisted Multiuser MIMO
Reconfigurable intelligent surface (RIS) is envisioned to be an essential
component of the paradigm for beyond 5G networks as it can potentially provide
similar or higher array gains with much lower hardware cost and energy
consumption compared with the massive multiple-input multiple-output (MIMO)
technology. In this paper, we focus on one of the fundamental challenges,
namely the channel acquisition, in an RIS-assisted multiuser MIMO system. The
state-of-the-art channel acquisition approach in such a system with fully
passive RIS elements estimates the cascaded transmitter-to-RIS and
RIS-to-receiver channels by adopting excessively long training sequences. To
estimate the cascaded channels with an affordable training overhead, we
formulate the channel estimation problem in the RIS-assisted multiuser MIMO
system as a matrix-calibration based matrix factorization task. By exploiting
the information on the slow-varying channel components and the hidden channel
sparsity, we propose a novel message-passing based algorithm to factorize the
cascaded channels. Furthermore, we present an analytical framework to
characterize the theoretical performance bound of the proposed estimator in the
large-system limit. Finally, we conduct simulations to verify the high accuracy
and efficiency of the proposed algorithm.
Comment: Accepted by IEEE Journal on Selected Areas in Communications. Matlab
demo code is available at
https://github.com/liuhang1994/Matrix-Calibration-Based-Cascaded-Channel-Estimatio
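The matrix-factorization view of the problem can be sketched with plain alternating least squares on a noiseless low-rank observation. This is a generic stand-in: the paper's message-passing estimator additionally exploits the known slow-varying channel components and hidden sparsity, all omitted here:

```python
import numpy as np

def als_factorize(Y, r, iters=50, seed=0):
    """Alternating least squares for Y ~= A @ S with rank r, as a
    generic stand-in for the matrix-factorization formulation."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], r))
    for _ in range(iters):
        S = np.linalg.lstsq(A, Y, rcond=None)[0]        # fix A, solve S
        A = np.linalg.lstsq(S.T, Y.T, rcond=None)[0].T  # fix S, solve A
    return A, S

rng = np.random.default_rng(1)
A0 = rng.standard_normal((8, 2))
S0 = rng.standard_normal((2, 6))
Y = A0 @ S0                     # noiseless rank-2 observation (toy data)
A, S = als_factorize(Y, r=2)
# A and S are identifiable only up to an invertible 2x2 ambiguity, but
# their product recovers Y; side information (e.g. the slow-varying
# components used in the paper) is what resolves such ambiguities.
err = np.linalg.norm(A @ S - Y) / np.linalg.norm(Y)
```

The residual inherent ambiguity of plain factorization is one reason the calibration information matters in the actual estimator.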
CNN-Based Signal Detection for Banded Linear Systems
Banded linear systems arise in many communication scenarios, e.g., those
involving inter-carrier interference and inter-symbol interference. Motivated
by recent advances in deep learning, we propose to design a high-accuracy
low-complexity signal detector for banded linear systems based on convolutional
neural networks (CNNs). We develop a novel CNN-based detector by utilizing the
banded structure of the channel matrix. Specifically, the proposed CNN-based
detector consists of three modules: the input preprocessing module, the CNN
module, and the output postprocessing module. With such an architecture, the
proposed CNN-based detector is adaptive to different system sizes, and can
overcome the curse of dimensionality, which is a ubiquitous challenge in deep
learning. Through extensive numerical experiments, we demonstrate that the
proposed CNN-based detector outperforms conventional deep neural networks and
existing model-based detectors in both accuracy and computational time.
Moreover, we show that the CNN-based detector is flexible for systems with
large sizes or wide bands. We also show that it can be easily extended
to near-banded systems such as doubly selective orthogonal frequency division
multiplexing (OFDM) systems and 2-D magnetic recording (TDMR) systems, in which
the channel matrices do not have a strictly banded structure.
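The input-preprocessing idea that makes such a detector size-independent can be sketched in a few lines: because each symbol in y = Hx + n is only observed through the few entries of y inside the band, one fixed-size local window per symbol captures all relevant observations. The tridiagonal toy channel and BPSK symbols are illustrative assumptions, and the CNN itself is omitted:

```python
import numpy as np

def banded_windows(y, half_bw=1):
    """Cut one fixed-size window of y per symbol; with a banded channel,
    the window covers every observation that involves that symbol."""
    n = len(y)
    w = 2 * half_bw + 1
    y_pad = np.pad(y, half_bw)      # zero-pad the edges
    return np.stack([y_pad[i:i + w] for i in range(n)])   # shape (n, w)

# Toy tridiagonal (bandwidth-3) channel and BPSK symbols.
n = 6
H = (np.diag(np.full(n, 1.0))
     + np.diag(np.full(n - 1, 0.3), 1)
     + np.diag(np.full(n - 1, 0.3), -1))
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0])
y = H @ x
feat = banded_windows(y, half_bw=1)   # one local observation per symbol
```

Because the window size depends only on the bandwidth, not on n, the downstream convolutional layers see the same input shape per symbol regardless of the system size, which is what sidesteps the curse of dimensionality mentioned in the abstract.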
Optimal Task Offloading and Resource Allocation in Mobile-Edge Computing with Inter-user Task Dependency
Mobile-edge computing (MEC) has recently emerged as a cost-effective paradigm
to enhance the computing capability of hardware-constrained wireless devices
(WDs). In this paper, we first consider a two-user MEC network, where each WD
has a sequence of tasks to execute. In particular, we consider task dependency
between the two WDs, where the input of a task at one WD requires the final
task output at the other WD. Under the considered task-dependency model, we
study the optimal task offloading policy and resource allocation (e.g., on
offloading transmit power and local CPU frequencies) that minimize the weighted
sum of the WDs' energy consumption and task execution time. The problem is
challenging due to the combinatorial nature of the offloading decisions among
all tasks and the strong coupling with resource allocation. To tackle this
problem, we first assume that the offloading decisions are given and derive the
closed-form expressions of the optimal offloading transmit power and local CPU
frequencies. Then, an efficient bi-section search method is proposed to obtain
the optimal solutions. Furthermore, we prove that the optimal offloading
decisions follow a one-climb policy, based on which a reduced-complexity Gibbs
Sampling algorithm is proposed to obtain the optimal offloading decisions. We
then extend the investigation to a general multi-user scenario, where the input
of a task at one WD requires the final task outputs from multiple other WDs.
Numerical results show that the proposed method can significantly outperform
the other representative benchmarks and efficiently achieve low complexity with
respect to the call graph size.
Comment: This paper has been accepted for publication in IEEE Transactions on Wireless Communications.
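The complexity reduction from the one-climb property can be illustrated directly: for a single WD's task chain, the execution crosses to the edge at most once, so only contiguous offloaded blocks need to be searched, O(n^2) candidates instead of 2^n. The per-task costs and the single transmission charge below are hypothetical numbers, and exhaustive enumeration stands in for the paper's Gibbs sampling:

```python
def best_one_climb(local_cost, edge_cost, tx_cost):
    """Search over one-climb policies: tasks i..j (one contiguous block)
    run at the edge, the rest run locally; tx_cost charges the single
    up/down crossing of the offloaded block."""
    n = len(local_cost)
    best_cost, best_seg = sum(local_cost), None   # all-local baseline
    for i in range(n):
        for j in range(i, n):
            cost = (sum(local_cost[:i]) + sum(edge_cost[i:j + 1])
                    + sum(local_cost[j + 1:]) + tx_cost)
            if cost < best_cost:
                best_cost, best_seg = cost, (i, j)
    return best_cost, best_seg

# Hypothetical per-task costs for one WD's chain of five tasks.
local = [4.0, 1.0, 5.0, 6.0, 1.0]
edge = [1.0, 1.0, 1.0, 1.0, 1.0]
cost, seg = best_one_climb(local, edge, tx_cost=3.0)
```

A Gibbs sampler over this reduced space only needs to propose block boundaries rather than arbitrary binary vectors, which is where the reduced complexity comes from.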
Joint Optimization of Service Caching Placement and Computation Offloading in Mobile Edge Computing Systems
In mobile edge computing (MEC) systems, edge service caching refers to
pre-storing the necessary programs for executing computation tasks at MEC
servers. At resource-constrained edge servers, service caching placement is in
general a complicated problem that highly correlates to the offloading
decisions of computation tasks. In this paper, we consider a single edge server
that assists a mobile user (MU) in executing a sequence of computation tasks.
In particular, the MU can run its customized programs at the edge server, while
the server can selectively cache the previously generated programs for future
service reuse. To minimize the computation delay and energy consumption of the
MU, we formulate a mixed integer non-linear program (MINLP) that jointly
optimizes the service caching placement, computation offloading, and system
resource allocation. We first derive the closed-form expressions of the optimal
resource allocation, and subsequently transform the MINLP into an equivalent
pure 0-1 integer linear program (ILP). To further reduce the complexity in
solving the ILP, we exploit the underlying structures in optimal solutions, and
devise a reduced-complexity alternating minimization technique to update the
caching placement and offloading decision alternately. Simulations show that
the proposed techniques achieve substantial resource savings compared to other
representative benchmark methods.
Comment: The paper has been accepted for publication by IEEE Transactions on Wireless Communications (April 2020).
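The alternating-minimization idea can be sketched on a drastically simplified single-slot model: each task uses one program, offloading a task pays an upload cost unless its program is already cached, and the server caches a limited number of programs. The cost numbers, task/program mapping, and greedy cache update are illustrative assumptions, not the paper's model:

```python
def alt_min(progs, local, edge, upload, capacity, iters=10):
    """Alternate between (i) per-task offloading decisions given the
    cache and (ii) caching the programs whose offloaded tasks save the
    most upload cost, until the cache stops changing."""
    cached = set()
    offload = [False] * len(progs)
    for _ in range(iters):
        # (i) offload task t iff its edge cost beats local execution
        offload = [edge[t] + (0 if progs[t] in cached else upload[t]) < local[t]
                   for t in range(len(progs))]
        # (ii) rank programs by the upload cost their offloaded tasks save
        saving = {}
        for t, off in enumerate(offload):
            if off:
                saving[progs[t]] = saving.get(progs[t], 0.0) + upload[t]
        new_cache = set(sorted(saving, key=saving.get, reverse=True)[:capacity])
        if new_cache == cached:
            break
        cached = new_cache
    cost = sum(edge[t] + (0 if progs[t] in cached else upload[t])
               if offload[t] else local[t] for t in range(len(progs)))
    return cached, offload, cost

# Tasks and their (hypothetical) programs, costs, and a one-program cache.
progs = ['a', 'a', 'b', 'c']
local = [5.0, 5.0, 5.0, 5.0]
edge = [1.0, 1.0, 1.0, 1.0]
upload = [3.0, 3.0, 3.0, 6.0]
cached, offload, cost = alt_min(progs, local, edge, upload, capacity=1)
```

Each half-step only re-solves an easy subproblem with the other variable fixed, which is the sense in which the alternating technique reduces the complexity of the joint 0-1 program.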