Energy Efficient virtualization framework for 5G F-RAN
Fog radio access network (F-RAN) and virtualization are promising
technologies for 5G networks. In F-RAN, fog and cloud computing are
integrated, with conventional C-RAN functions distributed to the edge
devices of the radio access network. F-RAN is adopted to mitigate the burden
on the front-haul and improve end-to-end (E2E) latency. On the other hand,
virtualization and network function virtualization (NFV) are IT techniques that
convert hardware-based functions into software-based ones. Employing NFV in
mobile networks brings many merits, including a high degree of reliability,
flexibility, and energy efficiency. In this paper, a
virtualization framework is introduced for F-RAN to improve the energy
efficiency in 5G networks. In this framework, a gigabit passive optical network
(GPON) is leveraged as a backbone network for the proposed F-RAN architecture
where it connects several evolved Node Bs (eNodeBs) via fibre cables. The
energy-efficiency of the proposed F-RAN architecture has been investigated and
compared with the conventional C-RAN architecture in two different scenarios
using mixed integer linear programming (MILP) models. The MILP results indicate
that on average a 30% power saving can be achieved by the F-RAN architecture
compared with the C-RAN architecture.
Comment: ICTON 201
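As a toy illustration of the kind of placement trade-off such a MILP captures, the sketch below brute-forces fog-vs-cloud placement of a few virtualized functions under an assumed power model (all power figures and the fog capacity limit are hypothetical, not taken from the paper's MILP):

```python
from itertools import product

# Illustrative power figures (watts) -- assumptions, not the paper's model.
P_CLOUD = 45.0       # cloud processing power per function
P_FOG = 30.0         # fog/edge processing power per function
P_FRONTHAUL = 12.0   # extra fronthaul transport power for cloud-served functions
FUNCTIONS = 4        # virtualized RAN functions to place
FOG_CAPACITY = 2     # at most this many functions fit on the fog nodes

def total_power(placement):
    """placement[i] is 'fog' or 'cloud' for function i."""
    return sum(P_FOG if site == 'fog' else P_CLOUD + P_FRONTHAUL
               for site in placement)

def feasible(placement):
    return placement.count('fog') <= FOG_CAPACITY

# Exhaustive search over the 2^FUNCTIONS placements stands in for the
# MILP solver on this toy instance.
best = min(filter(feasible, product(('fog', 'cloud'), repeat=FUNCTIONS)),
           key=total_power)
print(best, total_power(best))
```

On this instance the search fills the fog capacity first, since serving a function at the edge avoids the fronthaul transport power, which mirrors the intuition behind the reported F-RAN power savings.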
Delay Characterization of Mobile Edge Computing for 6G Time-Sensitive Services
Time-sensitive services (TSSs) have been widely envisioned for future sixth
generation (6G) wireless communication networks. Due to its inherent
low-latency advantage, mobile edge computing (MEC) will be an indispensable
enabler for TSSs. The random characteristics of the delay experienced by users
are key metrics reflecting the quality of service (QoS) of TSSs. Most existing
studies on MEC have focused on the average delay. Only a few research efforts
have been devoted to other random delay characteristics, such as the delay
bound violation probability and the probability distribution of the delay, by
decoupling the transmission and computation processes of MEC. However, when
these two processes cannot be decoupled, their coupling brings new challenges
to analyzing the random delay characteristics. In this paper, an MEC system with a
limited computation buffer at the edge server is considered. In this system,
the transmission process and computation process form a feedback loop and
cannot be decoupled. We formulate a discrete-time two-stage tandem queueing
system. Then, using the matrix-geometric method, we derive estimation
methods for the random delay characteristics, including the probability
distribution of the delay, the delay bound violation probability, the average
delay and the delay standard deviation. The estimation methods are verified by
simulations. The random delay characteristics are analyzed by numerical
experiments, which unveil the coupling relationship between the transmission
process and computation process for MEC. These results will largely facilitate
elaborate allocation of communication and computation resources to improve the
QoS of TSSs.
Comment: 17 pages, 11 figures. This paper has been accepted by IEEE Internet of Things Journa
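The coupled two-stage behaviour described above can be illustrated with a small Monte Carlo simulation (a stand-in for the paper's matrix-geometric analysis; all probabilities, the buffer size, and the delay bound below are assumed):

```python
import random
from collections import deque

random.seed(1)

P_ARRIVAL = 0.3   # Bernoulli arrival probability per slot (assumed)
P_TX = 0.6        # transmission success probability per slot (assumed)
P_COMP = 0.5      # computation completion probability per slot (assumed)
BUF = 3           # finite computation buffer at the edge server
SLOTS = 200_000
DELAY_BOUND = 20  # delay bound in slots

tx_q, comp_q = deque(), deque()  # queues hold each packet's arrival slot
delays = []

for t in range(SLOTS):
    # Computation stage: serve the head-of-line task, if any.
    if comp_q and random.random() < P_COMP:
        delays.append(t - comp_q.popleft())
    # Transmission stage: a packet advances only if the computation
    # buffer has room -- this is the feedback loop coupling the stages.
    if tx_q and len(comp_q) < BUF and random.random() < P_TX:
        comp_q.append(tx_q.popleft())
    # New arrival joins the transmission queue.
    if random.random() < P_ARRIVAL:
        tx_q.append(t)

avg = sum(delays) / len(delays)
viol = sum(d > DELAY_BOUND for d in delays) / len(delays)
print(f"average delay ~ {avg:.2f} slots, P(delay > {DELAY_BOUND}) ~ {viol:.3f}")
```

Because a transmitted packet advances only when the computation buffer has room, the two queues cannot be analyzed in isolation, which is exactly the coupling the abstract points to.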
Performance Analysis of Network Coding with IEEE 802.11 DCF in Multi-Hop Wireless Networks
Network coding is an effective idea to boost the capacity of wireless
networks, and a variety of studies have explored its advantages in different
scenarios. However, few analytical studies address the throughput and
end-to-end delay of network coding in multi-hop wireless networks under
the specifications of the IEEE 802.11 Distributed Coordination Function (DCF).
In this paper, we utilize queueing theory to propose an analytical framework for
bidirectional unicast flows in multi-hop wireless mesh networks. We study the
throughput and end-to-end delay of inter-flow network coding under the IEEE
802.11 standard with CSMA/CA random access and exponential back-off time
considering clock freezing and virtual carrier sensing, and formulate several
parameters such as the probability of successful transmission in terms of bit
error rate and collision probability, waiting time of packets at nodes, and
retransmission mechanism. Our model uses a multi-class queuing network with
stable queues, where coded packets have a non-preemptive higher priority over
native packets, and forwarding of native packets is not delayed if no coding
opportunities are available. Finally, we use computer simulations to verify the
accuracy of our analytical model.
Comment: 14 pages, 11 figures, IEEE Transactions on Mobile Computing, 201
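The priority discipline described above, where coded packets have non-preemptive priority over native packets, can be sketched as a slotted-queue simulation (arrival and service probabilities are assumed, not the paper's parameters):

```python
import random
from collections import deque

random.seed(4)

# Toy slotted relay queue: coded packets are served before native ones.
# Service is one whole packet per slot, so priority is naturally
# non-preemptive. All rates below are assumed.
P_NATIVE, P_CODED, P_SERVE = 0.25, 0.15, 0.7
coded_q, native_q = deque(), deque()   # queues hold arrival slots
waits = {"coded": [], "native": []}

for t in range(200_000):
    if random.random() < P_SERVE:
        if coded_q:                       # higher-priority class first
            waits["coded"].append(t - coded_q.popleft())
        elif native_q:
            waits["native"].append(t - native_q.popleft())
    if random.random() < P_CODED:
        coded_q.append(t)
    if random.random() < P_NATIVE:
        native_q.append(t)

avg = {k: sum(v) / len(v) for k, v in waits.items()}
print(avg)  # coded packets should wait less than native ones on average
```

Note that native forwarding is only delayed while coded packets are actually queued, matching the abstract's requirement that native packets are not held back when no coding opportunities exist.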
Online Learning for Offloading and Autoscaling in Energy Harvesting Mobile Edge Computing
Mobile edge computing (a.k.a. fog computing) has recently emerged to enable
in-situ processing of delay-sensitive applications at the edge of mobile
networks. Providing grid power supply in support of mobile edge computing,
however, is costly and even infeasible (in certain rugged or under-developed
areas), thus mandating on-site renewable energy as a major or even sole power
supply in increasingly many scenarios. Nonetheless, the high intermittency and
unpredictability of renewable energy make it very challenging to deliver a high
quality of service to users in energy harvesting mobile edge computing systems.
In this paper, we address the challenge of incorporating renewables into mobile
edge computing and propose an efficient reinforcement learning-based resource
management algorithm, which learns on-the-fly the optimal policy of dynamic
workload offloading (to the centralized cloud) and edge server provisioning to
minimize the long-term system cost (including both service delay and
operational cost). Our online learning algorithm uses a decomposition of the
(offline) value iteration and (online) reinforcement learning, thus achieving a
significant improvement of learning rate and run-time performance when compared
to standard reinforcement learning algorithms such as Q-learning. We prove the
convergence of the proposed algorithm and analytically show that the learned
policy has a simple monotone structure amenable to practical implementation.
Our simulation results validate the efficacy of our algorithm, which
significantly improves the edge computing performance compared to fixed or
myopic optimization schemes and conventional reinforcement learning algorithms.
Comment: arXiv admin note: text overlap with arXiv:1701.01090 by other author
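As a rough flavour of learning an offloading policy with the kind of monotone (threshold) structure mentioned above, here is a toy tabular Q-learning sketch on an assumed battery-state model (this is plain Q-learning, not the paper's decomposition-based algorithm; all states, costs, and rates are hypothetical):

```python
import random

random.seed(0)

# Toy model (assumed): the battery level in {0..B} is the state.
# Action 0 serves the workload at the edge (uses 1 unit of harvested
# energy, low cost); action 1 offloads to the cloud (no energy, but a
# higher delay-plus-operational cost).
B = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPS = 0.1, 0.5, 0.1

def step(battery, action):
    if action == 0 and battery > 0:
        battery, cost = battery - 1, 1.0   # edge service is cheap
    else:
        cost = 3.0                          # cloud fallback is costly
    harvest = random.random() < 0.5         # intermittent renewable arrival
    return min(B, battery + harvest), cost

Q = {(s, a): 0.0 for s in range(B + 1) for a in ACTIONS}
s = B
for _ in range(50_000):
    a = random.choice(ACTIONS) if random.random() < EPS else \
        min(ACTIONS, key=lambda x: Q[(s, x)])
    s2, c = step(s, a)
    # Standard Q-learning update (costs are minimized, not rewarded).
    Q[(s, a)] += ALPHA * (c + GAMMA * min(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
    s = s2

policy = {s: min(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(B + 1)}
print(policy)  # greedy action per battery level
```

In this toy model the learned policy serves at the edge whenever energy is available and falls back to the cloud otherwise, i.e. a simple threshold in the battery level, loosely echoing the monotone structure the paper proves for its algorithm.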
Energy and Information Management of Electric Vehicular Network: A Survey
The connected vehicle paradigm empowers vehicles with the capability to
communicate with neighboring vehicles and infrastructure, shifting the role of
vehicles from a transportation tool to an intelligent service platform.
Meanwhile, the transportation electrification pushes forward the electric
vehicle (EV) commercialization to reduce the greenhouse gas emission by
petroleum combustion. The unstoppable trends of connected vehicles and EVs
are transforming the traditional vehicular system into an electric vehicular
network (EVN): a clean, mobile, and safe system. However, due to the mobility and
heterogeneity of the EVN, improper management of the network could result in
charging overload and data congestion. Thus, energy and information management
of the EVN should be carefully studied. In this paper, we provide a
comprehensive survey on the deployment and management of the EVN, considering
the three aspects of energy flow, data communication, and computation. We first
introduce the management framework of the EVN. Then, research works on the EV
aggregator (AG) deployment are reviewed to provide energy and information
infrastructure for the EVN. Based on the deployed AGs, we review research on
EV scheduling, covering both charging and vehicle-to-grid (V2G) scheduling.
Moreover, related works on information communication and
computing are surveyed under each scenario. Finally, we discuss open research
issues in the EVN.
Deep Learning for Hybrid 5G Services in Mobile Edge Computing Systems: Learn from a Digital Twin
In this work, we consider a mobile edge computing system with both
ultra-reliable and low-latency communications services and delay tolerant
services. We aim to minimize the normalized energy consumption, defined as the
energy consumption per bit, by optimizing user association, resource
allocation, and offloading probabilities subject to the quality-of-service
requirements. The user association is managed by the mobility management entity
(MME), while resource allocation and offloading probabilities are determined by
each access point (AP). We propose a deep learning (DL) architecture, where a
digital twin of the real network environment is used to train the DL algorithm
off-line at a central server. From the pre-trained deep neural network (DNN),
the MME can obtain the user association scheme in real time. Considering
that real networks are not static, the digital twin monitors the variation of
real networks and updates the DNN accordingly. For a given user association
scheme, we propose an optimization algorithm to find the optimal resource
allocation and offloading probabilities at each AP. Simulation results show
that our method achieves lower normalized energy consumption with less
computational complexity than an existing method and approaches the
performance of the globally optimal solution.
Comment: To appear in IEEE Trans. on Wireless Commun. (accepted with minor revision
Towards a Queueing-Based Framework for In-Network Function Computation
We seek to develop network algorithms for function computation in sensor
networks. Specifically, we want dynamic joint aggregation, routing, and
scheduling algorithms that have analytically provable performance benefits due
to in-network computation as compared to simple data forwarding. To this end,
we define a class of functions, the Fully-Multiplexible functions, which
includes several functions such as parity, MAX, and kth-order statistics. For
such functions we exactly characterize the maximum achievable refresh rate of
the network in terms of an underlying graph primitive, the min-mincut. In
acyclic wireline networks, we show that the maximum refresh rate is achievable
by a simple algorithm that is dynamic, distributed, and only dependent on local
information. In the case of wireless networks, we provide a MaxWeight-like
algorithm with dynamic flow splitting, which is shown to be throughput-optimal.
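Reading "min-mincut" as the minimum, over all source nodes, of the source-to-sink min cut (an assumed interpretation of the abstract's primitive), it can be computed with any max-flow routine via max-flow/min-cut duality. A minimal Edmonds-Karp sketch on a toy four-node network:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; by duality this equals the s-t min cut."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # Find the bottleneck capacity along the path, then augment.
        v, bottleneck = t, float('inf')
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Toy sensor network: nodes 0..2 are sources, node 3 is the collector.
CAP = [
    [0, 2, 0, 1],
    [0, 0, 3, 2],
    [0, 0, 0, 4],
    [0, 0, 0, 0],
]
min_mincut = min(max_flow(CAP, s, 3) for s in (0, 1, 2))
print("min-mincut =", min_mincut)
```

Under the paper's characterization, this quantity bounds the achievable refresh rate: the slowest source-to-sink cut is the limiting one.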
Radio Resource Management for New Application Scenarios in 5G: Optimization and Deep Learning
The fifth-generation (5G) New Radio (NR) systems are expected to support a wide range of emerging applications with diverse Quality-of-Service (QoS) requirements. New application scenarios in 5G NR include enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communications (URLLC). New wireless architectures, such as full-dimension (FD) massive multiple-input multiple-output (MIMO) and mobile edge computing (MEC) systems, and new coding schemes, such as short block-length channel coding, are envisioned as enablers of the QoS requirements of 5G NR applications. Resource management in these new wireless architectures is crucial to guaranteeing the QoS requirements of 5G NR systems. The underlying optimization problems, such as subcarrier allocation and user association, are usually non-convex or Non-deterministic Polynomial-time (NP)-hard, and finding the optimal solution is time-consuming and computationally expensive, especially in a large-scale network. One approach is to design a low-complexity algorithm with near-optimal performance. In cases where low-complexity algorithms are hard to obtain, deep learning can be used as an accurate approximator that maps environment parameters, such as the channel state information and traffic state, to the optimal solutions. In this thesis, we design low-complexity optimization algorithms and deep learning frameworks in different architectures of 5G NR to solve optimization problems subject to QoS requirements. First, we propose a low-complexity algorithm for a joint cooperative beamforming and user association problem for eMBB in 5G NR to maximize the network capacity. Next, we propose a deep learning (DL) framework to optimize user association, resource allocation, and offloading probabilities for delay-tolerant services and URLLC in 5G NR. Finally, we address the impact of time-varying traffic and network conditions on resource management in 5G NR.
Toward Low-Cost and Stable Blockchain Networks
Envisioned to be the future of secured distributed systems, blockchain
networks have received increasing attention from both the industry and academia
in recent years. However, blockchain mining processes demand high hardware
costs and consume a vast amount of energy (studies have shown that the amount
of energy consumed in Bitcoin mining is almost the same as the electricity used
in Ireland). To address the high mining cost problem of blockchain networks, in
this paper, we propose a blockchain mining resource allocation algorithm to
reduce the mining cost in proof-of-work (PoW)-based blockchain networks.
We first propose an analytical queueing model for general blockchain networks.
In our queueing model, transactions arrive randomly to the queue and are
served in batches with an unknown service-rate distribution, agnostic to any
priority mechanism. Then, we leverage Lyapunov optimization techniques to
propose a dynamic mining resource allocation algorithm (DMRA),
which is parameterized by a tuning parameter V. We show that our algorithm
achieves an [O(1/V), O(V)] cost-optimality-gap-vs-delay tradeoff. Our
simulation results also demonstrate the effectiveness of DMRA in reducing
mining costs.
Comment: Accepted by IEEE ICC 202
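The V-parameterized tradeoff is characteristic of Lyapunov drift-plus-penalty control. The generic sketch below (a stand-in, not the paper's exact DMRA; the resource levels, costs, service curve, and arrival process are all assumed) shows cost falling and backlog growing as V increases:

```python
import random

random.seed(2)

# Drift-plus-penalty: each slot pick the resource level r minimizing
#     V * cost(r) - Q * SERVE[r]
# where Q is the transaction backlog and V tunes cost vs. delay.
LEVELS = (0, 1, 2, 3)       # candidate mining resource levels (assumed)
SERVE = (0, 3, 4, 5)        # transactions mined per slot at each level
cost = lambda r: float(r)   # operating cost grows with the level

def run(V, slots=20_000):
    Q, total_cost, total_backlog = 0, 0.0, 0
    for _ in range(slots):
        r = min(LEVELS, key=lambda x: V * cost(x) - Q * SERVE[x])
        total_cost += cost(r)
        Q = max(0, Q - SERVE[r]) + random.randint(0, 4)  # mean arrival 2/slot
        total_backlog += Q
    return total_cost / slots, total_backlog / slots

for V in (1, 10, 100):
    c, q = run(V)
    print(f"V={V:3d}: avg cost {c:.2f}, avg backlog {q:.1f}")
```

Raising V weights the cost penalty more heavily against queue drift, so the controller tolerates a larger backlog (roughly O(V)) in exchange for a cost closer to optimal (gap roughly O(1/V)), which is the tradeoff stated in the abstract.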
Delay Performance of the Multiuser MISO Downlink
We analyze a MISO downlink channel where a multi-antenna transmitter
communicates with a large number of single-antenna receivers. Using linear
beamforming or nonlinear precoding techniques, the transmitter can serve
multiple users simultaneously during each transmission slot. However,
increasing the number of users, i.e., the multiplexing gain, reduces the
beamforming gain, which means that the average of the individual data rates
decreases and their variance increases. We use stochastic network calculus to
analyze the queueing delay that occurs due to the time-varying data rates. Our
results show that the optimal number of users, i.e., the optimal trade-off
between multiplexing gain and beamforming gain, depends on the incoming data
traffic and its delay requirements.
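The tension between multiplexing gain and beamforming gain can be illustrated under an assumed zero-forcing model in which, with M antennas and K users, the per-user beamforming gain follows a Gamma(M-K+1, 1) distribution (a standard textbook model, not necessarily the paper's exact setup):

```python
import math
import random

random.seed(6)

M, SNR, N = 16, 1.0, 20_000  # antennas, per-link SNR, Monte Carlo samples (assumed)

def user_rate_stats(K):
    """Mean and std of the per-user rate for K simultaneously served users."""
    shape = M - K + 1  # zero-forcing beamforming gain ~ Gamma(M-K+1, 1)
    rates = []
    for _ in range(N):
        g = sum(random.expovariate(1.0) for _ in range(shape))
        rates.append(math.log2(1 + SNR * g))
    mean = sum(rates) / N
    var = sum((r - mean) ** 2 for r in rates) / N
    return mean, math.sqrt(var)

for K in (4, 8, 12):
    m, s = user_rate_stats(K)
    print(f"K={K:2d} users: mean rate {m:.2f} b/s/Hz, std {s:.2f}")
```

Mean per-user rate falls and its spread grows as K increases, which is why the delay-optimal number of users depends on the traffic and its delay requirements rather than on sum rate alone.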