D2D-Based Grouped Random Access to Mitigate Mobile Access Congestion in 5G Sensor Networks
The Fifth Generation (5G) wireless service of sensor networks poses
significant challenges in coordinating the ever-increasing number of devices
accessing shared resources. This has drawn major interest from the research
community, and many existing works focus on radio access network congestion
control to efficiently manage resources in the context of device-to-device
(D2D) interaction in large sensor networks. In this context, this paper
pioneers a study of the impact of D2D link reliability in group-assisted
random access protocols, shedding light on the performance benefits and
potential limitations of such approaches with respect to tunable parameters
such as group size, number of sensors, and reliability of D2D links.
Additionally, we leverage the association with a Geolocation Database (GDB)
to assist the grouping decisions, drawing parallels with recent
regulatory-driven initiatives around GDBs and arguing the benefits of the
suggested proposal. Finally, by means of an exhaustive simulation campaign,
the proposed method is shown to significantly reduce the delay over random
access channels.
Comment: First submission to IEEE Communications Magazine on Oct. 28, 2017. Accepted on Aug. 18, 2019. This is the camera-ready version.
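As a rough illustration of why grouping helps on a contention channel (this is a hedged sketch, not the paper's actual protocol: the preamble count, group size, D2D reliability, and retry model are invented parameters), the following Python snippet compares a slotted random-access channel where every sensor contends individually against one where group leaders aggregate their members over D2D links, with members whose D2D link fails contending directly:

```python
import random

def rach_delay(contenders, n_preambles=54, max_rounds=50):
    """Slotted RACH: each contender picks a preamble; unique picks succeed,
    collided contenders retry next round. Returns mean delay in rounds."""
    delays, pending = [], list(range(contenders))
    for t in range(1, max_rounds + 1):
        picks = {}
        for d in pending:
            picks.setdefault(random.randrange(n_preambles), []).append(d)
        for winner in [ds[0] for ds in picks.values() if len(ds) == 1]:
            pending.remove(winner)
            delays.append(t)
        if not pending:
            break
    delays += [max_rounds] * len(pending)  # unresolved contenders count as max
    return sum(delays) / len(delays)

def grouped_rach_delay(n_devices=300, group_size=10, p_d2d=0.9, **kw):
    """Leaders contend once per group; members with failed D2D links
    (probability 1 - p_d2d each) fall back to contending individually."""
    n_groups = n_devices // group_size
    members = n_devices - n_groups
    direct = sum(1 for _ in range(members) if random.random() > p_d2d)
    return rach_delay(n_groups + direct, **kw)

random.seed(1)
print(rach_delay(300))           # every sensor contends on its own
print(grouped_rach_delay(300))   # D2D grouping: far fewer contenders
```

With these toy numbers the grouped variant reduces the contender count from 300 to roughly 30 leaders plus a handful of D2D failures, which is what drives the delay reduction; lowering `p_d2d` shrinks the gain, matching the paper's focus on D2D link reliability as a tunable parameter.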
Machine Learning at the Edge: A Data-Driven Architecture with Applications to 5G Cellular Networks
The fifth generation of cellular networks (5G) will rely on edge cloud
deployments to satisfy the ultra-low latency demand of future applications. In
this paper, we argue that such deployments can also be used to enable advanced
data-driven and Machine Learning (ML) applications in mobile networks. We
propose an edge-controller-based architecture for cellular networks and
evaluate its performance with real data from hundreds of base stations of a
major U.S. operator. In this regard, we will provide insights into how to
dynamically cluster and associate base stations and controllers, according to
the global mobility patterns of the users. Then, we will describe how the
controllers can be used to run ML algorithms to predict the number of users in
each base station, and a use case in which these predictions are exploited by a
higher-layer application to route vehicular traffic according to network Key
Performance Indicators (KPIs). We show that prediction accuracy improves
when the machine learning algorithms exploit the controllers' view, and
consequently the spatial correlation introduced by user mobility, compared
to predictions based only on the local data of each single base station.
Comment: 15 pages, 10 figures, 5 tables. IEEE Transactions on Mobile Computing.
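The paper's central claim, that a controller's cluster-wide view improves per-station prediction, can be illustrated with a hedged toy model (the synthetic traffic traces, dimensions, and least-squares predictor below are invented for illustration and are not the paper's dataset or method): stations under one controller observe time-shifted copies of a common mobility signal, so neighbors' histories carry information a single station's own history lacks.

```python
import numpy as np

rng = np.random.default_rng(0)
T, S = 500, 6   # time steps, base stations under one controller

# Spatially correlated loads: users move across the cluster, so each
# station sees a time-shifted copy of a common mobility signal plus noise.
common = np.sin(np.linspace(0.0, 300.0, T + S))
loads = np.stack([common[s:s + T] + 0.1 * rng.standard_normal(T)
                  for s in range(S)], axis=1)

def one_step_mse(features, target):
    """Least-squares one-step-ahead prediction, MSE on a held-out tail."""
    X, y = features[:-1], target[1:]
    cut = int(0.8 * len(X))
    w, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    return float(np.mean((X[cut:] @ w - y[cut:]) ** 2))

target = loads[:, 0]
local_mse = one_step_mse(loads[:, :1], target)   # station's own history only
cluster_mse = one_step_mse(loads, target)        # controller-wide view
print(local_mse, cluster_mse)
```

In this toy setting the cluster-wide predictor achieves a markedly lower error because the spatial correlation lets it recover the phase of the shared mobility pattern, mirroring the qualitative result reported in the abstract.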
Distributed Cognitive RAT Selection in 5G Heterogeneous Networks: A Machine Learning Approach
The leading role of the HetNet (Heterogeneous Network) strategy as the key Radio Access Network (RAN) architecture for future 5G networks poses serious challenges to the current cell selection mechanisms used in cellular networks. The max-SINR algorithm, although historically effective at performing the most essential networking function of wireless networks, is inefficient at best and obsolete at worst in 5G HetNets. The foreseen embarrassment of riches and the diversified propagation characteristics of network attachment points spanning multiple Radio Access Technologies (RATs) require novel and creative context-aware system designs. The association and routing decisions, in the context of single-RAT or multi-RAT connections, need to be optimized to efficiently exploit the benefits of the architecture. However, the high computational complexity required for the multi-parametric optimization of utility functions, the difficulty of modeling and solving Markov Decision Processes, the lack of stability guarantees for Game Theory algorithms, and the rigidity of simpler methods such as Cell Range Expansion and the operator policies managed by the Access Network Discovery and Selection Function (ANDSF) make none of these state-of-the-art approaches a clear favorite. This Thesis proposes a framework that relies on Machine Learning techniques at the terminal device level for Cognitive RAT Selection. The use of cognition allows the terminal device to learn both a multi-parametric state model and effective decision policies, based on the experience of the device itself. This implies that a terminal, after observing its environment during a learning period, may formulate a system characterization and optimize its own association decisions without any external intervention.
In our proposal, this is achieved through clustering of appropriately defined feature vectors to build a system state model, supervised classification to obtain the current system state, and reinforcement learning to learn good policies. This Thesis describes the above framework in detail and recommends adaptations based on experimentation with the X-means, k-Nearest Neighbors, and Q-learning algorithms, the building blocks of the solution. The network performance of the proposed framework is evaluated in a multi-agent environment implemented in MATLAB, where it is compared with alternative RAT selection mechanisms.
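Of the three building blocks the thesis names, the reinforcement-learning step is the easiest to sketch in isolation. The snippet below runs tabular Q-learning over a toy two-state, two-RAT environment; the state labels, throughput reward model, and parameter values are invented for illustration and are not taken from the thesis (which also feeds the state in from clustering and classification rather than observing it directly):

```python
import random

STATES = ["lte_light", "lte_heavy"]   # discretized "system state" labels
ACTIONS = ["lte", "wifi"]             # candidate RATs

def measured_throughput(state, action):
    """Synthetic noisy throughput per (state, RAT) pair -- an assumption."""
    base = {("lte_light", "lte"): 8.0, ("lte_light", "wifi"): 5.0,
            ("lte_heavy", "lte"): 2.0, ("lte_heavy", "wifi"): 6.0}
    return base[(state, action)] + random.uniform(-1.0, 1.0)

def q_learn(episodes=5000, alpha=0.1, eps=0.1):
    """One-step tabular Q-learning (contextual-bandit form): each association
    decision is a single-step episode rewarded by observed throughput."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(STATES)                 # observed system state
        if random.random() < eps:                 # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        r = measured_throughput(s, a)             # reward = throughput sample
        q[(s, a)] += alpha * (r - q[(s, a)])      # one-step TD update
    return q

random.seed(0)
q_table = q_learn()
policy = {s: max(ACTIONS, key=lambda act: q_table[(s, act)]) for s in STATES}
print(policy)
```

After training, the learned policy offloads to Wi-Fi only when the LTE cell is heavily loaded, which is the kind of context-aware association decision the max-SINR rule cannot express.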
Carrier Sense Random Packet CDMA Protocol in Dual-Channel Networks
Code resource wastage arises when many frequency hopping (FH) sequences go unused, which happens when the number of subnets actually needed by a tactical network is far smaller than the networking capacity of code division networking. A dual-channel network (DCN), consisting of a single control channel and multiple data channels, can effectively mitigate this code resource wastage. To improve the anti-jamming capability of the DCN control channel, code division multiple access (CDMA) technology is introduced, and a carrier sense random packet (CSRP) CDMA protocol based on random packet CDMA (RP-CDMA) is proposed. In CSRP-CDMA, we provide a carrier sensing random packet mechanism and a packet-segment acknowledgement policy. Furthermore, an analytical model is developed to evaluate the performance of CSRP-CDMA networks. In this model, the impacts of both inter-cluster and intra-cluster multi-access interference are analyzed, and mathematical expressions for the packet transmission success probability, the normalized network throughput, and the signal-to-interference-plus-noise ratio are derived. Analytical and simulation results demonstrate that the normalized network throughput of CSRP-CDMA outperforms traditional RP-CDMA by 10%, which guarantees the resource utilization efficiency of the control channel in DCNs.
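The carrier-sense benefit can be illustrated with a hedged Monte Carlo toy model (not the paper's analytical model: the multi-access-interference threshold, deferral rule, and all parameter values below are invented). A slot's packets are treated as decodable when at most `k_mai` spread transmissions overlap, and carrier sensing lets late contenders defer when they hear an ongoing transmission:

```python
import random

def throughput(n_nodes=40, p_tx=0.1, k_mai=4, p_cs=0.0, slots=20000, seed=0):
    """Mean decoded packets per slot under a simple MAI threshold model."""
    rng = random.Random(seed)
    decoded = 0
    for _ in range(slots):
        # each node independently transmits in the slot with probability p_tx
        tx = sum(1 for _ in range(n_nodes) if rng.random() < p_tx)
        if p_cs > 0 and tx > 1:
            # all but the first sense an ongoing transmission; each defers
            # with probability p_cs, thinning out the contention
            tx = 1 + sum(1 for _ in range(tx - 1) if rng.random() > p_cs)
        if 0 < tx <= k_mai:
            decoded += tx   # CDMA spreading absorbs the residual interference
        # tx > k_mai: multi-access interference exceeds the decoding threshold
    return decoded / slots

print(throughput(p_cs=0.0))   # plain RP-CDMA-style random access
print(throughput(p_cs=0.5))   # with carrier sensing on the control channel
```

Even in this crude model, sensing raises the decoded-packet rate by keeping the number of simultaneous transmissions under the interference threshold more often, which is the qualitative effect the CSRP mechanism exploits on the control channel.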