33 research outputs found

    A simplified optimization for resource management in cognitive radio network-based internet-of-things over 5G networks

    With the increasing evolution of applications and services in the Internet of Things (IoT), there is growing concern about offering superior quality of service to its ever-increasing user base. This demand can be met by harnessing the potential of cognitive radio networks (CRNs), where better accessibility of services and resources can be achieved. However, the existing literature shows that open issues remain in this regard, and hence the proposed system offers a solution to this problem. This paper presents a model capable of optimizing resources when a CRN is integrated into the IoT over a fifth-generation (5G) network. The implementation uses analytical modeling to frame the process of topology construction for the IoT and optimizes resources by introducing a simplified data transmission mechanism in the IoT environment. The study outcome shows that the proposed system achieves better throughput and response time than existing schemes.
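    As a rough illustration of the kind of simplified resource optimization the abstract describes, here is a minimal sketch of a greedy assignment of idle licensed channels to IoT secondary users; the node demands, channel rates, and function names are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: greedy channel assignment for CRN-based IoT.
# Channel rates and node demands are illustrative assumptions,
# not values from the paper.

def assign_channels(nodes, idle_channels):
    """Greedily give each IoT node the idle channel with the
    highest estimated rate, removing it from the pool."""
    allocation = {}
    # Serve nodes with the largest demand first.
    for node in sorted(nodes, key=lambda n: n["demand"], reverse=True):
        if not idle_channels:
            break  # no spectrum holes left for secondary users
        best = max(idle_channels, key=lambda ch: ch["rate"])
        allocation[node["id"]] = best["id"]
        idle_channels.remove(best)
    return allocation

nodes = [{"id": "iot-1", "demand": 2.0}, {"id": "iot-2", "demand": 0.5}]
idle_channels = [{"id": "ch-A", "rate": 5.0}, {"id": "ch-B", "rate": 1.0}]
print(assign_channels(nodes, idle_channels))  # {'iot-1': 'ch-A', 'iot-2': 'ch-B'}
```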

    Interference Management with Dynamic Resource Allocation Method on Ultra-Dense Networks in Femto-Macrocellular Network

    An Ultra-Dense Network (UDN), formed from densely deployed femtocells, is known as one of the key technologies for fifth-generation (5G) cellular networks. UDNs promise increased capacity and quality for cellular networks. However, a UDN faces more complex interference problems than sparsely deployed femtocells, and these are worse for femtocells located in the cell-edge area of a macrocell. Therefore, mitigating or reducing the effects of interference is an important issue in UDNs. This paper focuses on interference management using dynamic resource allocation for UDNs. The types of interference considered in this study are cross-tier (macrocell-to-femtocell) and co-tier (femtocell-to-femtocell) interference for uplink transmission. We consider several scenarios to examine the dynamic resource allocation method for UDNs, with femtocells deployed over the whole area of the macrocell and in its cell-edge area. Simulation experiments were carried out in MATLAB. The performance parameters collected from the simulation are Signal to Interference and Noise Ratio (SINR), throughput, and Bit Error Rate (BER). The simulation results show that a system using the dynamic resource allocation method outperforms the conventional system, and the results are consistent across the collected performance parameters. Dynamic resource allocation promises to reduce the effects of interference in UDNs.
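    To make the uplink interference model concrete, here is a minimal sketch of how the SINR at a femtocell base station combines co-tier and cross-tier interference; the power values are illustrative assumptions, not simulation parameters from the paper.

```python
import math

def uplink_sinr(signal_mw, cotier_mw, crosstier_mw, noise_mw):
    """SINR at a femtocell base station: desired received power over
    co-tier (femto-to-femto) plus cross-tier (macro user) interference
    plus thermal noise. All powers in milliwatts."""
    return signal_mw / (sum(cotier_mw) + sum(crosstier_mw) + noise_mw)

def to_db(x):
    return 10 * math.log10(x)

# Illustrative numbers only, not values from the study.
sinr = uplink_sinr(signal_mw=1e-6,
                   cotier_mw=[2e-8, 1e-8],   # two neighboring femto users
                   crosstier_mw=[5e-8],      # one macrocell user nearby
                   noise_mw=1e-9)
print(f"SINR = {to_db(sinr):.1f} dB")  # ~10.9 dB
```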

    An effective communication and computation model based on a hybrid graph-deep learning approach for SIoT.

    Social Edge Service (SES) is an emerging mechanism in Social Internet of Things (SIoT) orchestration for effective user-centric, reliable communication and computation. The services are affected by active and/or passive attacks, such as replay attacks and message tampering, because of sharing the same spectrum, as well as by inadequate trust-measurement methods among intelligent devices (roadside units, mobile edge devices, servers) during computing and content sharing. These issues lead to computation and communication overhead at servers and computation nodes. To address them, we propose the HybridgrAph-Deep-learning (HAD) approach, which secures communication and computation in two stages. First, an Adaptive Trust Weight (ATW) model with relation-based feedback fusion analysis estimates the fitness-priority of every node, based on directed graph theory, to detect malicious nodes and reduce computation and communication overhead. Second, a Quotient User-centric Coeval-Learning (QUCL) mechanism formulates secure channel selection, and a Nash equilibrium method optimizes the communication for sharing data over edge devices. The simulation results confirm that our proposed approach achieves effective communication and computation performance and enhances Social Edge Service (SES) reliability compared with state-of-the-art approaches.
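    A minimal sketch of a trust-weight update over a directed feedback graph, loosely in the spirit of the ATW fitness-priority estimation described above; the smoothing rule, threshold, and node names are assumptions for illustration only, not the paper's model.

```python
# Hypothetical sketch: exponentially smoothed trust scores from a
# directed feedback graph; alpha and the 0.4 threshold are assumptions.

def fitness_priority(feedback, alpha=0.7):
    """feedback: dict mapping (rater, ratee) -> score in [0, 1].
    Returns a smoothed trust score per rated node."""
    trust = {}
    for (_, ratee), score in feedback.items():
        prev = trust.get(ratee, 0.5)            # neutral prior
        trust[ratee] = alpha * prev + (1 - alpha) * score
    return trust

fb = {("rsu-1", "edge-2"): 0.9, ("edge-3", "edge-2"): 0.8,
      ("rsu-1", "edge-4"): 0.1}
scores = fitness_priority(fb)
malicious = [n for n, s in scores.items() if s < 0.4]
print(scores, malicious)  # edge-4 falls below the trust threshold
```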

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenge the classic analytical-derivation-based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces communication cost as one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments. We address practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based communication-efficient distributed learning algorithms are proposed. We exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions. First, we introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks; the large state/action space is decomposed according to the topology of the network and solved by an efficient distributed message passing algorithm, and the inference process is further sped up by an online updating process. Second, we propose a distributed coreset-based boosting framework: an efficient coreset construction algorithm is built on the prior knowledge provided by clustering, the coreset is then integrated with boosting with an improved convergence rate, and the framework is extended to the distributed setting, where the communication cost is reduced by the good approximation properties of the coreset (see the sketch below). Third, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space; based on the prior distribution of the model space, or a large number of samples from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
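    As a rough illustration of the clustering-guided coreset construction mentioned in the second contribution, here is a minimal sketch using a sensitivity-style sampling heuristic; the sampling rule and reweighting are common conventions assumed here, not the thesis's exact algorithm.

```python
# Hypothetical sketch: a clustering prior guides importance sampling,
# so points poorly represented by cluster centers are kept more often.
import numpy as np

def coreset(points, k=3, size=20, seed=0):
    rng = np.random.default_rng(seed)
    # Cheap clustering prior: k random centers, distance to nearest one.
    centers = points[rng.choice(len(points), k, replace=False)]
    dists = np.linalg.norm(points[:, None, :] - centers[None], axis=2).min(axis=1)
    # Points far from every center are more "important" to keep.
    prob = dists / dists.sum()
    idx = rng.choice(len(points), size=size, replace=False, p=prob)
    weights = 1.0 / (size * prob[idx])   # importance-sampling reweighting
    return points[idx], weights

pts = np.random.default_rng(1).normal(size=(500, 2))
subset, w = coreset(pts)
print(subset.shape, w[:3])  # (20, 2) plus the first few weights
```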

    An Extensive Study on the Performance Evaluation and Scheduling of HeNBs

    Since the dawn of mobile communication systems, reducing the cell size has been one option to increase the signal-to-interference-plus-noise ratio (SINR) in both links. The impact of this reduction can be understood by considering Shannon's law. This work studies in detail the performance of Home eNBs (HeNBs), nodes with a smaller coverage area. After a detailed theoretical study of the SINR, a simulation approach is used to extract performance results in small-cell indoor scenarios. Results for goodput, delay and packet loss ratio are analyzed. Based on an improved version of LTE-Sim, the proportional fair, frame level scheduler (FLS) and exponential rule schedulers are tested in an indoor environment. With saturation conditions taken into consideration, the FLS performs better than the other schedulers. This work shows that, with the considered applications, it is possible to reduce the transmitter power of HeNBs without compromising small-cell network performance. This work was supported by the Foundation for Science and Technology/Ministry of Science, Technology and Higher Education (FCT/MCTES) through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, COST CA 15104 Inclusive Radio Communication Networks for 5G and Beyond (IRACON), Optical Radio Convergence Infrastructure for Communications and Power Delivering (ORCIP, 22141-01/SAICT/2016), TeamUp5G and CONQUEST (CMU/ECE/0030/2017). The TeamUp5G project has received funding from the European Union's Horizon 2020 research and innovation programme under Marie Skłodowska-Curie project number 813391.
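    As an aside on the Shannon's-law argument in the abstract above: smaller cells raise the SINR, which maps to capacity through C = B·log2(1 + SINR). A quick worked example at two illustrative SINR levels (the numbers are assumptions, not results from this work):

```python
import math

def shannon_capacity_mbps(bandwidth_hz, sinr_db):
    """Shannon bound C = B * log2(1 + SINR), returned in Mbit/s."""
    sinr = 10 ** (sinr_db / 10)
    return bandwidth_hz * math.log2(1 + sinr) / 1e6

# Illustrative: a 20 MHz carrier at a macro-cell-edge SINR versus an
# indoor HeNB SINR. The ~6x capacity gap motivates smaller cells.
for label, sinr_db in [("macro edge, 0 dB", 0), ("HeNB indoor, 20 dB", 20)]:
    print(f"{label}: {shannon_capacity_mbps(20e6, sinr_db):.0f} Mbit/s")
# macro edge, 0 dB: 20 Mbit/s
# HeNB indoor, 20 dB: 133 Mbit/s
```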

    A model-driven deep reinforcement learning heuristic algorithm for resource allocation in ultra-dense cellular networks

    Resource allocation in ultra-dense networks (UDNs) is a multi-objective optimization problem, since it has to consider the tradeoff among spectrum efficiency (SE), energy efficiency (EE) and fairness. Existing methods cannot effectively solve this NP-hard nonconvex problem, especially in the presence of limited channel state information (CSI). In this paper, we investigate a novel model-driven, deep reinforcement learning-assisted resource allocation method. We first design a novel deep neural network (DNN)-based optimization framework consisting of a series of Alternating Direction Method of Multipliers (ADMM) iterative procedures, which treats the CSI as learned weights. Then a novel channel-information-absent Q-learning resource allocation (CIAQ) algorithm is proposed to train the DNN-based optimization framework without massive labeled data, where the SE, the EE, and fairness can be jointly optimized by adjusting the discount factor. Our simulation results show that the proposed CIAQ, with its rapid convergence, not only well characterizes the optimization objective under partial CSI, but also significantly outperforms random initialization of the neural network and other existing resource allocation algorithms in terms of the tradeoff among SE, EE and fairness.
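    A minimal sketch of the Q-learning side of such a scheme, where the reward jointly mixes SE, EE and fairness and the discount factor tunes the tradeoff; the state/action encoding, reward weights, and toy environment are assumptions for illustration, not the CIAQ algorithm itself.

```python
# Hypothetical sketch: tabular Q-learning with a joint SE/EE/fairness
# reward; gamma (the discount factor) shifts the optimization tradeoff.
import random
from collections import defaultdict

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def reward(se, ee, fairness, w=(0.4, 0.3, 0.3)):
    return w[0] * se + w[1] * ee + w[2] * fairness

def step(state, actions, env):
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda act: Q[(state, act)]))
    nxt, se, ee, jain = env(state, a)          # environment feedback
    target = reward(se, ee, jain) + gamma * max(Q[(nxt, b)] for b in actions)
    Q[(state, a)] += alpha * (target - Q[(state, a)])
    return nxt

def toy_env(state, action):
    # Toy feedback: a higher action index means more transmit power,
    # raising SE but lowering EE; fairness held constant here.
    se, ee, jain = action * 0.5, 1.0 - action * 0.2, 0.8
    return (state + 1) % 4, se, ee, jain

s = 0
for _ in range(100):
    s = step(s, actions=[0, 1, 2], env=toy_env)
print(max(Q.items(), key=lambda kv: kv[1]))    # best learned state-action
```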

    5G and beyond networks

    This chapter investigates the Network Layer aspects that will characterize the merger of the cellular paradigm and IoT architectures in the context of the evolution towards 5G-and-beyond, including some promising emerging services such as Unmanned Aerial Vehicles or Base Stations, and V2X communications.

    D4.2 Intelligent D-Band wireless systems and networks initial designs

    This deliverable gives the results of the ARIADNE project's Task 4.2: Machine learning based network intelligence. It presents the work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The chapters present system-level, simulation and algorithmic models based on multi-agent reinforcement learning, deep reinforcement learning, learning automata for complex event forecasting, a system-level model for proactive handovers and resource allocation, model-driven deep learning-based channel estimation and feedback, as well as strategies for the deployment of machine learning based solutions. In short, D4.2 provides results on promising AI and ML based methods, along with their limitations and potential, as investigated in the ARIADNE project.

    Machine Learning for Unmanned Aerial System (UAS) Networking

    Fueled by the advancement of 5G new radio (5G NR), rapid development has occurred in many fields. Compared with conventional approaches, beamforming and network slicing enable 5G NR to achieve a tenfold improvement in latency, connection density, and experienced throughput over 4G long term evolution (4G LTE). These advantages pave the way for the evolution of Cyber-Physical Systems (CPS) on a large scale. Reduced power consumption, advances in control engineering, and the simplification of the Unmanned Aircraft System (UAS) make large-scale UAS networking deployment feasible. UAS networks can carry out multiple complex missions simultaneously. However, the limitations of conventional approaches still make it a challenge to trade off large-scale management against efficient networking. With 5G NR and machine learning, the contributions of this dissertation can be summarized as follows. I proposed a novel Optimized Ad-hoc On-demand Distance Vector (OAODV) routing protocol to improve the throughput of intra-UAS networking; the protocol reduces system overhead and is efficient. To improve security, I proposed a blockchain scheme to mitigate malicious base stations for cellular-connected UAS networking, and a proof-of-traffic (PoT) mechanism to improve the efficiency of blockchain for large-scale UAS networking. Inspired by the biological cell paradigm, I proposed cell wall routing protocols for heterogeneous UAS networking. With 5G NR, the interconnections between UAS networks can strengthen the throughput and elasticity of UAS networking. With machine learning, the routing scheduling for intra- and inter-UAS networking can enhance the throughput of large-scale UAS networking. The inter-UAS networking can achieve globally max-min throughput via edge coloring, and I leveraged upper and lower bounds to accelerate the edge coloring optimization (a generic sketch follows below). This dissertation paves a way for UAS networking in the integration of CPS and machine learning. UAS networking can achieve outstanding performance in a decentralized architecture. Concurrently, this dissertation gives insights into UAS networking on a large scale. These are fundamental to integrating UAS into the National Airspace System (NAS) and critical to aviation in both manned and unmanned fields. The dissertation provides novel approaches for the promotion of large-scale UAS networking; the proposed approaches extend the state of the art of UAS networking in a decentralized architecture, and all of them can contribute to the establishment of UAS networking with CPS.
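    As a generic illustration of edge coloring for link scheduling (referenced above), here is a minimal greedy sketch: links sharing a UAS receive different colors (e.g., time slots), and Vizing's theorem bounds the number of colors needed between the maximum degree Δ and Δ + 1, the kind of upper/lower bound the dissertation leverages. The routine below is a standard greedy sketch, not the dissertation's algorithm.

```python
# Hypothetical sketch: greedy edge coloring of inter-UAS links so that
# no two links at the same UAS share a color (transmission slot).

def edge_coloring(edges):
    color_of = {}
    for u, v in edges:
        # Colors already taken by links touching either endpoint.
        used = {color_of[e] for e in color_of if u in e or v in e}
        c = 0
        while c in used:       # smallest color free at both endpoints
            c += 1
        color_of[(u, v)] = c
    return color_of

links = [("uav1", "uav2"), ("uav2", "uav3"), ("uav1", "uav3"),
         ("uav3", "uav4")]
print(edge_coloring(links))
# Uses 3 colors; here Δ = 3 (uav3), so this meets Vizing's lower bound.
```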