
    2D Proactive Uplink Resource Allocation Algorithm for Event Based MTC Applications

    We propose a two-dimensional (2D) proactive uplink resource allocation (2D-PURA) algorithm that aims to reduce delay/latency in event-based machine-type communications (MTC) applications. Specifically, when an event of interest occurs at a device, it tends to spread to the neighboring devices; consequently, when a device has data to send to the base station (BS), its neighbors are highly likely to transmit shortly afterwards. We therefore propose to cluster the devices in the neighborhood of the event, also referred to as the disturbance region, into rings based on their distance from the original event, and to proactively allocate uplink resources for these rings in order to reduce uplink latency. To evaluate the proposed algorithm, we analytically derive the mean uplink delay, the proportion of resource conservation due to successful allocations, and the proportion of uplink resource wastage due to unsuccessful allocations for the 2D-PURA algorithm. Numerical results demonstrate that the proposed method reduces mean uplink delay by over 16.5 and 27 percent compared with the 1D algorithm and the standard method, respectively. Comment: 6 pages, 6 figures, published in the 2018 IEEE Wireless Communications and Networking Conference (WCNC).
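    As an illustration of the ring-based clustering idea described above, the following is a minimal sketch, not the paper's actual scheduler: devices are grouped into concentric rings by distance from the event location, and the resulting ring order is the order in which uplink resources would be reserved. The ring width, ring count, and helper names are assumptions made for the example.

```python
import numpy as np

def cluster_into_rings(device_positions, event_position, ring_width, num_rings):
    """Group devices into concentric rings around the event location.

    Ring 0 holds the devices closest to the event; devices beyond the
    outermost ring are left unassigned (index -1).
    """
    distances = np.linalg.norm(device_positions - event_position, axis=1)
    ring_idx = (distances // ring_width).astype(int)
    ring_idx[ring_idx >= num_rings] = -1
    return ring_idx

def proactive_allocation_order(ring_idx, num_rings):
    """Return device indices ordered ring by ring, inner rings first,
    i.e. the order in which uplink resources would be reserved."""
    order = []
    for r in range(num_rings):
        order.extend(np.flatnonzero(ring_idx == r).tolist())
    return order

# Toy usage: 20 devices scattered around an event at the origin.
rng = np.random.default_rng(0)
positions = rng.uniform(-100, 100, size=(20, 2))
rings = cluster_into_rings(positions, np.zeros(2), ring_width=25.0, num_rings=4)
print(proactive_allocation_order(rings, num_rings=4))
```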

    6G White Paper on Machine Learning in Wireless Communication Networks

    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require a great deal of innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of the vision of how ML will impact wireless communication systems. We first give an overview of the ML methods with the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.

    An LSTM-based Network Slicing Classification Future Predictive Framework for Optimized Resource Allocation in C-V2X

    With the advent of 5G communication networks, many novel areas of research have emerged and the spectrum of communicating objects has diversified. Network Function Virtualization (NFV) and Software Defined Networking (SDN) are the two broader areas being extensively explored to optimize network performance parameters. Cellular Vehicle-to-Everything (C-V2X) is one such example, in which end-to-end communication is developed with the aid of intervening network slices. Adoption of these technologies enables a shift towards Ultra-Reliable Low-Latency Communication (URLLC) across various domains, including autonomous vehicles that demand a hundred percent Quality of Service (QoS) and extremely low latency. Because resources for meeting such communication requirements are limited, telecom operators are actively researching software solutions for optimal allocation of network resources. The concept of Network Slicing (NS) emerged from such end-to-end network resource allocation, where connecting devices are routed toward the resources that meet their requirements. Nevertheless, bias in selecting the best slice leads to a non-optimal distribution of resources. To address such issues, a Deep Learning approach is developed in this paper: incoming traffic is allocated to network slices based on data-driven decisions as well as predictive analysis of future network behavior. A Long Short-Term Memory (LSTM) time series prediction approach is adopted, yielding optimal resource utilization, lower latency, and high reliability across the network. The model further ensures packet prioritization and retains a resource margin for crucial packets.
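    To make the slice-classification idea concrete, below is a minimal sketch of an LSTM classifier that maps a window of traffic features to a slice label. This is not the paper's model; the feature count, hidden size, and slice classes (e.g. eMBB/URLLC/mMTC) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """LSTM that maps a window of per-flow traffic features to a
    network-slice class (assumed classes: eMBB / URLLC / mMTC)."""
    def __init__(self, num_features, hidden_size=64, num_slices=3):
        super().__init__()
        self.lstm = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_slices)

    def forward(self, x):                 # x: (batch, time, num_features)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))  # slice logits: (batch, num_slices)

# Toy usage: classify 8 flows, each described by 10 time steps of 5 features.
model = SliceClassifier(num_features=5)
logits = model(torch.randn(8, 10, 5))
predicted_slice = logits.argmax(dim=1)
print(predicted_slice)
```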

    A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning

    Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems. However, a complete cloud resource allocation framework exhibits high dimensionality in its state and action spaces, which limits the usefulness of traditional RL techniques. In addition, high power consumption has become one of the critical concerns in the design and control of cloud computing systems, as it degrades system reliability and increases cooling cost. An effective dynamic power management (DPM) policy should minimize power consumption while keeping performance degradation within an acceptable level. Thus, a joint virtual machine (VM) resource allocation and power management framework is critical to the overall cloud computing system, and a novel solution framework is necessary to address the even higher dimensionality in state and action spaces. In this paper, we propose a novel hierarchical framework for solving the overall resource allocation and power management problem in cloud computing systems. The proposed framework comprises a global tier for VM resource allocation to the servers and a local tier for distributed power management of local servers. The emerging deep reinforcement learning (DRL) technique, which can deal with complicated control problems with large state spaces, is adopted to solve the global tier problem. Furthermore, an autoencoder and a novel weight sharing structure are adopted to handle the high-dimensional state space and accelerate convergence. The local tier of distributed server power management comprises an LSTM-based workload predictor and a model-free RL-based power manager, operating in a distributed manner. Comment: accepted by the 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017).
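    The two-tier structure described above can be sketched as follows. This is a structural illustration only, under assumed interfaces: the global tier is reduced to an epsilon-greedy placeholder rather than a DRL agent, and the local tier uses a naive moving-average forecast in place of the LSTM workload predictor.

```python
import random

class GlobalAllocator:
    """Global tier: picks a server for each incoming VM request.
    Stands in for the paper's DRL agent; here an epsilon-greedy
    choice over an assumed per-server score table."""
    def __init__(self, num_servers, epsilon=0.1):
        self.scores = [0.0] * num_servers
        self.epsilon = epsilon

    def assign(self, vm_request):
        if random.random() < self.epsilon:
            return random.randrange(len(self.scores))
        return max(range(len(self.scores)), key=lambda s: self.scores[s])

class LocalPowerManager:
    """Local tier: decides whether to keep a server active or put it to
    sleep, using a naive moving-average workload forecast in place of
    the paper's LSTM predictor and RL power manager."""
    def __init__(self, window=5, sleep_threshold=0.2):
        self.history = []
        self.window = window
        self.sleep_threshold = sleep_threshold

    def step(self, observed_load):
        self.history.append(observed_load)
        recent = self.history[-self.window:]
        forecast = sum(recent) / len(recent)
        return "sleep" if forecast < self.sleep_threshold else "active"

# Toy usage: route three VM requests and manage one server's power state.
allocator = GlobalAllocator(num_servers=4)
manager = LocalPowerManager()
for load in [0.6, 0.1, 0.05]:
    server = allocator.assign(vm_request={"cpu": 2, "mem_gb": 4})
    print(server, manager.step(load))
```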