
    Transfer Learning for Mixed-Integer Resource Allocation Problems in Wireless Networks

    Effective resource allocation plays a pivotal role in performance optimization for wireless networks. Unfortunately, typical resource allocation problems are mixed-integer nonlinear programming (MINLP) problems, which are NP-hard. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLP problems with affordable computational complexity. However, they suffer from severe performance deterioration when the network parameters change, which commonly happens in practice and can be characterized as the task mismatch issue. In this paper, we propose a transfer learning method via self-imitation to address this issue for effective resource allocation in wireless networks. It is based on a general "learning to optimize" framework for solving MINLP problems. A unique advantage of the proposed method is that it can tackle the task mismatch issue with a few additional unlabeled training samples, which is especially important when transferring to large-size problems. Numerical experiments demonstrate that, with much less training time, the proposed method achieves performance comparable to a model trained from scratch with a sufficient amount of labeled samples. To the best of our knowledge, this is the first work that applies transfer learning to resource allocation in wireless networks.
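
    The self-imitation step can be sketched compactly: run the current policy on unlabeled instances of the new task, keep its own feasible solutions as pseudo-labels, and fine-tune on them. The sketch below is a minimal illustration, assuming a pre-trained PyTorch policy and two hypothetical helpers, solve_with_policy and objective, which stand in for the learned solver and the feasibility/objective check; it is not the paper's exact procedure.

```python
# Minimal self-imitation fine-tuning loop (a sketch, not the paper's method).
import torch
import torch.nn as nn

def self_imitation_transfer(policy, new_instances, solve_with_policy, objective,
                            epochs=5, lr=1e-4):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for inst in new_instances:
            # Run the learned solver on an unlabeled instance of the new task.
            decisions, states = solve_with_policy(policy, inst)
            if objective(inst, decisions) is None:
                continue                      # infeasible: nothing to imitate
            # Self-imitation: the policy's own feasible trajectory becomes the
            # pseudo-label, so no optimally-solved (labeled) samples are needed.
            logits = policy(states)
            loss = loss_fn(logits, decisions.float().view_as(logits))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```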

    LORM: Learning to Optimize for Resource Management in Wireless Networks with Few Training Samples

    Effective resource management plays a pivotal role in wireless networks, but it unfortunately results in challenging mixed-integer nonlinear programming (MINLP) problems in most cases. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLPs with affordable computational complexity. There have been some attempts to apply such methods to resource management in wireless networks, but they require huge amounts of training samples and lack the capability to handle constrained problems. Furthermore, they suffer from severe performance deterioration when the network parameters change, which commonly happens and is referred to as the task mismatch problem. In this paper, to reduce the sample complexity and address the feasibility issue, we propose a framework of Learning to Optimize for Resource Management (LORM). Instead of the end-to-end learning approach adopted in previous studies, LORM learns the optimal pruning policy in the branch-and-bound algorithm for MINLPs via a sample-efficient method, namely, imitation learning. To further address the task mismatch problem, we develop a transfer learning method via self-imitation in LORM, named LORM-TL, which can quickly adapt a pre-trained machine learning model to a new task with only a few additional unlabeled training samples. Numerical simulations demonstrate that LORM outperforms specialized state-of-the-art algorithms and achieves near-optimal performance, while achieving significant speedup compared with the branch-and-bound algorithm. Moreover, LORM-TL, by relying on a few unlabeled samples, achieves performance comparable to the model trained from scratch with a sufficient number of labeled samples.
    Comment: arXiv admin note: text overlap with arXiv:1811.0710
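
    To make the learned-pruning idea concrete, the toy branch-and-bound below lets a policy, rather than the usual bound test, decide whether a node is discarded. Here relax_bound, evaluate, and prune_policy are hypothetical callables standing in for the LP relaxation, the feasibility/objective check, and a classifier trained by imitation learning; this is a sketch of the mechanism, not LORM itself.

```python
# Branch-and-bound over binary variables with a learned pruning policy (sketch).
def branch_and_bound(num_vars, relax_bound, evaluate, prune_policy):
    best_val, best_sol = float("-inf"), None
    stack = [{}]                                   # each node fixes a prefix of variables
    while stack:
        fixed = stack.pop()
        if len(fixed) == num_vars:                 # leaf: complete assignment
            val = evaluate(fixed)                  # None if infeasible
            if val is not None and val > best_val:
                best_val, best_sol = val, dict(fixed)
            continue
        # Features a pruning classifier might see: relaxation bound, incumbent
        # value, and search depth.
        features = (relax_bound(fixed), best_val, len(fixed) / num_vars)
        if prune_policy(features):                 # learned prune/expand decision
            continue
        var = len(fixed)                           # branch on the next variable
        stack.extend({**fixed, var: v} for v in (0, 1))
    return best_val, best_sol
```

    Training then amounts to fitting prune_policy to imitate the prune/expand decisions observed along optimal branch-and-bound traces.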

    Computation Rate Maximization in UAV-Enabled Wireless Powered Mobile-Edge Computing Systems

    Mobile edge computing (MEC) and wireless power transfer (WPT) are two promising techniques to enhance the computation capability and to prolong the operational time of the low-power wireless devices that are ubiquitous in the Internet of Things. However, the computation performance and the harvested energy are significantly impacted by severe propagation loss. To address this issue, an unmanned aerial vehicle (UAV)-enabled MEC wireless powered system is studied in this paper. The computation rate maximization problems in such a system are investigated under both partial and binary computation offloading modes, subject to the energy harvesting causal constraint and the UAV's speed constraint. These problems are non-convex and challenging to solve. A two-stage algorithm and a three-stage alternative algorithm are proposed for solving the two formulated problems, respectively. Closed-form expressions for the optimal central processing unit frequencies, user offloading times, and user transmit powers are derived. An optimal selection scheme on whether users locally compute or offload their computation tasks is proposed for the binary computation offloading mode. Simulation results show that our proposed resource allocation schemes outperform other benchmark schemes. The results also demonstrate that the proposed schemes converge quickly and have low computational complexity.
    Comment: This paper has been accepted by IEEE JSA
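
    As a rough illustration of the binary offloading choice, the toy model below caps either the CPU frequency (local mode) or the transmit power (offloading mode) by the harvested energy budget and picks whichever mode computes more bits within the slot. The kappa*f^3 CPU energy model and all constants are generic textbook assumptions, not the paper's formulation.

```python
# Toy binary-offloading selection under a harvested-energy budget E over slot T.
import math

def local_bits(E, T, kappa=1e-28, cycles_per_bit=1000.0):
    # Energy budget kappa * f**3 * T <= E  =>  f_opt = (E / (kappa * T))**(1/3).
    f_opt = (E / (kappa * T)) ** (1.0 / 3.0)
    return f_opt * T / cycles_per_bit      # bits computed locally during the slot

def offload_bits(E, T, h, B=1e6, noise=1e-9):
    # Spend the whole budget on transmission: p = E / T, then Shannon's formula.
    p = E / T
    return B * T * math.log2(1.0 + p * h / noise)

def choose_mode(E, T, h):
    local, offload = local_bits(E, T), offload_bits(E, T, h)
    return ("offload" if offload > local else "local"), max(local, offload)

print(choose_mode(E=1e-3, T=1.0, h=1e-6))   # -> ('offload', ~1e6 bits)
```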

    Spatial Domain Simultaneous Information and Power Transfer for MIMO Channels

    In this paper, we theoretically investigate a new technique for simultaneous wireless information and power transfer (SWIPT) in point-to-point multiple-input multiple-output (MIMO) systems with radio-frequency energy harvesting capabilities. The proposed technique exploits the spatial decomposition of the MIMO channel and uses the eigenchannels either to convey information or to transfer energy. To generalize our study, we consider channel estimation error in the decomposition process and the interference between the eigenchannels. An optimization problem that minimizes the total transmitted power subject to maximum power per eigenchannel, information, and energy constraints is formulated as a mixed-integer nonlinear program and solved to optimality using mixed-integer second-order cone programming. A near-optimal mixed-integer linear programming solution is also developed with robust computational performance. A polynomial-complexity algorithm is further proposed for the optimal solution of the problem when no maximum-power-per-eigenchannel constraints are imposed. In addition, a low-polynomial-complexity algorithm is developed for the power allocation problem with a given eigenchannel assignment, as well as a low-complexity heuristic for solving the eigenchannel assignment problem.
    Comment: 14 pages, 5 figures, Accepted for publication in IEEE Trans. on Wireless Communication
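
    The eigenchannel idea can be demonstrated with a singular value decomposition: H = U diag(s) V^H turns the MIMO channel into parallel eigenchannels whose power gains are s_i^2, and each eigenchannel is then assigned to information or to energy. The greedy assignment below is purely illustrative; the paper's actual assignment is obtained via mixed-integer second-order cone programming.

```python
# SVD-based eigenchannel decomposition with a toy information/energy assignment.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(H)           # H = U @ np.diag(s) @ Vh
gains = s ** 2                        # eigenchannel power gains

# Greedy illustration: strongest eigenchannels carry information until an
# (assumed) rate target is met; the remaining ones transfer energy.
p, noise, rate_target = 1.0, 1e-2, 10.0   # per-channel power, noise, target (bit/s/Hz)
rate, info_set, energy_set = 0.0, [], []
for k in np.argsort(gains)[::-1]:
    if rate < rate_target:
        info_set.append(int(k))
        rate += np.log2(1.0 + p * gains[k] / noise)
    else:
        energy_set.append(int(k))

harvested = sum(p * gains[k] for k in energy_set)
print(info_set, f"rate={rate:.2f}", energy_set, f"harvested={harvested:.3f}")
```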

    Adaptive Task Allocation for Mobile Edge Learning

    This paper aims to establish a new optimization paradigm for implementing realistic distributed learning algorithms, with performance guarantees, on wireless edge nodes with heterogeneous computing and communication capacities. We refer to this new paradigm as 'Mobile Edge Learning (MEL)'. The problem of dynamic task allocation for MEL is considered with the aim of maximizing the learning accuracy, while guaranteeing that the total times of data distribution/aggregation over heterogeneous channels and of local computing iterations at the heterogeneous nodes are bounded by a preset duration. The problem is first formulated as a quadratically-constrained integer linear problem. Being NP-hard, the problem is relaxed into a non-convex problem over real variables. We then propose two solutions: the first derives analytical upper bounds on the optimal solution of the relaxed problem using Lagrangian analysis and KKT conditions, and the second applies suggest-and-improve starting from an equal batch allocation. The merits of these proposed solutions are exhibited by comparing their performance to both numerical approaches and the equal task allocation approach.
    Comment: 8 pages, 2 figures, submitted to IEEE WCNC Workshop 2019, Morocc
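
    A minimal suggest-and-improve pass is sketched below: start from the equal batch allocation, then repeatedly shift one sample from the bottleneck node to the fastest node while the (simplified) round time keeps improving. The per-node time model, combining data transfer and local computation, is an assumed stand-in for the paper's quadratically-constrained formulation.

```python
# Suggest-and-improve batch allocation over heterogeneous edge nodes (sketch).
def node_time(b, r_comp, r_link, bits_per_sample=1e4):
    # Time for one node: receive its share of the data, then compute on it.
    return b * bits_per_sample / r_link + b / r_comp

def round_time(batches, compute_rate, link_rate):
    return max(node_time(b, rc, rl)
               for b, rc, rl in zip(batches, compute_rate, link_rate))

def suggest_and_improve(total, compute_rate, link_rate, steps=500):
    n = len(compute_rate)
    batches = [total // n] * n                    # "suggest": equal allocation
    batches[0] += total - sum(batches)            # absorb rounding remainder
    for _ in range(steps):                        # "improve": move one sample
        times = [node_time(b, rc, rl)
                 for b, rc, rl in zip(batches, compute_rate, link_rate)]
        slow = max(range(n), key=times.__getitem__)
        fast = min(range(n), key=times.__getitem__)
        trial = list(batches)
        trial[slow] -= 1
        trial[fast] += 1
        if trial[slow] < 0 or round_time(trial, compute_rate, link_rate) >= \
                round_time(batches, compute_rate, link_rate):
            break                                 # no further improvement
        batches = trial
    return batches

print(suggest_and_improve(1000, compute_rate=[2e3, 1e3, 5e2],
                          link_rate=[1e7, 5e6, 2e6]))
```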

    A Survey on Device-to-Device Communication in Cellular Networks

    Device-to-Device (D2D) communication was initially proposed in cellular networks as a new paradigm to enhance network performance. The emergence of new applications such as content distribution and location-aware advertisement introduced new use cases for D2D communication in cellular networks. Initial studies showed that D2D communication has advantages such as increased spectral efficiency and reduced communication delay. However, this communication mode introduces complications in terms of interference control overhead and protocols that are still open research problems. The feasibility of D2D communication in LTE-A is being studied by academia, industry, and standardization bodies. To date, more than 100 papers are available on D2D communication in cellular networks, yet there is no survey of this field. In this article, we provide a taxonomy based on the D2D communication spectrum and review the available literature extensively under the proposed taxonomy. Moreover, we provide new insights into the over-explored and under-explored areas, which leads us to identify open research problems of D2D communication in cellular networks.
    Comment: 18 pages; 8 figures; Accepted for publication in IEEE Communications Surveys and Tutorial

    Large-Scale Convex Optimization for Ultra-Dense Cloud-RAN

    The heterogeneous cloud radio access network (Cloud-RAN) provides a revolutionary way to densify radio access networks. It enables centralized coordination and signal processing for efficient interference management and flexible network adaptation, and can thus resolve the main challenges for next-generation wireless networks, including higher energy efficiency and spectral efficiency, higher cost efficiency, scalable connectivity, and low latency. In this article, we provide an algorithmic perspective on the new design challenges for the dense heterogeneous Cloud-RAN based on convex optimization. As problem sizes scale up with the network size, we demonstrate that it is critical to exploit the unique structures of design problems and the inherent characteristics of wireless channels, with convex optimization serving as a powerful tool for such purposes. Network power minimization and channel state information acquisition are used as two typical examples to demonstrate the effectiveness of convex optimization methods. We then present a two-stage framework for solving general large-scale convex optimization problems, which is amenable to parallel implementation in the cloud data center.
    Comment: to appear in IEEE Wireless Commun. Mag., June 201
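
    The network power minimization example admits a compact convex formulation; below is a small, real-valued cvxpy sketch of its group-sparse beamforming flavor, where a mixed l1/l2 penalty over each remote radio head's block of beamforming coefficients encourages switching entire radio heads off. Dimensions, the SINR target, and the penalty weight are arbitrary illustrative choices, not values from the article.

```python
# Group-sparse beamforming for network power minimization (toy cvxpy sketch).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
L, K, M = 4, 3, 2                        # radio heads, users, antennas per head
H = rng.standard_normal((K, L * M))      # real-valued channels for simplicity

W = cp.Variable((L * M, K))              # stacked beamformers, one column per user
sigma, gamma = 1.0, 1.0                  # noise power, per-user SINR target

constraints = []
for k in range(K):
    hk = H[k]
    # Standard second-order-cone restriction of the SINR constraint.
    interference = cp.hstack([hk @ W[:, j] for j in range(K) if j != k] + [sigma])
    constraints.append(np.sqrt(gamma) * cp.norm(interference, 2) <= hk @ W[:, k])

# Mixed l1/l2 penalty over each radio head's block of rows ("group sparsity").
group_norms = cp.hstack([cp.norm(W[l * M:(l + 1) * M, :], "fro") for l in range(L)])
prob = cp.Problem(cp.Minimize(cp.sum_squares(W) + 10 * cp.sum(group_norms)),
                  constraints)
prob.solve()
print("total objective:", prob.value)
```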

    Joint Spectrum Allocation and Structure Optimization in Green Powered Heterogeneous Cognitive Radio Networks

    We aim at maximizing the sum rate of secondary users (SUs) in OFDM-based heterogeneous cognitive radio (CR) networks with RF energy harvesting. Assuming SUs operate in a time-switching fashion, each time slot is partitioned into three non-overlapping parts devoted to energy harvesting, spectrum sensing, and data transmission. The general problem of joint resource allocation and structure optimization is formulated as a mixed-integer nonlinear programming task, which is NP-hard and intractable, so we propose to tackle it by decomposing it into two subproblems. We first propose a sub-channel allocation scheme that approximately satisfies the SUs' rate requirements and removes the integer constraints. In the second step, we prove that the remaining optimization problem reduces to a convex optimization task. Considering the trade-off among the fractions of each time slot, we focus on optimizing the time slot structures of SUs to maximize the total throughput while guaranteeing the rate requirements of both real-time and non-real-time SUs. Since the reduced optimization problem does not have a simple closed-form solution, we propose a near-optimal closed-form solution by utilizing the Lambert-W function. We also exploit an iterative gradient method based on Lagrangian dual decomposition to achieve near-optimal solutions. Simulation results are presented to validate the optimality of the proposed schemes.
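
    To see how the Lambert-W function yields a closed form, consider a simplified harvest-then-transmit stand-in (an assumption for illustration, not the paper's exact objective): maximize R(tau) = tau * ln(1 + c(1 - tau)/tau) over the slot split tau in (0, 1). Setting the derivative to zero gives z ln z - z = c - 1 with z = 1 + c(1 - tau)/tau, whose solution is z* = exp(W((c - 1)/e) + 1).

```python
# Closed-form time-slot split via the Lambert-W function, checked numerically.
import numpy as np
from scipy.special import lambertw

def optimal_tau(c):
    # Stationarity: z*ln(z) - z = c - 1  =>  z = exp(W((c - 1)/e) + 1).
    z = np.exp(lambertw((c - 1) / np.e).real + 1)
    return c / (z - 1 + c)               # invert z = 1 + c*(1 - tau)/tau

c = 10.0
tau_star = optimal_tau(c)

# Cross-check against a brute-force grid search over tau.
taus = np.linspace(1e-4, 1 - 1e-4, 100_000)
rates = taus * np.log(1.0 + c * (1.0 - taus) / taus)
print(tau_star, taus[np.argmax(rates)])  # both ~0.582 for c = 10
```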

    Recent Advances in Cloud Radio Access Networks: System Architectures, Key Techniques, and Open Issues

    As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both industry and the research community. This paper comprehensively surveys recent advances in C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and their corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified into: fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues and challenges are presented to spur future investigations, in which the involvement of edge caching, big data mining, social-aware device-to-device communication, cognitive radio, software-defined networking, and physical-layer security for C-RANs is discussed, and the progress of testbed development and trial tests is introduced as well.
    Comment: 27 pages, 11 figure

    Device vs Edge Computing for Mobile Services: Delay-aware Decision Making to Minimize Power Consumption

    A promising technique to provide mobile applications with high computation resources is to offload the processing task to the cloud. Utilizing the abundant processing capabilities of the cloud, mobile edge computing enables mobile devices with limited batteries to run resource-hungry applications and to save power. However, it is not always true that edge computing consumes less power than device computing: it may take more power for a mobile device to transmit a file to the cloud than to run the task locally. This paper investigates the power minimization problem for mobile devices through data offloading in multi-cell multi-user OFDMA mobile edge computing networks. We consider the maximum acceptable delay as the QoS metric to be satisfied in our network. We formulate the problem as a mixed-integer nonlinear problem, which is converted into a convex form using D.C. approximation. To solve the converted optimization problem, we propose centralized and distributed algorithms for joint power allocation and channel assignment, together with decision making. Simulation results illustrate that the proposed algorithms achieve considerable power savings, e.g., about 60% for large bit stream sizes compared to the local computing baseline.
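
    The decision rule at the heart of this trade-off can be illustrated with a single-user toy model: compare the minimum CPU power that finishes the task locally within the deadline against the minimum transmit power that delivers the bits to the edge in time. The kappa*f^3 CPU power model and all constants are generic assumptions, not the paper's multi-cell OFDMA formulation.

```python
# Delay-aware local-vs-offload decision for one task (illustrative sketch).
def local_power(cycles, D, kappa=1e-27):
    f = cycles / D                             # slowest CPU speed meeting deadline D
    return kappa * f ** 3                      # dynamic CPU power at frequency f

def offload_power(bits, D, h, B=1e6, noise=1e-9):
    r = bits / D                               # rate required to meet the deadline
    return (2.0 ** (r / B) - 1.0) * noise / h  # invert Shannon capacity for power

bits, cycles, D, h = 1e6, 1e9, 1.0, 1e-6
p_local, p_offload = local_power(cycles, D), offload_power(bits, D, h)
print("offload" if p_offload < p_local else "local", p_local, p_offload)
```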