
    Globally Optimal Energy-Efficient Power Control and Receiver Design in Wireless Networks

    The characterization of the global maximum of energy-efficiency (EE) problems in wireless networks is challenging due to the non-convex nature of the investigated problems in interference channels. The aim of this work is to develop a new and general framework to achieve globally optimal solutions. First, the hidden monotonic structure of the most common EE maximization problems is exploited jointly with fractional programming theory to obtain globally optimal solutions, albeit with a complexity that grows exponentially in the number of network links. To overcome this issue, we also propose a framework that computes suboptimal power control strategies with affordable complexity, obtained by merging fractional programming and sequential optimization. The proposed monotonic framework is used to shed light on the ultimate EE performance of wireless networks and to benchmark the lower-complexity framework based on sequential programming. Numerical evidence shows that the sequential fractional programming framework achieves global optimality in several practical communication scenarios. (Accepted for publication in the IEEE Transactions on Signal Processing.)
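
    The fractional-programming ingredient referred to above is typically a Dinkelbach-type iteration: the ratio rate/power is maximized by repeatedly solving a parametrized "rate minus weighted power" subproblem. The Python sketch below illustrates that iteration on a single-link toy problem; the rate and power models and all constants are assumptions for illustration, not the network model of the paper.

    # Illustrative Dinkelbach iteration for a single-link energy-efficiency
    # problem EE(p) = rate(p) / power(p); all constants are assumed here,
    # not taken from the paper.
    import numpy as np
    from scipy.optimize import minimize_scalar

    g, N0 = 1.0, 0.1       # channel gain and noise power (assumed)
    mu, Pc = 4.0, 1.0      # amplifier inefficiency and static circuit power (assumed)
    Pmax = 10.0            # transmit-power budget (assumed)

    rate = lambda p: np.log2(1.0 + g * p / N0)   # spectral efficiency, bit/s/Hz
    power = lambda p: mu * p + Pc                # consumed power, Watt

    lam = 0.0                                    # current EE estimate
    for _ in range(30):
        # Inner problem: maximize rate(p) - lam * power(p) over [0, Pmax].
        res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                              bounds=(0.0, Pmax), method="bounded")
        p_star = res.x
        new_lam = rate(p_star) / power(p_star)
        if abs(new_lam - lam) < 1e-9:            # Dinkelbach stopping test
            break
        lam = new_lam

    print(f"optimal power {p_star:.3f} W, energy efficiency {lam:.3f} (bit/s/Hz)/W")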

    CoCoA: A General Framework for Communication-Efficient Distributed Optimization

    The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present CoCoA, a general-purpose framework for distributed computing environments that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach to handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework markedly improves on state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.
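
    As a rough illustration of the communication pattern described above (local work on a partition of the data, one update vector exchanged per round), the sketch below runs a CoCoA-style outer loop for ridge regression with dual coordinate ascent as the local solver. The problem sizes, the 1/K averaging rule, and the choice of local solver are illustrative assumptions that simplify the framework's actual local subproblem.

    # Simplified CoCoA-style round for ridge regression: each "worker" runs
    # local dual coordinate ascent on its own partition of the examples and
    # communicates a single primal update vector per round.  Sizes, the 1/K
    # averaging, and the local solver are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, K, lam = 200, 10, 4, 0.1            # examples, features, workers, L2 penalty (assumed)
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    parts = np.array_split(np.arange(n), K)   # partition examples across workers
    alpha = np.zeros(n)                       # dual variables, one per example
    w = np.zeros(d)                           # shared primal model

    for _ in range(50):
        deltas = []
        for idx in parts:                     # in a real system this loop runs in parallel
            w_loc, a_loc = w.copy(), alpha[idx].copy()
            for j in rng.permutation(len(idx)):   # one local dual coordinate ascent sweep
                i = idx[j]
                delta = (y[i] - X[i] @ w_loc - a_loc[j]) / (1.0 + X[i] @ X[i] / (lam * n))
                a_loc[j] += delta
                w_loc += delta * X[i] / (lam * n)
            d_alpha = a_loc - alpha[idx]
            alpha[idx] += d_alpha / K          # conservative 1/K scaling of the local update
            deltas.append(X[idx].T @ d_alpha / (lam * n))
        w += sum(deltas) / K                   # single aggregation step per round

    loss = 0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w
    print("ridge objective:", round(float(loss), 4))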

    Energy-Efficient Power Control: A Look at 5G Wireless Technologies

    This work develops power control algorithms for energy-efficiency (EE) maximization (measured in bit/Joule) in wireless networks. Unlike previous related works, minimum-rate constraints are imposed and the signal-to-interference-plus-noise ratio takes a more general expression, which allows one to encompass some of the most promising 5G candidate technologies. Both network-centric and user-centric EE maximizations are considered. In the network-centric scenario, the maximization of the global EE and of the minimum EE of the network is performed. Unlike previous contributions, we develop centralized algorithms that are guaranteed to converge, with affordable computational complexity, to a Karush-Kuhn-Tucker point of the considered non-convex optimization problems. Moreover, closed-form feasibility conditions are derived. In the user-centric scenario, game theory is used to study the equilibria of the network and to derive convergent power control algorithms that can be implemented in a fully decentralized fashion. Both scenarios are studied under the assumption that either single or multiple resource blocks are employed for data transmission. Numerical results assess the performance of the proposed solutions, analyzing the impact of the minimum-rate constraints and comparing the network-centric and user-centric approaches. (Accepted for publication in the IEEE Transactions on Signal Processing.)
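
    For reference, the two network-centric objectives mentioned above are commonly written as the following fractional programs. The notation is a generic formulation consistent with the abstract, with B the bandwidth, mu_k the inverse amplifier efficiency, and P_{c,k} the static circuit power of link k; these symbols are assumptions for illustration, not the paper's exact model.

    % Global EE maximization (bit/Joule), with minimum-rate and power constraints
    \max_{\mathbf{p}} \;
      \frac{\sum_{k} B \log_2\!\bigl(1+\mathrm{SINR}_k(\mathbf{p})\bigr)}
           {\sum_{k} \bigl(\mu_k p_k + P_{c,k}\bigr)}
    \quad \text{s.t.} \quad
      B \log_2\!\bigl(1+\mathrm{SINR}_k(\mathbf{p})\bigr) \ge R_{\min,k},
      \;\; 0 \le p_k \le P_{\max,k} \;\; \forall k

    % Minimum-EE maximization (max-min fairness in bit/Joule), same constraints
    \max_{\mathbf{p}} \; \min_{k} \;
      \frac{B \log_2\!\bigl(1+\mathrm{SINR}_k(\mathbf{p})\bigr)}
           {\mu_k p_k + P_{c,k}}
    \quad \text{s.t.} \quad
      B \log_2\!\bigl(1+\mathrm{SINR}_k(\mathbf{p})\bigr) \ge R_{\min,k},
      \;\; 0 \le p_k \le P_{\max,k} \;\; \forall k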

    L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework

    Despite the importance of sparsity in many large-scale applications, there are few methods for distributed optimization of sparsity-inducing objectives. In this paper, we present a communication-efficient framework for L1-regularized optimization in the distributed environment. By viewing classical objectives in a more general primal-dual setting, we develop a new class of methods that can be efficiently distributed and applied to common sparsity-inducing models such as the lasso, sparse logistic regression, and elastic net-regularized problems. We provide theoretical convergence guarantees for our framework and demonstrate its efficiency and flexibility with a thorough experimental comparison on Amazon EC2. The proposed framework yields speedups of up to 50x over current state-of-the-art methods for distributed L1-regularized optimization.
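
    To make the kind of update such a framework distributes concrete, the sketch below runs feature-partitioned coordinate descent with soft-thresholding for the Lasso: each worker updates only its own block of coordinates against the shared residual, and one residual-update vector per worker is exchanged each round. The feature partitioning, the 1/K damping, and all problem sizes are illustrative assumptions, not the paper's exact local subproblem or aggregation rule.

    # Illustrative feature-partitioned coordinate descent for the Lasso; the
    # 1/K damping and the problem sizes are assumptions for illustration.
    import numpy as np

    def soft_threshold(a, t):
        return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

    rng = np.random.default_rng(0)
    n, d, K, lam = 200, 50, 4, 0.1                 # examples, features, workers, L1 penalty (assumed)
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d)
    w_true[:5] = rng.standard_normal(5)            # sparse ground truth
    y = X @ w_true + 0.05 * rng.standard_normal(n)

    blocks = np.array_split(np.arange(d), K)       # partition *features* across workers
    w = np.zeros(d)
    r = y - X @ w                                  # shared residual

    for _ in range(200):
        deltas = []
        for blk in blocks:                         # in a real system this loop runs in parallel
            dw = np.zeros(len(blk))
            for j_local, j in enumerate(blk):
                col = X[:, j]
                z = r + col * w[j]                 # residual with coordinate j removed
                w_new = soft_threshold(col @ z, n * lam) / (col @ col)
                dw[j_local] = w_new - w[j]
            deltas.append((blk, dw))
        for blk, dw in deltas:                     # aggregation: damped block update
            w[blk] += dw / K
            r -= X[:, blk] @ (dw / K)

    obj = 0.5 * np.mean((y - X @ w) ** 2) + lam * np.sum(np.abs(w))
    print("lasso objective:", round(float(obj), 4), "nonzeros:", int(np.count_nonzero(w)))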