
    Achieving Global Optimality for Weighted Sum-Rate Maximization in the K-User Gaussian Interference Channel with Multiple Antennas

    Characterizing the global maximum of weighted sum-rate (WSR) for the K-user Gaussian interference channel (GIC), with the interference treated as Gaussian noise, is a key problem in wireless communication. However, due to the users' mutual interference, this problem is in general non-convex and thus cannot be solved directly by conventional convex optimization techniques. In this paper, by jointly utilizing monotonic optimization and rate-profile techniques, we develop a new framework to obtain the globally optimal power control and/or beamforming solutions to the WSR maximization problems for the GICs with single-antenna transmitters and single-antenna receivers (SISO), single-antenna transmitters and multi-antenna receivers (SIMO), or multi-antenna transmitters and single-antenna receivers (MISO). Different from prior work, this paper proposes to maximize the WSR directly over the achievable rate region of the GIC by exploiting the facts that the achievable rate region is a "normal" set and the users' WSR is a "strictly increasing" function over the rate region. Consequently, the WSR maximization is shown to be in the form of monotonic optimization over a normal set and thus can be solved globally optimally by the existing outer polyblock approximation algorithm. However, an essential step in the algorithm hinges on efficiently characterizing the intersection point of the Pareto boundary of the achievable rate region with any prescribed "rate profile" vector. This paper shows that such a problem can be transformed into a sequence of signal-to-interference-plus-noise ratio (SINR) feasibility problems, which can be solved efficiently by existing techniques. Numerical results validate that the proposed algorithms can achieve the global WSR maximum for the SISO, SIMO, or MISO GIC.

    Comment: This is the longer version of a paper to appear in IEEE Transactions on Wireless Communications.
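The step this abstract highlights — finding where a prescribed rate-profile ray pierces the Pareto boundary — reduces to a bisection over SINR feasibility checks. Below is a minimal sketch for the SISO case, assuming the classical linear power-control feasibility condition (spectral-radius test); the function names, the channel-gain convention G[k, j], and the tolerances are illustrative, not the paper's implementation.

```python
import numpy as np

def sinr_feasible(gamma, G, noise, p_max):
    """Check feasibility of SINR targets gamma in a SISO interference channel.
    G[k, j] is the gain from transmitter j to receiver k, noise[k] the noise
    power at receiver k. Classical condition: targets are feasible iff the
    spectral radius of the normalized interference matrix F is below 1 and
    the resulting minimal powers respect the per-user budget p_max."""
    K = len(gamma)
    F = np.zeros((K, K))
    for k in range(K):
        for j in range(K):
            if j != k:
                F[k, j] = gamma[k] * G[k, j] / G[k, k]
    u = gamma * noise / np.diag(G)
    if np.max(np.abs(np.linalg.eigvals(F))) >= 1.0:
        return False
    p = np.linalg.solve(np.eye(K) - F, u)  # minimal powers meeting the targets
    return np.all(p >= 0) and np.all(p <= p_max)

def rate_profile_boundary(alpha, G, noise, p_max, tol=1e-6):
    """Bisection on t so that the rate point t * alpha lies on the Pareto
    boundary; each candidate point reduces to one SINR feasibility check."""
    t_lo, t_hi = 0.0, 1.0
    while sinr_feasible(2.0 ** (t_hi * alpha) - 1.0, G, noise, p_max):
        t_hi *= 2.0  # grow the bracket until the rate point is infeasible
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        if sinr_feasible(2.0 ** (t * alpha) - 1.0, G, noise, p_max):
            t_lo = t
        else:
            t_hi = t
    return t_lo * alpha
```

The returned point approximates the boundary intersection that the outer polyblock approximation algorithm queries repeatedly as it shrinks its enclosing polyblock.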

    Ensuring DNN Solution Feasibility for Optimization Problems with Convex Constraints and Its Application to DC Optimal Power Flow Problems

    Ensuring solution feasibility is a key challenge in developing Deep Neural Network (DNN) schemes for solving constrained optimization problems, due to inherent DNN prediction errors. In this paper, we propose a "preventive learning" framework to systematically guarantee DNN solution feasibility for problems with convex constraints and general objective functions. We first apply a predict-and-reconstruct design to not only guarantee satisfaction of the equality constraints but also exploit them to reduce the number of variables the DNN must predict. Then, as a key methodological contribution, we systematically calibrate the inequality constraints used in DNN training, thereby anticipating prediction errors and ensuring that the resulting solutions remain feasible. We characterize the calibration magnitudes and the DNN size sufficient for ensuring universal feasibility. We propose a new Adversary-Sample Aware training algorithm to improve the DNN's optimality performance without sacrificing the feasibility guarantee. Overall, the framework provides two DNNs: the first, from the characterization of the sufficient DNN size, guarantees universal feasibility, while the second, from the proposed training algorithm, further improves optimality while maintaining universal feasibility. We apply the preventive learning framework to develop DeepOPF+ for solving the essential DC optimal power flow problem in grid operation. It improves over existing DNN-based schemes in ensuring feasibility and attains consistent, desirable speedups in both light-load and heavy-load regimes. Simulation results on the IEEE Case-30/118/300 test cases show that DeepOPF+ generates 100% feasible solutions with <0.5% optimality loss and up to two orders of magnitude computational speedup, compared to a state-of-the-art iterative solver.

    Comment: 43 pages, 9 figures. In submission.
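To make the calibration idea concrete, here is a hedged PyTorch sketch of training against inequality constraints A x <= b tightened by a margin, so that bounded prediction error cannot push a solution outside the true feasible set. The penalty form, margin value, and all helper names are assumptions for illustration; the paper instead characterizes the calibration magnitude and DNN size rigorously.

```python
import torch

def calibrated_penalty(x_pred, A, b, margin):
    """Penalize violations of the *calibrated* constraints A x <= b - margin.
    Training against the tightened region anticipates prediction error: if
    the trained net stays inside it, small errors cannot leave the true
    feasible set A x <= b. The margin here is a placeholder value; the paper
    characterizes how large it must be to guarantee universal feasibility."""
    slack = x_pred @ A.T - (b - margin)   # positive entries are violations
    return torch.relu(slack).sum(dim=-1).mean()

def training_step(net, opt, inputs, targets, A, b, margin, lam=10.0):
    """Hypothetical training step: fit predictions to reference solutions
    while penalizing violations of the calibrated constraints."""
    opt.zero_grad()
    x_pred = net(inputs)
    loss = torch.nn.functional.mse_loss(x_pred, targets) \
           + lam * calibrated_penalty(x_pred, A, b, margin)
    loss.backward()
    opt.step()
    return loss.item()
```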

    Artificial Noise-Aided Biobjective Transmitter Optimization for Service Integration in Multi-User MIMO Gaussian Broadcast Channel

    This paper considers an artificial noise (AN)-aided transmit design for multi-user MIMO systems with integrated services. Specifically, two sorts of service messages are combined and served simultaneously: a multicast message intended for all receivers and a confidential message intended for only one receiver, which must be kept perfectly secure from the other, unauthorized receivers. Our interest lies in the joint design of the input covariances of the multicast message, the confidential message, and the AN, such that the achievable secrecy rate and multicast rate are simultaneously maximized. This problem is identified as a secrecy rate region maximization (SRRM) problem in the context of physical-layer service integration. Since this bi-objective optimization problem is inherently complex to solve, we put forward two different scalarization methods to convert it into a scalar optimization problem. First, we propose to fix the multicast rate at a constant; accordingly, the primal bi-objective problem is converted into a secrecy rate maximization (SRM) problem with a quality of multicast service (QoMS) constraint. By varying the constant, we can obtain different Pareto optimal points. The resulting SRM problem can be iteratively solved via a provably convergent difference-of-concave (DC) algorithm. In the second method, we maximize the weighted sum of the secrecy rate and the multicast rate. By varying the weight vector, one can also obtain different Pareto optimal points. We show that this weighted sum rate maximization (WSRM) problem can be recast into a primal decomposable form, which is amenable to alternating optimization (AO). We then compare these two scalarization methods in terms of overall performance and computational complexity, via theoretical analysis as well as numerical simulation, from which new insights are drawn.

    Comment: 14 pages, 5 figures.
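The two scalarizations can be illustrated on a toy bi-objective problem: an epsilon-constraint version (fix the multicast rate, maximize the secrecy rate) and a weighted-sum version. The sketch below uses scalar stand-in rate functions over a resource split x in [0, 1] rather than the paper's covariance optimization with DC programming and AO; all function names and constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in objectives over a resource split x in [0, 1]; the paper's
# R_secrecy / R_multicast depend on transmit covariances instead.
def r_secrecy(x):   return np.log2(1.0 + 4.0 * x)
def r_multicast(x): return np.log2(1.0 + 4.0 * (1.0 - x))

def eps_constraint(q):
    """Scalarization 1: maximize secrecy rate s.t. multicast rate >= q."""
    res = minimize(lambda x: -r_secrecy(x[0]), x0=[0.5],
                   bounds=[(0.0, 1.0)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: r_multicast(x[0]) - q}])
    return r_secrecy(res.x[0]), r_multicast(res.x[0])

def weighted_sum(w):
    """Scalarization 2: maximize w*R_secrecy + (1 - w)*R_multicast."""
    res = minimize(lambda x: -(w * r_secrecy(x[0])
                               + (1.0 - w) * r_multicast(x[0])),
                   x0=[0.5], bounds=[(0.0, 1.0)])
    return r_secrecy(res.x[0]), r_multicast(res.x[0])

# Sweeping q (or w) traces Pareto-optimal points of the toy rate region.
front = [eps_constraint(q) for q in np.linspace(0.0, 2.0, 9)]
```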

    Functional Inequalities in the Absence of Convexity and Lower Semicontinuity with Applications to Optimization

    In this paper we extend some results in [Dinh, Goberna, López, and Volle, Set-Valued Var. Anal., to appear] to the setting of functional inequalities when the standard assumptions of convexity and lower semicontinuity of the mappings involved are absent. This extension is achieved under a certain condition on the second conjugates of the functions involved. The main result of this paper, Theorem 1, is applied to derive some subdifferential calculus rules and different generalizations of the Farkas lemma for nonconvex systems, as well as some optimality conditions and duality theory for infinite nonconvex optimization problems. Several examples are given to illustrate the significance of the main results, and also to point out the potential of their applications to obtaining various extensions of Farkas-type results and to the study of other classes of problems, such as variational inequalities and equilibrium models. This research was partially supported by MICINN of Spain, grant MTM2008-06695-C03-01.
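For orientation, the classical convex Farkas lemma that such results generalize can be stated as follows; this is the textbook convex, lower semicontinuous case under a standard constraint qualification, not the paper's relaxed version.

```latex
% Classical convex Farkas lemma, stated for orientation only; the paper's
% Theorem 1 relaxes both convexity and lower semicontinuity via a condition
% on the second conjugates of the functions involved.
\[
  \bigl[\, g(x) \le 0 \;\Longrightarrow\; f(x) \ge 0 \,\bigr]
  \quad\Longleftrightarrow\quad
  \exists\, \lambda \ge 0 \ \text{such that}\ f(x) + \lambda\, g(x) \ge 0
  \quad \forall x \in X.
\]
```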

    DC Proximal Newton for Non-Convex Optimization Problems

    We introduce a novel algorithm for solving learning problems where both the loss function and the regularizer are non-convex but belong to the class of difference of convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to deal with such a situation. The algorithm obtains a descent direction from an approximation of the loss function and then performs a line search to ensure sufficient descent. A theoretical analysis is provided showing that the limit points of the iterates of the proposed algorithm are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art for a problem with a convex loss function and a non-convex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem where both the loss function and the regularizer are non-convex.
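As a hedged sketch of the DCA-style idea — linearize the concave part, take a proximal step on the convexified model, then line-search for sufficient descent — consider capped-ℓ1-regularized least squares below. It substitutes a diagonal (Lipschitz-constant) model for the true proximal Newton metric, so it illustrates the descent-direction-plus-line-search structure rather than the paper's actual algorithm; all names and tolerances are assumptions.

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def dc_prox_descent(A, y, lam=0.1, theta=0.5, iters=100):
    """Sketch of a DC proximal scheme for
        min 0.5*||Ax - y||^2 + lam * sum(min(|x_i|, theta)),
    using the DC split r(x) = lam*||x||_1 - lam*sum(max(|x_i| - theta, 0)).
    A diagonal Lipschitz model stands in for the Newton metric."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the loss gradient
    obj = lambda z: (0.5 * np.sum((A @ z - y) ** 2)
                     + lam * np.sum(np.minimum(np.abs(z), theta)))
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        v = lam * np.sign(x) * (np.abs(x) > theta)  # subgradient of the concave part
        d = soft(x - (grad - v) / L, lam / L) - x   # direction from convexified model
        # backtracking line search for sufficient descent on the true objective
        t, f0 = 1.0, obj(x)
        while obj(x + t * d) > f0 - 1e-4 * t * np.dot(d, d) and t > 1e-8:
            t *= 0.5
        x = x + t * d
        if t * np.linalg.norm(d) < 1e-8:
            break
    return x
```

With a diagonal model the convexified subproblem has the closed-form soft-thresholding solution used above; with a full Newton metric, the subproblem becomes a weighted proximal problem solved iteratively.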