456 research outputs found
Alternating direction method of multipliers for penalized zero-variance discriminant analysis
We consider the task of classification in the high dimensional setting where
the number of features of the given data is significantly greater than the
number of observations. To accomplish this task, we propose a heuristic, called
sparse zero-variance discriminant analysis (SZVD), for simultaneously
performing linear discriminant analysis and feature selection on high
dimensional data. This method combines classical zero-variance discriminant
analysis, where discriminant vectors are identified in the null space of the
sample within-class covariance matrix, with penalization applied to induce
sparse structures in the resulting vectors. To approximately solve the
resulting nonconvex problem, we develop a simple algorithm based on the
alternating direction method of multipliers. Further, we show that this
algorithm is applicable to a larger class of penalized generalized eigenvalue
problems, including a particular relaxation of the sparse principal component
analysis problem. Finally, we establish theoretical guarantees for convergence
of our algorithm to stationary points of the original nonconvex problem, and
empirically demonstrate the effectiveness of our heuristic for classifying
simulated data and data drawn from applications in time-series classification.
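As a rough illustration of the kind of splitting scheme described above, here is a minimal two-block ADMM sketch applied to a convex stand-in (the lasso) rather than the paper's nonconvex SZVD objective; the penalty weight lam, the parameter rho, and the simulated p >> n data are assumptions made for this example only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Two-block ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z.

    Convex stand-in for the penalized problem in the abstract; the structure
    (x-update, sparsity-inducing z-update via a prox, dual ascent) is the same.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))     # factor once, reuse every iteration
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)          # sparsity-inducing step
        u = u + x - z                                 # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))                # p >> n regime from the abstract
    x_true = np.zeros(200); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.nonzero(admm_lasso(A, b))[0][:10])       # indices of recovered nonzeros
```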
Quantized Consensus ADMM for Multi-Agent Distributed Optimization
Multi-agent distributed optimization over a network minimizes a global
objective formed by a sum of local convex functions using only local
computation and communication. We develop and analyze a quantized distributed
algorithm based on the alternating direction method of multipliers (ADMM) when
inter-agent communications are subject to finite capacity and other practical
constraints. While existing quantized ADMM approaches only work for quadratic
local objectives, the proposed algorithm can deal with more general objective
functions (possibly non-smooth) including the LASSO. Under certain convexity
assumptions, our algorithm converges to a consensus within $\log_{1+\eta}\Omega$
iterations, where $\eta > 0$ depends on the local objectives and the network
topology, and $\Omega$ is a polynomial determined by the quantization
resolution, the distance between the initial and optimal variable values, the
local objective functions, and the network topology. A tight upper bound on the
consensus error is also obtained, which does not depend on the size of the
network.
Comment: 30 pages, 4 figures; to be submitted to IEEE Trans. Signal Processing. arXiv admin note: text overlap with arXiv:1307.5561 by other authors.
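To make the role of quantized communication concrete, the following toy sketch runs a consensus-ADMM-style iteration in which agents exchange only uniformly quantized copies of their variables. It uses scalar quadratic local objectives (the easy case the abstract contrasts with), a ring topology, and an arbitrarily chosen resolution delta; it is not the paper's algorithm and does not reproduce its convergence bound.

```python
import numpy as np

def quantize(x, delta=0.05):
    """Uniform (mid-tread) quantizer modelling finite-capacity links."""
    return delta * np.round(x / delta)

def quantized_consensus_admm(a, neighbors, rho=1.0, delta=0.05, n_iter=100):
    """Toy quantized consensus ADMM for min_x sum_i 0.5*(x - a_i)^2.

    Quadratic local objectives keep the x-update in closed form; agents only
    ever see quantized copies of their neighbors' variables.
    """
    n = len(a)
    x = np.zeros(n)                        # local primal variables
    alpha = np.zeros(n)                    # local dual variables
    for _ in range(n_iter):
        xq = quantize(x, delta)            # what actually travels over the links
        x_new = np.empty(n)
        for i in range(n):
            d_i = len(neighbors[i])
            s = sum(xq[i] + xq[j] for j in neighbors[i])
            x_new[i] = (a[i] - alpha[i] + rho * s) / (1.0 + 2.0 * rho * d_i)
        xq_new = quantize(x_new, delta)
        for i in range(n):
            alpha[i] += rho * sum(xq_new[i] - xq_new[j] for j in neighbors[i])
        x = x_new
    return x

if __name__ == "__main__":
    a = np.array([1.0, 2.0, 3.0, 4.0])
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    # Entries should cluster near mean(a) = 2.5, up to a quantization-level error.
    print(quantized_consensus_admm(a, ring))
```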
Sample Approximation-Based Deflation Approaches for Chance SINR Constrained Joint Power and Admission Control
Consider the joint power and admission control (JPAC) problem for a
multi-user single-input single-output (SISO) interference channel. Most
existing works on JPAC assume perfect instantaneous channel state
information (CSI). In this paper, we consider the JPAC problem with
imperfect CSI; that is, we assume that only the channel distribution
information (CDI) is available. We formulate the JPAC problem as a chance
(probabilistic) constrained program, where each link's SINR outage probability
is enforced to be less than or equal to a specified tolerance. To circumvent
the computational difficulty of the chance SINR constraints, we propose to use
the sample (scenario) approximation scheme to convert them into finitely many
simple linear constraints. Furthermore, we reformulate the sample approximation
of the chance SINR constrained JPAC problem as a composite group sparse
minimization problem and then approximate it by a second-order cone program
(SOCP). The solution of the SOCP approximation can be used to check the
simultaneous supportability of all links in the network and to guide an
iterative link removal procedure (the deflation approach). We exploit the
special structure of the SOCP approximation and custom-design an efficient
algorithm for solving it. Finally, we illustrate the effectiveness and
efficiency of the proposed sample approximation-based deflation approaches by
simulations.
Comment: The paper has been accepted for publication in IEEE Transactions on Wireless Communications.
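The core of the sample (scenario) approximation step can be illustrated directly: draw channel realizations from the CDI and require the SINR constraint to hold for every sample, which yields finitely many constraints that are linear in the power vector. The sketch below builds those constraints and checks supportability with a generic LP solver; the fading model, target SINR gamma, noise power sigma, and power budget are assumptions, and the paper's group-sparse SOCP reformulation and deflation procedure are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def sample_sinr_constraints(G_samples, gamma, sigma):
    """Turn chance SINR constraints into linear constraints, one per sample.

    For each sampled channel matrix G (G[k, j] = gain from transmitter j to
    receiver k) and each link k, the constraint
        G[k, k]*p[k] / (sum_{j != k} G[k, j]*p[j] + sigma) >= gamma
    is linear in the power vector p. Returns (A_ub, b_ub) with A_ub @ p <= b_ub.
    """
    rows, rhs = [], []
    for G in G_samples:
        K = G.shape[0]
        for k in range(K):
            row = gamma * G[k, :].copy()
            row[k] = -G[k, k]
            rows.append(row)
            rhs.append(-gamma * sigma)
    return np.array(rows), np.array(rhs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K, N = 3, 50                                        # links and channel samples (illustrative)
    G_samples = 0.05 * rng.exponential(1.0, size=(N, K, K))        # cross-link interference gains
    G_samples[:, np.arange(K), np.arange(K)] = 1.0 + rng.exponential(1.0, size=(N, K))  # direct gains
    A_ub, b_ub = sample_sinr_constraints(G_samples, gamma=1.0, sigma=0.1)
    # Minimize total power subject to every sampled SINR constraint and a unit power budget.
    res = linprog(c=np.ones(K), A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * K)
    print("all links supportable:", res.success)
    print(res.x if res.success else "infeasible -> remove a link (deflation step)")
```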
Multi-Agent Distributed Optimization via Inexact Consensus ADMM
Multi-agent distributed consensus optimization problems arise in many signal
processing applications. Recently, the alternating direction method of
multipliers (ADMM) has been used for solving this family of problems. ADMM
based distributed optimization method is shown to have faster convergence rate
compared with classic methods based on consensus subgradient, but can be
computationally expensive, especially for problems with complicated structures
or large dimensions. In this paper, we propose low-complexity algorithms that
can reduce the overall computational cost of consensus ADMM by an order of
magnitude for certain large-scale problems. Central to the proposed algorithms
is the use of an inexact step for each ADMM update, which enables the agents to
perform cheap computation at each iteration. Our convergence analyses show that
the proposed methods converge well under some convexity assumptions. Numerical
results show that the proposed algorithms offer considerably lower
computational complexity than the standard ADMM based distributed optimization
methods.
Comment: Submitted to IEEE Trans. Signal Processing; revised April 2014 and August 2014.
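The "inexact step" idea can be sketched on a toy consensus least-squares problem: each agent replaces the exact minimization in its ADMM update with a single gradient step on its local augmented Lagrangian. The step-size rule, topology, and data below are assumptions for illustration, not the algorithms analyzed in the paper.

```python
import numpy as np

def inexact_consensus_admm(A, b, neighbors, rho=1.0, n_iter=500):
    """Toy inexact consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2.

    Instead of solving each agent's subproblem exactly (a linear system here),
    every agent takes one cheap gradient step on its local augmented Lagrangian.
    """
    n_agents, d = len(A), A[0].shape[1]
    x = np.zeros((n_agents, d))
    alpha = np.zeros((n_agents, d))
    for _ in range(n_iter):
        x_new = np.empty_like(x)
        for i, (Ai, bi) in enumerate(zip(A, b)):
            d_i = len(neighbors[i])
            g = Ai.T @ (Ai @ x[i] - bi) + alpha[i] \
                + rho * sum(x[i] - x[j] for j in neighbors[i])
            L_i = np.linalg.norm(Ai, 2) ** 2                  # gradient Lipschitz constant
            x_new[i] = x[i] - g / (L_i + 2.0 * rho * d_i)     # single inexact step
        for i in range(n_agents):
            alpha[i] += rho * sum(x_new[i] - x_new[j] for j in neighbors[i])
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x_true = rng.standard_normal(3)
    A = [rng.standard_normal((8, 3)) for _ in range(4)]
    b = [Ai @ x_true for Ai in A]
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    # Each row should be close to x_true once the agents reach consensus.
    print(np.round(inexact_consensus_admm(A, b, ring), 3))
```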
Decomposition by Successive Convex Approximation: A Unifying Approach for Linear Transceiver Design in Heterogeneous Networks
We study the downlink linear precoder design problem in a multi-cell dense
heterogeneous network (HetNet). The problem is formulated as a general
sum-utility maximization (SUM) problem, which includes as special cases many
practical precoder design problems such as multi-cell coordinated linear
precoding, full and partial per-cell coordinated multi-point transmission,
zero-forcing precoding and joint BS clustering and beamforming/precoding. The
SUM problem is difficult due to its non-convexity and the tight coupling of the
users' precoders. In this paper we propose a novel convex approximation
technique to approximate the original problem by a series of convex
subproblems, each of which decomposes across all the cells. The convexity of
the subproblems allows for efficient computation, while their decomposability
leads to distributed implementation. Our approach hinges upon the
identification of certain key convexity properties of the sum-utility
objective, which allows us to transform the problem into a form that can be
solved using a popular algorithmic framework called BSUM (Block Successive
Upper-Bound Minimization). Simulation experiments show that the proposed
framework is effective for solving interference management problems in large
HetNets.
Comment: Accepted by IEEE Transactions on Wireless Communications.
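The BSUM idea referenced above, repeatedly minimizing a convex upper bound that is tight at the current iterate, can be shown in its simplest single-block form, where a quadratic surrogate turns each step into a gradient update. The toy nonconvex objective and the smoothness constant L below are assumptions; the paper applies per-block surrogates to the multi-cell precoding problem.

```python
import numpy as np

def bsum_single_block(f_grad, x0, L, n_iter=200):
    """Successive upper-bound minimization with one block and a quadratic surrogate.

    At iterate x^k the convex surrogate
        u(x; x^k) = f(x^k) + <grad f(x^k), x - x^k> + (L/2)*||x - x^k||^2
    upper-bounds f and is tight at x^k; minimizing it gives the next iterate.
    With a single block this reduces to a gradient step of size 1/L.
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = x - f_grad(x) / L            # argmin of the quadratic upper bound
    return x

if __name__ == "__main__":
    c = np.array([2.0, -1.0])
    # Nonconvex toy objective: f(x) = sum(cos(x_i)) + 0.05*||x - c||^2, so L <= 1.1.
    f_grad = lambda x: -np.sin(x) + 0.1 * (x - c)
    print(np.round(bsum_single_block(f_grad, x0=np.zeros(2), L=1.1), 3))
```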
Joint Downlink Base Station Association and Power Control for Max-Min Fairness: Computation and Complexity
In a heterogeneous network (HetNet) with a large number of low power base
stations (BSs), proper user-BS association and power control are crucial to
achieving desirable system performance. In this paper, we systematically study
the joint BS association and power allocation problem for a downlink cellular
network under the max-min fairness criterion. First, we show that this problem
is NP-hard. Second, we show that the upper bound of the optimal value can be
easily computed, and propose a two-stage algorithm to find a high-quality
suboptimal solution. Simulation results show that the proposed algorithm is
near-optimal in the high-SNR regime. Third, we show that, under some
additional mild assumptions, the problem can be solved to global optimality in
polynomial time by a semi-distributed algorithm. This result is based on a
transformation of the original problem to an assignment problem with gains
$\log(g_{ij})$, where $g_{ij}$ are the channel gains.
Comment: 24 pages, 7 figures; a shorter version submitted to IEEE JSAC.
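The closing remark about an assignment problem with gains $\log(g_{ij})$ can be illustrated with a standard solver: given a matrix of channel gains, a one-to-one user-BS matching that maximizes the total log-gain is an assignment problem. The random gains below are assumptions, and the sketch covers only the association step, not the joint power control or the max-min transformation established in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_users(gains):
    """Solve an assignment problem with gains log(g_ij).

    gains[i, j] is the channel gain between user i and BS j; the Hungarian
    method finds a one-to-one user-BS matching maximizing the total log-gain.
    """
    log_gain = np.log(gains)
    users, bss = linear_sum_assignment(-log_gain)   # negate because the solver minimizes
    return dict(zip(users.tolist(), bss.tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    gains = rng.exponential(1.0, size=(4, 4))       # illustrative Rayleigh-like gains
    print(associate_users(gains))                   # user index -> assigned BS index
```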
NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization
We study a stochastic and distributed algorithm for nonconvex problems whose
objective consists of a sum of $N$ nonconvex $L_i$-smooth functions, plus a
nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT)
algorithm splits the problem into $N$ subproblems and utilizes an augmented
Lagrangian based primal-dual scheme to solve it in a distributed and stochastic
manner. With a special non-uniform sampling, a version of NESTT achieves an
$\epsilon$-stationary solution using a number of gradient evaluations that can
be up to $\mathcal{O}(N)$ times smaller than that of the (proximal) gradient
descent methods. It also achieves a Q-linear convergence rate for nonconvex
$\ell_1$ penalized quadratic problems with polyhedral constraints. Further, we
reveal a fundamental connection between primal-dual based methods and a few
primal-only methods such as IAG/SAG/SAGA.
Comment: 35 pages, 2 figures.
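The last sentence relates NESTT to primal-only incremental methods such as SAGA. To make that family concrete, here is a minimal SAGA sketch for a smooth finite-sum least-squares problem; it is not NESTT itself, and the component functions, step-size rule, and uniform sampling are assumptions (NESTT's non-uniform sampling is not reproduced).

```python
import numpy as np

def saga(A, b, step=None, n_epochs=50, seed=0):
    """Minimal SAGA for min_x (1/N) * sum_i 0.5*(a_i^T x - b_i)^2.

    Keeps a table of the most recent gradient of every component; each
    iteration refreshes one randomly chosen entry and steps along the
    variance-reduced direction g_new - g_old + table_mean.
    """
    rng = np.random.default_rng(seed)
    N, d = A.shape
    x = np.zeros(d)
    table = A * (A @ x - b)[:, None]                 # per-component gradients at x = 0
    table_mean = table.mean(axis=0)
    if step is None:
        step = 1.0 / (3.0 * np.max(np.sum(A * A, axis=1)))   # 1 / (3 * max_i L_i)
    for _ in range(n_epochs * N):
        i = rng.integers(N)
        g_new = A[i] * (A[i] @ x - b[i])
        x = x - step * (g_new - table[i] + table_mean)
        table_mean += (g_new - table[i]) / N
        table[i] = g_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x_true = rng.standard_normal(5)
    A = rng.standard_normal((100, 5))
    b = A @ x_true
    print(np.round(saga(A, b) - x_true, 4))          # entries should be near zero
```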
A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality
Symmetric nonnegative matrix factorization (SymNMF) has important
applications in data analytics problems such as document clustering, community
detection and image segmentation. In this paper, we propose a novel nonconvex
variable splitting method for solving SymNMF. The proposed algorithm is
guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the
nonconvex SymNMF problem. Furthermore, it achieves a global sublinear
convergence rate. We also show that the algorithm can be efficiently
implemented in parallel. Further, sufficient conditions are provided which
guarantee the global and local optimality of the obtained solutions. Extensive
numerical results performed on both synthetic and real data sets suggest that
the proposed algorithm converges quickly to a local minimum solution.
Comment: IEEE Transactions on Signal Processing (to appear).
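A rough way to see the variable-splitting idea: duplicate the symmetric factor into two copies W and H, penalize their disagreement, and alternate regularized least-squares updates with a projection onto the nonnegative orthant. This heuristic sketch is not the paper's algorithm and carries none of its KKT or optimality guarantees; the penalty weight mu and the update rule are assumptions.

```python
import numpy as np

def symnmf_splitting(A, r, mu=1.0, n_iter=300, seed=0):
    """Variable-splitting heuristic for SymNMF: A ~ X X^T with X >= 0.

    Splits the symmetric factor into two copies W and H, targets
        min_{W, H >= 0} ||A - W H^T||_F^2 + mu*||W - H||_F^2
    via alternating regularized least-squares updates followed by a crude
    projection onto the nonnegative orthant.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W, H = rng.random((n, r)), rng.random((n, r))
    for _ in range(n_iter):
        W = np.maximum((A @ H + mu * H) @ np.linalg.inv(H.T @ H + mu * np.eye(r)), 0.0)
        H = np.maximum((A.T @ W + mu * W) @ np.linalg.inv(W.T @ W + mu * np.eye(r)), 0.0)
    return 0.5 * (W + H)                       # the two copies agree up to the penalty

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X_true = rng.random((30, 3))
    A = X_true @ X_true.T                      # symmetric nonnegative similarity matrix
    X = symnmf_splitting(A, r=3)
    print(round(float(np.linalg.norm(A - X @ X.T) / np.linalg.norm(A)), 3))  # relative error
```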
Iteration Complexity Analysis of Block Coordinate Descent Methods
In this paper, we provide a unified iteration complexity analysis for a
family of general block coordinate descent (BCD) methods, covering popular
methods such as the block coordinate gradient descent (BCGD) and the block
coordinate proximal gradient (BCPG), under various coordinate update
rules. We unify these algorithms under the so-called Block Successive
Upper-bound Minimization (BSUM) framework, and show that for a broad class of
multi-block nonsmooth convex problems, all algorithms covered by the BSUM
framework achieve a global sublinear iteration complexity of $\mathcal{O}(1/r)$,
where $r$ is the iteration index. Moreover, for the case of block coordinate
minimization (BCM) where each block is minimized exactly, we establish the
sublinear convergence rate of $\mathcal{O}(1/r)$ without the per-block strong
convexity assumption. Further, we show that when there are only two blocks of
variables, a special BSUM algorithm with Gauss-Seidel rule can be accelerated
to achieve an improved rate of $\mathcal{O}(1/r^2)$.
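One concrete member of the BSUM family analyzed here is block coordinate proximal gradient (BCPG): cycle through the blocks and update each with a proximal gradient step using that block's own Lipschitz constant. The sketch below applies cyclic BCPG to a small l1-regularized least-squares problem; the block partition and parameters are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bcpg(A, b, blocks, lam=0.1, n_cycles=100):
    """Cyclic block coordinate proximal gradient (BCPG) for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, with x split into index blocks.

    Each block update is one proximal gradient step with that block's own
    Lipschitz constant, one member of the BSUM family covered by the analysis.
    """
    x = np.zeros(A.shape[1])
    L = [np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]   # per-block constants
    for _ in range(n_cycles):
        for blk, L_j in zip(blocks, L):
            grad_j = A[:, blk].T @ (A @ x - b)
            x[blk] = soft_threshold(x[blk] - grad_j / L_j, lam / L_j)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    A = rng.standard_normal((40, 12))
    x_true = np.zeros(12); x_true[[0, 5, 9]] = [1.0, -2.0, 0.5]
    b = A @ x_true
    blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
    print(np.round(bcpg(A, b, blocks), 2))       # sparse vector close to x_true
```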
- …