18 research outputs found
Matrix Infinitely Divisible Series: Tail Inequalities and Applications in Optimization
In this paper, we study tail inequalities of the largest eigenvalue of a
matrix infinitely divisible (i.d.) series, which is a finite sum of fixed
matrices weighted by i.d. random variables. We obtain several types of tail
inequalities, including Bennett-type and Bernstein-type inequalities. This
allows us to further bound the expectation of the spectral norm of a matrix
i.d. series. Moreover, by developing a new lower bound for the function that
appears in the Bennett-type inequality, we derive
a tighter tail inequality of the largest eigenvalue of the matrix i.d. series
than the Bernstein-type inequality when the matrix dimension is high. The
resulting lower bound is of independent interest and can improve any
Bennett-type concentration inequality that involves this function. The
class of i.d. probability distributions is large and includes Gaussian and
Poisson distributions, among many others. Therefore, our results encompass the
existing work \cite{tropp2012user} on matrix Gaussian series as a special case.
Lastly, we show that the tail inequalities of a matrix i.d. series have
applications in several optimization problems including the chance constrained
optimization problem and the quadratic optimization problem with orthogonality
constraints.
Comment: Comments welcome.
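The Gaussian special case mentioned above (a matrix Gaussian series, as covered by \cite{tropp2012user}) can be checked numerically. The sketch below is illustrative only: the matrices, the weight distribution, and the tail level t = 3*sigma are arbitrary choices, and the bound used is the standard matrix-Gaussian-series tail d * exp(-t^2 / (2 sigma^2)), not the paper's sharper i.d. inequalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed symmetric matrices A_k; the series is S = sum_k g_k * A_k, g_k ~ N(0,1).
d, K = 10, 30
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((d, d))) for _ in range(K)]

# Variance parameter sigma^2 = || sum_k A_k^2 || (spectral norm).
sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)

t = 3.0 * np.sqrt(sigma2)

# Matrix-Gaussian-series tail: P(lmax(S) >= t) <= d * exp(-t^2 / (2 sigma^2)).
bound = d * np.exp(-t**2 / (2 * sigma2))

# Monte Carlo estimate of the same tail probability.
trials = 2000
hits = 0
for _ in range(trials):
    g = rng.standard_normal(K)
    S = sum(gk * Ak for gk, Ak in zip(g, A))
    if np.linalg.eigvalsh(S)[-1] >= t:  # eigvalsh is ascending; [-1] is lmax
        hits += 1
empirical = hits / trials

print(f"bound={bound:.4g}  empirical={empirical:.4g}")
```

The empirical tail should sit comfortably below the theoretical bound, which is the point of such "user-friendly" inequalities.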
A Matrix Hyperbolic Cosine Algorithm and Applications
In this paper, we generalize Spencer's hyperbolic cosine algorithm to the
matrix-valued setting. We apply the proposed algorithm to several problems by
analyzing its computational efficiency in two special cases: one in which the
matrices have a group structure, and another in which they have rank one. As
an application of the former case, we present a deterministic algorithm that,
given the multiplication table of a finite group of size n, constructs an
expanding Cayley graph of logarithmic degree in near-optimal
O(n^2 log^3 n) time. For the latter case, we present a fast deterministic
algorithm for spectral sparsification of positive semi-definite matrices, which
implies an improved deterministic algorithm for spectral graph sparsification
of dense graphs. In addition, we give an elementary connection between spectral
sparsification of positive semi-definite matrices and element-wise matrix
sparsification. As a consequence, we obtain improved element-wise
sparsification algorithms for diagonally dominant-like matrices.
Comment: 16 pages; simplified proof and corrected acknowledgment of prior work
in (current) Section
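As a rough sketch of the hyperbolic cosine potential in the matrix-valued setting: choose each sign greedily to minimize tr cosh(S), a potential that controls exp(||S||) up to a dimension factor. Everything below (matrix sizes, scaling, the all-ones comparison) is an illustrative assumption, not the paper's algorithm for the group-structured or rank-one cases.

```python
import numpy as np

def mcosh_trace(S):
    """tr cosh(S) for a symmetric matrix S, via its eigenvalues."""
    return float(np.sum(np.cosh(np.linalg.eigvalsh(S))))

rng = np.random.default_rng(1)
d, n = 8, 60
# Illustrative symmetric inputs M_1, ..., M_n (scaled to modest spectral norm).
M = [(lambda B: (B + B.T) / 2)(rng.standard_normal((d, d)) / np.sqrt(d))
     for _ in range(n)]

# Greedy derandomization: pick each sign to keep the potential tr cosh(S) small.
S = np.zeros((d, d))
signs = []
for Mi in M:
    eps = 1 if mcosh_trace(S + Mi) <= mcosh_trace(S - Mi) else -1
    S = S + eps * Mi
    signs.append(eps)

disc = np.linalg.norm(S, ord=2)           # spectral norm of the signed sum
naive = np.linalg.norm(sum(M), ord=2)     # all-(+1) signing, for comparison
print(f"greedy ||S|| = {disc:.3f}   all-ones ||S|| = {naive:.3f}")
```

The Chernoff-style analysis behind the potential gives a (loose) deterministic guarantee of roughly ln(2d) + (1/2) * sum_i ||M_i||^2 on ||S||; in practice the greedy signing lands far below it.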
Sample Approximation-Based Deflation Approaches for Chance SINR Constrained Joint Power and Admission Control
Consider the joint power and admission control (JPAC) problem for a
multi-user single-input single-output (SISO) interference channel. Most
existing works on JPAC assume perfect instantaneous channel state information
(CSI). In this paper, we consider the JPAC problem with imperfect CSI; that
is, we assume that only the channel distribution information (CDI) is
available. We formulate the JPAC problem as a chance (probabilistic)
constrained program, where each link's SINR outage probability
is enforced to be less than or equal to a specified tolerance. To circumvent
the computational difficulty of the chance SINR constraints, we propose to use
the sample (scenario) approximation scheme to convert them into finitely many
simple linear constraints. Furthermore, we reformulate the sample approximation
of the chance SINR constrained JPAC problem as a composite group sparse
minimization problem and then approximate it by a second-order cone program
(SOCP). The solution of the SOCP approximation can be used to check the
simultaneous supportability of all links in the network and to guide an
iterative link removal procedure (the deflation approach). We exploit the
special structure of the SOCP approximation and custom-design an efficient
algorithm for solving it. Finally, we illustrate the effectiveness and
efficiency of the proposed sample approximation-based deflation approaches by
simulations.
Comment: The paper has been accepted for publication in IEEE Transactions on
Wireless Communications
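A minimal sketch of the sample (scenario) idea: draw channel realizations from the CDI, observe that for a fixed power vector each scenario yields K linear SINR constraints, and use the fraction of violated scenarios as an estimate of the outage probability. The Rayleigh-fading model, parameter values, and candidate power vector below are assumptions for illustration; the admission-control and SOCP machinery of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 3        # number of links
gamma = 1.0  # SINR target (illustrative)
eta = 0.1    # noise power (illustrative)
N = 500      # number of sampled scenarios

# Rayleigh-fading CDI: channel power gains are exponential with unit mean.
# G[s, k, j] is the gain from transmitter j to receiver k in scenario s.
G = rng.exponential(scale=1.0, size=(N, K, K))

p = np.array([1.0, 1.0, 1.0])  # a candidate power vector to test

def sinr_ok(Gs, p):
    """One scenario -> K linear constraints: g_kk p_k >= gamma*(interference + eta)."""
    signal = np.diagonal(Gs) * p
    interference = Gs @ p - signal
    return signal >= gamma * (interference + eta)

# Sample approximation: the chance constraint is replaced by requiring the
# linear SINR constraints to hold across the drawn scenarios; the empirical
# violation rate approximates the all-link outage probability.
ok = np.array([sinr_ok(G[i], p).all() for i in range(N)])
outage_est = 1.0 - ok.mean()
print(f"estimated all-link outage probability: {outage_est:.3f}")
```

Each scenario contributes only linear constraints in p, which is what makes the sample approximation computationally simple compared with the original chance constraint.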
Tail bounds for all eigenvalues of a sum of random matrices
This work introduces the minimax Laplace transform method, a modification of
the cumulant-based matrix Laplace transform method developed in "User-friendly
tail bounds for sums of random matrices" (arXiv:1004.4389v6) that yields both
upper and lower bounds on each eigenvalue of a sum of random self-adjoint
matrices. This machinery is used to derive eigenvalue analogues of the
classical Chernoff, Bennett, and Bernstein bounds.
Two examples demonstrate the efficacy of the minimax Laplace transform. The
first concerns the effects of column sparsification on the spectrum of a matrix
with orthonormal rows. Here, the behavior of the singular values can be
described in terms of coherence-like quantities. The second example addresses
the question of relative accuracy in the estimation of eigenvalues of the
covariance matrix of a random process. Standard results on the convergence of
sample covariance matrices provide bounds on the number of samples needed to
obtain relative accuracy in the spectral norm, but these results only guarantee
relative accuracy in the estimate of the maximum eigenvalue. The minimax
Laplace transform argument establishes that, if the lowest eigenvalues decay
sufficiently fast, then on the order of (K^2 * r * log p)/eps^2 samples, where
K is the condition number of an optimal rank-r approximation to C, are
sufficient to ensure that the dominant r eigenvalues of the covariance matrix
of a N(0, C) random vector are estimated to within a factor of 1 +- eps with
high probability.
Comment: 20 pages, 1 figure; see also arXiv:1004.4389v
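The covariance setting can be simulated directly: draw n samples of a N(0, C) vector whose spectrum has r dominant eigenvalues and a fast-decaying tail, and compare the dominant sample eigenvalues to the truth. The spectrum, dimensions, and sample size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
p, r = 50, 3

# Covariance with r dominant eigenvalues and a rapidly decaying tail.
eigs = np.concatenate([np.array([10.0, 8.0, 6.0]),
                       0.01 * 0.5 ** np.arange(p - r)])
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
C = (Q * eigs) @ Q.T  # C = Q diag(eigs) Q^T

n = 2000  # number of samples (illustrative)
X = rng.multivariate_normal(np.zeros(p), C, size=n)
C_hat = X.T @ X / n   # sample covariance

top = np.sort(np.linalg.eigvalsh(C))[::-1][:r]
top_hat = np.sort(np.linalg.eigvalsh(C_hat))[::-1][:r]
rel_err = np.max(np.abs(top_hat - top) / top)
print(f"max relative error on the dominant {r} eigenvalues: {rel_err:.4f}")
```

Because the trailing eigenvalues decay fast, the dominant eigenvalues are recovered to small relative error even though n is far too small for relative accuracy across the whole spectrum.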
Approximating the Little Grothendieck Problem over the Orthogonal and Unitary Groups
The little Grothendieck problem consists of maximizing sum_{i,j} C_{ij} x_i x_j
over binary variables x_i in {+-1}, where C is a
positive semidefinite matrix. In this paper we focus on a natural
generalization of this problem, the little Grothendieck problem over the
orthogonal group. Given C, a dn x dn positive semidefinite matrix, the
objective is to maximize sum_{i,j} tr(C_{ij}^T O_i O_j^T), restricting
O_1, ..., O_n to take values in the group of d x d orthogonal matrices, where
C_{ij} denotes the (ij)-th d x d block of C. We propose an approximation
algorithm, which we refer to as
Orthogonal-Cut, to solve this problem and show a constant approximation ratio.
Our method is based on semidefinite programming. For a given d >= 1, we show
a constant approximation ratio of alpha(d)^2, where alpha(d) is
the expected average singular value of a d x d matrix with random Gaussian
i.i.d. entries. For d = 1 we recover the known 2/pi
approximation guarantee for the classical little Grothendieck problem. Our
algorithm and analysis naturally extend to the complex-valued case, also
providing a constant approximation ratio for the analogous problem over the
unitary group.
Orthogonal-Cut also serves as an approximation algorithm for several
applications, including the Procrustes problem, where it improves over the
best previously known approximation ratio of 1/(2*sqrt(2)). The little
Grothendieck problem falls under the class of problems approximated by a recent
algorithm proposed in the context of the non-commutative Grothendieck
inequality. Nonetheless, our approach is simpler and provides a more
efficient algorithm with better approximation ratios and matching integrality
gaps.
Finally, we also provide an improved approximation algorithm for the more
general little Grothendieck problem over the orthogonal (or unitary) group with
rank constraints.
Comment: Updates in version 2: extension to the complex-valued (unitary group)
case, sharper lower bounds on the approximation ratios, matching integrality
gap, and a generalized rank-constrained version of the problem. Updates in
version 3: improvements to the exposition.
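The rounding step of an Orthogonal-Cut-style algorithm can be sketched as follows: given a factor V of a feasible solution to the semidefinite relaxation (the SDP solve itself is omitted; a feasible factor is fabricated here purely for illustration), hit each block with a shared Gaussian matrix and project the result onto the orthogonal group via its polar factor. All names and dimensions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 3, 5
m = d * n

def polar(A):
    """Polar factor of A: the closest orthogonal matrix to A, via SVD."""
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

# A feasible factor of the relaxation: blocks V_i in R^{m x d} with
# V_i^T V_i = I_d, so X = V^T V has identity diagonal blocks as required.
V = [np.linalg.qr(rng.standard_normal((m, d)))[0] for _ in range(n)]

# Gaussian rounding: draw one shared Gaussian R, then round each block
# V_i^T R to the orthogonal group through its polar factor.
R = rng.standard_normal((m, d))
O = [polar(Vi.T @ R) for Vi in V]

for Oi in O:
    assert np.allclose(Oi @ Oi.T, np.eye(d))  # each rounded block is orthogonal
print(f"rounded {n} blocks to {d} x {d} orthogonal matrices")
```

The polar projection is what makes the rounded solution feasible for the original problem; the approximation analysis (involving the expected singular values of Gaussian matrices) is not reproduced here.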
User-friendly tail bounds for sums of random matrices
This paper presents new probability inequalities for sums of independent,
random, self-adjoint matrices. These results place simple and easily verifiable
hypotheses on the summands, and they deliver strong conclusions about the
large-deviation behavior of the maximum eigenvalue of the sum. Tail bounds for
the norm of a sum of random rectangular matrices follow as an immediate
corollary. The proof techniques also yield some information about matrix-valued
martingales.
In other words, this paper provides noncommutative generalizations of the
classical bounds associated with the names Azuma, Bennett, Bernstein, Chernoff,
Hoeffding, and McDiarmid. The matrix inequalities promise the same diversity of
application, ease of use, and strength of conclusion that have made the scalar
inequalities so valuable.
Comment: Current paper is the version of record. The material on Freedman's
inequality has been moved to a separate note; other martingale bounds are
described in Caltech ACM Report 2011-0
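As an example of the ease of use these bounds promise, a matrix Bernstein inequality of the form P(lmax(sum_k X_k) >= t) <= d * exp(-(t^2/2)/(sigma^2 + L*t/3)) can be evaluated directly against a Monte Carlo tail. The summands below (Rademacher-weighted fixed symmetric matrices) and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 6, 40

# Independent, zero-mean, bounded summands: X_k = eps_k * A_k with
# eps_k uniform on {-1, +1} and fixed symmetric A_k.
A = [(lambda B: (B + B.T) / 2)(rng.standard_normal((d, d)) / d)
     for _ in range(n)]
L = max(np.linalg.norm(Ak, ord=2) for Ak in A)            # ||X_k|| <= L
sigma2 = np.linalg.norm(sum(Ak @ Ak for Ak in A), ord=2)  # variance statistic

t = 3.0 * np.sqrt(sigma2)
# Matrix Bernstein tail bound at level t.
bound = d * np.exp(-(t**2 / 2) / (sigma2 + L * t / 3))

# Monte Carlo estimate of the same tail probability.
trials = 2000
hits = 0
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    S = sum(e * Ak for e, Ak in zip(eps, A))
    hits += np.linalg.eigvalsh(S)[-1] >= t
empirical = hits / trials
print(f"Bernstein bound: {bound:.4g}   empirical tail: {empirical:.4g}")
```

Only two easily verifiable quantities enter the bound, the uniform norm bound L and the matrix variance statistic sigma^2, which is exactly the "simple and easily verifiable hypotheses" the abstract refers to.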