Expander Graphs and Derandomization
This work retraces in detail the article 'Derandomizing the Ahlswede-Winter matrix-valued Chernoff bound using pessimistic estimators, and applications' by Wigderson and Xiao. It shows how the proof of the matrix-valued Chernoff inequality of Ahlswede and Winter proceeds. Using these results together with the method of pessimistic estimators, the proof of the Alon-Roichman theorem is finally derandomized.
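For orientation, the Ahlswede-Winter bound has, schematically, the following shape (constants vary between expositions and are omitted here): if $X_1,\dots,X_k$ are i.i.d. random $d \times d$ Hermitian matrices with $0 \preceq X_i \preceq I$ and $\mathbb{E}[X_i] = M$, then for every $\varepsilon \in (0,1)$
$$\Pr\Big[\big\|\tfrac{1}{k}\textstyle\sum_{i=1}^{k} X_i - M\big\| \ge \varepsilon\Big] \;\le\; 2d\, e^{-\Omega(k\varepsilon^2)}.$$
Applied to randomly chosen group elements and the resulting Cayley-graph adjacency operator, a bound of this kind yields the Alon-Roichman theorem, and the pessimistic-estimator method replaces the random choices by deterministic ones.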
Pipage Rounding, Pessimistic Estimators and Matrix Concentration
Pipage rounding is a dependent random sampling technique that has several interesting properties and diverse applications. One property that has been particularly useful is negative correlation of the resulting vector. Unfortunately, negative correlation has its limitations, and there are some further desirable properties that do not seem to follow from existing techniques. In particular, recent concentration results for sums of independent random matrices are not known to extend to a negatively dependent setting.
We introduce a simple but useful technique called concavity of pessimistic estimators. This technique allows us to show concentration of submodular functions and concentration of matrix sums under pipage rounding.
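As a minimal illustration of the rounding step itself (restricted here to a single cardinality constraint; the paper works over general matroid polytopes and is concerned with the concentration and derandomization aspects), pipage rounding repeatedly moves mass between two fractional coordinates, choosing the direction with probabilities that keep every coordinate's expectation fixed:

    import random

    def pipage_round(x, tol=1e-9):
        """Round x in [0,1]^n with integral coordinate sum to a 0/1 vector,
        preserving each coordinate's expectation (sketch for one cardinality
        constraint only)."""
        x = list(x)
        while True:
            frac = [i for i, v in enumerate(x) if tol < v < 1 - tol]
            if len(frac) < 2:
                break
            i, j = frac[0], frac[1]
            up = min(1 - x[i], x[j])    # largest move of mass from j to i
            down = min(x[i], 1 - x[j])  # largest move of mass from i to j
            # Pick the direction so that E[x[i]] and E[x[j]] stay unchanged.
            if random.random() < down / (up + down):
                x[i] += up
                x[j] -= up
            else:
                x[i] -= down
                x[j] += down
        return [round(v) for v in x]

Each step makes at least one coordinate integral, so the loop terminates after at most n iterations.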
A Matrix Hyperbolic Cosine Algorithm and Applications
In this paper, we generalize Spencer's hyperbolic cosine algorithm to the
matrix-valued setting. We apply the proposed algorithm to several problems by
analyzing its computational efficiency under two special cases of matrices: one in which the matrices have a group structure and another in which they have rank one. As an application of the former case, we present a deterministic algorithm that, given the multiplication table of a finite group of size n, constructs an expanding Cayley graph of logarithmic degree in near-optimal
O(n^2 log^3 n) time. For the latter case, we present a fast deterministic
algorithm for spectral sparsification of positive semi-definite matrices, which
implies an improved deterministic algorithm for spectral graph sparsification
of dense graphs. In addition, we give an elementary connection between spectral
sparsification of positive semi-definite matrices and element-wise matrix
sparsification. As a consequence, we obtain improved element-wise
sparsification algorithms for diagonally dominant-like matrices.
Comment: 16 pages, simplified proof and corrected acknowledgment of prior work in (current) Section
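For context, Spencer's scalar potential and the matrix analogue that a generalization of this kind tracks (stated schematically; the paper's exact potential may differ in details) are
$$\Phi_{\mathrm{scalar}} = \sum_{i} \cosh(\lambda\, d_i), \qquad \Phi_{\mathrm{matrix}} = \operatorname{tr}\cosh(\lambda M) = \operatorname{tr}\,\tfrac{1}{2}\big(e^{\lambda M} + e^{-\lambda M}\big),$$
where $d_i$ is the signed discrepancy in coordinate $i$ and $M$ is the corresponding Hermitian matrix. A deterministic algorithm greedily makes each choice so that the potential does not grow; since $e^{\lambda\,\lambda_{\max}(M)} \le 2\operatorname{tr}\cosh(\lambda M)$ and $\cosh$ is even, a small final potential bounds the spectral norm of the accumulated matrix.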
Sparse Sums of Positive Semidefinite Matrices
Recently there has been much interest in "sparsifying" sums of rank one
matrices: modifying the coefficients such that only a few are nonzero, while
approximately preserving the matrix that results from the sum. Results of this
sort have found applications in many different areas, including sparsifying
graphs. In this paper we consider the more general problem of sparsifying sums
of positive semidefinite matrices that have arbitrary rank.
We give several algorithms for solving this problem. The first algorithm is
based on the method of Batson, Spielman and Srivastava (2009). The second
algorithm is based on the matrix multiplicative weights update method of Arora
and Kale (2007). We also highlight an interesting connection between these two
algorithms.
Our algorithms have numerous applications. We show how they can be used to
construct graph sparsifiers with auxiliary constraints, sparsifiers of
hypergraphs, and sparse solutions to semidefinite programs.
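In notation not taken from the paper: given positive semidefinite matrices $B_1,\dots,B_m$ with $B = \sum_{i=1}^{m} B_i$ and $\varepsilon > 0$, the goal is to find nonnegative weights $w_1,\dots,w_m$, only few of which are nonzero, such that
$$(1-\varepsilon)\,B \;\preceq\; \sum_{i=1}^{m} w_i B_i \;\preceq\; (1+\varepsilon)\,B.$$
Spectral graph sparsification is the rank-one special case in which each $B_i$ is the Laplacian of a single edge.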
A Matrix Expander Chernoff Bound
We prove a Chernoff-type bound for sums of matrix-valued random variables
sampled via a random walk on an expander, confirming a conjecture due to
Wigderson and Xiao. Our proof is based on a new multi-matrix extension of the
Golden-Thompson inequality which improves in some ways the inequality of
Sutter, Berta, and Tomamichel, and may be of independent interest, as well as
an adaptation of an argument for the scalar case due to Healy. Secondarily, we
also provide a generic reduction showing that any concentration inequality for
vector-valued martingales implies a concentration inequality for the
corresponding expander walk, with a weakening of parameters proportional to the
squared mixing time.
Comment: Fixed a minor bug in the proof of Theorem 3.
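Schematically, and with constants omitted, the bound has roughly the following shape: if $G$ is a regular graph on vertex set $V$ with spectral expansion $\lambda$, if $f$ maps $V$ to $d \times d$ Hermitian matrices with $\|f(v)\| \le 1$ and $\sum_{v \in V} f(v) = 0$, and if $v_1,\dots,v_k$ is a stationary random walk on $G$, then for $\varepsilon \in (0,1)$
$$\Pr\Big[\lambda_{\max}\Big(\tfrac{1}{k}\textstyle\sum_{i=1}^{k} f(v_i)\Big) \ge \varepsilon\Big] \;\le\; d\, e^{-\Omega(\varepsilon^2 (1-\lambda) k)}.$$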
Low Rank Matrix-Valued Chernoff Bounds and Approximate Matrix Multiplication
In this paper we develop algorithms for approximating matrix multiplication
with respect to the spectral norm. Let $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{n \times p}$ be two matrices and $\varepsilon > 0$. We approximate the product $A^\top B$ using two down-sampled sketches, $\tilde{A} \in \mathbb{R}^{t \times m}$ and $\tilde{B} \in \mathbb{R}^{t \times p}$, where $t \ll n$, such that $\|\tilde{A}^\top \tilde{B} - A^\top B\| \leq \varepsilon \|A\| \|B\|$ with high probability. We use two different sampling procedures for constructing $\tilde{A}$ and $\tilde{B}$: one of them is done by i.i.d. non-uniform sampling of rows from $A$ and $B$, and the other
is done by taking random linear combinations of their rows. We prove bounds
that depend only on the intrinsic dimensionality of $A$ and $B$, that is, their rank and their stable rank, the latter being the squared ratio between their Frobenius and operator norms. To achieve bounds that depend on the rank, we employ standard
tools from high-dimensional geometry such as concentration of measure arguments
combined with elaborate $\varepsilon$-net constructions. For bounds that depend on the smaller parameter, the stable rank, this machinery by itself seems too weak; however, we show that, combined with a simple truncation argument, it does yield such bounds. To handle similar bounds for row sampling, we develop a novel matrix-valued Chernoff bound, which we call the low-rank matrix-valued Chernoff bound. Thanks to this inequality, we are able to give
bounds that depend only on the stable rank of the input matrices...
Comment: 15 pages, To appear in 22nd ACM-SIAM Symposium on Discrete Algorithms (SODA 2011)
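As a rough illustration of the row-sampling route (norm-proportional sampling in the spirit of Drineas, Kannan, and Mahoney; the probabilities and the analysis in the paper may differ), the sketches can be built as follows:

    import numpy as np

    def sample_sketches(A, B, t, rng=None):
        """Row-sampled sketches Atil (t x m), Btil (t x p) satisfying
        E[Atil.T @ Btil] = A.T @ B. Rows are drawn i.i.d. with probability
        proportional to ||A_i|| * ||B_i|| and rescaled by 1/sqrt(t * p_i)."""
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        weights = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
        p = weights / weights.sum()
        idx = rng.choice(n, size=t, p=p)
        scale = 1.0 / np.sqrt(t * p[idx])
        return A[idx] * scale[:, None], B[idx] * scale[:, None]

    # Relative spectral-norm error of the sketched product.
    A, B = np.random.randn(2000, 50), np.random.randn(2000, 40)
    Atil, Btil = sample_sketches(A, B, t=400)
    err = np.linalg.norm(Atil.T @ Btil - A.T @ B, 2)
    print(err / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))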
Approximation properties of certain operator-induced norms on Hilbert spaces
We consider a class of operator-induced norms, acting as finite-dimensional
surrogates to the L2 norm, and study their approximation properties over
Hilbert subspaces of L2 . The class includes, as a special case, the usual
empirical norm encountered, for example, in the context of nonparametric
regression in reproducing kernel Hilbert spaces (RKHS). Our results have
implications for the analysis of M-estimators in models based on finite-dimensional linear approximation of functions, and also for some related packing problems.
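For concreteness, the empirical norm referred to above is the standard special case defined from sample points $x_1,\dots,x_n$ by
$$\|f\|_n^2 \;=\; \frac{1}{n}\sum_{i=1}^{n} f(x_i)^2,$$
which serves as a finite-dimensional surrogate for $\|f\|_{L^2}^2$ when the $x_i$ are drawn from the underlying distribution.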