Approximating Hereditary Discrepancy via Small Width Ellipsoids
The discrepancy of a hypergraph is the minimum attainable value, over
two-colorings of its vertices, of the maximum absolute imbalance of any
hyperedge. The hereditary discrepancy of a hypergraph, defined as the maximum
discrepancy of a restriction of the hypergraph to a subset of its vertices, is
a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related
the natural extension of this quantity to matrices to rounding algorithms for
linear programs, and gave a determinant-based lower bound on the hereditary
discrepancy. Matousek (2011) showed that this bound is tight up to a
polylogarithmic factor, leaving open the question of actually computing this
bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial time
polylogarithmic approximation to hereditary discrepancy, as a by-product
of their work in differential privacy. In this paper, we give a direct simple
O(log^{3/2} n)-approximation algorithm for this problem. We show that up to
this approximation factor, the hereditary discrepancy of a matrix A is
characterized by the optimal value of a simple geometric convex program that
seeks to minimize the largest l_infinity norm of any point in an ellipsoid
containing the columns of A. This characterization promises to be a useful
tool in discrepancy theory.
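To make the two definitions above concrete, here is a brute-force sketch that computes them directly from the definitions for tiny instances (the function names and the set-of-vertex-lists encoding are illustrative choices, not from the paper, whose contribution is precisely avoiding this exponential search):

```python
from itertools import product, combinations

def discrepancy(sets, n):
    """Minimum, over all +/-1 colorings of n vertices, of the maximum
    absolute imbalance sum of any set (exponential-time, definitional)."""
    best = float("inf")
    for signs in product((-1, 1), repeat=n):
        imbalance = max(abs(sum(signs[v] for v in s)) for s in sets)
        best = min(best, imbalance)
    return best

def hereditary_discrepancy(sets, n):
    """Maximum discrepancy over all restrictions to vertex subsets."""
    best = 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            keep = set(subset)
            restricted = [[v for v in s if v in keep] for s in sets]
            best = max(best, discrepancy(restricted, n))
    return best
```

For the triangle system {0,1}, {1,2}, {0,2} on three vertices, every two-coloring leaves some pair monochromatic, so both quantities equal 2.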
Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem
An important theorem of Banaszczyk (Random Structures & Algorithms 1998)
states that for any sequence of vectors of l_2 norm at most 1/5 and any
convex body K of Gaussian measure 1/2 in R^n, there exists a
signed combination of these vectors which lands inside K. A major open
problem is to devise a constructive version of Banaszczyk's vector balancing
theorem, i.e. to find an efficient algorithm which constructs the signed
combination.
We make progress towards this goal along several fronts. As our first
contribution, we show an equivalence between Banaszczyk's theorem and the
existence of O(1)-subgaussian distributions over signed combinations. For the
case of symmetric convex bodies, our equivalence implies the existence of a
universal signing algorithm (i.e. independent of the body), which simply
samples from the subgaussian sign distribution and checks to see if the
associated combination lands inside the body. For asymmetric convex bodies, we
provide a novel recentering procedure, which allows us to reduce to the case
where the body is symmetric.
As our second main contribution, we show that the above framework can be
efficiently implemented when the vectors have length O(1/sqrt{log n}),
recovering Banaszczyk's results under this stronger assumption. More precisely,
we use random walk techniques to produce the required O(1)-subgaussian
signing distributions when the vectors have length O(1/sqrt{log n}), and
use a stochastic gradient ascent method to implement the recentering procedure
for asymmetric bodies.
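The universal signing algorithm described in the abstract can be pictured as rejection sampling against a membership oracle for the body. The sketch below is a simplification under stated assumptions: plain uniform random signs stand in for the O(1)-subgaussian sign distribution whose existence the paper establishes, and all names are hypothetical.

```python
import random

def universal_signing(vectors, in_body, max_tries=10_000, rng=random):
    """Repeatedly sample a sign vector and test whether the signed sum of
    `vectors` lands in the convex body given by the oracle `in_body`.

    Uniform signs are a stand-in for the paper's subgaussian sign
    distribution; with them this loop is only guaranteed to work for
    easy instances, which is what makes the real distribution necessary.
    """
    dim = len(vectors[0])
    for _ in range(max_tries):
        signs = [rng.choice((-1, 1)) for _ in vectors]
        combo = [sum(s * v[i] for s, v in zip(signs, vectors))
                 for i in range(dim)]
        if in_body(combo):
            return signs, combo
    return None  # no accepted signing within the try budget
```

Note how the algorithm never inspects the body beyond membership queries, which is exactly the "independent of the body" property the abstract highlights.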
An Algorithm for Komlós Conjecture Matching Banaszczyk's Bound
We consider the problem of finding a low discrepancy coloring for sparse set
systems where each element lies in at most t sets. We give an efficient
algorithm that finds a coloring with discrepancy O((t log n)^{1/2}), matching
the best known non-constructive bound for the problem due to Banaszczyk. The
previous algorithms only achieved an O(t^{1/2} log n) bound. The result also
extends to the more general Komlós setting and gives an algorithmic
O(log^{1/2} n) bound.
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions are known to obtain a
regularization method, we start with an investigation of the converse question:
What could necessary conditions for a variational method to provide a
regularization method look like? To this end, we formalize the notion of a
variational scheme and start with a comparison of three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the
discrepancy functional to fulfill usual continuity assumptions. The results are
applied to discrepancy functionals given by Bregman distances, and especially
to the Kullback-Leibler divergence.
Comment: To appear in Inverse Problems
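The discrepancy functionals mentioned at the end of this abstract are easy to state concretely for discrete probability vectors: the Kullback-Leibler divergence is the Bregman distance generated by f(x) = x log x. A minimal numerical sketch, assuming finite discrete distributions and hypothetical function names:

```python
import math

def bregman(f, fprime, p, q):
    """Bregman distance D_f(p, q) = sum_i [f(p_i) - f(q_i) - f'(q_i)(p_i - q_i)]."""
    return sum(f(a) - f(b) - fprime(b) * (a - b) for a, b in zip(p, q))

def kl(p, q):
    """Kullback-Leibler divergence sum_i p_i log(p_i / q_i)."""
    return sum(a * math.log(a / b) for a, b in zip(p, q))

# Generator f(x) = x log x and its derivative f'(x) = log x + 1; with these,
# D_f(p, q) = sum_i [p_i log(p_i / q_i) - p_i + q_i], which reduces to KL
# whenever p and q have equal total mass (e.g. probability vectors).
xlogx = lambda x: x * math.log(x)
dxlogx = lambda x: math.log(x) + 1.0
```

This identity is why, in the abstract's setting, continuity properties established for Bregman-distance discrepancies transfer directly to the KL case.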
An additive subfamily of enlargements of a maximally monotone operator
We introduce a subfamily of additive enlargements of a maximally monotone
operator. Our definition is inspired by the early work of Simon Fitzpatrick.
These enlargements constitute a subfamily of the family of enlargements
introduced by Svaiter. When the operator under consideration is the
subdifferential of a convex lower semicontinuous proper function, we prove that
some members of the subfamily are smaller than the classical
epsilon-subdifferential enlargement widely used in convex analysis. We also
recover the epsilon-subdifferential within the subfamily. Since they are all
additive, the enlargements in our subfamily can be seen as structurally closer
to the epsilon-subdifferential enlargement.