Impossibility of dimension reduction in the nuclear norm
Let $\mathsf{S}_1$ (the Schatten--von Neumann trace class) denote the Banach space of all compact linear operators $T\colon\ell_2\to\ell_2$ whose nuclear norm $\|T\|_{\mathsf{S}_1}=\sum_{j=1}^{\infty}\sigma_j(T)$ is finite, where $\sigma_1(T)\geq\sigma_2(T)\geq\cdots\geq 0$ are the singular values of $T$. We prove that for arbitrarily large $n$ there exists a subset $\mathcal{C}\subseteq\mathsf{S}_1$ with $|\mathcal{C}|=n$ that cannot be embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. $\mathcal{C}$ is not even an $O(1)$-Lipschitz quotient of any subset of any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. Thus, $\mathsf{S}_1$ does not admit a dimension reduction result à la Johnson and Lindenstrauss (1984), which complements the work of Harrow, Montanaro and Short (2011) on the limitations of quantum dimension reduction under the assumption that the embedding into low dimensions is a quantum channel. Such a statement was previously known with $\mathsf{S}_1$ replaced by the Banach space $\ell_1$ of absolutely summable sequences via the work of Brinkman and Charikar (2003). In fact, the above set $\mathcal{C}$ can be taken to be the same set as the one that Brinkman and Charikar considered, viewed as a collection of diagonal matrices in $\mathsf{S}_1$. The challenge is to demonstrate that $\mathcal{C}$ cannot be faithfully realized in an arbitrary low-dimensional subspace of $\mathsf{S}_1$, while Brinkman and Charikar obtained such an assertion only for subspaces of $\mathsf{S}_1$ that consist of diagonal operators (i.e., subspaces of $\ell_1$). We establish this by proving that the Markov 2-convexity constant of any finite-dimensional linear subspace $X$ of $\mathsf{S}_1$ is at most a universal constant multiple of $\sqrt{\log\dim(X)}$
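As a concrete illustration of the definitions above (our sketch, not the paper's), the nuclear norm is simply the sum of singular values, and a point of $\ell_1$ sits inside $\mathsf{S}_1$ as a diagonal matrix, which is the Brinkman and Charikar viewpoint the abstract mentions:
```python
import numpy as np

def nuclear_norm(T: np.ndarray) -> float:
    # ||T||_{S_1}: the sum of the singular values of T.
    return float(np.linalg.svd(T, compute_uv=False).sum())

rng = np.random.default_rng(0)

# For diag(x) the singular values are |x_1|, |x_2|, ..., so the nuclear
# norm of diag(x) equals the l_1 norm of x: this is the sense in which the
# Brinkman-Charikar point set, a subset of l_1, is viewed inside S_1.
x = rng.normal(size=8)
assert np.isclose(nuclear_norm(np.diag(x)), np.abs(x).sum())
```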
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibrium
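To make "direct product feasible" concrete, here is a hedged Python sketch for Max-SAT, one of the problems listed above. The disjoint-union aggregation and all helper names are our illustrative assumptions, not the paper's construction: because the k clause sets use disjoint variable blocks, the Max-SAT objective is a sum over blocks, so an optimal assignment to the aggregated instance restricts to optimal assignments of all k instances.
```python
from typing import List, Tuple

Clause = Tuple[int, ...]              # DIMACS-style signed literals
Instance = Tuple[int, List[Clause]]   # (number of variables, clauses)

def aggregate(instances: List[Instance]) -> Instance:
    # Rename each instance's variables into a fresh block and take the
    # union of the clause sets: one Max-SAT instance of size O(k*n).
    clauses: List[Clause] = []
    offset = 0
    for n, cs in instances:
        clauses.extend(
            tuple(l + offset if l > 0 else l - offset for l in c) for c in cs
        )
        offset += n
    return offset, clauses

def split_solution(instances: List[Instance],
                   assignment: List[bool]) -> List[List[bool]]:
    # Restrict a global assignment to each variable block. Since the
    # blocks are disjoint, the number of satisfied clauses decomposes as
    # a sum, so an optimal global assignment is optimal on every block.
    parts, offset = [], 0
    for n, _ in instances:
        parts.append(assignment[offset:offset + n])
        offset += n
    return parts
```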
Distributed Averaging via Lifted Markov Chains
Motivated by applications of distributed linear estimation, distributed
control and distributed optimization, we consider the question of designing
linear iterative algorithms for computing the average of numbers in a network.
Specifically, our interest is in designing such an algorithm with the fastest
rate of convergence given the topological constraints of the network. As the
main result of this paper, we design an algorithm with the fastest possible
rate of convergence using a non-reversible Markov chain on the given network
graph. We construct such a Markov chain by transforming the standard Markov
chain, which is obtained using the Metropolis-Hastings method. We call this
novel transformation pseudo-lifting. We apply our method to graphs with
geometry, or graphs with doubling dimension. Specifically, the convergence time
of our algorithm (equivalently, the mixing time of our Markov chain) is
proportional to the diameter of the network graph and hence optimal. As a
byproduct, our result provides the fastest mixing Markov chain given the network topological constraints, and should naturally find applications in the context of distributed optimization, estimation and control
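For intuition, here is a minimal NumPy sketch of the standard reversible baseline that pseudo-lifting is designed to beat: linear averaging with Metropolis-Hastings weights. The graph, weights, and iteration count are illustrative assumptions, not the paper's construction.
```python
import numpy as np

def metropolis_weights(adj):
    # Metropolis-Hastings averaging matrix on an undirected graph:
    # W[i, j] = 1 / (1 + max(deg_i, deg_j)) on edges; self-loop weights
    # make each row sum to 1, so W is symmetric and doubly stochastic.
    n = len(adj)
    deg = [len(nb) for nb in adj]
    W = np.zeros((n, n))
    for i in range(n):
        for j in adj[i]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

adj = [[1], [0, 2], [1, 3], [2]]        # a path on 4 nodes
x = np.array([4.0, 0.0, 2.0, 6.0])      # initial values to average
W = metropolis_weights(adj)
for _ in range(500):
    x = W @ x                           # each node averages its neighborhood
print(x)                                # -> approximately [3, 3, 3, 3]
```
On a path of n nodes this reversible iteration needs on the order of n^2 steps to converge; the non-reversible pseudo-lifted chain of the paper brings the convergence time down to the order of the diameter, which for such graphs is n.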
On Strong Diameter Padded Decompositions
Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma Delta is contained in a single cluster with probability at least e^{-beta * gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee.
Formerly, it was proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only an O(r^2) padding parameter was known. Furthermore, for the case of a graph G for which the induced shortest path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known.
We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), O~(ddim))-sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, for the construction of light and sparse spanners, and for path-reporting distance oracles
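For contrast, here is a hedged sketch of the classic exponential-shift clustering of Miller, Peng, and Xu, a standard way to obtain strong-diameter decompositions (with padding O(log n) on general graphs, versus the O(r) and O(ddim) bounds obtained in the paper); the weighted adjacency format and helper names are our assumptions:
```python
import heapq
import math
import random

def exp_shift_decomposition(adj, beta, seed=0):
    # adj[u] = list of (v, w) weighted edges. Every vertex draws a shift
    # delta_v ~ Exp(beta), and vertex u joins the center c minimizing
    # dist(u, c) - delta_c. This is one Dijkstra run in which center c
    # "starts" at time max_delta - delta_c; the resulting clusters are
    # connected, so the diameter guarantee is strong.
    rng = random.Random(seed)
    n = len(adj)
    delta = [rng.expovariate(beta) for _ in range(n)]
    top = max(delta)
    dist = [math.inf] * n
    center = [-1] * n
    pq = [(top - delta[v], v, v) for v in range(n)]
    heapq.heapify(pq)
    while pq:
        d, u, c = heapq.heappop(pq)
        if d >= dist[u]:
            continue
        dist[u], center[u] = d, c
        for v, w in adj[u]:
            if d + w < dist[v]:
                heapq.heappush(pq, (d + w, v, c))
    return center   # center[u]: the cluster that u belongs to
```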
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes