Geometric versions of the 3-dimensional assignment problem under general norms
We discuss the computational complexity of special cases of the 3-dimensional
(axial) assignment problem where the elements are points in a Cartesian space
and where the cost coefficients are the perimeters of the corresponding
triangles measured according to a certain norm. (All our results also carry
over to the corresponding special cases of the 3-dimensional matching problem.)
The minimization version is NP-hard for every norm, even if the underlying
Cartesian space is 2-dimensional. The maximization version is polynomially
solvable if the dimension of the Cartesian space is fixed and the considered
norm has a polyhedral unit ball. If the dimension of the Cartesian
space is part of the input, the maximization version is NP-hard for every
norm; in particular, the problem is NP-hard for the Manhattan norm and the
Maximum norm, which both have polyhedral unit balls.
Comment: 21 pages, 9 figures.
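The cost structure described above is easy to make concrete. The following minimal sketch (Python chosen for illustration; all function names are mine, not the paper's) computes the cost coefficient of a triple of points as the perimeter of their triangle under the Manhattan and Maximum norms:

```python
# Sketch: in the geometric 3-dimensional assignment problem, the cost of a
# triple (p, q, r) is the perimeter of the triangle they span, measured in a
# chosen norm.  Helper names are illustrative.

def manhattan(u, v):
    """Manhattan (L1) distance between two points."""
    return sum(abs(a - b) for a, b in zip(u, v))

def maximum(u, v):
    """Maximum (sup / L-infinity) distance between two points."""
    return max(abs(a - b) for a, b in zip(u, v))

def perimeter(p, q, r, dist):
    """Perimeter of the triangle (p, q, r) under the given distance."""
    return dist(p, q) + dist(q, r) + dist(r, p)

p, q, r = (0, 0), (3, 0), (0, 4)
print(perimeter(p, q, r, manhattan))  # 3 + 7 + 4 = 14
print(perimeter(p, q, r, maximum))    # 3 + 4 + 4 = 11
```

Any other norm can be plugged in as `dist`; the hardness results hold for every choice.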
The Geometric Maximum Traveling Salesman Problem
We consider the traveling salesman problem when the cities are points in R^d
for some fixed d and distances are determined by some norm. We show that for
any polyhedral norm, the problem of
finding a tour of maximum length can be solved in polynomial time. If
arithmetic operations are assumed to take unit time, our algorithms run in time
O(n^{f-2} log n), where f is the number of facets of the polyhedron determining
the polyhedral norm. Thus for example we have O(n^2 log n) algorithms for the
cases of points in the plane under the Rectilinear and Sup norms. This is in
contrast to the fact that finding a minimum length tour in each case is
NP-hard. Our approach can be extended to the more general case of quasi-norms
with not necessarily symmetric unit ball, where we get a complexity of
O(n^{2f-2} log n).
For the special case of two-dimensional metrics with f=4 (which includes the
Rectilinear and Sup norms), we present a simple algorithm with O(n) running
time. The algorithm does not use any indirect addressing, so its running time
remains valid even in comparison-based models in which sorting requires
Omega(n log n) time. The basic mechanism of the algorithm provides some intuition on
why polyhedral norms allow fast algorithms.
Complementing the results on simplicity for polyhedral norms, we prove that
for the case of Euclidean distances in R^d for d>2, the Maximum TSP is NP-hard.
This sheds new light on the well-studied difficulties of Euclidean distances.
Comment: 24 pages, 6 figures; revised to appear in Journal of the ACM
(clarified some minor points, fixed typos).
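The setting above can be illustrated concretely. A polyhedral norm can be written as a maximum of finitely many linear functionals, one per facet; in the plane, the Rectilinear (L1) and Sup (L-infinity) norms each need f = 4 of them. The sketch below (illustrative names, Python) evaluates such norms and finds a maximum-length tour by brute force on a tiny instance; it is an exponential-time reference check, not the paper's O(n^{f-2} log n) algorithm:

```python
from itertools import permutations

# A polyhedral norm satisfies ||x|| = max_i <h_i, x> for a finite set of
# linear functionals h_i.  In the plane, the Rectilinear (L1) norm uses the
# four functionals (+-1, +-1); the Sup norm uses (+-1, 0) and (0, +-1).

RECTILINEAR = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
SUP = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def poly_norm(x, facets):
    """Norm of vector x as the max of the facet functionals."""
    return max(h[0] * x[0] + h[1] * x[1] for h in facets)

def tour_length(points, order, facets):
    """Total length of the closed tour visiting points in the given order."""
    n = len(order)
    return sum(
        poly_norm((points[order[i]][0] - points[order[(i + 1) % n]][0],
                   points[order[i]][1] - points[order[(i + 1) % n]][1]),
                  facets)
        for i in range(n)
    )

def max_tsp_bruteforce(points, facets):
    """Exponential-time reference; the paper achieves O(n^{f-2} log n)."""
    n = len(points)
    return max(tour_length(points, (0,) + p, facets)
               for p in permutations(range(1, n)))

pts = [(0, 0), (1, 3), (4, 1), (2, 2)]
print(max_tsp_bruteforce(pts, RECTILINEAR))  # 16
print(max_tsp_bruteforce(pts, SUP))          # 10
```

The facet representation is exactly what makes the problem tractable: each edge's length is realized by one of the f functionals, so a maximum tour can be decomposed by which facet each edge uses.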
Geometry of Discrete Quantum Computing
Conventional quantum computing entails a geometry based on the description of
an n-qubit state using 2^{n} infinite precision complex numbers denoting a
vector in a Hilbert space. Such numbers are in general uncomputable using any
real-world resources, and, if we regard physical law as a kind of
computational algorithm of the universe, we are compelled to alter our
descriptions of physics to be consistent with computable numbers. Our purpose
here is to examine the geometric implications of using finite fields Fp and
finite complexified fields Fp^2 (based on primes p congruent to 3 mod 4) as
the basis for computations in a theory of discrete quantum computing, which
would therefore become a computable theory. Because the states of a discrete
n-qubit system are in principle enumerable, we are able to determine the
proportions of entangled and unentangled states. In particular, we extend the
Hopf fibration that defines the irreducible state space of conventional
continuous n-qubit theories (which is the complex projective space CP^{2^{n}-1})
to an analogous discrete geometry in which the Hopf circle for any n is found
to be a discrete set of p+1 points. The tally of unit-length n-qubit states is
given, and reduced via the generalized Hopf fibration to DCP^{2^{n}-1}, the
discrete analog of the complex projective space, which has p^{2^{n}-1}
(p-1)\prod_{k=1}^{n-1} (p^{2^{k}}+1) irreducible states. Using a measure of
entanglement, the purity, we explore the entanglement features of discrete
quantum states and find that the n-qubit states based on the complexified field
Fp^2 have p^{n} (p-1)^{n} unentangled states (the product of the tally for a
single qubit) with purity 1, and they have p^{n+1}(p-1)(p+1)^{n-1} maximally
entangled states with purity zero.
Comment: 24 pages.
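The "complexified" finite fields above can be built concretely: when p is congruent to 3 mod 4, -1 is a quadratic non-residue mod p, so x^2 + 1 is irreducible over Fp and Fp[i] with i^2 = -1 is a field with p^2 elements. A minimal sketch of this arithmetic (illustrative names, not code from the paper):

```python
# Sketch of arithmetic in Fp^2 = Fp[i], with i^2 = -1.  This is a field
# precisely because p = 3 (mod 4) makes -1 a non-residue mod p, so the
# norm a^2 + b^2 vanishes only at zero.  Elements are pairs (a, b) = a + b*i.

P = 3  # any prime congruent to 3 mod 4

def add(x, y):
    return ((x[0] + y[0]) % P, (x[1] + y[1]) % P)

def mul(x, y):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    a, b = x
    c, d = y
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def conj(x):
    return (x[0], (-x[1]) % P)

def norm(x):
    # x * conj(x) = a^2 + b^2, an element of Fp
    return (x[0] * x[0] + x[1] * x[1]) % P

def inv(x):
    # 1/x = conj(x) / norm(x); norm(x) != 0 for nonzero x when p = 3 mod 4
    n_inv = pow(norm(x), P - 2, P)  # Fermat inverse in Fp
    return mul(conj(x), (n_inv, 0))

# Every nonzero element of the 9-element field F_{3^2} is invertible:
elems = [(a, b) for a in range(P) for b in range(P)]
units = [x for x in elems if x != (0, 0)]
assert all(mul(x, inv(x)) == (1, 0) for x in units)
print(len(units))  # 8 = p^2 - 1
```

Because the state amplitudes now range over a finite set, state counts such as those quoted above become finite enumerations rather than continuum volumes.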
Approximating Hereditary Discrepancy via Small Width Ellipsoids
The Discrepancy of a hypergraph is the minimum attainable value, over
two-colorings of its vertices, of the maximum absolute imbalance of any
hyperedge. The Hereditary Discrepancy of a hypergraph, defined as the maximum
discrepancy of a restriction of the hypergraph to a subset of its vertices, is
a measure of its complexity. Lovasz, Spencer and Vesztergombi (1986) related
the natural extension of this quantity to matrices to rounding algorithms for
linear programs, and gave a determinant based lower bound on the hereditary
discrepancy. Matousek (2011) showed that this bound is tight up to a
polylogarithmic factor, leaving open the question of actually computing this
bound. Recent work by Nikolov, Talwar and Zhang (2013) showed a polynomial-time
approximation to hereditary discrepancy, as a by-product of their work in
differential privacy. In this paper, we give a direct, simple approximation
algorithm for this problem. We show that up to this approximation factor, the
hereditary discrepancy of a matrix is characterized by the optimal value of a
simple geometric convex program that seeks to minimize the largest norm of any
point in an ellipsoid containing the columns of the matrix. This
characterization promises to be a useful tool in discrepancy theory.
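The two definitions at the start of the abstract can be checked directly by brute force: for an incidence matrix A, the discrepancy is the minimum over +-1 colorings x of the maximum absolute row imbalance, and the hereditary discrepancy is the maximum of this over restrictions to vertex (column) subsets. The sketch below (exponential time, tiny instances only; names illustrative) implements exactly that:

```python
from itertools import product, combinations

# Brute-force sketch of the definitions in the abstract.  A hypergraph is
# given by its incidence matrix: rows = hyperedges, columns = vertices.

def disc(rows, cols):
    """Min over +-1 colorings of the max absolute imbalance of any row,
    restricted to the given column indices."""
    if not cols:
        return 0
    best = None
    for signs in product((-1, 1), repeat=len(cols)):
        imbalance = max(abs(sum(r[j] * s for j, s in zip(cols, signs)))
                        for r in rows)
        best = imbalance if best is None else min(best, imbalance)
    return best

def herdisc(rows, n):
    """Max of disc over all restrictions to a subset of the n columns."""
    return max(disc(rows, cols)
               for k in range(n + 1)
               for cols in combinations(range(n), k))

# Three hyperedges {0,1}, {1,2}, {0,1,2} on three vertices:
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1]]
print(disc(A, [0, 1, 2]))  # 1 (the odd-size edge {0,1,2} forces imbalance 1)
print(herdisc(A, 3))       # 1
```

Both quantities are NP-hard to compute in general, which is what makes an efficiently computable characterization, such as the ellipsoid program above, valuable.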