
    On the Lattice Distortion Problem

    We introduce and study the \emph{Lattice Distortion Problem} (LDP). LDP asks how "similar" two lattices are; that is, what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an $n^{O(\log n)}$ factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a $2^{O(n \log \log n/\log n)}$ factor of optimal in polynomial time and within an $n^{O(\log n)}$ factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem. Comment: This is the full version of a paper that appeared in ESA 2016.
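    As a concrete reading of the distortion measure: if the distortion of a linear bijection T is taken to be its condition number ||T||·||T⁻¹|| (an interpretation consistent with the abstract, though the paper's exact definition is not quoted here), then evaluating the distortion of one candidate map between two lattices is a short computation; LDP itself minimizes this quantity over all unimodular changes of basis. A minimal sketch:

```python
# Sketch: distortion of one fixed linear bijection between two lattices.
# Assumption (consistent with the abstract, not quoted from the paper):
# the distortion of a linear map T is its condition number ||T|| * ||T^-1||.
import numpy as np

def distortion(B1: np.ndarray, B2: np.ndarray, U: np.ndarray) -> float:
    """Distortion of the map T sending lattice L(B1) onto L(B2).

    B1, B2: full-rank basis matrices (columns are basis vectors).
    U: a unimodular integer matrix, so that T = B2 @ U @ inv(B1)
       maps L(B1) bijectively onto L(B2).
    """
    T = B2 @ U @ np.linalg.inv(B1)
    s = np.linalg.svd(T, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]                     # ||T|| * ||T^-1|| = s_max / s_min

# Example: Z^2 vs. a stretched copy of Z^2; the identity map has distortion 3.
B1 = np.eye(2)
B2 = np.diag([1.0, 3.0])
print(distortion(B1, B2, np.eye(2, dtype=int)))  # 3.0
```

    LDP would take the minimum of this quantity over all unimodular U, which is what makes the problem hard; the sketch only scores a single candidate.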

    Parameterization Above a Multiplicative Guarantee

    Parameterization above a guarantee is a successful paradigm in Parameterized Complexity. To the best of our knowledge, all fixed-parameter tractable problems in this paradigm share an additive form defined as follows. Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (at most) k+g(I). Here, g(I) is usually a lower bound (resp. upper bound) on the maximum (resp. minimum) size of a solution. Since its introduction in 1999 for Max SAT and Max Cut (with g(I) being half the number of clauses and half the number of edges, respectively, in the input), analysis of parameterization above a guarantee has become a very active and fruitful topic of research. We highlight a multiplicative form of parameterization above a guarantee: Given an instance (I,k) of some (parameterized) problem Π with a guarantee g(I), decide whether I admits a solution of size at least (resp. at most) k·g(I). In particular, we study the Long Cycle problem with a multiplicative parameterization above the girth g(I) of the input graph, and provide a parameterized algorithm for this problem. Apart from being of independent interest, this exemplifies how parameterization above a multiplicative guarantee can arise naturally. We also show that, for any fixed constant ε>0, multiplicative parameterization above g(I)^(1+ε) of Long Cycle yields para-NP-hardness, thus our parameterization is tight in this sense. We complement our main result with the design (or refutation of the existence) of algorithms for other problems parameterized multiplicatively above girth.
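    For the Long Cycle parameterization above, the guarantee g(I) is the girth of the input graph. As a point of reference (a standard textbook routine, not code from the paper), the girth of an unweighted undirected graph can be computed exactly in O(nm) time by running a BFS from every vertex:

```python
# Sketch: girth g(I) of an unweighted, undirected graph via BFS from each
# vertex. Standard O(n*m) routine; not taken from the paper.
from collections import deque

def girth(adj: dict[int, list[int]]) -> float:
    best = float("inf")
    for src in adj:
        dist = {src: 0}
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:          # tree edge: extend the BFS
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v:       # non-tree edge: closes a cycle
                    # Walk of this length contains a cycle, so this never
                    # underestimates; the minimum over all src is exact.
                    best = min(best, dist[u] + dist[v] + 1)
    return best

triangle_plus_pendant = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(girth(triangle_plus_pendant))  # 3
```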

    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through a well-known termination technique, polynomial interpretations. The drawback is that such a method applies only to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations, however, suffer from a drawback of their own, the subterm property, which drastically restricts the shape of the considered functions. Again we overcome this problem, by introducing a new notion of interpretation based mainly on the dependency pairs method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three of these tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
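    As a toy illustration of the object being synthesized (this example is ours, not the survey's, and the exact side conditions are as defined there): for a rewrite system computing doubling over unary numbers, the assignment below bounds the size of every computed output by a function of the input size.

```latex
% Toy sup-interpretation for doubling over unary numbers (illustrative):
%   double(0)    -> 0
%   double(s(x)) -> s(s(double(x)))
\theta(\mathtt{0}) = 0, \qquad
\theta(\mathtt{s})(X) = X + 1, \qquad
\theta(\mathtt{double})(X) = 2X.
% If double(v) evaluates to a value w, then |w| <= \theta(double)(|v|) = 2|v|,
% where |t| counts the occurrences of s in t.
```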

    An FPTAS for optimizing a class of low-rank functions over a polytope

    We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of non-linear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasi-concavity on the objective function. For the special case of quasi-concave function minimization, we give an alternative FPTAS which always returns a solution that is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with non-linear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness of approximation result for supermodular function minimization, a result that may be of independent interest.
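    A minimal sketch of the rank-2 flavor of this idea (illustrative only; the names, the (1+ε) grid, and the positivity assumption are ours, not the paper's algorithm verbatim): to approximately minimize a product of two linear functions over a polytope, sweep a geometric grid of thresholds on the first linear piece and, at each threshold, minimize the second piece by linear programming; the solutions found trace an approximate Pareto front of the two linear objectives.

```python
# Sketch: (1+eps)-approximate minimization of f(x) = (c1.x) * (c2.x) over
# {x : A x <= b}, assuming both linear pieces are positive on the polytope.
# Simplified rank-2 variant of the approximate-Pareto-front idea.
import numpy as np
from scipy.optimize import linprog

def approx_min_product(c1, c2, A, b, eps=0.1):
    # Range of c1.x over the polytope fixes the threshold grid.
    lo = linprog(c1, A_ub=A, b_ub=b, bounds=(None, None)).fun
    hi = -linprog(-np.asarray(c1), A_ub=A, b_ub=b, bounds=(None, None)).fun
    best_val, best_x = np.inf, None
    t = lo
    while t <= hi * (1 + eps):
        # Pareto query: minimize c2.x subject to the extra cut c1.x <= t.
        res = linprog(c2, A_ub=np.vstack([A, c1]), b_ub=np.append(b, t),
                      bounds=(None, None))
        if res.success:
            x = res.x
            val = (c1 @ x) * (c2 @ x)
            if val < best_val:
                best_val, best_x = val, x
        t *= (1 + eps)  # geometric grid => polynomially many LP calls
    return best_val, best_x

# Toy usage: minimize (x+y)*(2x+y) over x >= 1, y >= 1, x + y <= 4.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -1.0, 4.0])
print(approx_min_product(np.array([1.0, 1.0]), np.array([2.0, 1.0]), A, b))
```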

    The Power of Quantum Fourier Sampling

    A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and Shepherd, shows that quantum computers can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer unless the polynomial hierarchy (PH) collapses. Aaronson and Arkhipov take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation and that, under two plausible conjectures, cannot even be approximately sampled classically within bounded total variation distance unless the PH collapses. In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled by a quantum computer. We then argue that, under variants of the Aaronson-Arkhipov conjectures, these distributions cannot be approximately sampled classically unless the PH collapses. In particular, we exhibit a general class of quantumly samplable distributions, each based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler would imply an average-case approximation of that polynomial. This class of polynomials contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial and many other familiar #P-hard polynomials. Although our construction, unlike that proposed by Aaronson and Arkhipov, likely requires a universal quantum computer, we are able to use this additional power to weaken the conjectures needed to prove approximate sampling hardness results.
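    The basic Fourier-sampling fact underlying such constructions can be checked numerically (an illustration of Quantum Fourier Sampling over Z_2^n, not the paper's construction): applying H^⊗n to the phase state (1/√(2^n)) Σ_x (-1)^(f(x)) |x⟩ and measuring outputs the string y with probability equal to the squared, normalized Walsh-Hadamard coefficient of (-1)^f at y.

```python
# Sketch: Quantum Fourier Sampling over Z_2^n (illustration only).
# Prepare the phase state (1/sqrt(2^n)) * sum_x (-1)^{f(x)} |x>, apply H
# tensored n times, and read off the measurement distribution: the output
# probabilities are the squared normalized Walsh-Hadamard coefficients.
import numpy as np
from itertools import product

n = 3
f = lambda x: (x[0] & x[1]) ^ x[2]  # any Boolean function on {0,1}^n

signs = np.array([(-1) ** f(x) for x in product([0, 1], repeat=n)], float)
state = signs / np.sqrt(2 ** n)     # the phase state, unit norm

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)             # H tensored n times

amps = Hn @ state
probs = amps ** 2                   # distribution over outcomes y
print(probs, probs.sum())           # squared Fourier coefficients; sums to 1
```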