
    Smoothing the gap between NP and ER

    We study algorithmic problems that belong to the complexity class of the existential theory of the reals (ER). A problem is ER-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that real-valued numbers can be stored and compared in constant space and time, with infinite precision. The complexity class ER is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. In this paper we prove a real RAM analogue of the Cook-Levin theorem, which shows that ER-membership is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This gives an easy route to proving ER-membership, as verification algorithms on a real RAM are much more versatile than ETR formulas. We use this result to construct a framework for studying ER-complete problems under smoothed analysis. We show that for a wide class of ER-complete problems, their witnesses can be represented with logarithmic input precision by applying smoothed analysis to their real RAM verification algorithms. This shows in a formal way that the boundary between NP and ER (formed by inputs whose solution witnesses need high input precision) consists of contrived inputs. We apply our framework to well-studied ER-complete recognition problems that exhibit the exponential-bit phenomenon, such as the recognition of realizable order types or the Steinitz problem in fixed dimension.
    Comment: 31 pages, 11 figures, FOCS 2020, SICOMP 202
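    For intuition, an ETR instance asks whether a quantifier-free formula \Phi, built from polynomial equalities and inequalities over real variables, is satisfiable:

    \exists x_1, \dots, x_n \in \mathbb{R} : \Phi(x_1, \dots, x_n),
    \qquad \text{e.g.} \qquad \exists x, y \in \mathbb{R} : x^2 + y^2 = 1 \,\land\, xy \ge \tfrac{1}{2}.

    Since (x - y)^2 \ge 0 forces xy \le (x^2 + y^2)/2 = \tfrac{1}{2}, the example is satisfiable only at x = y = \pm 1/\sqrt{2}: even a satisfiable instance can force irrational witness coordinates, hinting at why real-valued witnesses resist finite binary encoding.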

    A Framework for Robust Realistic Geometric Computations

    We propose a new paradigm for robust geometric computations that complements the classical fixed-precision paradigm and the exact geometric computation paradigm. We provide a framework in which we study algorithmic problems under smoothed analysis of the input, the relaxation of the problem requirements, or the witness of a recognition problem. Our framework specifies a widely applicable set of prerequisites that make real RAM algorithms suitable for smoothed analysis. We prove that suitable algorithms can (under smoothed analysis) be robustly executed with expected logarithmic bit-precision. This shows in a formal way that inputs which need high bit-precision are contrived and that these algorithms are likely robust for realistic input. Interestingly, our techniques generalize to problems with a natural notion of resource augmentation (geometric packing, the art gallery problem) and to recognition problems (recognition of realizable order types or disk intersection graphs). Our results also have theoretical implications for some ER-hard problems: these problems have input instances for which their real verification algorithm requires at least exponential bit-precision, which makes it difficult to place them in NP. Our results imply for a host of ER-complete problems that this exponential bit-precision phenomenon comes from nearly degenerate instances. It is not evident that problems that have a real verification algorithm belong to ER. Therefore, we conclude with a real RAM analogue of the Cook-Levin theorem. This gives an easy proof of ER-membership, as real verification algorithms are much more versatile than ETR formulas.
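    To make the notion of a real verification algorithm concrete, here is a minimal illustrative sketch (the names are hypothetical, and floating-point numbers stand in for the exact reals of the real RAM) of verifying a witness for order-type realizability: the certificate is a real-valued point set, and the verifier checks the sign of one orientation determinant per point triple.

    from itertools import combinations

    def orientation(p, q, r):
        # Sign of det(q - p, r - p): +1 counterclockwise, -1 clockwise, 0 collinear.
        d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
        return (d > 0) - (d < 0)

    def verify_order_type(points, chi):
        # `points` is the real-valued witness; `chi` maps each index triple
        # (i, j, k) with i < j < k to its prescribed orientation sign.
        return all(
            orientation(points[i], points[j], points[k]) == chi[(i, j, k)]
            for i, j, k in combinations(range(len(points)), 3)
        )

    # Three points in counterclockwise position realize the order type {(0, 1, 2): +1}.
    print(verify_order_type([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], {(0, 1, 2): +1}))

    Nearly degenerate witnesses drive the determinant toward zero, so the sign test needs ever more bits of precision; the results above show that under smoothed analysis, logarithmic precision suffices in expectation.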
