
    Parameter Compilation

    In resolving instances of a computational problem, if multiple instances of interest share a common feature, it may be fruitful to compile this feature into a format that allows for more efficient resolution, even if the compilation itself is relatively expensive. In this article, we introduce a formal framework for classifying problems according to their compilability. The basic object in our framework is the parameterized problem, which here is a language together with a parameterization---a map which provides, for each instance, a so-called parameter on which compilation may be performed. Our framework is positioned within the paradigm of parameterized complexity, and our notions are relatable to established concepts in the theory of parameterized complexity. Indeed, we view our framework as playing a unifying role, integrating parameterized complexity and compilability theory.
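
    In the notation standard in parameterized complexity (sketched here in generic form, not taken verbatim from the paper), the basic object above is a pair

        (Q, \kappa), \qquad Q \subseteq \Sigma^{*}, \qquad \kappa \colon \Sigma^{*} \to \Sigma^{*},

    where Q is the language and \kappa is the (typically polynomial-time computable) parameterization mapping each instance x to its parameter \kappa(x); compilation is performed once on a shared parameter value and then reused across all instances that map to it.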

    A Preliminary Investigation of Satisfiability Problems Not Harder than 1-in-3-SAT

    The parameterized satisfiability problem over a set of Boolean relations Gamma (SAT(Gamma)) is the problem of determining whether a conjunctive formula over Gamma has at least one model. Due to Schaefer's dichotomy theorem, the computational complexity of SAT(Gamma), modulo polynomial-time reductions, has been completely determined: SAT(Gamma) is always either tractable or NP-complete. More recently, the relationship between the complexities of the NP-complete cases of SAT(Gamma) under restricted notions of reduction has attracted attention. For example, Impagliazzo et al. studied the complexity of k-SAT and proved that the worst-case time complexity increases infinitely often for larger values of k, unless 3-SAT is solvable in subexponential time. In a similar line of research, Jonsson et al. studied the complexity of SAT(Gamma) with algebraic tools borrowed from clone theory and proved that there exists an NP-complete problem SAT(R^{neq,neq,neq,01}_{1/3}) such that no NP-complete SAT(Gamma) problem can have strictly lower worst-case time complexity: the easiest NP-complete SAT(Gamma) problem. In this paper we are interested in classifying the NP-complete SAT(Gamma) problems whose worst-case time complexity is lower than that of 1-in-3-SAT but higher than that of the easiest problem SAT(R^{neq,neq,neq,01}_{1/3}). Recently it was conjectured that only three satisfiability problems of this form exist. We prove that this conjecture does not hold and that there are infinitely many such SAT(Gamma) problems. In the process we determine several algebraic properties of 1-in-3-SAT and related problems, which could be of independent interest for constructing exponential-time algorithms.
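
    As a concrete reference point for the worst-case running times compared above, the trivial exponential-time baseline for 1-in-3-SAT is exhaustive search over all 2^n assignments; the finer-grained questions in the paper concern which NP-complete SAT(Gamma) problems admit improvements over such bounds. The Python sketch below uses illustrative names not taken from the paper and simply checks the defining "exactly one true literal per clause" condition.

    from itertools import product

    def one_in_three_sat(clauses, n_vars):
        # Each clause is a triple of literals: i > 0 means variable i,
        # i < 0 means its negation. A clause is satisfied iff exactly
        # one of its three literals evaluates to true.
        for assignment in product([False, True], repeat=n_vars):
            def value(lit):
                v = assignment[abs(lit) - 1]
                return v if lit > 0 else not v
            if all(sum(value(lit) for lit in clause) == 1 for clause in clauses):
                return assignment          # satisfying assignment found
        return None                        # unsatisfiable

    # Example: the single clause (x1, x2, x3) is satisfied by any assignment
    # that sets exactly one of the three variables to true.
    print(one_in_three_sat([(1, 2, 3)], 3))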

    A Unified Subspace Classification Framework Developed for Diagnostic System Using Microwave Signal

    Subspace learning is widely used in many signal processing and statistical learning problems where the signal is assumed to be generated from a low-dimensional space. In this paper, we present a unified classifier that incorporates concepts from several subspace techniques, such as PCA, LRC, LDA, and GLRT. The objective is to project the original signal (usually of high dimension) into a smaller subspace with 1) the within-class data structure preserved and 2) the between-class distance enhanced. A novel classification technique called the Maximum Angle Subspace Classifier (MASC) is presented to achieve these goals. To compensate for the computational complexity and non-convexity of MASC, an approximation is proposed as a trade-off between classification performance and computational cost. The approaches are applied to the problem of classifying high-dimensional frequency measurements from a microwave-based diagnostic system, and the results are compared with existing methods.
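
    The general subspace-classification idea described above, though not the authors' MASC itself, can be illustrated by a simple nearest-subspace rule: fit a low-dimensional PCA subspace per class and assign a test vector to the class whose subspace leaves the smallest projection residual. The Python sketch below uses illustrative names and is only a minimal stand-in for the methods in the paper.

    import numpy as np

    def fit_class_subspaces(X, y, dim):
        # Fit a 'dim'-dimensional PCA model (class mean + orthonormal basis) per class.
        models = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mean = Xc.mean(axis=0)
            # Right singular vectors of the centered data span the principal subspace.
            _, _, Vt = np.linalg.svd(Xc - mean, full_matrices=False)
            models[c] = (mean, Vt[:dim].T)      # basis shape: (n_features, dim)
        return models

    def predict(models, x):
        # Assign x to the class whose subspace best reconstructs it, i.e. leaves
        # the smallest residual after projecting the centered vector onto the basis.
        def residual(mean, U):
            z = x - mean
            return np.linalg.norm(z - U @ (U.T @ z))
        return min(models, key=lambda c: residual(*models[c]))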

    The foundations of spectral computations via the Solvability Complexity Index hierarchy: Part I

    The problem of computing spectra of operators is arguably one of the most investigated areas of computational mathematics. Recent progress and the current paper reveal that, unlike the finite-dimensional case, infinite-dimensional problems yield a highly intricate infinite classification theory determining which spectral problems can be solved and with which types of algorithms. Classifying spectral problems and providing optimal algorithms is uncharted territory in the foundations of computational mathematics. This paper is the first of a two-part series establishing the foundations of computational spectral theory through the Solvability Complexity Index (SCI) hierarchy, and it has three purposes. First, we establish answers to many longstanding open questions on the existence of algorithms. We show that for large classes of partial differential operators on unbounded domains, spectra can be computed with error control from point sampling of the operator coefficients. Further results include computing spectra of operators on graphs with error control, the spectral gap problem, spectral classifications, and discrete spectra, multiplicities and eigenspaces. Second, these classifications determine which types of problems can be used in computer-assisted proofs. The theory for this is virtually non-existent, and we provide some of the first results in this infinite classification theory. Third, our proofs are constructive, yielding a library of new algorithms and techniques that handle problems previously out of reach. We show several examples on contemporary problems in the physical sciences. Our approach is closely related to Smale's program on the foundations of computational mathematics initiated in the 1980s, as many spectral problems can only be computed via several limits, a phenomenon shared with the foundations of polynomial root finding with rational maps, as proved by McMullen.
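
    To make the role of limits concrete (as an illustration only, not the paper's algorithms), the sketch below approximates the spectrum of a simple self-adjoint tridiagonal operator on l^2(N) by the eigenvalues of its n x n finite sections for growing n. For general operators, finite sections alone can produce spurious approximate eigenvalues and give no error control, which is exactly the kind of obstacle the SCI hierarchy classifies. The operator and all names are illustrative.

    import numpy as np

    def finite_section_spectrum(potential, n):
        # Eigenvalues of the n x n truncation of the discrete Schroedinger-type
        # operator (A u)(k) = u(k-1) + u(k+1) + potential(k) * u(k) on l^2(N).
        off = np.ones(n - 1)
        A = np.diag(off, 1) + np.diag(off, -1) + np.diag([float(potential(k)) for k in range(n)])
        return np.linalg.eigvalsh(A)

    # Increasing the truncation size shows how the finite-section spectra evolve;
    # convergence (let alone error control) is not guaranteed in general.
    for n in (50, 200, 800):
        ev = finite_section_spectrum(lambda k: np.cos(k), n)
        print(n, float(ev.min()), float(ev.max()))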