
    Complexity-class-encoding sets

    Properties of sets which are complex because they encode complexity classes are explored. It is shown that not all sets with inherent complexity are of this type, although this is the only type of set for which well-developed techniques exist for proving inherent complexity. Possibilities for the complexity of encoding sets are discussed, first with reference to an “almost everywhere” vs. “infinitely many arguments” classification, and later with reference to the density of the set of arguments on which the problem is complex. It is shown that relative complexity relationships among sets of this type are highly structured, in contrast to the wide variation possible among arbitrary recursive sets.
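    To make the “almost everywhere” vs. “infinitely many arguments” distinction concrete, the standard definitions for a time bound t can be stated as below; this is one conventional reading, assumed rather than taken from the paper's own formalization:

    A \text{ is } t\text{-complex on infinitely many arguments:}\quad \forall M\,\big[L(M) = A \implies \mathrm{time}_M(x) > t(|x|) \text{ for infinitely many } x\big]

    A \text{ is } t\text{-complex almost everywhere:}\quad \forall M\,\big[L(M) = A \implies \mathrm{time}_M(x) > t(|x|) \text{ for all but finitely many } x\big]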

    Signal Set Design for Full-Diversity Low-Decoding-Complexity Differential Scaled-Unitary STBCs

    The problem of designing high rate, full diversity noncoherent space-time block codes (STBCs) with low encoding and decoding complexity is addressed. First, the notion of g-group encodable and g-group decodable linear STBCs is introduced. Then, for a known class of rate-1 linear designs, an explicit construction of fully-diverse signal sets that lead to four-group encodable and four-group decodable differential scaled-unitary STBCs for any power-of-two number of antennas is provided. Previous works on differential STBCs either sacrifice decoding complexity for higher rate or sacrifice rate for lower decoding complexity. Comment: 5 pages, 2 figures. To appear in Proceedings of IEEE ISIT 2007, Nice, France.
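    As background for the differential setup, the sketch below implements a generic differential unitary space-time scheme in Python: information is carried by unitary matrices applied to the previously transmitted block, so the receiver can decode without channel knowledge. The toy diagonal codebook and all parameters are invented for illustration; this is not the paper's four-group decodable construction.

```python
import numpy as np

def codebook(L=4):
    # Toy diagonal (cyclic-group) codebook of unitary 2x2 matrices:
    # V_l = diag(exp(2*pi*i*l*u1/L), exp(2*pi*i*l*u2/L)).
    # The exponents (1, 3) are an assumption chosen for illustration.
    u = (1, 3)
    return [np.diag([np.exp(2j * np.pi * l * uk / L) for uk in u])
            for l in range(L)]

def encode(symbols, V):
    # Differential rule: S_0 = I, then S_t = V[z_t] @ S_{t-1}.
    S = np.eye(2, dtype=complex)
    blocks = [S]
    for z in symbols:
        S = V[z] @ S
        blocks.append(S)
    return blocks

def decode(received, V):
    # Noncoherent detection from consecutive received blocks: pick the
    # codeword minimizing ||Y_t - V_l @ Y_{t-1}||_F (no channel estimate).
    return [min(range(len(V)),
                key=lambda l: np.linalg.norm(Y - V[l] @ Yp))
            for Yp, Y in zip(received, received[1:])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = codebook()
    msg = [1, 3, 0, 2]
    # Unknown 2x2 Rayleigh channel plus small noise on each block.
    H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    received = [S @ H + 0.01 * (rng.normal(size=(2, 2))
                                + 1j * rng.normal(size=(2, 2)))
                for S in encode(msg, V)]
    print(decode(received, V))  # recovers [1, 3, 0, 2]
```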

    Memoization for Unary Logic Programming: Characterizing PTIME

    We give a characterization of deterministic polynomial time computation based on an algebraic structure called the resolution semiring, whose elements can be understood as logic programs or sets of rewriting rules over first-order terms. More precisely, we study the restriction of this framework to terms (and logic programs, rewriting rules) using only unary symbols. We prove it is complete for polynomial time computation, using an encoding of pushdown automata. We then introduce an algebraic counterpart of the memoization technique in order to show its PTIME soundness. We finally relate our approach and complexity results to the complexity of logic programming. As an application of our techniques, we show a PTIME-completeness result for a class of logic programming queries which use only unary function symbols. Comment: Submitted to LICS 201
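    The role of memoization can be illustrated with a toy evaluator: with only unary function symbols, a term such as p(f(g(a))) flattens to the string "fga" tagged with predicate p, and a rule becomes a prefix-rewriting step. Tabling every derived (predicate, term) pair, bounded by the query size, keeps evaluation polynomial. The program below is invented for illustration and is not the paper's resolution-semiring machinery.

```python
# Rule (hp, hpre, bp, bpre) encodes  hp(hpre·X) <- bp(bpre·X),
# where unary terms are written as strings of function symbols.
RULES = [
    ("p", "f", "q", "g"),    # p(f(X)) <- q(g(X))
    ("q", "g", "p", ""),     # q(g(X)) <- p(X)
]
FACTS = {("p", "a")}

def derivable(query):
    """Bottom-up saturation with a memo table of derived goals,
    restricted to terms no longer than the query's term."""
    limit = len(query[1])
    table = set(FACTS)           # the memo table of proven goals
    changed = True
    while changed:
        changed = False
        for hp, hpre, bp, bpre in RULES:
            for p, w in list(table):
                if p == bp and w.startswith(bpre):
                    goal = (hp, hpre + w[len(bpre):])
                    if len(goal[1]) <= limit and goal not in table:
                        table.add(goal)
                        changed = True
    return query in table

print(derivable(("q", "ga")))   # True: q(g(a)) <- p(a)
print(derivable(("p", "fa")))   # True: p(f(a)) <- q(g(a)) <- p(a)
print(derivable(("p", "ga")))   # False
```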

    Discriminate-and-Rectify Encoders: Learning from Image Transformation Sets

    The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition, for example, is affected by changes in viewpoint, scale, illumination or planar transformations. While drastically altering the visual appearance, these changes are orthogonal to recognition and should not be reflected in the representation or feature encoding used for learning. We introduce a framework for weakly supervised learning of image embeddings that are robust to transformations and selective to the class distribution, using sets of transforming examples (orbit sets), deep parametrizations and a novel orbit-based loss. The proposed loss combines a discriminative, contrastive part for orbits with a reconstruction error that learns to rectify orbit transformations. The learned embeddings are evaluated in distance metric-based tasks, such as one-shot classification under geometric transformations, as well as face verification and retrieval under more realistic visual variability. Our results suggest that orbit sets, suitably computed or observed, can be used for efficient, weakly-supervised learning of semantically relevant image embeddings. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
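    The following numpy sketch conveys the flavor of such an orbit-based loss: a contrastive term over orbit membership plus a reconstruction ("rectification") term toward a canonical view. The function name, margin, and weighting alpha are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def orbit_loss(emb, recon, canonical, orbit_ids, margin=1.0, alpha=0.5):
    """emb: (N, d) embeddings; recon: (N, D) decoder outputs;
    canonical: (N, D) canonical view of each example's orbit;
    orbit_ids: (N,) integer orbit label per example."""
    # Pairwise embedding distances and same-orbit mask.
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    same = orbit_ids[:, None] == orbit_ids[None, :]
    # Contrastive part: pull same-orbit pairs together, push
    # different-orbit pairs apart by at least the margin.
    pos = np.where(same, dist ** 2, 0.0)
    neg = np.where(same, 0.0, np.maximum(0.0, margin - dist) ** 2)
    contrastive = (pos + neg).mean()
    # Rectification part: reconstruct the canonical (untransformed) view.
    rectify = np.mean((recon - canonical) ** 2)
    return contrastive + alpha * rectify

# Toy usage with random arrays standing in for a network's outputs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
recon = rng.normal(size=(8, 32))
canonical = rng.normal(size=(8, 32))
orbit_ids = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(orbit_loss(emb, recon, canonical, orbit_ids))
```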

    On the Parameterized Intractability of Monadic Second-Order Logic

    One of Courcelle's celebrated results states that if C is a class of graphs of bounded tree-width, then model-checking for monadic second-order logic (MSO_2) is fixed-parameter tractable (fpt) on C by linear-time parameterized algorithms, where the parameter is the tree-width plus the size of the formula. An immediate question is whether this is best possible, or whether the result can be extended to classes of unbounded tree-width. In this paper we show that, in terms of tree-width, the theorem cannot be extended much further. More specifically, we show that if C is a class of graphs which is closed under colourings, satisfies certain constructibility conditions, and is such that the tree-width of C is not bounded by \log^{84} n, then MSO_2 model-checking is not fpt unless SAT can be solved in sub-exponential time. If the tree-width of C is not poly-logarithmically bounded, then MSO_2 model-checking is not fpt unless all problems in the polynomial-time hierarchy can be solved in sub-exponential time.
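    For concreteness, a classic example of a graph property expressible in monadic second-order logic is 3-colourability, which quantifies over vertex sets (MSO_2 additionally allows quantification over edge sets, needed e.g. for Hamiltonicity):

    \exists R\,\exists G\,\exists B\;\Big[\forall v\,(v \in R \lor v \in G \lor v \in B) \;\land\; \forall u\,\forall v\,\Big(E(u,v) \rightarrow \neg\big((u \in R \land v \in R) \lor (u \in G \land v \in G) \lor (u \in B \land v \in B)\big)\Big)\Big]

    Courcelle's theorem checks such a formula in linear time on graphs of bounded tree-width, with the hidden constant depending on the formula and the width.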

    Interpretable multiclass classification by MDL-based rule lists

    Interpretable classifiers have recently witnessed an increase in attention from the data mining community because they are inherently easier to understand and explain than their more complex counterparts. Examples of interpretable classification models include decision trees, rule sets, and rule lists. Learning such models often involves optimizing hyperparameters, which typically requires substantial amounts of data and may result in relatively large models. In this paper, we consider the problem of learning compact yet accurate probabilistic rule lists for multiclass classification. Specifically, we propose a novel formalization based on probabilistic rule lists and the minimum description length (MDL) principle. This results in virtually parameter-free model selection that naturally allows trading off model complexity against goodness of fit, by which overfitting and the need for hyperparameter tuning are effectively avoided. Finally, we introduce the Classy algorithm, which greedily finds rule lists according to the proposed criterion. We empirically demonstrate that Classy selects small probabilistic rule lists that outperform state-of-the-art classifiers when it comes to the combination of predictive performance and interpretability. We show that Classy is insensitive to its only parameter, i.e., the candidate set, and that compression on the training set correlates with classification performance, validating our MDL-based selection criterion.
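    To make the description-length trade-off concrete, here is a simplified two-part MDL score for a rule list. Classy's actual encoding is more refined, so the fixed per-condition model cost and the toy data below are assumptions for illustration only.

```python
import math

def matches(cond, x):
    # A condition is a tuple of (attribute, value) pairs; empty matches all.
    return all(x.get(a) == v for a, v in cond)

def description_length(rule_list, data, bits_per_condition=4.0):
    """rule_list: ordered (condition, class_probs) pairs, ending with a
    default rule whose condition is empty so it always matches."""
    # L(M): crude model cost, a fixed number of bits per condition.
    model_bits = sum(bits_per_condition * len(cond) for cond, _ in rule_list)
    # L(D|M): -log2 of each label under its first matching rule.
    data_bits = 0.0
    for x, y in data:
        probs = next(p for cond, p in rule_list if matches(cond, x))
        data_bits -= math.log2(probs[y])
    return model_bits + data_bits

# Toy data: a longer rule list pays more model bits but far fewer data bits.
data = ([({"shape": "round"}, "fruit")] * 8
        + [({"shape": "square"}, "box")] * 8)
short_list = [((), {"fruit": 0.5, "box": 0.5})]                   # default only
long_list = [((("shape", "round"),), {"fruit": 0.95, "box": 0.05}),
             ((), {"fruit": 0.05, "box": 0.95})]
print(description_length(short_list, data))  # ~16 bits, all spent on the data
print(description_length(long_list, data))   # ~5 bits total despite the extra rule
```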

    A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)

    We study the integer minimization of a quasiconvex polynomial with quasiconvex polynomial constraints. We propose a new algorithm that is an improvement upon the best known algorithm due to Heinz (Journal of Complexity, 2005). This improvement is achieved by applying a new modern Lenstra-type algorithm, finding optimal ellipsoid roundings, and considering sparse encodings of polynomials. For the bounded case, our algorithm attains a time complexity of s (r l M d)^{O(1)} 2^{2n \log_2(n) + O(n)}, where M is a bound on the number of monomials in each polynomial and r is the binary encoding length of a bound on the feasible region. In the general case, the complexity is s l^{O(1)} d^{O(n)} 2^{2n \log_2(n) + O(n)}. In each case we assume that d >= 2 is a bound on the total degree of the polynomials and that l bounds the maximum binary encoding size of the input. Comment: 28 pages, 10 figures.
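    For reference, the problem being solved can be stated as follows, using the standard definition of quasiconvexity (assumed to match the paper's) and writing s for the number of constraint polynomials:

    \min\; f_0(x) \quad \text{s.t.} \quad f_i(x) \le 0 \;\; (i = 1, \dots, s), \quad x \in \mathbb{Z}^n,

    \text{where each } f_i \text{ is quasiconvex, i.e., } \{x \in \mathbb{R}^n : f_i(x) \le \lambda\} \text{ is convex for every } \lambda \in \mathbb{R}.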

    The DLV System for Knowledge Representation and Reasoning

    This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to \Delta^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report on the thorough experimentation and benchmarking that has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration. Comment: 56 pages, 9 figures, 6 tables.
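    To illustrate the semantics DLV implements, the brute-force sketch below enumerates the answer sets of a tiny ground disjunctive program (answer sets are the minimal models of the Gelfond-Lifschitz reduct). DLV itself uses far more sophisticated algorithms, and the example program, which 2-colours a single edge, is invented.

```python
from itertools import combinations

# A rule is (head, pos_body, neg_body), each a frozenset of ground atoms,
# read as:  h1 v h2 ...  :-  b1, ..., not c1, ...
PROGRAM = [
    (frozenset({"red_a", "green_a"}), frozenset(), frozenset()),   # red(a) v green(a).
    (frozenset({"red_b", "green_b"}), frozenset(), frozenset()),   # red(b) v green(b).
    (frozenset(), frozenset({"red_a", "red_b"}), frozenset()),     # :- red(a), red(b).
    (frozenset(), frozenset({"green_a", "green_b"}), frozenset()), # :- green(a), green(b).
]
ATOMS = sorted(set().union(*(h | p for h, p, _ in PROGRAM)))

def is_model(m, rules):
    # m satisfies a rule iff a true body implies some true head atom
    # (an empty head makes the rule an integrity constraint).
    return all(not p <= m or h & m for h, p, _ in rules)

def reduct(rules, m):
    # Gelfond-Lifschitz reduct: drop rules blocked by 'not', strip negation.
    return [(h, p, frozenset()) for h, p, n in rules if not (n & m)]

def answer_sets(rules, atoms):
    subsets = [frozenset(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    for m in subsets:
        red = reduct(rules, m)
        if is_model(m, red) and not any(
                s < m and is_model(s, red) for s in subsets):
            yield m

for ans in answer_sets(PROGRAM, ATOMS):
    print(sorted(ans))
# Prints the two proper 2-colourings: [green_a, red_b] and [green_b, red_a].
```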