
    Deriving a Fast Inverse of the Generalized Cantor N-tupling Bijection

    We attack an interesting open problem (an efficient algorithm to invert the generalized Cantor N-tupling bijection) and solve it through a sequence of equivalence-preserving transformations of logic programs that take advantage of unique strengths of this programming paradigm. An extension to set and multiset tuple encodings and a simple application to a "fair-search" mechanism illustrate practical uses of our algorithms. The code in the paper (a literate Prolog program, tested with SWI-Prolog and Lean Prolog) is available at http://logic.cse.unt.edu/tarau/research/2012/pcantor.pl
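
    As a rough illustration of the kind of encoding involved (not the paper's algorithm, which targets the generalized N-tupling bijection directly), the sketch below builds an N-tuple encoding by iterating the classic two-dimensional Cantor pairing function and inverts it with the closed-form triangular-root formula; all function names are ours.

        import math

        def cantor_pair(x, y):
            """Classic Cantor pairing: a bijection from N x N onto N."""
            return (x + y) * (x + y + 1) // 2 + y

        def cantor_unpair(z):
            """Invert the pairing in O(1) via the triangular-root formula."""
            w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w*(w+1)/2 <= z
            t = w * (w + 1) // 2
            y = z - t
            return w - y, y

        def tuple_to_nat(t):
            """Encode an N-tuple by folding the pairing function left to right."""
            z = t[0]
            for v in t[1:]:
                z = cantor_pair(z, v)
            return z

        def nat_to_tuple(z, n):
            """Decode a natural number back into an n-tuple (n >= 1)."""
            parts = []
            for _ in range(n - 1):
                z, v = cantor_unpair(z)
                parts.append(v)
            parts.append(z)
            return tuple(reversed(parts))

    For instance, nat_to_tuple(tuple_to_nat((3, 1, 4)), 3) returns (3, 1, 4), and the inverse costs only O(N) integer operations.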

    Unary Pushdown Automata and Straight-Line Programs

    We consider decision problems for deterministic pushdown automata over a unary alphabet (udpda, for short). Udpda are a simple computation model that accepts exactly the unary regular languages, but can be exponentially more succinct than finite-state automata. We complete the complexity landscape for udpda by showing that emptiness (and thus universality) is P-hard, equivalence and compressed membership problems are P-complete, and inclusion is coNP-complete. Our upper bounds are based on a translation theorem between udpda and straight-line programs over the binary alphabet (SLPs). We show that the characteristic sequence of any udpda can be represented as a pair of SLPs (one for the prefix, one for the lasso) that have size linear in the size of the udpda and can be computed in polynomial time. Hence, decision problems on udpda are reduced to decision problems on SLPs. Conversely, any SLP can be converted in logarithmic space into a udpda, and this forms the basis for our lower bound proofs. We show coNP-hardness of the ordered matching problem for SLPs, from which we derive coNP-hardness for inclusion. In addition, we complete the complexity landscape for unary nondeterministic pushdown automata by showing that the universality problem is $\Pi_2^{\mathrm{P}}$-hard, using a new class of integer expressions. Our techniques have applications beyond udpda. We show that our results imply $\Pi_2^{\mathrm{P}}$-completeness for a natural fragment of Presburger arithmetic and coNP lower bounds for compressed matching problems with one-character wildcards.
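
    To make the SLP side of the translation concrete, here is a small sketch (ours, not taken from the paper) of a straight-line program viewed as an acyclic grammar in which every nonterminal derives either a single character or the concatenation of two earlier nonterminals; the derived string can be exponentially longer than the grammar itself, which is why working directly on the compressed representation matters.

        def slp_lengths(rules):
            """Length of the string derived by each SLP nonterminal.

            Rules are given in bottom-up order as {name: char or (left, right)}:
            a terminal rule derives one character, a pair rule concatenates the
            strings derived by two earlier nonterminals.
            """
            length = {}
            for name, body in rules.items():
                if isinstance(body, str):      # terminal rule, e.g. 'a'
                    length[name] = 1
                else:                          # concatenation rule (left, right)
                    left, right = body
                    length[name] = length[left] + length[right]
            return length

        # A toy SLP: each X_{i+1} doubles X_i, so X3 derives 'abababab' (length 8).
        rules = {
            "A": "a",
            "B": "b",
            "X1": ("A", "B"),     # derives "ab"
            "X2": ("X1", "X1"),   # derives "abab"
            "X3": ("X2", "X2"),   # derives "abababab"
        }
        print(slp_lengths(rules)["X3"])  # 8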

    Strong Equivalence of Qualitative Optimization Problems

    We introduce the framework of qualitative optimization problems (or, simply, optimization problems) to represent preference theories. The formalism uses separate modules to describe the space of outcomes to be compared (the generator) and the preferences on outcomes (the selector). We consider two types of optimization problems. They differ in the way the generator, which we model by a propositional theory, is interpreted: by the standard propositional logic semantics, and by the equilibrium-model (answer-set) semantics. Under the latter interpretation of generators, optimization problems directly generalize answer-set optimization programs proposed previously. We study strong equivalence of optimization problems, which guarantees their interchangeability within any larger context. We characterize several versions of strong equivalence obtained by restricting the class of optimization problems that can be used as extensions and establish the complexity of associated reasoning tasks. Understanding strong equivalence is essential for modular representation of optimization problems and rewriting techniques to simplify them without changing their inherent properties.
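
    The generator/selector separation can be illustrated with a deliberately tiny sketch (our own toy code, using brute-force enumeration under classical semantics rather than the equilibrium-model semantics considered in the paper): the generator enumerates the models of a propositional theory, and the selector ranks those outcomes and keeps the preferred ones.

        from itertools import product

        def classical_models(atoms, theory):
            """Generator: enumerate truth assignments satisfying every clause.

            'theory' is a list of clauses; each clause is a set of literals,
            where a literal is (atom, polarity). A model is a dict atom -> bool.
            """
            for values in product([False, True], repeat=len(atoms)):
                model = dict(zip(atoms, values))
                if all(any(model[a] == pol for a, pol in clause) for clause in theory):
                    yield model

        def optimal_outcomes(models, selector):
            """Selector: keep the outcomes with the best (lowest) preference rank."""
            models = list(models)
            if not models:
                return []
            best = min(selector(m) for m in models)
            return [m for m in models if selector(m) == best]

        # Generator: outcomes are the models of (a or b) and (not a or not b).
        atoms = ["a", "b"]
        theory = [{("a", True), ("b", True)}, {("a", False), ("b", False)}]
        # Selector: prefer outcomes that make 'a' true.
        prefer_a = lambda m: 0 if m["a"] else 1

        print(optimal_outcomes(classical_models(atoms, theory), prefer_a))
        # [{'a': True, 'b': False}]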

    An Algorithm for Distribution problem under Uncertainty

    The distribution problem whose constrained resource vector is a random variable can be solved by an algorithm, developed by Dantzig and Madansky, called "two-stage programs under uncertainty". Its certainty-equivalent problem, however, is formulated as a nonlinear programming problem and is difficult to treat because of its nonlinearity. In this paper, we propose an algorithm that solves the distribution problem under uncertainty by exploiting the characteristics of the problem's objective function. The algorithm consists of iterations of a simple simplex method and is more advantageous than general nonlinear programming procedures with respect to both the computing time and the computer storage required.
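
    For orientation only, the following sketch is not the authors' iterative simplex procedure; it solves the deterministic equivalent of a toy two-stage problem with two demand scenarios as a single linear program using scipy, with all costs and demands invented for the example.

        # Deterministic equivalent of a tiny two-stage problem: choose a first-stage
        # quantity x, then pay a recourse penalty for any shortage in each scenario.
        from scipy.optimize import linprog

        order_cost = 1.0            # first-stage cost per unit
        shortage_penalty = 3.0      # second-stage (recourse) cost per unit short
        demands = [5.0, 10.0]       # random demand scenarios
        probs = [0.5, 0.5]          # scenario probabilities

        # Variables: z = [x, y_1, y_2]  (x = first-stage decision, y_s = shortage in scenario s)
        c = [order_cost] + [shortage_penalty * p for p in probs]

        # Shortage constraints y_s >= d_s - x, rewritten as -x - y_s <= -d_s.
        A_ub = [[-1.0, -1.0, 0.0],
                [-1.0, 0.0, -1.0]]
        b_ub = [-d for d in demands]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
        print(res.x)    # optimal [x, y_1, y_2]; here x = 10, no shortage in either scenario
        print(res.fun)  # minimal expected total cost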

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "basecase" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs. Comment: See http://www.jair.org/ for any accompanying file
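
    The equivalence-query protocol itself can be sketched independently of logic programs: the learner repeatedly proposes a hypothesis, and the oracle either accepts it or returns a counterexample. The toy example below (ours, learning a simple threshold concept rather than a recursive clause) only shows the shape of such a loop.

        # Toy equivalence-query learning: the target concept is "x >= t" over 0..N
        # for an unknown threshold t; the learner refines its hypothesis from the
        # counterexamples returned by the oracle.
        N = 100
        TARGET_T = 37  # hidden from the learner; used only inside the oracle

        def equivalence_oracle(hypothesis_t):
            """Return None if the hypothesis matches the target, else a counterexample x."""
            for x in range(N + 1):
                if (x >= hypothesis_t) != (x >= TARGET_T):
                    return x
            return None

        def learn_threshold():
            t = 0                             # initial hypothesis: everything is positive
            while True:
                counterexample = equivalence_oracle(t)
                if counterexample is None:
                    return t                  # hypothesis is exactly the target concept
                # The counterexample is labeled positive by the hypothesis but negative
                # by the target (t never exceeds TARGET_T), so raise the threshold.
                t = counterexample + 1

        print(learn_threshold())  # 37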

    A Common View on Strong, Uniform, and Other Notions of Equivalence in Answer-Set Programming

    Logic programming under the answer-set semantics nowadays deals with numerous different notions of program equivalence. This is due to the fact that equivalence for substitution (known as strong equivalence) and ordinary equivalence are different concepts. The former holds, given programs P and Q, iff P can be faithfully replaced by Q within any context R, while the latter holds iff P and Q provide the same output, that is, they have the same answer sets. Notions in between strong and ordinary equivalence have been introduced as theoretical tools to compare incomplete programs and are defined by either restricting the syntactic structure of the considered context programs R or by bounding the set A of atoms allowed to occur in R (relativized equivalence). For the latter approach, different A yield properly different equivalence notions, in general. For the former approach, however, it turned out that any "reasonable" syntactic restriction to R coincides with either ordinary, strong, or uniform equivalence. In this paper, we propose a parameterization for equivalence notions which takes care of both such kinds of restrictions simultaneously by bounding, on the one hand, the atoms which are allowed to occur in the rule heads of the context and, on the other hand, the atoms which are allowed to occur in the rule bodies of the context. We introduce a general semantical characterization which includes known ones such as SE-models (for strong equivalence) or UE-models (for uniform equivalence) as special cases. Moreover, we provide complexity bounds for the problem in question and sketch a possible implementation method. To appear in Theory and Practice of Logic Programming (TPLP).
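
    For small ground normal programs, SE-models can be checked by brute force, which makes the difference between ordinary and strong equivalence easy to see. The sketch below is our own illustration of that standard check, not the parameterized characterization proposed in the paper.

        from itertools import combinations

        def powerset(atoms):
            atoms = list(atoms)
            return [frozenset(c) for r in range(len(atoms) + 1)
                    for c in combinations(atoms, r)]

        def satisfies_reduct(x, program, y):
            """Does interpretation x satisfy the reduct P^y of a normal program?

            A rule is (head, positive_body, negative_body); head=None encodes a constraint.
            """
            for head, pos, neg in program:
                if neg & y:                 # rule deleted in the reduct P^y
                    continue
                if pos <= x:                # body of the reduced rule satisfied
                    if head is None or head not in x:
                        return False
            return True

        def se_models(program, atoms):
            """All SE-models (X, Y): X <= Y, Y |= P, X |= P^Y."""
            models = set()
            for y in powerset(atoms):
                if not satisfies_reduct(y, program, y):   # Y |= P (classically)
                    continue
                for x in powerset(y):
                    if satisfies_reduct(x, program, y):
                        models.add((x, y))
            return models

        # Two tiny programs over atoms {a, b} with the same answer set ({a})
        # but different SE-models, hence not strongly equivalent.
        atoms = {"a", "b"}
        P = [("a", frozenset(), frozenset({"b"}))]          # a :- not b.
        Q = [("a", frozenset(), frozenset({"b"})),
             ("a", frozenset({"b"}), frozenset())]          # a :- not b.   a :- b.
        print(se_models(P, atoms) == se_models(Q, atoms))   # False

    Both programs have the single answer set {a}, so they are ordinarily equivalent, yet they disagree on the SE-model ({b}, {b}); adding the context fact b. separates them (P then yields {b}, Q yields {a, b}), which is exactly what strong equivalence rules out.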