
    On the Complexity of Optimization Problems based on Compiled NNF Representations

    Optimization is a key task in a number of applications. When the set of feasible solutions under consideration is combinatorial in nature and described implicitly as a set of constraints, optimization is typically NP-hard. Fortunately, in many problems the set of feasible solutions does not change often and is independent of the user's request. In such cases, compiling the constraints describing the feasible solutions during an off-line phase makes sense, provided that this compilation step makes it computationally easier to generate a feasible, non-dominated solution matching the user's requirements and preferences (which become known only at the on-line step). In this article, we focus on propositional constraints. The subsets L of the NNF language analyzed in Darwiche and Marquis' knowledge compilation map are considered, together with a number of families F of representations of objective functions over propositional variables, including linear pseudo-Boolean functions and more sophisticated ones. For each language L and each family F, we identify the complexity of generating an optimal solution when the constraints are compiled into L and optimality is assessed w.r.t. a function from F.
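    To make the on-line optimization step concrete, here is a minimal sketch (not taken from the article) of minimizing a linear pseudo-Boolean objective over a compiled circuit by bottom-up dynamic programming. It assumes a smooth, decomposable NNF (DNNF) that mentions every weighted variable; the node encoding and all names are hypothetical.

```python
# Hedged sketch: minimize sum_i w_i * x_i over the models of a circuit,
# assuming a smooth and decomposable NNF that mentions every weighted variable.
# For DAG-shaped circuits with shared sub-nodes, one would memoize on node identity.

def min_cost(node, weights):
    """Nodes are nested tuples:
        ('lit', var, sign)       -- var (sign=True) or its negation
        ('and', child, child)    -- decomposable conjunction
        ('or',  child, child)    -- disjunction (assumed smooth)
    """
    kind = node[0]
    if kind == 'lit':
        _, var, sign = node
        return weights[var] if sign else 0.0
    if kind == 'and':
        # Decomposability: children share no variables, so their costs add up.
        return sum(min_cost(child, weights) for child in node[1:])
    if kind == 'or':
        # Smoothness: children mention the same variables, so take the best branch.
        return min(min_cost(child, weights) for child in node[1:])
    raise ValueError(f"unknown node kind: {kind}")

# Tiny example: (x1 AND x2) OR (x1 AND NOT x2), weights w1 = 3, w2 = -1.
circuit = ('or',
           ('and', ('lit', 'x1', True), ('lit', 'x2', True)),
           ('and', ('lit', 'x1', True), ('lit', 'x2', False)))
print(min_cost(circuit, {'x1': 3, 'x2': -1}))   # -> 2 (set x1 = 1, x2 = 1)
```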

    Big Graph Analyses: From Queries to Dependencies and Association Rules


    Probabilistic XML: Models and Complexity


    Recognizing Determinism in Prioritized Repairing of Inconsistent Databases

    A repair of an inconsistent database is traditionally defined as a consistent database that differs from the inconsistent one in a "minimal way." As there are often reasons to prefer one repair over another, researchers have introduced and investigated the framework of preferred repairs, where a priority relation between facts is lifted to a priority relation between consistent databases, and repairs are restricted to those that are optimal in the lifted sense. In this paper we describe our recent results on the complexity of deciding whether the priority relation suffices to clean the database unambiguously, or in other words, whether there is exactly one optimal repair. In particular, we show that different conventional semantics of priority lifting entail highly different complexities.
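    As an illustration (not the paper's algorithm), the sketch below greedily builds one priority-respecting repair over a binary conflict graph under a simple reading of the priority relation; when different runs of such a procedure can diverge, the optimal repair is not obviously unique, which is the kind of ambiguity the decision problem above asks about. The facts, conflicts, and priorities are made up.

```python
# Hedged sketch: greedily add a most-preferred fact that does not conflict
# with what has been kept, then discard everything conflicting with it.

def greedy_repair(facts, conflicts, prefers):
    """facts     : iterable of fact identifiers
       conflicts : set of frozensets {f, g} that cannot coexist
       prefers   : set of pairs (f, g) meaning f has priority over g
    """
    remaining = set(facts)
    repair = set()
    while remaining:
        def is_maximal(f):
            # No remaining conflicting fact is preferred to f.
            return not any((g, f) in prefers and frozenset({f, g}) in conflicts
                           for g in remaining if g != f)
        candidates = [f for f in remaining if is_maximal(f)]
        # Fallback only matters if the priority relation is cyclic.
        chosen = candidates[0] if candidates else next(iter(remaining))
        repair.add(chosen)
        remaining.discard(chosen)
        remaining -= {g for g in remaining if frozenset({chosen, g}) in conflicts}
    return repair

facts = ['a', 'b', 'c']
conflicts = {frozenset({'a', 'b'}), frozenset({'b', 'c'})}
prefers = {('a', 'b')}                              # a is preferred to its rival b
print(greedy_repair(facts, conflicts, prefers))     # -> {'a', 'c'}
```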

    Tractable Orders for Direct Access to Ranked Answers of Conjunctive Queries

    We study the question of when we can provide logarithmic-time direct access to the k-th answer to a Conjunctive Query (CQ) with a specified ordering over the answers, following a preprocessing step that constructs a data structure in time quasilinear in the size of the database. Specifically, we embark on the challenge of identifying the tractable answer orderings that allow for ranked direct access with such complexity guarantees. We begin with lexicographic orderings and give a decidable characterization (under conventional complexity assumptions) of the class of tractable lexicographic orderings for every CQ without self-joins. We then continue to the more general orderings by the sum of attribute weights and show that for these, ranked direct access is tractable only in trivial cases. Hence, to better understand the computational challenge at hand, we consider the more modest task of providing access to only a single answer (i.e., finding the answer at a given position), a task that we refer to as the selection problem. We indeed achieve a quasilinear-time algorithm for a subset of the class of full CQs without self-joins, by adopting a solution of Frederickson and Johnson to the classic problem of selection over sorted matrices. We further prove that none of the other queries in this class admit such an algorithm.
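    To give a feel for the selection subroutine mentioned above, here is a deliberately simplified illustration: selecting the k-th smallest entry of a matrix with sorted rows and columns by binary search on the value range. This is a standard textbook technique, not the linear-time Frederickson-Johnson algorithm the abstract cites, and it assumes integer entries; the example matrix is made up.

```python
# Simplified selection over a sorted matrix (rows and columns non-decreasing):
# binary search on the value range, counting entries <= mid by a staircase walk.

def kth_smallest(matrix, k):
    """Return the k-th smallest entry (1-based) of a row- and column-sorted matrix."""
    n, m = len(matrix), len(matrix[0])

    def count_at_most(x):
        # Walk from the bottom-left corner: O(n + m) per call.
        count, row, col = 0, n - 1, 0
        while row >= 0 and col < m:
            if matrix[row][col] <= x:
                count += row + 1          # this column qualifies up to `row`
                col += 1
            else:
                row -= 1
        return count

    lo, hi = matrix[0][0], matrix[-1][-1]
    while lo < hi:
        mid = (lo + hi) // 2
        if count_at_most(mid) < k:
            lo = mid + 1
        else:
            hi = mid
    return lo

sums = [[1, 5, 9],
        [10, 11, 13],
        [12, 13, 15]]
print(kth_smallest(sums, 8))   # -> 13
```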

    The Shapley Value of Inconsistency Measures for Functional Dependencies

    Quantifying the inconsistency of a database is motivated by various goals, including reliability estimation for new datasets and progress indication in data cleaning. Another goal is to attribute to individual tuples a level of responsibility for the overall inconsistency, and thereby prioritize tuples in the explanation or inspection of dirt. Therefore, inconsistency quantification and attribution have been a subject of much research in Knowledge Representation and, more recently, in Databases. As in many other fields, a conventional responsibility-sharing mechanism is the Shapley value from cooperative game theory. In this paper, we carry out a systematic investigation of the complexity of the Shapley value for common inconsistency measures of functional-dependency (FD) violations. For several measures we establish a full classification of the FD sets into tractable and intractable classes with respect to Shapley-value computation. We also study the complexity of approximation in the intractable cases.
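    For intuition, the following sketch (not the paper's formulation) computes Shapley values by brute force over all permutations, for a toy inconsistency measure that counts pairs of tuples jointly violating the FD A -> B. The relation and the measure are illustrative only; the brute-force loop is exponential and is meant to show the definition, not an efficient algorithm.

```python
# Hedged illustration: Shapley value of each tuple w.r.t. a toy inconsistency
# measure, computed as the average marginal contribution over all orders.

from itertools import permutations

def violations(subset):
    """Inconsistency measure: number of pairs in `subset` violating A -> B."""
    subset = list(subset)
    return sum(1 for i in range(len(subset)) for j in range(i + 1, len(subset))
               if subset[i][0] == subset[j][0] and subset[i][1] != subset[j][1])

def shapley(tuples, measure):
    """Average marginal contribution of each tuple over all insertion orders."""
    values = {t: 0.0 for t in tuples}
    perms = list(permutations(tuples))
    for order in perms:
        seen = []
        for t in order:
            values[t] += measure(seen + [t]) - measure(seen)
            seen.append(t)
    return {t: v / len(perms) for t, v in values.items()}

relation = [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1')]   # first two violate A -> B
print(shapley(relation, violations))
# -> {('a1', 'b1'): 0.5, ('a1', 'b2'): 0.5, ('a2', 'b1'): 0.0}
```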
