    High precision Monte Carlo study of the 3D XY-universality class

    We present a Monte Carlo study of the two-component $\phi^4$ model on the simple cubic lattice in three dimensions. By suitable tuning of the coupling constant $\lambda$ we eliminate leading-order corrections to scaling. High-statistics simulations using finite-size scaling techniques yield $\nu=0.6723(3)[8]$ and $\eta=0.0381(2)[2]$, where the statistical and systematic errors are given in the first and second bracket, respectively. These results are more precise than any previous theoretical estimate of the critical exponents for the 3D XY universality class. Comment: 13 pages
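
    For rough intuition, here is a minimal, hypothetical sketch of the kind of local update such a study builds on, assuming the standard lattice action $S = -\beta\sum_{\langle xy\rangle}\vec\phi_x\cdot\vec\phi_y + \sum_x[\vec\phi_x^2 + \lambda(\vec\phi_x^2-1)^2]$. The couplings and lattice size below are illustrative placeholders, not the paper's tuned values, and a production study would use cluster and overrelaxation updates rather than plain Metropolis.

```python
# Illustrative Metropolis sweep for the two-component phi^4 model on an L^3
# periodic lattice; beta, lam, L and the proposal width delta are placeholders.
import numpy as np

rng = np.random.default_rng(0)
L, beta, lam, delta = 8, 0.5, 2.1, 0.5
phi = rng.normal(size=(L, L, L, 2))      # two-component real field

NEIGHBOR_STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def neighbor_sum(x, y, z):
    """Sum of the six nearest-neighbour field vectors (periodic boundaries)."""
    s = np.zeros(2)
    for dx, dy, dz in NEIGHBOR_STEPS:
        s += phi[(x + dx) % L, (y + dy) % L, (z + dz) % L]
    return s

def metropolis_sweep():
    """One sweep of local Metropolis updates for the action
    S = -beta * sum_<xy> phi_x.phi_y + sum_x [phi_x^2 + lam*(phi_x^2 - 1)^2]."""
    for x in range(L):
        for y in range(L):
            for z in range(L):
                old = phi[x, y, z]
                new = old + delta * rng.uniform(-1.0, 1.0, size=2)
                nn = neighbor_sum(x, y, z)
                dS = (-beta * nn @ (new - old)
                      + (new @ new - old @ old)
                      + lam * ((new @ new - 1.0) ** 2 - (old @ old - 1.0) ** 2))
                if dS <= 0.0 or rng.random() < np.exp(-dS):
                    phi[x, y, z] = new
```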

    On The Power of Tree Projections: Structural Tractability of Enumerating CSP Solutions

    The problem of deciding whether CSP instances admit solutions has been deeply studied in the literature, and several structural tractability results have been derived so far. However, constraint satisfaction arises in practice as a computation problem where the focus is either on finding one solution or on enumerating all solutions, possibly projected onto some given set of output variables. The paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability here means computable with polynomial delay (WPD), since in general exponentially many solutions may be computed. A general framework based on the notion of tree projection of hypergraphs is considered, which generalizes all known decomposition methods. Tractability results have been obtained both for classes of structures where output variables are part of their specification, and for classes of structures where computability WPD must be ensured for any possible set of output variables. These results are shown to be tight by exhibiting dichotomies for classes of structures having bounded arity, where the tree decomposition method is considered.
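
    As a side sketch (not from the paper), the simplest instance of such a result is a binary CSP whose constraint graph is a tree: one bottom-up filtering pass makes top-down backtracking dead-end free, so all (here, unprojected) solutions come out with polynomial delay. The variable names and helper signatures are illustrative.

```python
def solutions(domains, parent, compat, order):
    """Yield every solution of a tree-structured binary CSP with polynomial delay.

    domains : dict var -> iterable of values
    parent  : dict var -> parent var (None for the root)
    compat  : dict (parent, child) -> predicate(pval, cval) -> bool
    order   : list of variables, parents before children (pre-order)
    """
    dom = {v: set(vals) for v, vals in domains.items()}
    # Bottom-up filtering (directional arc consistency): a parent value survives
    # only if every child can still be assigned compatibly.
    for v in reversed(order):
        p = parent[v]
        if p is not None:
            dom[p] = {a for a in dom[p] if any(compat[p, v](a, b) for b in dom[v])}
    assign = {}

    def extend(i):
        if i == len(order):
            yield dict(assign)
            return
        v, p = order[i], parent[order[i]]
        for a in dom[v]:
            if p is None or compat[p, v](assign[p], a):
                assign[v] = a            # never dead-ends after the filtering pass
                yield from extend(i + 1)

    yield from extend(0)

# Tiny usage example: a chain x - y - z with "different values" constraints.
doms = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1}}
par = {"x": None, "y": "x", "z": "y"}
neq = {("x", "y"): lambda a, b: a != b, ("y", "z"): lambda a, b: a != b}
print(list(solutions(doms, par, neq, ["x", "y", "z"])))
# -> the two solutions x,y,z = 0,1,0 and 1,0,1 (in some order)
```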

    The Complexity of Reasoning for Fragments of Default Logic

    Default logic was introduced by Reiter in 1980. In 1992, Gottlob classified the complexity of the extension existence problem for propositional default logic as $\Sigma_2^P$-complete, and the complexity of the credulous and skeptical reasoning problems as $\Sigma_2^P$-complete and $\Pi_2^P$-complete, respectively. Additionally, he investigated restrictions on the default rules, such as semi-normal default rules. In 1992, Selman took a similar approach with disjunction-free and unary default rules. In this paper we systematically restrict the set of allowed propositional connectives. We give a complete complexity classification, for all sets of Boolean functions in the sense of Post's lattice, for all three common decision problems for propositional default logic. We show that the complexity is a hexachotomy ($\Sigma_2^P$-, $\Delta_2^P$-, NP-, P-, NL-complete, trivial) for the extension existence problem, while for the credulous and skeptical reasoning problems we obtain similar classifications without trivial cases. Comment: Corrected version
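
    For intuition, here is a toy brute-force sketch (emphatically not the paper's method) of the extension existence problem classified above, run on the classic "birds fly" default theory. Formulas are represented by their sets of models over a fixed atom list, so entailment is plain set inclusion; this scales only to toy instances and says nothing about the connective restrictions studied in the paper.

```python
from itertools import product, combinations

ATOMS = ("b", "f")                                 # b = bird, f = flies
MODELS = list(product((False, True), repeat=len(ATOMS)))

def models_of(pred):
    """The set of assignments (as tuples) satisfying a Python predicate."""
    return frozenset(m for m in MODELS if pred(dict(zip(ATOMS, m))))

def entails(theory, formula):                      # both given as model sets
    return theory <= formula

W = models_of(lambda a: a["b"])                    # W = {bird}
defaults = [(models_of(lambda a: a["b"]),          # prerequisite: bird
             models_of(lambda a: a["f"]),          # justification: flies
             models_of(lambda a: a["f"]))]         # consequent: flies

def is_extension(E):
    """Reiter's fixed point: E is an extension iff it equals the limit of the
    quasi-inductive stages, where a default fires once its prerequisite is
    derivable and its justification is consistent with the final candidate E."""
    stage = W
    while True:
        new = stage
        for pre, just, cons in defaults:
            if entails(stage, pre) and (E & just): # prereq derivable, just consistent
                new = new & cons                   # conjoin the consequent
        if new == stage:
            return stage == E
        stage = new

# Candidate extensions: theories generated by W plus any subset of consequents.
candidates = {W}
for r in range(1, len(defaults) + 1):
    for subset in combinations(defaults, r):
        E = W
        for _, _, cons in subset:
            E = E & cons
        candidates.add(E)

print([sorted(E) for E in candidates if is_extension(E)])
# -> one extension, whose single model is b=True, f=True, i.e. Th({bird, flies})
```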

    The XY Model and the Three-state Antiferromagnetic Potts model in Three Dimensions: Critical Properties from Fluctuating Boundary Conditions

    We present the results of a Monte Carlo study of the three-dimensional XY model and the three-dimensional antiferromagnetic three-state Potts model. In both cases we compute the difference in the free energies of a system with periodic and a system with antiperiodic boundary conditions in the neighbourhood of the critical coupling. From the finite-size scaling behaviour of this quantity we extract values for the critical temperature and the critical exponent $\nu$ that are compatible with recent high-statistics Monte Carlo studies of the models. The results for the free energy difference at the critical temperature and for the exponent $\nu$ confirm that both models belong to the same universality class. Comment: 13 pages, LaTeX file + 2 PS files, KL-TH-94/8 and CERN-TH.7290/94
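
    A hedged sketch of the quantity being compared: for the XY model, the energy of a configuration under periodic versus antiperiodic boundary conditions differs only in the sign of the bonds wrapping around one direction (a twist by $\pi$). Estimating the free energy difference $F_a - F_p = -\ln(Z_a/Z_p)$ itself requires additional machinery (e.g. boundary flips or reweighting) that is omitted here; sizes and seeds are illustrative.

```python
import numpy as np

def xy_energy(theta, antiperiodic=False):
    """E = -sum_<xy> cos(theta_x - theta_y) on an L^3 lattice; with
    antiperiodic=True the bonds wrapping around in the z-direction are
    flipped in sign, which is the only difference between the two
    ensembles whose free energies are compared."""
    E = 0.0
    for axis in range(3):
        bond = np.cos(theta - np.roll(theta, -1, axis=axis))
        if antiperiodic and axis == 2:
            bond[..., -1] *= -1.0        # the z = L-1 -> z = 0 bonds
        E -= bond.sum()
    return E

theta = 2.0 * np.pi * np.random.default_rng(1).random((8, 8, 8))
print(xy_energy(theta), xy_energy(theta, antiperiodic=True))
```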

    Achieving New Upper Bounds for the Hypergraph Duality Problem through Logic

    The hypergraph duality problem DUAL is defined as follows: given two simple hypergraphs $\mathcal{G}$ and $\mathcal{H}$, decide whether $\mathcal{H}$ consists precisely of all minimal transversals of $\mathcal{G}$ (in which case we say that $\mathcal{G}$ is the dual of $\mathcal{H}$). This problem is equivalent to deciding whether two given non-redundant monotone DNFs are dual. It is known that non-DUAL, the complementary problem to DUAL, is in $\mathrm{GC}(\log^2 n, \mathrm{PTIME})$, where $\mathrm{GC}(f(n), \mathcal{C})$ denotes the complexity class of all problems that, after a nondeterministic guess of $O(f(n))$ bits, can be decided (checked) within complexity class $\mathcal{C}$. It was conjectured that non-DUAL is in $\mathrm{GC}(\log^2 n, \mathrm{LOGSPACE})$. In this paper we prove this conjecture and actually place the non-DUAL problem into the complexity class $\mathrm{GC}(\log^2 n, \mathrm{TC}^0)$, which is a subclass of $\mathrm{GC}(\log^2 n, \mathrm{LOGSPACE})$. We here refer to the logtime-uniform version of $\mathrm{TC}^0$, which corresponds to $\mathrm{FO(COUNT)}$, i.e., first-order logic augmented by counting quantifiers. We achieve the latter bound in two steps. First, based on existing problem decomposition methods, we develop a new nondeterministic algorithm for non-DUAL that requires guessing $O(\log^2 n)$ bits. We then proceed by a logical analysis of this algorithm, allowing us to formulate its deterministic part in $\mathrm{FO(COUNT)}$. From this result, by the well-known inclusion $\mathrm{TC}^0 \subseteq \mathrm{LOGSPACE}$, it follows that DUAL also belongs to $\mathrm{DSPACE}[\log^2 n]$. Finally, by exploiting the principles on which the proposed nondeterministic algorithm is based, we devise a deterministic algorithm that, given two hypergraphs $\mathcal{G}$ and $\mathcal{H}$, computes in quadratic logspace a transversal of $\mathcal{G}$ missing in $\mathcal{H}$. Comment: Restructured the presentation in order to be the extended version of a paper that will shortly appear in SIAM Journal on Computing
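
    The problem statement can be made concrete with a toy brute-force checker. It is exponential in the number of vertices and purely illustrative of the definition, nothing like the guess-and-check or quadratic-logspace algorithms discussed above.

```python
from itertools import chain, combinations

def minimal_transversals(G, vertices):
    """All inclusion-minimal vertex sets hitting every edge of G
    (candidates are tried by increasing size, so minimality is a subset check)."""
    hits = lambda t: all(t & e for e in G)
    found = []
    for r in range(len(vertices) + 1):
        for cand in map(frozenset, combinations(vertices, r)):
            if hits(cand) and not any(t <= cand for t in found):
                found.append(cand)
    return set(found)

def is_dual(G, H):
    """Does H consist precisely of all minimal transversals of G?"""
    vertices = set(chain.from_iterable(G | H))
    return H == minimal_transversals(G, vertices)

# Example: G = {{a,b},{b,c}} has exactly the minimal transversals {b} and {a,c}.
G = {frozenset("ab"), frozenset("bc")}
H = {frozenset("b"), frozenset("ac")}
print(is_dual(G, H))   # True
```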

    Towards Efficient Reasoning under Guarded-based Disjunctive Existential Rules

    The complete picture of the complexity of answering (unions of) conjunctive queries under the main guarded-based classes of disjunctive existential rules has recently been settled. It has been shown that the problem is very hard, namely 2ExpTime-complete, even for fixed sets of rules expressed in lightweight formalisms. This gives rise to the question whether its complexity can be reduced by restricting the query language. Several subclasses of conjunctive queries have been proposed with the aim of reducing the complexity of classical database problems such as query evaluation and query containment. Three of the most prominent subclasses of this kind are queries of bounded hypertree-width, queries of bounded treewidth, and acyclic queries. The central objective of the present paper is to understand whether the above query languages have a positive impact on the complexity of query answering under the main guarded-based classes of disjunctive existential rules. We show that (unions of) conjunctive queries of bounded hypertree-width and of bounded treewidth do not reduce the complexity of our problem, even if we focus on predicates of bounded arity, or on fixed sets of disjunctive existential rules. Regarding acyclic queries, although our problem remains 2ExpTime-complete in general, in some relevant settings the complexity reduces to ExpTime-complete; in fact, this requires bounding the arity of the predicates and, for some expressive guarded-based formalisms, fixing the set of rules.
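
    As background for the acyclic case mentioned above: a conjunctive query is acyclic when its hypergraph is alpha-acyclic, a property testable with the classical GYO reduction, which repeatedly removes "ears" (atoms whose variables shared with other atoms are covered by a single other atom) and accepts iff everything disappears. This sketch is standard textbook material, not code from the paper.

```python
def is_alpha_acyclic(edges):
    """GYO reduction; edges is a list of variable sets, one per query atom."""
    edges = [set(e) for e in edges]
    changed = True
    while changed and edges:
        changed = False
        for i, e in enumerate(edges):
            others = edges[:i] + edges[i + 1:]
            # variables of e that also occur in some other atom
            shared = {v for v in e if any(v in o for o in others)}
            if not others or any(shared <= o for o in others):
                edges.pop(i)          # e is an ear: remove it and restart the scan
                changed = True
                break
    return not edges                  # acyclic iff every atom was eliminated

# The triangle R(x,y), S(y,z), T(z,x) is the smallest cyclic example:
print(is_alpha_acyclic([{"x", "y"}, {"y", "z"}, {"z", "x"}]))   # False
print(is_alpha_acyclic([{"x", "y"}, {"y", "z"}]))               # True
```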

    Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules

    Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parameterized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as "support" or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form "any dataset in which this first rule holds must also obey that second rule; therefore the second is redundant". We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss actually correspond to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. Finally, we explore an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises. Comment: LMCS accepted paper
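
    To make the objects concrete, here is a small illustrative sketch assuming nothing beyond the definitions above: support and confidence computed over transactions, plus one sufficient syntactic test for redundancy, namely $X' \subseteq X$ and $X \cup Y \subseteq X' \cup Y'$, under which the support and confidence of $X \to Y$ can never fall below those of $X' \to Y'$ on any dataset. The precise redundancy notions and the minimum-size bases are those developed in the paper.

```python
def supp(itemset, data):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in data) / len(data)

def confidence(X, Y, data):
    """Empirical conditional probability of the consequent given the antecedent."""
    return supp(X | Y, data) / supp(X, data)

def plainly_redundant(X, Y, Xp, Yp):
    """Sufficient syntactic test: X -> Y is redundant w.r.t. Xp -> Yp, since
    its support and confidence are at least those of Xp -> Yp on every dataset."""
    return Xp <= X and (X | Y) <= (Xp | Yp)

data = [frozenset("abc"), frozenset("ab"), frozenset("ac"), frozenset("a")]
X, Y = frozenset("ab"), frozenset("c")
Xp, Yp = frozenset("a"), frozenset("bc")
print(confidence(X, Y, data), confidence(Xp, Yp, data))   # 0.5 and 0.25
print(plainly_redundant(X, Y, Xp, Yp))                    # True
```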

    Age-related deficits in guided search using cues.
