
    Average treatment effect estimation via random recursive partitioning

    A new matching method is proposed for the estimation of the average treatment effect of social policy interventions (e.g., training programs or health care measures). Given an outcome variable, a treatment and a set of pre-treatment covariates, the method is based on the examination of random recursive partitions of the space of covariates using regression trees. A regression tree is grown on either the treated or the untreated individuals only, using as response variable a random permutation of the indexes 1, ..., n (n being the number of units involved), while the indexes for the other group are predicted using this tree. The procedure is replicated in order to rule out the effect of specific permutations. The average treatment effect is estimated in each tree by matching treated and untreated units in the same terminal nodes. The final estimator of the average treatment effect is obtained by averaging over all the trees grown. The method does not require any specific model assumption apart from the tree's complexity, which, moreover, does not affect the estimator. We show that this method is both an instrument to check whether two samples can be matched (by any method) and, when this is feasible, a way to obtain reliable estimates of the average treatment effect. We further propose a graphical tool to inspect the quality of the match. The method has been applied to the National Supported Work Demonstration data, previously analyzed by Lalonde (1986) and others.
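    A minimal sketch of the random recursive partitioning idea described above: grow a tree on the treated units with a random permutation of their indexes as response, drop both groups down the tree, match within terminal nodes, and average over replications. The function name, the use of scikit-learn's DecisionTreeRegressor, and the node-weighting and tuning choices are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def rrp_ate(X, y, treat, n_trees=200, max_leaf_nodes=8, seed=0):
    """Average treatment effect via random recursive partitioning (sketch)."""
    rng = np.random.default_rng(seed)
    Xt, yt = X[treat == 1], y[treat == 1]
    Xc, yc = X[treat == 0], y[treat == 0]
    estimates = []
    for _ in range(n_trees):
        # Response is a random permutation of the treated indexes, so the
        # induced partition of the covariate space is itself random.
        perm = rng.permutation(len(Xt)).astype(float)
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                     random_state=int(rng.integers(1 << 31)))
        tree.fit(Xt, perm)
        # Drop both groups down the tree and match within terminal nodes.
        leaf_t = tree.apply(Xt)
        leaf_c = tree.apply(Xc)
        diffs, weights = [], []
        for leaf in np.unique(leaf_t):
            in_t, in_c = leaf_t == leaf, leaf_c == leaf
            if in_c.any():                      # node contains both groups
                diffs.append(yt[in_t].mean() - yc[in_c].mean())
                weights.append(in_t.sum())
        if diffs:
            estimates.append(np.average(diffs, weights=weights))
    return float(np.mean(estimates))
```

    Averaging the within-node differences, weighted here by the number of treated units per node, is one plausible reading of "matching treated and untreated in the same terminal nodes"; nodes containing only one group simply contribute nothing, which is also how the method signals that the two samples are hard to match.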

    Characterizing and Extending Answer Set Semantics using Possibility Theory

    Answer Set Programming (ASP) is a popular framework for modeling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP, in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier. Comment: 39 pages and a 16-page appendix with proofs. This article has been accepted for publication in Theory and Practice of Logic Programming, copyright Cambridge University Press.
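    To make the "rules as constraints on possibility distributions" idea concrete, here is a toy brute-force checker in Python. It uses the classical possibilistic-logic reading of a weighted rule, N(body -> head) >= weight, where N is the necessity measure induced by a possibility distribution over interpretations; the paper's semantics refine this reading, so the snippet is background intuition rather than the exact definitions. All names and the example program are assumptions of this sketch.

```python
from itertools import product

def worlds(atoms):
    """All interpretations as dicts atom -> bool."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))]

def possibility(pi, formula):
    """Pi(formula) = max possibility over worlds satisfying the formula."""
    return max((p for w, p in pi.items() if formula(dict(w))), default=0.0)

def necessity(pi, formula):
    """N(formula) = 1 - Pi(not formula)."""
    return 1.0 - possibility(pi, lambda w: not formula(w))

def satisfies(pi, rules):
    """Check that pi meets every constraint N(body -> head) >= weight."""
    return all(
        necessity(pi, lambda w, h=head, b=body: h(w) or not b(w)) >= weight
        for head, body, weight in rules
    )

# Toy program: fact "b." with certainty 1.0 and rule "a :- b." with certainty 0.7.
atoms = ["a", "b"]
rules = [
    (lambda w: w["b"], lambda w: True, 1.0),
    (lambda w: w["a"], lambda w: w["b"], 0.7),
]
# One candidate (normalized) possibility distribution, keyed by interpretation.
pi = {frozenset(w.items()): p
      for w, p in zip(worlds(atoms), [0.0, 0.3, 0.0, 1.0])}
print(satisfies(pi, rules))   # True: pi meets both constraints, and N(a) = 0.7
```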

    Towards the implementation of a preference- and uncertain-aware solver using answer set programming

    Logic programs with possibilistic ordered disjunction (or LPPODs) are a recently defined logic-programming framework based on logic programs with ordered disjunction and possibilistic logic. The framework inherits the properties of these formalisms and, by merging them, supports reasoning that is nonmonotonic, preference- and uncertain-aware. The LPPODs syntax allows one to specify 1) preferences in a qualitative way, and 2) necessity values expressing the certainty of program clauses. As a result, at the semantic level, preferences and necessity values can be used to specify an order among program solutions. This class of programs therefore fits well the representation of decision problems in which a best option has to be chosen taking into account both preferences and necessity measures about the information. In this paper we study the computation and the complexity of the LPPODs semantics and we describe an algorithm for its implementation following an Answer Set Programming approach. We describe some decision scenarios where the solver can be used to choose the best solutions by checking whether one outcome is possibilistically preferred over another, considering preferences and uncertainty at the same time. Postprint (published version).
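    The sketch below illustrates, in simplified form, how such an ordering among solutions could be computed: each ordered-disjunction rule is satisfied to a degree (1 for the most preferred option, 2 for the next, and so on), and each candidate answer set carries a necessity degree inherited from the certainty of the rules used to derive it. The Pareto comparison on degrees with necessity as a tie-breaker is one plausible simplification, not the exact LPPODs preference relation, and all names and the example are assumptions of this sketch.

```python
def rule_degree(options, answer_set):
    """Satisfaction degree of an ordered disjunction 'o1 x o2 x ...'."""
    for i, option in enumerate(options, start=1):
        if option in answer_set:
            return i
    return len(options) + 1          # no option satisfied

def preferred(cand_a, cand_b, ordered_rules):
    """True if candidate A is preferred to (or as certain as) candidate B."""
    set_a, necessity_a = cand_a
    set_b, necessity_b = cand_b
    deg_a = [rule_degree(r, set_a) for r in ordered_rules]
    deg_b = [rule_degree(r, set_b) for r in ordered_rules]
    if all(x <= y for x, y in zip(deg_a, deg_b)) and deg_a != deg_b:
        return True                  # Pareto-better on the preference degrees
    if deg_a == deg_b:
        return necessity_a >= necessity_b   # equally preferred: more certain wins
    return False

# Two ordered-disjunction rules: prefer 'train' over 'bus', 'early' over 'late'.
ordered_rules = [("train", "bus"), ("early", "late")]
cand_a = ({"train", "late"}, 0.8)    # (answer set, necessity degree)
cand_b = ({"bus", "late"}, 0.9)
print(preferred(cand_a, cand_b, ordered_rules))   # True: A Pareto-dominates B
```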

    Sequential Predictions based on Algorithmic Complexity

    This paper studies sequence prediction based on the monotone Kolmogorov complexity Km = -log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonoff's universal prior M, the latter being an excellent predictor in deterministic as well as probabilistic environments, where performance is measured in terms of convergence of posteriors or losses. Despite this closeness to M, it is difficult to assess the prediction quality of m, since little is known about the closeness of their posteriors, which are the important quantities for prediction. We show that for deterministic computable environments, the "posterior" and losses of m converge, but rapid convergence could only be shown on-sequence; the off-sequence convergence can be slow. In probabilistic environments, neither the posterior nor the losses converge, in general. Comment: 26 pages, LaTeX.
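    Km and Solomonoff's M are incomputable, so the snippet below only illustrates the shape of complexity-based (one-part MDL) prediction: it replaces -log m(x) with the zlib-compressed length of x, a crude and non-universal proxy, and predicts the continuation whose extended sequence has the shorter description. This is an assumed illustration of the general idea, not the paper's method or results.

```python
import zlib

def code_length(bits: str) -> int:
    """Compressed length in bits, as a rough stand-in for Km(bits)."""
    return 8 * len(zlib.compress(bits.encode()))

def predict_next(history: str, alphabet=("0", "1")) -> str:
    """Predict the symbol whose continuation has the shortest description."""
    return min(alphabet, key=lambda s: code_length(history + s))

# On a deterministic computable sequence the predictor should lock on quickly.
history = "01" * 50                 # the environment: alternating bits
print(predict_next(history))        # typically "0", continuing the 01-pattern
```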