
    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm to model and solve difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions. Comment: Revised version
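
    The core idea of a closure can be illustrated without the paper's machinery: treat the uncertain data as a set of possible realizations and keep every assignment that solves at least one realization, so no solution of the user's true problem is lost. Below is a minimal Python sketch with a toy constraint x + y = d whose datum d is only known to lie in a small set; all names are illustrative, not the paper's resolution forms.

        from itertools import product

        def solutions(domains, constraints):
            # Brute force: every total assignment satisfying all constraints.
            return [a for a in product(*domains) if all(c(a) for c in constraints)]

        # Toy uncertain CSP: x + y == d with x, y in 0..4, where the datum d
        # is incomplete -- only known to lie in {3, 4}.
        domains = [range(5), range(5)]
        possible_d = [3, 4]

        # Full closure: the union of the solution sets over every realization
        # of d. Whichever realization is the user's true problem, its solution
        # set is contained in the closure, so the closure is reliable.
        closure = set()
        for d in possible_d:
            closure.update(solutions(domains, [lambda a, d=d: a[0] + a[1] == d]))

        print(sorted(closure))

    A solver that committed to a single guess for d could return solutions that are wrong for the true problem; the closure avoids that at the cost of a weaker, set-valued answer.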

    Computing Least Fixed Points of Probabilistic Systems of Polynomials

    We study systems of equations of the form X1 = f1(X1, ..., Xn), ..., Xn = fn(X1, ..., Xn), where each fi is a polynomial with nonnegative coefficients that add up to 1. The least nonnegative solution, say mu, of such equation systems is central to problems from various areas, like physics, biology, computational linguistics and probabilistic program verification. We give a simple and strongly polynomial algorithm to decide whether mu = (1, ..., 1) holds. Furthermore, we present an algorithm that computes reliable sequences of lower and upper bounds on mu, converging linearly to mu. Our algorithm has these features despite using inexact arithmetic for efficiency. We report on experiments that show the performance of our algorithms. Comment: Published in the Proceedings of the 27th International Symposium on Theoretical Aspects of Computer Science (STACS). Technical Report is also available via arxiv.org
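
    The lower-bound half of this setting can be sketched with plain Kleene iteration: since each fi has nonnegative coefficients, f is monotone on the nonnegative orthant, so iterating f from the zero vector yields a monotonically increasing sequence of certified lower bounds converging to mu. The Python sketch below illustrates only the fixed-point setting, not the paper's strongly polynomial decision procedure or its bounding scheme.

        def kleene_lower_bounds(f, n, iters=1000):
            # Kleene iteration from the zero vector: monotonicity of f makes
            # the iterates increase towards the least fixed point mu, so each
            # iterate is a certified lower bound on mu.
            x = [0.0] * n
            for _ in range(iters):
                x = f(x)
            return x

        # 1-D example: X = p*X^2 + (1-p), the extinction probability of a
        # binary branching process; the least solution is 1 if p <= 1/2,
        # and (1-p)/p otherwise.
        p = 0.7
        f = lambda x: [p * x[0] ** 2 + (1 - p)]
        print(kleene_lower_bounds(f, 1)[0])  # approaches 3/7 = 0.42857... from below

    Deciding mu = (1, ..., 1) exactly, as the paper does, requires more than iteration, since the sequence never reaches its limit in finitely many steps.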

    Estimating a social accounting matrix using entropy difference methods

    There is a continuing need to use recent and consistent multisectoral economic data to support policy analysis and the development of economywide models. Updating and estimating input-output tables and Social Accounting Matrices (SAMs) for a recent year is a difficult and challenging problem. Typically, input-output data are collected at long intervals (usually five years or more), while national income and product data are available annually, but with a lag. Supporting data also come from a variety of sources, e.g., censuses of manufacturing, labor surveys, agricultural data, government accounts, international trade accounts, and household surveys. The traditional RAS approach requires that we start with a consistent SAM for a particular period and “update” it for a later period given new information on row and column sums. This paper extends the RAS method by proposing a flexible entropy difference approach to estimating a consistent SAM starting from inconsistent data estimated with error, a common experience in many countries. The method is flexible and powerful when dealing with scattered and inconsistent data. It allows incorporating errors in variables, inequality constraints, and prior knowledge about any part of the SAM (not just row and column sums). Since the input-output accounts are contained within the SAM framework, updating an input-output table can be viewed as a special case of the general SAM estimation problem. The paper presents the structure of a SAM and a mathematical description of the estimation problem. It then describes the classical RAS procedure and the entropy difference approach. An example of the entropy difference approach applied to the case of Mozambique is presented. In addition, an appendix includes a listing of the computer code in the GAMS language used in the procedure. Keywords: Social accounting; Mozambique; Estimation theory
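
    For concreteness, the classical RAS procedure that the paper generalizes is biproportional scaling: alternately rescale the rows and columns of the prior matrix until its margins match the new totals. Here is a minimal NumPy sketch, assuming a strictly positive prior and mutually consistent targets; the entropy difference estimation itself is solved as a nonlinear program (in GAMS, per the paper's appendix), which this sketch does not attempt.

        import numpy as np

        def ras(A, row_targets, col_targets, tol=1e-10, max_iter=1000):
            # Biproportional (RAS) update: alternately rescale rows and
            # columns of the prior matrix until its margins match the
            # new row and column totals.
            X = A.astype(float).copy()
            for _ in range(max_iter):
                X *= (row_targets / X.sum(axis=1))[:, None]  # match row sums
                X *= (col_targets / X.sum(axis=0))[None, :]  # match column sums
                if np.abs(X.sum(axis=1) - row_targets).max() < tol:
                    break
            return X

        # Toy 2x2 block: update an old table to new, consistent margins
        # (row and column targets both sum to 12).
        A = np.array([[3.0, 1.0], [2.0, 4.0]])
        print(ras(A, row_targets=np.array([5.0, 7.0]),
                     col_targets=np.array([6.0, 6.0])))

    RAS can only use row and column sums; the entropy difference approach replaces this with a general estimation problem that can also absorb cell-level priors, inequality constraints, and measurement error.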

    From error bounds to the complexity of first-order descent methods for convex functions

    This paper shows that error bounds can be used as effective tools for deriving complexity results for first-order descent methods in convex minimization. In a first stage, this objective led us to revisit the interplay between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can show the equivalence between the two concepts for convex functions having a moderately flat profile near the set of minimizers (such as functions with Hölderian growth). A counterexample shows that the equivalence is no longer true for extremely flat functions. This fact reveals the relevance of an approach based on the KL inequality. In a second stage, we show how KL inequalities can in turn be employed to compute new complexity bounds for a wealth of descent methods for convex problems. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence in the spirit of the famous majorant method of Kantorovich. Our result applies to a very simple abstract scheme that covers a wide class of descent methods. As a byproduct of our study, we also provide new results for the globalization of KL inequalities in the convex framework. Our main results inaugurate a simple methodology: derive an error bound, compute the desingularizing function whenever possible, identify essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Our method is illustrated through projection methods for feasibility problems, and through the famous iterative shrinkage-thresholding algorithm (ISTA), for which we show that the complexity bound is of the form O(q^k), where the constituents of the bound only depend on error bound constants obtained for an arbitrary least squares objective with ℓ^1 regularization.
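
    For reference, the ISTA iteration whose complexity the paper bounds alternates a gradient step on the least squares term with the soft-thresholding proximal operator of the ℓ^1 term. Below is a minimal NumPy sketch of the standard method; the step size and test problem are illustrative, and the paper's contribution is the O(q^k) analysis via error bounds, not the algorithm itself.

        import numpy as np

        def ista(A, b, lam, iters=500):
            # ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step
            # on the smooth part, then the soft-thresholding prox of the
            # l1 term.
            step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = ||A||_2^2
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                z = x - step * A.T @ (A @ x - b)          # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # prox
            return x

        # Illustrative sparse recovery problem.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 50))
        x_true = np.zeros(50)
        x_true[:3] = [1.0, -2.0, 0.5]
        b = A @ x_true
        print(ista(A, b, lam=0.1)[:5])  # sparse estimate close to x_true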