
    The CONEstrip algorithm

    Uncertainty models such as sets of desirable gambles and (conditional) lower previsions can be represented as convex cones. Checking the consistency of, and drawing inferences from, such models requires solving feasibility and optimization problems. We consider such models that are finitely generated. For closed cones, we can use linear programming; for cones based on conditional lower previsions, there is an efficient algorithm using an iteration of linear programs. We present an efficient algorithm for general cones that also uses an iteration of linear programs.
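
    For the closed-cone case mentioned above, the feasibility problem reduces to a single linear program: a gamble belongs to the closed cone generated by finitely many gambles exactly when it is a non-negative linear combination of them. The sketch below illustrates only that special case (not the CONEstrip algorithm for general cones), assuming gambles on a finite possibility space; the function name and data layout are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def in_closed_cone(rays, g):
    """Check whether gamble g lies in the closed cone generated by `rays`.

    rays: (n, m) array, each row a generating gamble on an m-element space.
    g:    (m,) array, the gamble to test.
    Feasibility LP: find lambda >= 0 with rays.T @ lambda == g.
    """
    n, _ = rays.shape
    res = linprog(
        c=np.zeros(n),              # any objective will do; we only need feasibility
        A_eq=rays.T,                # m equality constraints: rays.T @ lambda = g
        b_eq=g,
        bounds=[(0, None)] * n,     # lambda_i >= 0
        method="highs",
    )
    return res.status == 0          # status 0 means a feasible optimum was found
```

    The general algorithm in the paper handles cones that are open or partially open, which is exactly where a single linear program like this one is no longer sufficient.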

    A Propositional CONEstrip Algorithm

    We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations of propositional sentences. The variant differs from the original algorithm in that we apply row generation techniques. The generator problem is WPMaxSAT, an optimization variant of SAT; both can be solved with specialized solvers or integer linear programming techniques. We additionally show how optimization problems over the cone can be solved by using our propositional CONEstrip algorithm as a preprocessor. The algorithm is designed to support consistency and inference computations within the theory of sets of desirable gambles. We also make a link to similar computations in probabilistic logic, conditional probability assessments, and imprecise probability theory.
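
    The abstract notes that the generator problem, WPMaxSAT, can be tackled either with specialized solvers or with integer linear programming. Purely as an illustration of the latter route (and not of the propositional CONEstrip algorithm itself), here is a sketch of the standard encoding of weighted partial MaxSAT as a 0-1 integer program, assuming clauses over variables 1..n_vars given as signed integers; all names are illustrative.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def wpmaxsat_via_ilp(n_vars, hard, soft):
    """Solve a small WPMaxSAT instance as a 0-1 integer linear program.

    n_vars: number of propositional variables (1..n_vars).
    hard:   list of clauses; a clause is a list of non-zero ints,
            +v for the literal v and -v for its negation.
    soft:   list of (weight, clause) pairs.
    Returns (assignment, satisfied soft weight), or None if the hard
    clauses are unsatisfiable.
    """
    n_soft = len(soft)
    n = n_vars + n_soft                      # decision vars: x_1..x_n, s_1..s_K

    def clause_row(clause, s_index=None):
        # Encode "clause satisfied" as  sum_pos x_j - sum_neg x_j >= lb.
        row, lb = np.zeros(n), 1.0
        for lit in clause:
            row[abs(lit) - 1] += 1.0 if lit > 0 else -1.0
            if lit < 0:
                lb -= 1.0                    # each (1 - x_j) contributes a constant 1
        if s_index is not None:
            row[n_vars + s_index] = -1.0     # soft clause must hold only when s_k = 1
            lb -= 1.0
        return row, lb

    rows, lbs = [], []
    for clause in hard:
        row, lb = clause_row(clause)
        rows.append(row); lbs.append(lb)
    for k, (_, clause) in enumerate(soft):
        row, lb = clause_row(clause, k)
        rows.append(row); lbs.append(lb)

    c = np.zeros(n)
    for k, (w, _) in enumerate(soft):
        c[n_vars + k] = -w                   # milp minimises, so negate the weights

    res = milp(c=c,
               constraints=LinearConstraint(np.array(rows), np.array(lbs), np.inf),
               integrality=np.ones(n),       # all variables integer, bounded in [0, 1]
               bounds=Bounds(0, 1))
    if res.status != 0:
        return None
    assignment = {v: bool(round(res.x[v - 1])) for v in range(1, n_vars + 1)}
    return assignment, -res.fun
```

    For instance, wpmaxsat_via_ilp(2, hard=[[1, 2]], soft=[(3, [-1]), (5, [2])]) should return the assignment {1: False, 2: True} with satisfied weight 8. A dedicated WPMaxSAT solver would of course be preferred for large instances.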

    Efficient algorithms for checking avoiding sure loss

    Sets of desirable gambles provide a general representation of uncertainty which can handle partial information in a more robust way than precise probabilities. Here we study the effectiveness of linear programming algorithms for determining whether or not a given set of desirable gambles avoids sure loss (i.e. is consistent). We also suggest improvements to these algorithms specifically for checking avoiding sure loss. By exploiting the structure of the problem, (i) we slightly reduce its dimension, (ii) we propose an extra stopping criterion based on its degenerate structure, and (iii) we show that one can directly calculate feasible starting points in various cases, therefore reducing the effort required in the presolve phase of some of these algorithms. To assess our results, we compare the impact of these improvements on the simplex method and two interior point methods (affine scaling and primal-dual) on randomly generated sets of desirable gambles that either avoid or do not avoid sure loss. We find that the simplex method is outperformed by the primal-dual and affine scaling methods, except for very small problems. We also find that using our starting feasible point and extra stopping criterion considerably improves the performance of the primal-dual and affine scaling methods.
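
    For reference, the basic linear program behind the consistency check is the textbook one: a finite set of desirable gambles on a finite possibility space avoids sure loss exactly when some probability mass function assigns every gamble a non-negative expectation. The sketch below shows only this plain feasibility formulation, without the dimension reduction, extra stopping criterion, or computed starting points proposed in the paper; the names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(gambles):
    """Check whether a finite set of desirable gambles avoids sure loss.

    gambles: (n, m) array, each row a gamble on an m-element possibility space.
    The set avoids sure loss iff there is a probability mass function p with
    E_p[f] >= 0 for every gamble f, i.e. iff the LP below is feasible.
    """
    n, m = gambles.shape
    res = linprog(
        c=np.zeros(m),                   # feasibility only, no objective needed
        A_ub=-gambles,                   # -F p <= 0  is the same as  F p >= 0
        b_ub=np.zeros(n),
        A_eq=np.ones((1, m)),            # probabilities sum to one
        b_eq=np.array([1.0]),
        bounds=[(0, None)] * m,          # p >= 0
        method="highs",
    )
    return res.status == 0
```

    The improvements studied in the paper matter because linear programs of this form can become large when there are many gambles or outcomes and may need to be solved repeatedly.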

    Accept & Reject Statement-Based Uncertainty Models

    We develop a framework for modelling and reasoning with uncertainty based on accept and reject statements about gambles. It generalises the frameworks found in the literature based on statements of acceptability, desirability, or favourability, and clarifies their relative position. Next to the statement-based formulation, we also provide a translation in terms of preference relations, discuss (as a bridge to existing frameworks) a number of simplified variants, and show the relationship with prevision-based uncertainty models. We furthermore provide an application to modelling symmetry judgements.

    Linear programming algorithms for lower previsions

    The thesis begins with a brief summary of linear programming, three methods for solving linear programs (the simplex, affine scaling and primal-dual methods), and a brief review of desirability and lower previsions. The first contribution is to improve these methods for efficiently solving the linear programming problems that arise in checking avoiding sure loss. By exploiting the structure of these linear programs, I reduce their size and propose novel improvements, namely extra stopping criteria and direct ways to calculate feasible starting points in almost all cases. To benchmark the improvements, I present algorithms for generating random sets of desirable gambles that either avoid or do not avoid sure loss. Overall, the affine scaling and primal-dual methods benefit from the improvements, and both outperform the simplex method in most scenarios. Hence, I conclude that the simplex method is not a good choice for checking avoiding sure loss. For small problems, there is no tangible difference in performance between the methods; for large problems, the improved primal-dual method performs at least three times faster than any of the other methods.

    The second contribution is to study checking avoiding sure loss for sets of desirable gambles derived from betting odds. Specifically, in the UK betting market, bookmakers usually provide odds and give a free coupon, which can be spent on betting, to customers who first bet with them. I investigate whether a customer can exploit these odds and the free coupon in order to make a sure gain, and if so, how that can be achieved. To answer this question, I view the odds and the free coupon as a set of desirable gambles and present an algorithm to check whether and how such a set incurs sure loss. I show that the Choquet integral and complementary slackness can be used to answer these questions, and that this can inform customers how much should be placed on each bet in order to make a sure gain. As an illustration, I show an example using actual betting odds in the market, where all sets of desirable gambles derived from those odds avoid sure loss; with a free coupon, however, there are some combinations of bets that customers could place in order to make a guaranteed gain.

    I also consider maximality, a criterion for decision making under uncertainty using lower previsions. I study two existing algorithms, one proposed by Troffaes and Hable (2014) and one by Jansen, Augustin, and Schollmeyer (2017). For the last contribution of the thesis, I present a new algorithm for finding maximal gambles and a new method for generating random decision problems on which to benchmark these algorithms. To find all maximal gambles, Jansen et al. solve one large linear program for each gamble, while in Troffaes and Hable, and also in our new algorithm, this is done by solving a longer sequence of smaller linear programs. For the latter case, I apply the efficient ways of finding a common feasible starting point for this sequence of linear programs developed in the first contribution. Exploiting these feasible starting points, I propose early stopping criteria that further improve the efficiency of the primal-dual method. For benchmarking, we can generate sets of gambles with pre-specified ratios of maximal and interval dominant gambles. I investigate the use of interval dominance at the beginning to eliminate non-maximal gambles, and find that this makes the problem smaller and benefits Jansen et al.'s algorithm, but perhaps surprisingly not the other two algorithms. We find that our algorithm, without using interval dominance, outperforms all the other algorithms in all scenarios in our benchmarking.
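
    To make the two decision criteria concrete, the sketch below contrasts maximality and interval dominance in the simplest possible setting, where the credal set is given by a finite list of extreme points, so every lower or upper expectation is a minimum or maximum over finitely many expectations. The thesis instead works with lower previsions and obtains these quantities by solving linear programs; the code and names here are purely illustrative.

```python
import numpy as np

def maximal_and_interval_dominant(gambles, extreme_points, tol=1e-12):
    """Classify gambles under maximality and interval dominance.

    gambles:        (n, m) array, each row a gamble on an m-element space.
    extreme_points: (k, m) array of probability mass functions whose convex
                    hull is the credal set, so lower/upper expectations are
                    minima/maxima over the k rows.
    Returns two boolean arrays: (maximal, interval_dominant).
    """
    exp = extreme_points @ gambles.T              # (k, n) table of expectations
    lower, upper = exp.min(axis=0), exp.max(axis=0)

    # Interval dominance: gamble f survives unless some g has lower(g) > upper(f).
    interval_dominant = upper >= lower.max() - tol

    # Maximality: f is maximal iff no g has lower expectation of g - f above zero.
    n = gambles.shape[0]
    maximal = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and (exp[:, j] - exp[:, i]).min() > tol:
                maximal[i] = False
                break
    return maximal, interval_dominant
```

    Every maximal gamble is also interval dominant, which is why interval dominance can serve as a cheap pre-filter before the more expensive maximality check; as the benchmarking above indicates, whether that pre-filter actually pays off depends on the algorithm used for the maximality step.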

    Exposing some points of interest about non-exposed points of desirability

    We study the representation of sets of desirable gambles by sets of probability mass functions. Sets of desirable gambles are a very general uncertainty model that may be non-Archimedean, and therefore not representable by a set of probability mass functions. Recently, Cozman (2018) has shown that imposing the additional requirement of even convexity on sets of desirable gambles guarantees that they are representable by a set of probability mass functions. Already more than 20 years earlier, Seidenfeld et al. (1995) gave an axiomatisation of binary preferences (on horse lotteries rather than on gambles) that also leads to a unique representation in terms of sets of probability mass functions. To reach this goal, they use two devices, which we will call ‘SSK–Archimedeanity’ and ‘SSK–extension’. In this paper, we make the arguments of Seidenfeld et al. (1995) explicit in the language of gambles, and show how their ideas imply even convexity and allow for conservative reasoning with evenly convex sets of desirable gambles, by deriving an equivalence between the SSK–Archimedean natural extension, the SSK–extension, and the evenly convex natural extension.

    On imprecision in statistical theory

    This thesis provides an exploration of the interplay between imprecise probability and statistics. Mathematically, one may summarise this relationship as how (Bayesian) sensitivity analysis involving a set of (prior) models can be done in relation to the notion of coherence in the sense of de Finetti [32], Williams [84] and, more recently, Walley [81]. This thesis explores how imprecise probability can be applied to foundational statistical problems. The contributions of this thesis are threefold.

    In Chapter 1, we illustrate and motivate the need for imprecise models due to certain inherent limitations of the elicitation of a statistical model. In Chapter 2, we provide a primer on imprecise probability aimed at the statistics audience, along with illustrative statistical examples and results that highlight salient behaviours of imprecise models from the statistical perspective.

    In the second part of the thesis (Chapters 3, 4, 5), we consider the statistical application of the imprecise Dirichlet model (IDM), an established model in imprecise probability. In particular, we study posterior inference for log-odds statistics under sparse contingency tables, the development and use of imprecise interval estimates via quantile intervals over a set of distributions, and the geometry of the optimisation problem over a set of distributions. Some of these applications require extensions of Walley’s existing framework, which are presented as part of our contribution.

    The third part of the thesis (Chapters 6, 7) departs from the IDM parametric assumption and instead focuses on posterior inference using imprecise models in a finite-dimensional setting when the lower bound of the probability of the data over a set of elicited priors is zero. This setting generalises the problem of zero marginal probability in Bayesian analysis. In Chapter 6, we explore the methodology, behaviour and interpretability of the posterior inference under two established models in imprecise probability: the vacuous and regular extensions. In Chapter 7, we note that these extensions are in fact extremes in imprecision, the variability of an inference over the elicited set of probability distributions. We then consider extensions of intermediate levels of imprecision, and discuss their elicitation and assessment.
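
    For readers unfamiliar with the imprecise Dirichlet model used in the middle chapters, the following minimal sketch gives its standard posterior probability bounds for a single category, with hyperparameter s controlling the degree of imprecision; the function name and interface are illustrative rather than taken from the thesis.

```python
def idm_probability_bounds(counts, category, s=2.0):
    """Posterior lower/upper probability of `category` under the imprecise
    Dirichlet model with hyperparameter s, given multinomial counts.

    With n_j observations of the category out of N in total:
        lower = n_j / (N + s),    upper = (n_j + s) / (N + s).
    """
    total = sum(counts.values())
    n_j = counts.get(category, 0)
    return n_j / (total + s), (n_j + s) / (total + s)

# Example: after observing 6 'a', 3 'b' and 1 'c' with s = 2, the posterior
# probability of 'a' lies between 6/12 = 0.5 and 8/12 ≈ 0.667.
print(idm_probability_bounds({"a": 6, "b": 3, "c": 1}, "a"))
```

    The interval narrows as more observations accumulate, a basic property of the IDM that makes it a natural starting point for the imprecise posterior inference studied in the later chapters.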