
    Linear programming algorithms for lower previsions

    The thesis begins with a brief summary of linear programming, three methods for solving linear programs (the simplex, affine scaling, and primal-dual methods), and a brief review of desirability and lower previsions. The first contribution is to improve these algorithms for efficiently solving the linear programming problems that arise in checking avoiding sure loss. By exploiting the structure of these linear programs, I reduce their size and propose novel improvements, namely extra stopping criteria and direct ways to calculate feasible starting points in almost all cases. To benchmark the improvements, I present algorithms for generating random sets of desirable gambles that either avoid or do not avoid sure loss. Overall, the affine scaling and primal-dual methods benefit from the improvements, and both outperform the simplex method in most scenarios. Hence, I conclude that the simplex method is not a good choice for checking avoiding sure loss. If problems are small, then there is no tangible difference in performance between the methods. For large problems, the improved primal-dual method performs at least three times faster than any of the other methods.
    The second contribution is to study checking avoiding sure loss for sets of desirable gambles derived from betting odds. Specifically, in the UK betting market, bookmakers usually provide odds and give a free coupon, which can be spent on betting, to customers who first bet with them. I investigate whether a customer can exploit these odds and the free coupon in order to make a sure gain, and if so, how that can be achieved. To answer this question, I view the odds and the free coupon as a set of desirable gambles and present an algorithm to check whether and how such a set incurs sure loss. I show that the Choquet integral and complementary slackness can be used to answer these questions, which tells customers how much to place on each bet in order to make a sure gain. As an illustration, I examine actual betting odds in the market, where all sets of desirable gambles derived from those odds avoid sure loss. However, with a free coupon, there are some combinations of bets that customers could place in order to make a guaranteed gain.
    I also consider maximality, a criterion for decision making under uncertainty using lower previsions. I study two existing algorithms, one proposed by Troffaes and Hable (2014), and one by Jansen, Augustin, and Schollmeyer (2017). For the last contribution of the thesis, I present a new algorithm for finding maximal gambles and provide a new method for generating random decision problems to benchmark these algorithms on generated sets. To find all maximal gambles, Jansen et al. solve one large linear program for each gamble, whereas in Troffaes and Hable, and also in my new algorithm, this is done by solving a longer sequence of smaller linear programs. For the latter case, I apply efficient ways from the first contribution to find a common feasible starting point for this sequence of linear programs. Exploiting these feasible starting points, I propose early stopping criteria that further improve the efficiency of the primal-dual method. For benchmarking, I generate sets of gambles with pre-specified ratios of maximal and interval dominant gambles. I also investigate the use of interval dominance at the start to eliminate non-maximal gambles. I find that this can make the problem smaller and benefits Jansen et al.'s algorithm, but perhaps surprisingly, not the other two algorithms. I find that my algorithm, without using interval dominance, outperforms all other algorithms in all scenarios in my benchmarking.
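
    The core consistency check in the first contribution reduces to linear programming. By duality, a finite set of desirable gambles avoids sure loss exactly when some probability mass function gives every gamble a nonnegative expectation, which is a plain LP feasibility problem. Below is a minimal sketch of that check using scipy's off-the-shelf HiGHS solver, standing in for the thesis's improved simplex, affine scaling, and primal-dual implementations.

```python
import numpy as np
from scipy.optimize import linprog

def avoids_sure_loss(G):
    """Check whether the gambles (rows of G, one column per outcome)
    avoid sure loss, i.e. whether some probability mass function p
    satisfies G @ p >= 0 (the standard dual characterisation)."""
    n, m = G.shape
    res = linprog(c=np.zeros(m),              # pure feasibility: zero objective
                  A_ub=-G, b_ub=np.zeros(n),  # -G @ p <= 0, i.e. G @ p >= 0
                  A_eq=np.ones((1, m)), b_eq=[1.0],
                  bounds=[(0, 1)] * m,
                  method="highs")
    return res.status == 0                    # status 2 means infeasible

print(avoids_sure_loss(np.array([[1., -1.], [-1., 1.]])))   # True
print(avoids_sure_loss(np.array([[-1., -2.], [-2., -1.]]))) # False: sure loss
```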

    Regret-based budgeted decision rules under severe uncertainty

    One way to make decisions under uncertainty is to select an optimal option from a possible range of options, by maximizing the expected utilities derived from a probability model. However, under severe uncertainty, identifying precise probabilities is hard. For this reason, imprecise probability represents uncertainty through convex sets of probabilities, and considers decision rules that can return multiple options to reflect insufficient information. Many well-founded decision rules have been studied in the past, but none of these standard rules can control the number of returned alternatives. This can be a problem for large decision problems, due to the cognitive burden decision makers face when presented with a large number of alternatives. Our contribution proposes regret-based ideas to construct new decision rules which return a bounded number of options, where the limit on the number of options is set in advance by the decision maker as an expression of their cognitive limitations. We also study their consistency and numerical behaviour.
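
    The abstract does not spell the new rules out, but the flavour can be illustrated with a toy heuristic (my construction, not the paper's): given a finite approximation of the credal set, greedily grow a set of at most k actions so that the worst-case regret of the set, over all probability vectors, stays small.

```python
import numpy as np

def greedy_budgeted_choice(U, P, k):
    """Pick at most k actions (rows of U) greedily so that the worst-case
    regret over the pmfs in P (rows) is small. An illustrative heuristic
    only; the paper's regret-based rules are not specified in the abstract.
    """
    EU = U @ P.T                  # expected utility of each action under each pmf
    best = EU.max(axis=0)         # best attainable expected utility per pmf
    chosen, remaining = [], list(range(U.shape[0]))
    while len(chosen) < k and remaining:
        regret = {a: (best - EU[chosen + [a]].max(axis=0)).max()
                  for a in remaining}
        a_star = min(regret, key=regret.get)
        chosen.append(a_star)
        remaining.remove(a_star)
        if regret[a_star] <= 0:   # the set already matches the best action everywhere
            break
    return chosen

U = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])  # actions x states
P = np.array([[0.9, 0.1], [0.1, 0.9]])              # two extreme pmfs
print(greedy_budgeted_choice(U, P, 2))
```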

    Evaluating betting odds and free coupons using desirability

    In the UK betting market, bookmakers often offer a free coupon to new customers. These free coupons allow the customer to place extra bets, at lower risk, in combination with the usual betting odds. We are interested in whether a customer can exploit these free coupons in order to make a sure gain, and if so, how the customer can achieve this. To answer this question, we evaluate the odds and free coupons as a set of desirable gambles for the bookmaker. We show that we can use the Choquet integral to check whether this set of desirable gambles incurs sure loss for the bookmaker, and hence, results in a sure gain for the customer. In the latter case, we also show how a customer can determine the combination of bets that makes the best possible gain, based on complementary slackness. As an illustration, we look at some actual betting odds in the market and find that, without free coupons, the set of desirable gambles derived from those odds avoids sure loss. However, with free coupons, we identify some combinations of bets that customers could place in order to make a guaranteed gain.
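
    For intuition, the discrete Choquet integral used here has a simple closed form once the capacity (in the paper, a lower probability derived from the odds) is given: sort the gamble's payoffs and weight the increments by the capacity of the upper level sets. A generic sketch, with the capacity passed in as a function on sets of outcome indices:

```python
import numpy as np

def choquet(f, nu):
    """Discrete Choquet integral of payoff vector f with respect to a
    capacity nu: nu maps a frozenset of outcome indices to [0, 1], with
    nu(empty set) = 0, nu(all outcomes) = 1, and nu monotone."""
    f = np.asarray(f, dtype=float)
    order = np.argsort(f)                      # outcomes by ascending payoff
    total = f[order[0]]                        # f_(1) * nu(Omega), nu(Omega) = 1
    for i in range(1, len(f)):
        upper = frozenset(order[i:].tolist())  # level set where f >= f_(i)
        total += (f[order[i]] - f[order[i - 1]]) * nu(upper)
    return total

# Example capacity: lower envelope of two pmfs (a coherent lower probability).
p1, p2 = np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.3, 0.5])
nu = lambda A: min(sum(p1[i] for i in A), sum(p2[i] for i in A))
print(choquet([10.0, -5.0, 2.0], nu))
```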

    Improving and benchmarking of algorithms for Γ-maximin, Γ-maximax and interval dominance

    Γ-maximin, Γ-maximax and interval dominance are familiar decision criteria for making decisions under severe uncertainty, when probability distributions can only be partially identified. One can apply these three criteria by solving sequences of linear programs. In this study, we present new algorithms for these criteria and compare their performance to existing standard algorithms. Specifically, we use efficient ways, based on previous work, to find common initial feasible points for these algorithms. Exploiting these initial feasible points, we develop early stopping criteria to determine whether gambles are Γ-maximin, Γ-maximax, or interval dominant. We observe that the primal-dual interior point method benefits considerably from these improvements. In our simulation, we find that our proposed algorithms outperform the standard algorithms when the size of the domain of the lower previsions is less than or equal to the number of decisions and outcomes. However, our proposed algorithms do not outperform the standard algorithms when the size of the domain of the lower previsions is much larger than the number of decisions and outcomes.
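
    Concretely, each criterion rests on lower (or upper) expectations computed as linear programs over the credal set induced by the lower previsions. A minimal sketch of Γ-maximin along these lines, with scipy's generic solver standing in for the paper's tuned implementations; Γ-maximax is analogous, using the upper expectation -E_lower(-g).

```python
import numpy as np
from scipy.optimize import linprog

def lower_expectation(g, F, lp):
    """Natural-extension lower expectation of gamble g: minimise E_p[g] over
    pmfs p with E_p[f_j] >= lp[j] for each row f_j of F. Assumes the lower
    previsions avoid sure loss, so the feasible region is nonempty."""
    m = len(g)
    res = linprog(c=g,
                  A_ub=-F, b_ub=-np.asarray(lp, dtype=float),
                  A_eq=np.ones((1, m)), b_eq=[1.0],
                  bounds=[(0, 1)] * m, method="highs")
    return res.fun

def gamma_maximin(G, F, lp):
    """Index of the gamble (row of G) with the largest lower expectation."""
    return int(np.argmax([lower_expectation(g, F, lp) for g in G]))

F = np.array([[1.0, 0.0, 0.0]])                   # one assessed gamble
lp = [0.2]                                        # its lower prevision
G = np.array([[1.0, 0.0, 0.0], [0.4, 0.4, 0.4]])  # options to compare
print(gamma_maximin(G, F, lp))  # 1: constant 0.4 beats lower expectation 0.2
```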

    A mixture Weibull-Rayleigh distribution and its application

    In this paper, we introduce a mixture Weibull-Rayleigh (MWR) distribution, generated as a two-component mixture of the Weibull-Rayleigh and length-biased Weibull-Rayleigh distributions. We study its properties, such as the rth moment, the survival function, and the sub-models of the MWR distribution. We use maximum likelihood estimation, the maximum product of spacings estimators, the Anderson-Darling minimum distance estimators, and the Cramér-von Mises minimum distance estimators to estimate the parameters of the MWR distribution. Comparing the MWR distribution with the lognormal, Weibull-Rayleigh, length-biased Weibull-Rayleigh, mixture generalized gamma, and mixture exponentiated inverted Weibull distributions, we present an application of the MWR distribution to fitting hydrological datasets. We find that the MWR distribution provides a better fit than these alternatives, and we therefore apply it to predict the return periods of such data.
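
    The abstract does not reproduce the MWR density, so as a stand-in the following sketch contrasts two of the estimation methods it compares, maximum likelihood and maximum product of spacings, on a plain Rayleigh sample; the same pattern applies to the MWR model once its density and CDF are coded up.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rayleigh

x = np.sort(rayleigh.rvs(scale=2.0, size=200, random_state=0))

# Maximum likelihood: maximise the sum of log densities.
mle = minimize_scalar(lambda s: -rayleigh.logpdf(x, scale=s).sum(),
                      bounds=(0.1, 10.0), method="bounded")

# Maximum product of spacings: maximise the summed log of the CDF spacings
# F(x_(i+1)) - F(x_(i)), padding the sorted sample with F = 0 and F = 1.
def neg_log_spacings(s):
    cdf = np.concatenate(([0.0], rayleigh.cdf(x, scale=s), [1.0]))
    return -np.sum(np.log(np.diff(cdf).clip(min=1e-300)))

mps = minimize_scalar(neg_log_spacings, bounds=(0.1, 10.0), method="bounded")
print(mle.x, mps.x)   # both estimates should land near the true scale 2.0
```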

    Efficient algorithms for checking avoiding sure loss.

    Sets of desirable gambles provide a general representation of uncertainty which can handle partial information in a more robust way than precise probabilities. Here we study the effectiveness of linear programming algorithms for determining whether or not a given set of desirable gambles avoids sure loss (i.e. is consistent). We also suggest improvements to these algorithms specifically for checking avoiding sure loss. By exploiting the structure of the problem, (i) we slightly reduce its dimension, (ii) we propose an extra stopping criterion based on its degenerate structure, and (iii) we show that one can directly calculate feasible starting points in various cases, therefore reducing the effort required in the presolve phase of some of these algorithms. To assess our results, we compare the impact of these improvements on the simplex method and two interior point methods (affine scaling and primal-dual) on randomly generated sets of desirable gambles that either avoid or do not avoid sure loss. We find that the simplex method is outperformed by the primal-dual and affine scaling methods, except for very small problems. We also find that using our starting feasible point and extra stopping criterion considerably improves the performance of the primal-dual and affine scaling methods.
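
    The random test sets can be generated with certificates built in. One simple construction (not necessarily the paper's): shift each gamble so a hidden pmf gives it nonnegative expectation, which guarantees avoiding sure loss, or shift all gambles so their uniform mixture is negative everywhere, which guarantees sure loss. These sets can then be fed to a checker such as the avoids_sure_loss sketch earlier.

```python
import numpy as np

def random_gamble_set(n, m, avoid, seed=42):
    """Sample n gambles on m outcomes that provably avoid sure loss or
    provably incur it. A simple construction for benchmarking; the paper's
    own generators may differ."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, m))
    if avoid:
        p = rng.dirichlet(np.ones(m))          # hidden pmf
        G -= np.minimum(G @ p, 0.0)[:, None]   # now E_p[g] >= 0 for every gamble,
                                               # so every convex combination has sup >= 0
    else:
        mix = G.mean(axis=0)                   # uniform mixture of the gambles
        G -= mix.max() + 0.1                   # now that mixture is < 0 everywhere
    return G
```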

    Decision making under severe uncertainty on a budget

    Convex sets of probabilities are general models to describe and reason with uncertainty. Moreover, robust decision rules defined for them enable one to make cautious inferences by allowing sets of optimal actions to be returned, reflecting lack of information. One caveat of such rules, though, is that the number of returned actions is only bounded by the number of possible actions, which can be huge, for instance in combinatorial optimisation problems. For this reason, we propose and discuss new decision rules whose number of returned actions is bounded by a fixed value, and study their consistency and numerical behaviour.

    Improving and benchmarking of algorithms for decision making with lower previsions

    Maximality, interval dominance, and E-admissibility are three well-known criteria for decision making under severe uncertainty using lower previsions. We present a new fast algorithm for finding maximal gambles. We compare its performance to existing algorithms, one proposed by Troffaes and Hable (2014), and one by Jansen, Augustin, and Schollmeyer (2017). To do so, we develop a new method for generating random decision problems with pre-specified ratios of maximal and interval dominant gambles. Based on earlier work, we present efficient ways to find common feasible starting points in these algorithms. We then exploit these feasible starting points to develop early stopping criteria for the primal-dual interior point method, further improving efficiency. We find that the primal-dual interior point method works best. We also investigate the use of interval dominance to eliminate non-maximal gambles. This can make the problem smaller, and we observe that this benefits Jansen et al.'s algorithm, but perhaps surprisingly, not the other two algorithms. We find that our algorithm, without using interval dominance, outperforms all other algorithms in all scenarios in our benchmarking.
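
    As a reference point for what these algorithms compute, here is a naive maximality check built on the lower_expectation sketch given earlier for the Γ-maximin paper: a gamble is discarded when some other gamble's advantage over it has strictly positive lower expectation, with an optional interval-dominance pre-filter applied first. The paper's algorithms avoid solving each of these linear programs from scratch.

```python
import numpy as np
# reuses lower_expectation(g, F, lp) from the Γ-maximin sketch above

def maximal_gambles(G, F, lp):
    """Indices of maximal gambles in G. Naive version: one LP per pair,
    after an interval-dominance pre-filter (a gamble whose upper expectation
    falls below the best lower expectation can never be maximal)."""
    le = lambda g: lower_expectation(g, F, lp)
    lows = np.array([le(g) for g in G])
    ups = np.array([-le(-g) for g in G])      # upper expectation = -E_lower(-g)
    keep = [i for i in range(len(G)) if ups[i] >= lows.max()]
    return [i for i in keep
            if all(le(G[j] - G[i]) <= 0 for j in keep if j != i)]
```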

    Improved linear programming methods for checking avoiding sure loss

    We review the simplex method and two interior-point methods (the affine scaling and the primal-dual) for solving linear programming problems for checking avoiding sure loss, and propose novel improvements. We exploit the structure of these problems to reduce their size. We also present an extra stopping criterion, and direct ways to calculate feasible starting points in almost all cases. For benchmarking, we present algorithms for generating random sets of desirable gambles that either avoid or do not avoid sure loss. We test our improvements on these linear programming methods by measuring the computational time on these generated sets. We assess the relative performance of the three methods as a function of the number of desirable gambles and the number of outcomes. Overall, the affine scaling and primal-dual methods benefit from the improvements, and they both outperform the simplex method in most scenarios. We conclude that the simplex method is not a good choice for checking avoiding sure loss. If problems are small, then there is no tangible difference in performance between all methods. For large problems, our improved primal-dual method performs at least three times faster than any of the other methods.
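
    For a rough sense of the simplex-versus-interior-point comparison without the paper's custom implementations, one can time scipy's two HiGHS back ends (dual simplex and interior point) on the same feasibility LP; the avoiding-sure-loss set is built as in the generator sketched earlier, and the problem sizes here are arbitrary.

```python
import time
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, m = 200, 400                        # gambles x outcomes, chosen arbitrarily
G = rng.standard_normal((n, m))
p = rng.dirichlet(np.ones(m))
G -= np.minimum(G @ p, 0.0)[:, None]   # this set avoids sure loss by construction

for method in ("highs-ds", "highs-ipm"):   # dual simplex vs interior point
    t0 = time.perf_counter()
    res = linprog(np.zeros(m), A_ub=-G, b_ub=np.zeros(n),
                  A_eq=np.ones((1, m)), b_eq=[1.0],
                  bounds=[(0, 1)] * m, method=method)
    print(method, "status", res.status, f"{time.perf_counter() - t0:.3f}s")
```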

    Prediction of survival and analysis of prognostic factors for hepatocellular carcinoma: a 20-year of imaging diagnosis in Upper Northern Thailand

    Background: To evaluate survival rates of hepatocellular carcinoma (HCC), the Chiang Mai Cancer Registry provided characteristics data on 6276 HCC patients diagnosed between 1998 and 2020, grouped by the evolution of imaging diagnosis. This evolution can be separated into four cohorts: cohort 1 (1990-2005), when ultrasound (US) and single-phase computed tomography (CT) were available; cohort 2 (2006-2009), when one multi-phase CT and one magnetic resonance imaging (MRI) scanner were added; cohort 3 (2010-2015), when MRI with LI-RADS was added; and cohort 4 (2016-2020), when two upgraded MRI scanners with LI-RADS were added.
    Methods: Cox proportional hazards models were used to determine the relation between death and risk factors, including method of imaging diagnosis, gender, age at diagnosis, tumor stage, and history of smoking and alcohol use, while Kaplan-Meier curves were used to calculate survival rates.
    Results: The median age at diagnosis was 57.0 years (IQR: 50.0-65.0) and the median survival time was 5.8 months (IQR: 1.9-26.8) during the follow-up period. In the univariable analysis, all factors except age at diagnosis were associated with a higher risk of death in HCC patients. In the multivariable analysis, elderly age at diagnosis, regional and metastatic stages, and the advanced methods of imaging diagnosis used during cohorts 2 and 3 were independently associated with the risk of death. The survival rate of patients diagnosed during cohort 4 was significantly higher than in the other cohorts.
    Conclusion: Given the significantly higher survival rate of HCC patients in cohort 4, advanced methods of diagnostic imaging can be part of the recommendation for diagnosing HCC.
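
    The modelling pipeline described here is straightforward to reproduce on any dataset with the same columns; the registry data itself is not public, so the sketch below runs the same two steps, a Cox proportional hazards fit and a Kaplan-Meier estimate, on synthetic stand-in data using the lifelines library.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Synthetic stand-in with the same shape as the registry variables.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "months": rng.exponential(12.0, n),    # follow-up time
    "death": rng.integers(0, 2, n),        # 1 = died, 0 = censored
    "age": rng.normal(57.0, 10.0, n),
    "metastatic": rng.integers(0, 2, n),
})

cph = CoxPHFitter()                        # hazard ratios for each covariate
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()

km = KaplanMeierFitter()                   # overall survival curve
km.fit(df["months"], event_observed=df["death"])
print(km.median_survival_time_)
```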