10 research outputs found

    Evaluating betting odds and free coupons using desirability

    In the UK betting market, bookmakers often offer a free coupon to new customers. These free coupons allow the customer to place extra bets, at lower risk, in combination with the usual betting odds. We are interested in whether a customer can exploit these free coupons to make a sure gain, and if so, how the customer can achieve this. To answer this question, we evaluate the odds and free coupons as a set of desirable gambles for the bookmaker. We show that the Choquet integral can be used to check whether this set of desirable gambles incurs sure loss for the bookmaker, and hence results in a sure gain for the customer. In the latter case, we also show how a customer can determine the combination of bets that makes the best possible gain, based on complementary slackness. As an illustration, we look at some actual betting odds in the market and find that, without free coupons, the set of desirable gambles derived from those odds avoids sure loss. However, with free coupons, we identify some combinations of bets that customers could place in order to make a guaranteed gain.
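
    The paper's analysis works through desirable gambles, the Choquet integral, and complementary slackness; the basic idea of a sure gain can nevertheless be illustrated with the classical arbitrage condition for decimal odds. If the inverse odds over a set of mutually exclusive, exhaustive outcomes sum to less than one, the customer can split a stake so that every outcome pays the same guaranteed profit. The sketch below shows only this textbook condition, not the paper's method; all names are our own.

```python
def sure_gain(odds, budget=100.0):
    """Check a book of decimal odds for a sure gain (arbitrage).

    odds   -- decimal odds for mutually exclusive, exhaustive outcomes
    budget -- total amount the customer is willing to stake

    Returns (stakes, guaranteed_profit) if a sure gain exists, else None.
    """
    total = sum(1.0 / o for o in odds)     # the "overround" of the book
    if total >= 1.0:
        return None                        # bookmaker avoids sure loss
    # stake on each outcome in proportion to its inverse odds, so that
    # every outcome produces the same total payout
    stakes = [budget * (1.0 / o) / total for o in odds]
    payout = budget / total                # identical for every outcome
    return stakes, payout - budget

# two-outcome book with both sides priced at 2.1: the inverse odds sum
# to about 0.952 < 1, so a sure gain exists
result = sure_gain([2.1, 2.1])
```

With a budget of 100, the stakes come out as 50 on each side and the guaranteed profit as 5, whichever outcome occurs.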

    Improving and benchmarking of algorithms for Γ-maximin, Γ-maximax and interval dominance

    Γ-maximin, Γ-maximax and interval dominance are familiar decision criteria for making decisions under severe uncertainty, when probability distributions can only be partially identified. One can apply these three criteria by solving sequences of linear programs. In this study, we present new algorithms for these criteria and compare their performance to existing standard algorithms. Specifically, we use efficient ways, based on previous work, to find common initial feasible points for these algorithms. Exploiting these initial feasible points, we develop early stopping criteria to determine whether gambles are Γ-maximin, Γ-maximax, or interval dominant. We observe that the primal-dual interior-point method benefits considerably from these improvements. In our simulation, we find that our proposed algorithms outperform the standard algorithms when the size of the domain of lower previsions is less than or equal to the sizes of decisions and outcomes. However, our proposed algorithms do not outperform the standard algorithms when the size of the domain of lower previsions is much larger than the sizes of decisions and outcomes.
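
    To make the three criteria concrete, here is a small brute-force sketch (our own illustration, not the paper's sequential linear programming algorithms). The credal set is represented by a finite list of probability mass functions, so lower and upper expectations reduce to minima and maxima over that list.

```python
def expectations(gamble, pmfs):
    """Lower and upper expectation of a gamble over a finite credal set."""
    values = [sum(p * x for p, x in zip(pmf, gamble)) for pmf in pmfs]
    return min(values), max(values)

def optimal_sets(gambles, pmfs):
    """Indices of the Γ-maximin, Γ-maximax and interval-dominant gambles."""
    bounds = [expectations(g, pmfs) for g in gambles]
    lowers = [lo for lo, _ in bounds]
    uppers = [up for _, up in bounds]
    gamma_maximin = [i for i, lo in enumerate(lowers) if lo == max(lowers)]
    gamma_maximax = [i for i, up in enumerate(uppers) if up == max(uppers)]
    # interval dominance: discard gamble i only if some gamble's lower
    # expectation strictly exceeds i's upper expectation
    interval_dominant = [i for i, up in enumerate(uppers)
                         if up >= max(lowers)]
    return gamma_maximin, gamma_maximax, interval_dominant

# two gambles over two outcomes, credal set with two extreme points
sets = optimal_sets([[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5], [0.3, 0.7]])
```

In this toy example the second gamble is both Γ-maximin and Γ-maximax, while interval dominance retains both gambles, illustrating that it is the least selective of the three criteria.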

    Machine learning for determining lateral flow device results for testing of SARS-CoV-2 infection in asymptomatic populations

    Rapid antigen tests, in the form of lateral flow devices (LFDs), allow testing of a large population for SARS-CoV-2. To reduce the variability seen in device interpretation, we show the design and testing of a machine learning (ML) algorithm. The algorithm is trained on a combination of artificially hybridised LFDs and LFD data linked to RT-qPCR results. Participants are recruited from assisted test sites (ATS) and from health care workers undertaking self-testing, and images are analysed using the ML algorithm. A panel of trained clinicians is used to resolve discrepancies. In total, 115,316 images are returned. In the ATS sub-study, sensitivity increased from 92.08% to 97.6% and specificity from 99.85% to 99.99%. In the self-read sub-study, sensitivity increased from 16.00% to 100%, and specificity from 99.15% to 99.40%. An ML-based classifier of LFD results outperforms human reads in asymptomatic testing sites and self-reading.
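
    The reported gains are stated as sensitivity and specificity, which are computed from confusion-matrix counts against the RT-qPCR reference standard. A minimal helper (our own, purely illustrative):

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity and specificity from confusion-matrix counts,
    with RT-qPCR taken as the reference standard."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# e.g. 90 true positives, 10 false negatives, 999 true negatives,
# 1 false positive gives sensitivity 0.90 and specificity 0.999
sens, spec = sensitivity_specificity(tp=90, fp=1, tn=999, fn=10)
```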

    Entropias e Índices Caudais (Entropies and Tail Indices)

    No full text

    Birthday Problem e Generalizações (The Birthday Problem and Generalizations)

    No full text

    Market Structure with Interacting Consumers

    Economic theory has developed a typology of markets which depends upon the number of firms present. Much of the literature, however, is set in the context of a given market structure, with the consequences of that structure being explored. Considerably less attention is paid to the process by which any particular structure emerges. In this paper, we examine how different types of market structure emerge in new product markets, and in particular in markets which are primarily web-based. A wide range of outcomes is possible. The uncertainty in how market shares evolve in such markets arises not from the various strategies of the firms; instead, it is inherent in the behavioral rule of choice used by consumers. We examine the consequences, for the market structure which emerges, of a realistic behavioral rule for consumer choice in new product markets. The rule has been applied in a range of different empirical contexts, and is essentially based on the model of genetic drift pioneered by Sewall Wright in the inter-war period. We identify the parameter ranges in the model in which the Herfindahl-Hirschman Index is likely to fall within the ranges identified by the US Department of Justice: unconcentrated, moderately concentrated, and highly concentrated markets.
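
    The concentration categories referred to are defined via the Herfindahl-Hirschman Index (HHI), the sum of squared market shares expressed in percentage points. Under the 2010 US DOJ/FTC Horizontal Merger Guidelines, an HHI below 1,500 is unconcentrated, 1,500 to 2,500 moderately concentrated, and above 2,500 highly concentrated. A small sketch (our own code, not the paper's simulation model):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index from market shares that sum to 1.

    Shares are scaled to percentage points, so the index runs from
    near 0 (atomistic market) up to 10,000 (pure monopoly).
    """
    return sum((100.0 * s) ** 2 for s in shares)

def doj_category(index):
    # thresholds from the 2010 US DOJ/FTC Horizontal Merger Guidelines
    if index < 1500:
        return "unconcentrated"
    if index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# four equal firms: HHI = 4 * 25**2 = 2500, moderately concentrated
category = doj_category(hhi([0.25, 0.25, 0.25, 0.25]))
```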

    Improved linear programming methods for checking avoiding sure loss

    We review the simplex method and two interior-point methods (the affine scaling and the primal-dual) for solving linear programming problems for checking avoiding sure loss, and propose novel improvements. We exploit the structure of these problems to reduce their size. We also present an extra stopping criterion, and direct ways to calculate feasible starting points in almost all cases. For benchmarking, we present algorithms for generating random sets of desirable gambles that either avoid or do not avoid sure loss. We test our improvements on these linear programming methods by measuring the computational time on these generated sets. We assess the relative performance of the three methods as a function of the number of desirable gambles and the number of outcomes. Overall, the affine scaling and primal-dual methods benefit from the improvements, and they both outperform the simplex method in most scenarios. We conclude that the simplex method is not a good choice for checking avoiding sure loss. If problems are small, then there is no tangible difference in performance among the methods. For large problems, our improved primal-dual method performs at least three times faster than any of the other methods.
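
    The paper works with linear programming formulations; as a purely illustrative alternative (our own brute-force sketch, tractable only for tiny instances), a set of gambles, each given as a vector of payoffs over finitely many outcomes, incurs sure loss exactly when some nonnegative combination of them has a strictly negative supremum:

```python
from itertools import product

def incurs_sure_loss(gambles, steps=50):
    """Brute-force check (grid over nonnegative weights) of whether some
    nonnegative combination of the gambles has a strictly negative
    supremum over all outcomes; if so, the set incurs sure loss."""
    n_outcomes = len(gambles[0])
    for weights in product(range(steps + 1), repeat=len(gambles)):
        if sum(weights) == 0:
            continue  # the zero combination is uninformative
        # supremum of the combined gamble over all outcomes
        sup = max(sum(w * g[i] for w, g in zip(weights, gambles))
                  for i in range(n_outcomes))
        if sup < 0:
            return True
    return False
```

For example, the pair `[-1, 0.5]` and `[0.5, -1]` sums to `[-0.5, -0.5]`, a combination that loses at every outcome, so the set incurs sure loss; the pair `[-1, 2]` and `[2, -1]` avoids sure loss, since no nonnegative combination is everywhere negative.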

    Bayesian Strategies to Assess Uncertainty in Velocity Models

    Quantifying uncertainty in models derived from observed seismic data is a major issue. In this research, we examine the geological structure of the sub-surface using controlled-source seismology, which gives the data in time and in the distance between the acoustic source and the receiver. Inversion tools exist to map these data into a depth model, but a full exploration of the uncertainty of the model is rarely done, because robust strategies do not exist for large non-linear complex systems. There are two principal sources of uncertainty: the first comes from the input data, which are noisy and band-limited; the second comes from the model parameterisation and the forward algorithm, which approximate the physics to make the problem tractable. To address these issues, we propose a Bayesian approach using the Metropolis-Hastings algorithm.
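
    As a minimal illustration of the sampling machinery, the sketch below runs a random-walk Metropolis-Hastings sampler on a one-dimensional toy posterior (a standard normal). The real problem involves a high-dimensional velocity model with an expensive forward solver; everything here, including the target density, is an assumption made purely for illustration.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, prop_std=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step around
    the current state and accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, prop_std)
        lp_cand = log_post(cand)
        # accept/reject in log space to avoid overflow
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# toy target: standard normal log-density (up to an additive constant)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000)
```

The resulting chain should have a sample mean near 0 and a sample variance near 1; in practice, one would monitor acceptance rates and convergence diagnostics before trusting the posterior summaries.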

    Implementation of a Brazilian Cardioprotective Nutritional (BALANCE) Program for improvement on quality of diet and secondary prevention of cardiovascular events: A randomized, multicenter trial

    Background: Appropriate dietary recommendations represent a key part of secondary prevention in cardiovascular disease (CVD). We evaluated the effectiveness of the implementation of a nutritional program on quality of diet, cardiovascular events, and death in patients with established CVD. Methods: In this open-label, multicenter trial conducted in 35 sites in Brazil, we randomly assigned (1:1) patients aged 45 years or older to receive either the BALANCE Program (experimental group) or conventional nutrition advice (control group). The BALANCE Program included a unique nutritional education strategy to implement recommendations from guidelines, adapted to the use of affordable and regional foods. Adherence to diet was evaluated by the modified Alternative Healthy Eating Index. The primary end point was a composite of all-cause mortality, cardiovascular death, cardiac arrest, myocardial infarction, stroke, myocardial revascularization, amputation, or hospitalization for unstable angina. Secondary end points included biochemical and anthropometric data, and blood pressure levels. Results: From March 5, 2013, to April 7, 2015, a total of 2534 eligible patients were randomly assigned to either the BALANCE Program group (n = 1,266) or the control group (n = 1,268) and were followed up for a median of 3.5 years. In total, 235 (9.3%) participants were lost to follow-up. After 3 years of follow-up, the mean modified Alternative Healthy Eating Index score (scale 0-70) was only slightly higher in the BALANCE group versus the control group (26.2 ± 8.4 vs 24.7 ± 8.6, P <.01), mainly due to a 0.5-serving/d greater intake of fruits and vegetables in the BALANCE group. Primary end point events occurred in 236 participants (18.8%) in the BALANCE group and in 207 participants (16.4%) in the control group (hazard ratio, 1.15; 95% CI 0.95-1.38; P =.15). Secondary end points did not differ between groups after follow-up.
Conclusions: The BALANCE Program only slightly improved adherence to a healthy diet in patients with established CVD and had no significant effect on the incidence of cardiovascular events or death. © 2019 The Author