
    Bayesian Methods applied to Reflection Seismology

    Quantifying uncertainty in models derived from observed seismic data is an important issue in exploration geophysics. In this research we examine the geological structure of the subsurface of the Earth using controlled-source seismology, in which data are recorded in time as a function of the distance between acoustic sources and receivers. There are a number of inversion tools to map data into depth models, but a full exploration of the uncertainty of such models is rarely done because of the lack of robust strategies available for the analysis of large, non-linear, complex systems. In reflection seismology there are three principal sources of uncertainty: the first comes from the input data, which are noisy and band-limited; the second from the modeling assumptions used to approximate the physics of the problem in order to make it tractable; and the last from the ambiguity in data and model selection. The latter is by far the hardest source of uncertainty to assess: not only are there a large number of models which are appropriate for a given seismic profile and still physically and geologically plausible, but the judgement related to the acceptability of a model also varies according to the expert handling the data. The fact that there are many possible solutions, depending on how the problem is treated, adds a new layer of uncertainty to the question. Here we propose a Bayesian approach to assess the uncertainty in velocity models derived from seismic reflection data. We developed a method to identify and track seismic events, the Seismic Event Tracking algorithm, and then created the BRAINS (Bayesian Regression Analysis in Seismology) class of models to estimate velocities, travel times and depths, with associated measures of uncertainty for each identified horizon. Since experts' prior judgements and problem requirements vary according to the situation being analysed, the Bayesian methodology is the most appropriate to create a gray box that accepts the input of prior knowledge but is also able to cope with vague or no prior information; each model in the BRAINS class can be used at different stages of seismic processing, depending on the inputs necessary for the next step of modeling. Moreover, each estimate produced has an uncertainty model attached that can be explored before making a decision. To investigate the robustness of the proposed models, we analysed a series of single- and multi-gather synthetic examples, some of which had attributes that differ from the modeling assumptions or carried ambiguities derived from the limitations of data recording. Finally, we analysed a 2D real data set, part of a seismic survey acquired over the Naturaliste Plateau and Mentelle Basins off the south-west coast of Australia. We show the efficiency of the BRAINS approach on real data and recover velocity and depth models with posterior depth standard errors of at most 0.4% relative to posterior depth means, and posterior RMS velocity standard errors of at most 1.7% relative to posterior RMS velocity means. We also observe that variation in interval velocities is higher, with the ratio of posterior interval velocity standard deviation to mean averaging 2.4% and reaching a maximum of 23.7% in areas of high uncertainty.
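
    The abstract does not spell out the BRAINS machinery, but the flavour of Bayesian travel-time regression can be sketched with a toy example built on the standard normal-moveout relation t(x)^2 = t0^2 + x^2 / v_rms^2, under which squared travel time is linear in squared offset. Everything below (offsets, velocities, noise level, near-flat prior, plug-in noise variance) is an assumed illustration, not the authors' implementation:

```python
# Minimal sketch: Bayesian estimation of RMS velocity for one horizon
# from noisy travel-time picks, assuming the normal-moveout relation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gather for one horizon: offsets in km, travel times in s.
offsets = np.linspace(0.1, 3.0, 30)
t0_true, v_true = 1.2, 2.5                     # s, km/s (assumed values)
times = np.sqrt(t0_true**2 + (offsets / v_true)**2)
times += rng.normal(scale=0.005, size=times.shape)   # noisy, band-limited picks

# Linear model y = X @ beta with y = t^2, X = [1, x^2] and
# beta = (t0^2, 1/v_rms^2); conjugate Gaussian analysis with a
# near-flat prior and a plug-in noise variance for simplicity.
y = times**2
X = np.column_stack([np.ones_like(offsets), offsets**2])
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
sigma2 = np.var(y - X @ beta_ls)
post_cov = np.linalg.inv(1e-9 * np.eye(2) + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y / sigma2)

# Propagate the posterior to v_rms and t0 by Monte Carlo.
draws = rng.multivariate_normal(post_mean, post_cov, size=10_000)
v_draws = 1.0 / np.sqrt(draws[:, 1])
t0_draws = np.sqrt(draws[:, 0])
print(f"posterior v_rms: {v_draws.mean():.2f} +/- {v_draws.std():.2f} km/s")
print(f"posterior t0:    {t0_draws.mean():.3f} +/- {t0_draws.std():.3f} s")
```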

    Evaluating betting odds and free coupons using desirability

    In the UK betting market, bookmakers often offer a free coupon to new customers. These free coupons allow the customer to place extra bets, at lower risk, in combination with the usual betting odds. We are interested in whether a customer can exploit these free coupons in order to make a sure gain, and if so, how the customer can achieve this. To answer this question, we evaluate the odds and free coupons as a set of desirable gambles for the bookmaker. We show that we can use the Choquet integral to check whether this set of desirable gambles incurs sure loss for the bookmaker and hence results in a sure gain for the customer. In the latter case, we also show how a customer can determine the combination of bets that makes the best possible gain, based on complementary slackness. As an illustration, we look at some actual betting odds in the market and find that, without free coupons, the set of desirable gambles derived from those odds avoids sure loss. However, with free coupons, we identify some combinations of bets that customers could place in order to make a guaranteed gain.
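
    As a rough sketch of the consistency check described above, written as a plain linear-programming feasibility problem rather than the paper's Choquet-integral formulation: the bookmaker's gambles incur sure loss exactly when some nonnegative combination of them is uniformly negative. The odds below are invented for illustration, not taken from the study:

```python
# Minimal sketch: does a set of accepted gambles incur sure loss?
import numpy as np
from scipy.optimize import linprog

# Decimal odds for a three-outcome match (home/draw/away), possibly from
# different bookmakers.  A unit bet on outcome k gives the bookmaker the
# gamble g_k(w) = 1 - o_k * 1{w == k}, which the bookmaker accepts.
odds = np.array([3.0, 3.5, 4.0])
n = len(odds)
G = np.ones((n, n)) - np.diag(odds)   # rows: outcomes w, columns: gambles k

# The gambles incur sure loss iff some nonnegative combination lam
# satisfies G @ lam <= -1 on every outcome (the -1 is just a scale).
res = linprog(c=np.zeros(n), A_ub=G, b_ub=-np.ones(n),
              bounds=[(0, None)] * n, method="highs")

if res.success:
    print("sure loss for the bookmaker; customer stakes:", res.x)
else:
    print("the odds avoid sure loss")
```

    If the LP is feasible, the stakes it returns give the customer a guaranteed gain; as the abstract notes, the best possible gain can then be characterised via complementary slackness of this linear program.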

    Improving and benchmarking of algorithms for Γ-maximin, Γ-maximax and interval dominance

    Γ-maximin, Γ-maximax and interval dominance are familiar decision criteria for making decisions under severe uncertainty, when probability distributions can only be partially identified. One can apply these three criteria by solving sequences of linear programs. In this study, we present new algorithms for these criteria and compare their performance to existing standard algorithms. Specifically, we use efficient ways, based on previous work, to find common initial feasible points for these algorithms. Exploiting these initial feasible points, we develop early stopping criteria to determine whether gambles are either Γ-maximin, Γ-maximax or interval dominant. We observe that the primal-dual interior point method benefits considerably from these improvements. In our simulation, we find that our proposed algorithms outperform the standard algorithms when the size of the domain of lower previsions is less than or equal to the sizes of decisions and outcomes. However, our proposed algorithms do not outperform the standard algorithms in the case that the size of the domain of lower previsions is much larger than the sizes of decisions and outcomes.
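
    A minimal sketch of how one of these criteria reduces to a sequence of linear programs, assuming a toy credal set given by bounds on outcome probabilities (the utilities and bounds are invented): Γ-maximin picks the decision whose lower expectation, computed by one LP per decision, is largest; Γ-maximax is the same computation with maximisation instead of minimisation of the expectation.

```python
# Minimal sketch: Gamma-maximin over a credal set defined by
# lower/upper bounds on the probabilities of three outcomes.
import numpy as np
from scipy.optimize import linprog

U = np.array([[10., 2., 0.],    # utility of decision d in outcome w
              [ 5., 5., 4.],
              [ 6., 3., 6.]])
p_lo = np.array([0.1, 0.2, 0.1])   # illustrative probability bounds
p_hi = np.array([0.6, 0.7, 0.5])

def lower_expectation(u):
    """Minimise E_p[u] over {p : p_lo <= p <= p_hi, sum(p) = 1}."""
    res = linprog(c=u, A_eq=np.ones((1, 3)), b_eq=[1.0],
                  bounds=list(zip(p_lo, p_hi)), method="highs")
    return res.fun

lower = np.array([lower_expectation(u) for u in U])
print("lower expectations:", lower)
print("Gamma-maximin decision:", int(np.argmax(lower)))
```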

    Patterns of Social Care Use Within the Older Population: What Can We Learn From Routinely Collected Data?

    Research with routinely collected social care data has untapped potential to inform new care delivery approaches and techniques. To identify opportunities for service improvement and enhance our understanding of care pathways experienced by the older population, we collaborated with a local authority in the North East of England. We set out to characterise the use of social care services and associated outcomes within the local older population (aged 65+). 171,386 records were extracted from the local authority's social care case management system, relating to 38,191 unique individuals across the last 40 years. We identified the care packages provided to the local population, including care provided in care homes (with and without nursing), private households and assisted living facilities. The study population varied in terms of the number of care packages provided to each individual (median 7 packages, IQR 4-11) and the average duration of individual care packages (median 41 days, IQR 14-274 days). The care pathways that are most common amongst the older population will be described, including sequencing and outcomes, and grouped by the reason for providing care (e.g., respite, long-term care) and the reason why each care package ended (e.g., death, returning home). The wide range of care pathways experienced demonstrates the heterogeneity in needs and preferences within the older population. This dataset and these analyses are an invaluable way of identifying areas of potential unmet need and evaluating the effectiveness of short-term care services.
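
    A minimal sketch of the kind of descriptive summary reported above, computed from a toy care-package table; the column names and values are hypothetical, not the local authority's schema:

```python
# Minimal sketch: per-person package counts and package durations
# (median and IQR) from a one-row-per-care-package table.
import pandas as pd

packages = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 3],
    "setting":   ["home care", "respite", "care home",
                  "home care", "care home", "assisted living"],
    "start": pd.to_datetime(["2019-01-02", "2019-03-10", "2019-04-01",
                             "2020-06-15", "2020-09-01", "2021-02-20"]),
    "end":   pd.to_datetime(["2019-03-01", "2019-03-24", "2020-05-30",
                             "2020-08-30", "2021-01-15", "2021-04-01"]),
})
packages["duration_days"] = (packages["end"] - packages["start"]).dt.days

per_person = packages.groupby("person_id").size()
print("packages per person: median", per_person.median(),
      "IQR", per_person.quantile([0.25, 0.75]).tolist())
print("package duration (days): median", packages["duration_days"].median(),
      "IQR", packages["duration_days"].quantile([0.25, 0.75]).tolist())
```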

    Variations in older people's emergency care use by social care setting: a systematic review of international evidence.

    Older adults' use of social care and their healthcare utilization are closely related. Residents of care homes access emergency care more often than the wider older population; however, less is known about emergency care use across other social care settings. A systematic review was conducted, searching six electronic databases between January 2012 and February 2022. Older people access emergency care from a variety of community settings. Differences in study design contributed to the high variation observed between studies. Although data were limited, findings suggest that emergency hospital attendance is lowest from nursing homes and highest from assisted living facilities, whilst emergency admissions varied little by social care setting. There is a paucity of published research on emergency hospital use from social care settings, particularly home care and assisted living facilities. More attention is needed on this area, with standardized definitions to enable comparisons between studies. [Abstract copyright: © The Author(s) 2023. Published by Oxford University Press.]

    Efficient algorithms for checking avoiding sure loss.

    Sets of desirable gambles provide a general representation of uncertainty which can handle partial information in a more robust way than precise probabilities. Here we study the effectiveness of linear programming algorithms for determining whether or not a given set of desirable gambles avoids sure loss (i.e. is consistent). We also suggest improvements to these algorithms specifically for checking avoiding sure loss. By exploiting the structure of the problem, (i) we slightly reduce its dimension, (ii) we propose an extra stopping criterion based on its degenerate structure, and (iii) we show that one can directly calculate feasible starting points in various cases, thereby reducing the effort required in the presolve phase of some of these algorithms. To assess our results, we compare the impact of these improvements on the simplex method and two interior point methods (affine scaling and primal-dual) on randomly generated sets of desirable gambles that either avoid or do not avoid sure loss. We find that the simplex method is outperformed by the primal-dual and affine scaling methods, except for very small problems. We also find that using our starting feasible point and extra stopping criterion considerably improves the performance of the primal-dual and affine scaling methods.
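
    One convenient way to see the check as a linear program is via the dual characterisation: a lower prevision avoids sure loss exactly when some probability mass function dominates it on every assessed gamble. The sketch below tests that feasibility directly; the gambles and assessments are invented, and this plain LP omits the paper's dimension-reduction, stopping-criterion and warm-start improvements:

```python
# Minimal sketch: avoiding sure loss as an LP feasibility problem,
# i.e. existence of p with E_p[f_i] >= P(f_i) for every gamble f_i.
import numpy as np
from scipy.optimize import linprog

F = np.array([[1.0, 0.0, 0.0],     # rows: gambles f_i on three outcomes
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
P_low = np.array([0.3, 0.5, 0.9])  # assessed lower previsions P(f_i)

# Feasibility LP over p: F @ p >= P_low, sum(p) = 1, p >= 0.
res = linprog(c=np.zeros(3),
              A_ub=-F, b_ub=-P_low,                 # flip to <= form
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=[(0, None)] * 3, method="highs")
print("avoids sure loss" if res.success else "incurs sure loss")
```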

    A Novel Patient-Specific Model for Predicting Severe Oliguria; Development and Comparison With Kidney Disease: Improving Global Outcomes Acute Kidney Injury Classification

    Objectives: The Kidney Disease: Improving Global Outcomes urine output criteria for acute kidney injury lack specificity for identifying patients at risk of adverse renal outcomes. The objective was to develop a model that analyses hourly urine output values in real time to identify those at risk of developing severe oliguria. Design: This was a retrospective cohort study utilizing prospectively collected data. Setting: A cardiac ICU in the United Kingdom. Patients: Patients undergoing cardiac surgery between January 2013 and November 2017. Interventions: None. Measurements and Main Results: Patients were randomly assigned to development (n = 981) and validation (n = 2,389) datasets. A patient-specific, dynamic Bayesian model was developed to predict future urine output on an hourly basis. Model discrimination and calibration for predicting severe oliguria were assessed, and patients at high risk (predicted probability > 0.8) were identified; their outcomes were compared with those for low-risk patients and for patients who met the Kidney Disease: Improving Global Outcomes urine output criterion for acute kidney injury. Model discrimination was excellent at all time points (area under the curve > 0.9 for all). Calibration of the model's predictions was also excellent. After adjustment using multivariable logistic regression, patients in the high-risk group were more likely to require renal replacement therapy (odds ratio, 10.4; 95% CI, 5.9–18.1), suffer prolonged hospital stay (odds ratio, 4.4; 95% CI, 3.0–6.4), and die in hospital (odds ratio, 6.4; 95% CI, 2.8–14.0) (p < 0.001 for all). Outcomes for those identified as high risk by the model were significantly worse than for patients who met the Kidney Disease: Improving Global Outcomes urine output criterion. Conclusions: This novel, patient-specific model identifies patients at increased risk of severe oliguria. Classification according to model predictions outperformed the Kidney Disease: Improving Global Outcomes urine output criterion. As the new model identifies patients at risk before severe oliguria develops, it could potentially facilitate intervention to improve patient outcomes.
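
    The paper's model is not specified in the abstract; as a stand-in, the sketch below uses a local-level dynamic linear model (a random walk plus noise, filtered with the standard Kalman recursions) to update a patient-specific urine-output forecast each hour and convert it into a probability of falling below an oliguria threshold. All variances and the threshold are assumed for illustration:

```python
# Minimal sketch: hourly-updated risk of low future urine output
# from a local-level dynamic linear model (Kalman filter).
from math import erf, sqrt

def normal_cdf(x, mu, sd):
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

OBS_VAR, STATE_VAR = 100.0, 25.0     # (mL/h)^2, illustrative variances
OLIGURIA = 20.0                      # assumed severe-oliguria threshold, mL/h

def oliguria_risk(urine_ml_per_h, horizon=3):
    """Filter the hourly series; return P(output `horizon` h ahead < threshold)."""
    m, v = urine_ml_per_h[0], 400.0  # vague initial state
    for y in urine_ml_per_h[1:]:
        v += STATE_VAR               # predict: state drifts between hours
        k = v / (v + OBS_VAR)        # Kalman gain
        m += k * (y - m)             # update with the new measurement
        v *= 1.0 - k
    v_fore = v + horizon * STATE_VAR # forecast uncertainty grows with horizon
    return normal_cdf(OLIGURIA, m, sqrt(v_fore + OBS_VAR))

falling = [70, 65, 50, 42, 35, 30, 26]
print(f"risk of severe oliguria: {oliguria_risk(falling):.2f}")
```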

    The impact of digital technology in care homes on unplanned secondary care usage and associated costs.

    Background: A substantial number of Emergency Department (ED) attendances by care home residents are potentially avoidable. Health Call Digital Care Homes is an app-based technology that aims to streamline residents' care by recording their observations, such as vital parameters, electronically. Observations are triaged by remote clinical staff. This study assessed the effectiveness of the Health Call technology in reducing unplanned secondary care usage and associated costs. Methods: A retrospective analysis of health outcomes and economic impact of the intervention. The study involved 118 care homes across the North East of the UK from 2018 to 2021. Routinely collected NHS secondary care data from County Durham and Darlington NHS Foundation Trust was linked with data from the Health Call app. Three outcomes were modelled monthly using Generalised Linear Mixed Models: counts of emergency attendances, emergency admissions and length of stay of emergency admissions. A similar approach was taken for costs. The impact of Health Call was tested on each outcome using the models. Findings: Data from 8,702 residents were used in the analysis. Results show Health Call reduces the number of emergency attendances by 11% [6–15%], emergency admissions by 25% [20–39%] and length of stay by 11% [3–18%] (with an additional month-by-month decrease of 28% [24–34%]). The cost analysis found a cost reduction of £57 per resident in 2018, increasing to £113 in 2021. Interpretation: The introduction of a digital technology, such as Health Call, could significantly reduce contacts with and costs resulting from unplanned secondary care usage by care home residents.
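
    A simplified sketch of the modelling approach on simulated data: the study fits generalised linear mixed models, but the core of the estimate, a rate reduction from a regression of monthly counts with an exposure offset and an intervention indicator, can be shown with a plain fixed-effects Poisson GLM. All numbers below are simulated, not Health Call data:

```python
# Minimal sketch: intervention effect on monthly emergency-attendance
# counts via a Poisson GLM with a log(residents) exposure offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_homes, n_months = 50, 36
home = np.repeat(np.arange(n_homes), n_months)
month = np.tile(np.arange(n_months), n_homes)
residents = rng.integers(20, 60, size=n_homes)[home]   # exposure per home
start = rng.integers(6, 30, size=n_homes)              # staggered roll-out
on_app = (month >= start[home]).astype(float)

# Simulate a true 12% reduction in the attendance rate once the app is live.
lam = residents * 0.05 * np.exp(-0.128 * on_app)
y = rng.poisson(lam)

X = sm.add_constant(on_app)
fit = sm.GLM(y, X, family=sm.families.Poisson(),
             offset=np.log(residents)).fit()
print(f"estimated rate reduction: {1 - np.exp(fit.params[1]):.1%}")
```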

    Dynamic clinical prediction models for cardiac surgery

    OBJECTIVES Over its lifespan, EuroSCORE became systematically miscalibrated due to a continuous fall in observed mortality despite patients becoming relatively more high-risk. We aimed to explore some potential frameworks for fitting prediction models for in-hospital mortality following cardiac surgery that dynamically adjust for case-mix in a heterogeneous patient population, and to compare these to the standard application of static prediction models. METHODS Data from the Society for Cardiothoracic Surgery in Great Britain and Ireland database were analyzed for procedures performed at all NHS and some private hospitals in England and Wales between April 2001 and March 2011. The study outcome was all-cause in-hospital mortality. Four cross-sectional multiple logistic regression models were fitted, ranging from static to dynamic generalized linear modelling. Covariate adjustment was made using risk factors included in the logistic EuroSCORE prediction model. RESULTS The association between in-hospital mortality and the risk factors varied with time. Notably, the intercept coefficient has been steadily decreasing over the study period, consistent with decreasing observed mortality. Some risk factors, such as extracardiac arteriopathy and chronic pulmonary disease, have been relatively stable over time, whilst female sex has been associated with higher risk relative to the static model. CONCLUSIONS It is known that prediction models can lose calibration. Periodic model updating is necessary but may be better implemented using a less arbitrary modelling approach, such as dynamic modelling.
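
    The drift described above is easy to reproduce on simulated data: refitting a logistic model on successive yearly windows shows the intercept falling as baseline mortality improves, which is exactly what a static model fitted once cannot track. Coefficients and sample sizes are invented for illustration:

```python
# Minimal sketch: calibration drift seen through yearly refits of a
# logistic regression whose true baseline risk falls over time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
coefs = np.array([0.9, 0.5, 0.3])        # illustrative risk-factor effects

for year in range(10):
    x = rng.normal(size=(5000, 3))       # standardised risk factors
    # Baseline (intercept) mortality falls steadily year on year.
    logit = -3.0 - 0.08 * year + x @ coefs
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    model = LogisticRegression(C=1e6).fit(x, y)   # near-unpenalised fit
    print(f"year {year}: fitted intercept {model.intercept_[0]:+.2f}")
```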

    Machine learning for determining lateral flow device results for testing of SARS-CoV-2 infection in asymptomatic populations

    Rapid antigen tests, in the form of lateral flow devices (LFDs), allow testing of a large population for SARS-CoV-2. To reduce the variability seen in device interpretation, we show the design and testing of an AI algorithm based on machine learning. The machine learning (ML) algorithm is trained on a combination of artificially hybridised LFDs and LFD data linked to RT-qPCR results. Participants are recruited from assisted test sites (ATS) and from health care workers undertaking self-testing, and images are analysed using the ML algorithm. A panel of trained clinicians is used to resolve discrepancies. In total, 115,316 images are returned. In the ATS sub-study, sensitivity increased from 92.08% to 97.6% and specificity from 99.85% to 99.99%. In the self-read sub-study, sensitivity increased from 16.00% to 100%, and specificity from 99.15% to 99.40%. An ML-based classifier of LFD results outperforms human reads in asymptomatic testing sites and self-reading.
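
    The abstract does not describe the network itself, so the sketch below is only an assumed illustration of the approach: a small convolutional classifier mapping a cropped LFD photograph to a positive/negative read. The input size, depth and class layout are all hypothetical:

```python
# Minimal sketch: a tiny CNN for binary LFD image classification.
import torch
import torch.nn as nn

class LFDClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),       # global pooling to (batch, 32, 1, 1)
        )
        self.head = nn.Linear(32, 2)       # classes: negative / positive

    def forward(self, x):                  # x: (batch, 3, H, W) cropped image
        return self.head(self.features(x).flatten(1))

model = LFDClassifier()
logits = model(torch.randn(4, 3, 224, 224))   # dummy batch of images
print(logits.softmax(dim=1))                  # per-class probabilities
```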