
    SPPL: Probabilistic Programming with Fast Exact Symbolic Inference

    We present the Sum-Product Probabilistic Language (SPPL), a new probabilistic programming language that automatically delivers exact solutions to a broad range of probabilistic inference queries. SPPL translates probabilistic programs into sum-product expressions, a new symbolic representation and associated semantic domain that extends standard sum-product networks to support mixed-type distributions, numeric transformations, logical formulas, and pointwise and set-valued constraints. We formalize SPPL via a novel translation strategy from probabilistic programs to sum-product expressions and give sound exact algorithms for conditioning on and computing probabilities of events. SPPL imposes a collection of restrictions on probabilistic programs to ensure they can be translated into sum-product expressions; these restrictions allow the system to leverage new techniques for improving the scalability of translation and inference by automatically exploiting probabilistic structure. We implement a prototype of SPPL with a modular architecture and evaluate it on benchmarks the system targets, showing that it obtains up to 3500x speedups over state-of-the-art symbolic systems on tasks such as verifying the fairness of decision tree classifiers, smoothing hidden Markov models, conditioning transformed random variables, and computing rare event probabilities.
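    To make the flavor of these queries concrete, the sketch below computes an exact event probability and exact conditioning for a toy sum node over two Gaussian leaves in plain Python. It illustrates the sum-product idea only; NormalLeaf and SumNode are invented names for this example, not SPPL's actual API.

```python
# Toy illustration of exact inference on a sum-product expression.
# NOT the SPPL API: NormalLeaf and SumNode are invented for this sketch.
from dataclasses import dataclass
from math import erf, sqrt

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

@dataclass
class NormalLeaf:
    mu: float
    sigma: float
    def prob(self, lo: float, hi: float) -> float:
        # Exact P(lo < X <= hi) for this leaf, via the Gaussian CDF.
        return normal_cdf(hi, self.mu, self.sigma) - normal_cdf(lo, self.mu, self.sigma)

@dataclass
class SumNode:
    weights: list
    children: list
    def prob(self, lo: float, hi: float) -> float:
        # Sum node: the event probability is the weighted sum over children.
        return sum(w * c.prob(lo, hi) for w, c in zip(self.weights, self.children))
    def condition(self, lo: float, hi: float) -> "SumNode":
        # Exact conditioning on {lo < X <= hi}: reweight each component by its
        # probability of the event (Bayes' rule). A full system would also
        # truncate each leaf to the event.
        posts = [w * c.prob(lo, hi) for w, c in zip(self.weights, self.children)]
        z = sum(posts)
        return SumNode([p / z for p in posts], self.children)

model = SumNode([0.3, 0.7], [NormalLeaf(0.0, 1.0), NormalLeaf(4.0, 2.0)])
print(model.prob(2.0, 6.0))               # exact probability of the event
print(model.condition(2.0, 6.0).weights)  # exact posterior mixture weights
```

    SPPL itself obtains such closed-form answers by translating a full probabilistic program into a sum-product expression, rather than from hand-written mixture code like the above.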

    Optimal Approximate Sampling from Discrete Probability Distributions

    This paper addresses a fundamental problem in random variate generation: given access to a random source that emits a stream of independent fair bits, what is the most accurate and entropy-efficient algorithm for sampling from a discrete probability distribution (p_1, \dots, p_n), where the probabilities of the output distribution (\hat{p}_1, \dots, \hat{p}_n) of the sampling algorithm must be specified using at most k bits of precision? We present a theoretical framework for formulating this problem and provide new techniques for finding sampling algorithms that are optimal both statistically (in the sense of sampling accuracy) and information-theoretically (in the sense of entropy consumption). We leverage these results to build a system that, for a broad family of measures of statistical accuracy, delivers a sampling algorithm whose expected entropy usage is minimal among those that induce the same distribution (i.e., is "entropy-optimal") and whose output distribution (\hat{p}_1, \dots, \hat{p}_n) is a closest approximation to the target distribution (p_1, \dots, p_n) among all entropy-optimal sampling algorithms that operate within the specified k-bit precision. This optimal approximate sampler is also a closer approximation than any (possibly entropy-suboptimal) sampler that consumes a bounded amount of entropy with the specified precision, a class which includes floating-point implementations of inversion sampling and related methods found in many software libraries. We evaluate the accuracy, entropy consumption, precision requirements, and wall-clock runtime of our optimal approximate sampling algorithms on a broad set of distributions, demonstrating the ways that they are superior to existing approximate samplers and establishing that they often consume significantly fewer resources than are needed by exact samplers.
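    For context on the k-bit setting, the sketch below implements the classic Knuth-Yao DDG-tree walk for the exactly dyadic case, where the target probabilities are integers P_i summing to 2^k. It consumes one fair bit per tree level and is a textbook baseline for entropy-efficient sampling, not the paper's optimal approximate sampler; the function names are invented for this example.

```python
# Knuth-Yao DDG-tree sampling for a distribution (P[0]/2^k, ..., P[n-1]/2^k).
# At level j, outcome i owns one terminal node iff the j-th bit of P[i] is 1;
# the walk consumes exactly one fair random bit per level.
import random

def knuth_yao_sample(P, k, rand_bit=lambda: random.getrandbits(1)):
    assert sum(P) == 1 << k, "probabilities must sum to 2**k"
    d = 0  # index of the current node among this level's internal nodes
    for j in range(1, k + 1):
        d = 2 * d + rand_bit()            # descend one level (one fair bit)
        for i, p in enumerate(P):
            d -= (p >> (k - j)) & 1       # skip outcome i's terminal node, if any
            if d < 0:
                return i
    raise AssertionError("unreachable: the tree is complete at depth k")

# Example: sample from (3/8, 1/8, 4/8), i.e. k = 3 bits of precision.
counts = [0, 0, 0]
for _ in range(100_000):
    counts[knuth_yao_sample([3, 1, 4], 3)] += 1
print([c / 100_000 for c in counts])      # approximately [0.375, 0.125, 0.5]
```

    A classical result bounds the expected bit consumption of the optimal DDG tree by the entropy plus two; the paper's contribution is, roughly, choosing the k-bit approximation (\hat{p}_1, \dots, \hat{p}_n) that is closest to the target among all entropy-optimal choices.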

    Impact of Baseline Characteristics on Stroke Outcomes in Pakistan: A Longitudinal Study Using the Modified Rankin Scale

    Introduction. Stroke is a leading cause of disability and mortality globally, with a significant impact on healthcare systems. Various factors, including age, gender, comorbidities, and the type of stroke, influence the burden of stroke and its outcomes. The study was conducted to determine the impact of baseline characteristics on the long-term functional outcome of stroke patients. Methods. This prospective observational study was conducted between April 6, 2022 and December 31, 2023, at a tertiary hospital. The study included patients with radiologically confirmed stroke, selected through convenience sampling. Stroke patients of any gender and age group, with any comorbidity, were included. The Modified Rankin Scale (mRS) was used to assess disability on admission and three months post-stroke. Results. Of the 213 patients, 122 (57.3%) were male, and the majority, 199 (93.4%), had acute ischemic stroke. The median age of the participants was 60 years (range: 13-97 years; IQR=18 years). The mRS score on admission was poor (5.0; IQR=1.0) for patients ≥ 60 years. The left middle cerebral artery was the most frequently involved site, affected in 74 (34.74%) participants. Age ≥ 60 years (mRS=4.0; IQR=4.0; p=0.001) and the presence of ≥ 3 comorbidities (mRS=5.0; IQR=1.0; p=0.001) were significantly associated with poor outcomes three months post-stroke. Ordinal logistic regression revealed that an mRS score of 4 (OR=14.20; 95% CI=1.70-145.25; p=0.02) and an mRS score of 5 (OR=78.84; 95% CI=9.35-820.25; p < 0.001) on admission were associated with poor outcomes. In addition, the presence of ≥ 3 comorbidities (OR=4.59; 95% CI=14.65; p < 0.01) and increasing age (OR=1.04; 95% CI=1.01-1.07; p=0.02) were predictors of poor outcomes three months post-stroke. Conclusions. The study underscores the importance of early intervention and effective management of comorbidities in improving functional outcomes in stroke patients, and highlights the need for targeted stroke care and rehabilitation strategies.

    Investigation of enhanced double weight code in point to point access networks

    In this paper, the enhanced double weight (EDW) code is investigated and evaluated. A new technique for structuring and building the code using a modified arithmetical model is presented, replacing the earlier technique based on trial inspections. A new design deploys EDW codes of diverse weights in point-to-point (P2P) networks, making the code suitable for optical CDMA applications. A new relation for the EDW code is also developed using numerical analysis, based on studying and simulating the effect of input transmission power against code weight; this relation makes estimating the required system input power more efficient. The code's behavior is illustrated through eye diagrams and parametric plots of the simulation results, which show strong performance at high numbers of users and high code weights. The developed power-measurement relation also helps prevent power loss and excess power consumption.

    Serum cortisol as a predictor of major adverse outcomes in patients with COVID-19

    Background: Several biomarkers have been found to predict the severity and outcome of COVID-19 infection. Aims: To determine the serum cortisol response in patients with Coronavirus Disease 2019 (COVID-19) and its correlation with disease outcomes. Methods: A prospective study among confirmed COVID-19 patients aged 18 years and above. Morning cortisol levels were measured within 24 hours of admission. The relationship between cortisol levels and outcomes (intensive care unit (ICU) admission, intubation, and death) was analysed. Results: A total of 206 patients positive for COVID-19 (mean age 53.6±15.2 years) were included in the study. Mortality was recorded in 21 (30.4%) patients with cortisol levels ≥ 570 nmol/L, 6 (8.8%) patients with cortisol levels of 181-569 nmol/L, and 8 (11.6%) patients with cortisol levels ≤ 180 nmol/L. Patients with cortisol levels ≥ 570 nmol/L were more likely to be admitted to the ICU, to be intubated, and to have a longer hospital stay. Serum cortisol and ferritin levels were the most significant predictors of mortality. Conclusion: On admission, the morning cortisol level was predictive of mortality, ICU admission, intubation, and length of hospital stay in patients with COVID-19, and may be considered an independent predictor of worse COVID-19 outcomes.

    Burnout among surgeons before and during the SARS-CoV-2 pandemic: an international survey

    Background: The SARS-CoV-2 pandemic has had many significant impacts within the surgical realm, and surgeons have been obligated to reconsider almost every aspect of daily clinical practice. Methods: This is a cross-sectional study reported in compliance with the CHERRIES guidelines and conducted through an online platform from June 14th to July 15th, 2020. The primary outcome was the burden of burnout during the pandemic, indicated by the validated Shirom-Melamed Burnout Measure. Results: Nine hundred fifty-four surgeons completed the survey. The median length of practice was 10 years; 78.2% of respondents were male, with a median age of 37 years; 39.5% were consultants, 68.9% were general surgeons, and 55.7% were affiliated with an academic institution. Overall, there was a significant increase in the mean burnout score during the pandemic; longer years of practice and older age were significantly associated with less burnout. There were significant reductions in the median number of outpatient visits, operated cases, on-call hours, emergency visits, and research work, and 48.2% of respondents felt that the training resources were insufficient. The majority (81.3%) of respondents reported that their hospitals were involved in the management of COVID-19; 66.5% felt their roles had been minimized, 41% were asked to assist in non-surgical medical practices, and 37.6% were included in COVID-19 management. Conclusions: There was significant burnout among trainees. Almost all aspects of clinical and research activity were affected, with a significant reduction in the volume of research, outpatient clinic visits, surgical procedures, on-call hours, and emergency cases, hindering training. Trial registration: The study was registered on clinicaltrials.gov (NCT04433286) on 16/06/2020.

    Estimators of Entropy and Information via Inference in Probabilistic Models

    Estimating information-theoretic quantities such as entropy and mutual information is central to many problems in statistics and machine learning, but challenging in high dimensions. This paper presents estimators of entropy via inference (EEVI), which deliver upper and lower bounds on many information quantities for arbitrary variables in a probabilistic generative model. These estimators use importance sampling with proposal distribution families that include amortized variational inference and sequential Monte Carlo, which can be tailored to the target model and used to squeeze true information values with high accuracy. We present several theoretical properties of EEVI and demonstrate scalability and efficacy on two problems from the medical domain: (i) in an expert system for diagnosing liver disorders, we rank medical tests according to how informative they are about latent diseases, given a pattern of observed symptoms and patient attributes; and (ii) in a differential equation model of carbohydrate metabolism, we find optimal times to take blood glucose measurements that maximize information about a diabetic patient's insulin sensitivity, given their meal and medication schedule.
    Comment: 18 pages, 8 figures. Appearing in AISTATS 2022.
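    To illustrate the sandwiching idea on a model small enough to check numerically, the sketch below bounds the entropy H(X) of a two-component Gaussian mixture: an unbiased importance-sampling estimate of p(x) gives an upper bound on -log p(x) in expectation (Jensen's inequality), while an unbiased estimate of 1/p(x), built from posterior samples of the latent component, gives a lower bound; averaging over x ~ p(x) squeezes H(X). The model and all names are assumptions for this example, not the paper's implementation.

```python
# Sandwich bounds on H(X) for a mixture p(x) = sum_z p(z) p(x|z).
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.3, 0.7])                         # p(z)
mu, sigma = np.array([0.0, 4.0]), np.array([1.0, 2.0])

def lik(x, z):                                   # Gaussian density p(x | z)
    return np.exp(-0.5 * ((x - mu[z]) / sigma[z]) ** 2) / (sigma[z] * np.sqrt(2 * np.pi))

N, K = 5_000, 32
upper = lower = 0.0
for _ in range(N):
    z0 = rng.choice(2, p=w)
    x = rng.normal(mu[z0], sigma[z0])            # x ~ p(x)
    # Upper bound: mean of p(x|z) over z ~ p(z) is unbiased for p(x),
    # so E[-log p_hat] >= -log p(x) by Jensen.
    zs = rng.choice(2, size=K, p=w)
    upper += -np.log(np.mean(lik(x, zs)))
    # Lower bound: mean of 1/p(x|z) over z ~ p(z|x) is unbiased for 1/p(x)
    # (tractable here since z is binary), so E[log g_hat] <= -log p(x).
    post = w * lik(x, np.array([0, 1]))
    post /= post.sum()
    zs = rng.choice(2, size=K, p=post)
    lower += np.log(np.mean(1.0 / lik(x, zs)))
H_lo, H_hi = lower / N, upper / N

# Reference value: trapezoidal integration of -p(x) log p(x) on a fine grid.
grid = np.linspace(-8.0, 14.0, 200_001)
px = w[0] * lik(grid, 0) + w[1] * lik(grid, 1)
f = -px * np.log(px)
H_ref = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
print(f"{H_lo:.3f} <= H(X) = {H_ref:.3f} <= {H_hi:.3f}")
```

    In the paper's setting the posterior over latent variables is not available exactly, which is where the amortized variational and sequential Monte Carlo proposals come in; tightening a proposal tightens the corresponding side of the sandwich.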