
    Singapore's health-care system: key features, challenges, and shifts

    Since Singapore became an independent nation in 1965, the development of its health-care system has been underpinned by an emphasis on personal responsibility for health, and active government intervention to ensure access and affordability through targeted subsidies and to reduce unnecessary costs. Singapore is achieving good health outcomes, with a total health expenditure of 4.47% of gross domestic product in 2016. However, the health-care system is contending with increased stress, as reflected in so-called pain points that have led to public concern, including shortages in acute hospital beds and intermediate and long-term care (ILTC) services, and high out-of-pocket payments. The main drivers of these challenges are the rising prevalence of non-communicable diseases and rapid population ageing, limitations in the delivery and organisation of primary care and ILTC, and financial incentives that might inadvertently impede care integration. To address these challenges, Singapore's Ministry of Health implemented a comprehensive set of reforms in 2012 under its Healthcare 2020 Masterplan. These reforms substantially increased the capacity of public hospital beds and ILTC services in the community, expanded subsidies for primary care and long-term care, and introduced a series of health-care financing reforms to strengthen financial protection and coverage. However, it became clear that these measures alone would not address the underlying drivers of system stress in the long term. Instead, the system requires, and is making, much more fundamental changes to its approach. In 2016, the Ministry of Health encapsulated the required shifts in terms of the so-called Three Beyonds: namely, beyond health care to health, beyond hospital to community, and beyond quality to value.

    Cross-Calibration of Stroke Disability Measures: Bayesian Analysis of Longitudinal Ordinal Categorical Data Using Negative Dependence

    It is common to assess disability of stroke patients using standardized scales, such as the Rankin Stroke Outcome Scale (RS) and the Barthel Index (BI). The Rankin Scale, which was designed for applications to stroke, is based on directly assessing the global condition of a patient. The Barthel Index, which was designed for general applications, is based on a series of questions about the patient’s ability to carry out 10 basic activities of daily living. As both scales are commonly used, but few studies use both, translating between scales is important in gaining an overall understanding of the efficacy of alternative treatments, and in developing prognostic models that combine several data sets. The objective of our analysis is to provide a tool for translating between BI and RS. Specifically, we estimate the conditional probability distributions of each given the other. Subjects consisted of 459 individuals who sustained a stroke and who were recruited for the Kansas City Stroke Study from 1995 to 1998. Patients were assessed with BI and RS measures 1, 3 and 6 months after stroke. In addition, we included data from the Framingham study, in the form of patients cross-classified by RS and coarsely aggregated BI. Our statistical estimation approach is motivated by several goals: (a) overcoming the difficulty presented by the fact that our two sources report data at different resolutions; (b) smoothing the empirical counts to provide estimates of probabilities in regions of the table that are sparsely populated; (c) avoiding estimates that would conflict with medical knowledge about the relationship between the two measures; and (d) estimating the relationship between RS and BI at three months after the stroke, while borrowing strength from measurements made at one and six months. We address these issues via a Bayesian analysis combining data augmentation and constrained semiparametric inference.
Our results provide the basis for (a) comparing and integrating the results of clinical trials using different measures, and (b) integrating clinical trial results into a comprehensive decision model for the assessment of the long-term implications and cost-effectiveness of stroke prevention and acute treatment interventions. In addition, our results indicate that the degree of agreement between the two measures is less strong than commonly reported, and emphasize the importance of trial designs that include multiple assessments of outcome.
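The cross-calibration idea can be sketched with a much simpler stand-in for the paper's constrained semiparametric model: row-normalising a smoothed RS-by-BI contingency table yields conditional distributions of one scale given the other, with sparse cells regularised by a Dirichlet-style pseudo-count. The grades, bands, and counts below are invented for illustration only.

```python
def conditional_probs(counts, alpha=1.0):
    """Row-normalise a contingency table after adding pseudo-count alpha.

    counts[i][j] = number of patients with RS grade i and BI band j.
    Returns estimates of P(BI band j | RS grade i); the pseudo-count
    smooths sparsely populated regions of the table.
    """
    table = []
    for row in counts:
        smoothed = [c + alpha for c in row]
        total = sum(smoothed)
        table.append([c / total for c in smoothed])
    return table

# Invented counts: 3 RS grades (rows) x 4 BI bands (columns).
counts = [
    [40, 10, 2, 0],
    [8, 25, 12, 3],
    [1, 5, 20, 30],
]
probs = conditional_probs(counts)
# Each row is now a proper conditional distribution over BI bands,
# with the empty cell (row 0, band 3) assigned a small positive mass.
```

The actual analysis additionally imposes monotonicity constraints from medical knowledge and borrows strength across assessment times; this sketch shows only the smoothing step.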

    Mapping the value for money of precision medicine: a systematic literature review and meta-analysis

    Objective: This study aimed to quantify heterogeneity in the value for money of precision medicine (PM) by application type across contexts and conditions, and to trace sources of heterogeneity to areas of particular promise or concern as the field of PM moves forward. Methods: A systematic search was performed in Embase, Medline, EconLit, and CRD databases for studies published between 2011 and 2021 on cost-effectiveness analyses (CEAs) of PM interventions. Based on a willingness-to-pay threshold of one times the GDP per capita of each study country, the net monetary benefit (NMB) of PM was pooled using random-effects meta-analyses. Sources of heterogeneity and study biases were examined using random-effects meta-regressions, jackknife sensitivity analysis, and the biases in economic studies checklist. Results: Among the 275 unique CEAs of PM, publicly sponsored studies found neither genetic testing nor gene therapy cost-effective in general, contradicting studies funded by commercial entities and early-stage evaluations. Evidence of PM being cost-effective was concentrated in genetic tests for screening, diagnosis, or use as companion diagnostics (pooled NMBs of $48,152, $8,869, and $5,693 respectively; p < 0.001), particularly in the form of multigene panel testing (pooled NMB = $31,026, p < 0.001), and applied only to a few disease areas such as cancer and to high-income countries. Incremental effectiveness was an essential value driver for the various genetic tests but not for gene therapy. Conclusion: Precision medicine's value for money across application types and contexts was difficult to conclude from published studies, which might be subject to systematic bias. The conduct and reporting of CEAs of PM should be locally based and standardized for meaningful comparisons.
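The pooling in this review rests on the standard net monetary benefit formula, NMB = WTP × ΔE − ΔC, with the willingness-to-pay threshold set at one times GDP per capita. A minimal sketch with invented figures:

```python
def net_monetary_benefit(delta_qaly, delta_cost, wtp_per_qaly):
    """NMB = WTP * incremental effectiveness - incremental cost.

    An intervention is considered cost-effective when NMB > 0,
    i.e. when its incremental cost-effectiveness ratio falls
    below the willingness-to-pay threshold.
    """
    return wtp_per_qaly * delta_qaly - delta_cost

# Invented example: a genetic test adds 0.30 QALYs at an extra cost of
# $12,000, judged against a one-times-GDP-per-capita threshold of $60,000.
nmb = net_monetary_benefit(0.30, 12_000, 60_000)
# 60,000 * 0.30 - 12,000 = 6,000 > 0, so cost-effective at this threshold.
```

Expressing every study on the NMB scale at its own country's threshold is what allows results from different currencies and conditions to be pooled in a single random-effects meta-analysis.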

    A new instrument for measuring anticoagulation-related quality of life: development and preliminary validation

    BACKGROUND: Anticoagulation can reduce quality of life, and different models of anticoagulation management might have different impacts on satisfaction with this component of medical care. Yet, to our knowledge, there are no scales measuring quality of life and satisfaction with anticoagulation that can be generalized across different models of anticoagulation management. We describe the development and preliminary validation of such an instrument – the Duke Anticoagulation Satisfaction Scale (DASS). METHODS: The DASS is a 25-item scale addressing the (a) negative impacts of anticoagulation (limitations, hassles and burdens); and (b) positive impacts of anticoagulation (confidence, reassurance, satisfaction). Each item has 7 possible responses. The DASS was administered to 262 patients currently receiving oral anticoagulation. Scales measuring generic quality of life, satisfaction with medical care, and tendency to provide socially desirable responses were also administered. Statistical analysis included assessment of item variability, internal consistency (Cronbach's alpha), scale structure (factor analysis), and correlations between the DASS and demographic variables, clinical characteristics, and scores on the above scales. A follow-up study of 105 additional patients assessed test-retest reliability. RESULTS: 220 subjects answered all items. Ceiling and floor effects were modest, and 25 of the 27 proposed items grouped into 2 factors (positive impacts and negative impacts, the latter potentially subdivided into limitations versus hassles and burdens). Each factor had a high degree of internal consistency (Cronbach's alpha 0.78–0.91). The limitations and hassles factors consistently correlated with the SF-36 scales measuring generic quality of life, while the positive psychological impact scale correlated with age and time on anticoagulation. The intra-class correlation coefficient for test-retest reliability was 0.80.
CONCLUSIONS: The DASS has demonstrated reasonable psychometric properties to date. Further validation is ongoing. To the degree that dissatisfaction with anticoagulation leads to decreased adherence, poorer INR control, and poor clinical outcomes, the DASS has the potential to help identify reasons for dissatisfaction (and positive satisfaction), and thus help to develop interventions to break this cycle. As an instrument designed to be applicable across multiple models of anticoagulation management, the DASS could be crucial in the scientific comparison between those models of care.
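Internal consistency in this kind of scale development is typically summarised with Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal implementation (the data here are invented, not from the DASS study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    items: list of columns, where items[j][i] is the score of
    respondent i on item j. Uses population variances (divide by n).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Two perfectly agreeing items give alpha = 1.0; noisier items give less.
alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

Values in the 0.78–0.91 range reported for the DASS factors indicate that the items within each factor covary strongly relative to their individual noise.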

    Comparing health workforce forecasting approaches for healthcare planning: The case for ophthalmologists

    Health workforce planning is essential in the provision of quality healthcare. Several approaches to planning are customarily used and advocated, each with unique underlying assumptions. Thus, a thorough understanding of each assumption is required in order to make an informed decision on the choice of forecasting approach to be used. For illustration, we compare results for eye care requirements in Singapore using three established workforce forecasting approaches – the workforce-to-population-ratio, needs-based, and utilization-based approaches – and a proposed robust integrated approach, to discuss the appropriateness of each approach under various scenarios. Four simulation models using the systems modeling methodology of system dynamics were developed for use in each approach. These models were initialized and simulated using the example of eye care workforce planning in Singapore, to project the number of ophthalmologists required up to the year 2040 under the four different approaches. We found that each approach projects a different number of ophthalmologists required over time. The needs-based approach tends to project the largest number of required ophthalmologists, followed by the integrated, utilization-based, and workforce-to-population-ratio approaches in descending order. The four approaches vary widely in their forecasted workforce requirements and reinforce the need to be discerning of the fundamental differences of each approach in order to choose the most appropriate one. Further, health workforce planning should also be approached in a comprehensive and integrated manner that accounts for developments in demographic and healthcare systems.
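The simplest of the compared approaches, the workforce-to-population ratio, can be sketched in a few lines: the required headcount is just the projected population multiplied by a benchmark ratio. The figures below are invented for illustration, not Singapore's actual planning numbers:

```python
def required_workforce(population, ratio_per_100k):
    """Workforce-to-population-ratio forecast.

    Required specialists = projected population * benchmark ratio,
    ignoring needs, utilization, and supply-side dynamics entirely.
    """
    return population * ratio_per_100k / 100_000

# Hypothetical example: 6.0 million residents and a benchmark of
# 4 ophthalmologists per 100,000 population.
needed = required_workforce(6_000_000, 4)   # -> 240.0
```

Its appeal is transparency; its weakness, as the comparison in the paper illustrates, is that it responds only to population size, not to changing disease burden or care-seeking behaviour, which is why it tends to produce the lowest projections of the four approaches.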

    Low-density lipoprotein cholesterol was inversely associated with 3-year all-cause mortality among Chinese oldest old: Data from the Chinese Longitudinal Healthy Longevity Survey

    Objective: Low-density lipoprotein cholesterol (LDL-C) is a risk factor for survival in middle-aged individuals, but conflicting evidence exists on the relationship between LDL-C and all-cause mortality among the elderly. The goal of this study was to assess the relationship between LDL-C and all-cause mortality among the Chinese oldest old (aged 80 and older) in a prospective cohort study. Methods: LDL-C concentration was measured at baseline and all-cause mortality was calculated over a 3-year period. Multiple statistical models were used to adjust for demographic and biological covariates. Results: During three years of follow-up, 447 of 935 participants died, and the overall all-cause mortality was 49.8%. Each 1 mmol/L increase in LDL-C concentration corresponded to a 19% decrease in 3-year all-cause mortality (hazard ratio [HR] 0.81, 95% confidence interval [CI] 0.71–0.92). The crude HR for abnormally high LDL-C concentration (≥3.37 mmol/L) was 0.65 (0.41–1.03); the adjusted HRs remained statistically significant at around 0.60 (0.37–0.95) when adjusted for different sets of confounding factors. Results of sensitivity analysis also showed a significant association between higher LDL-C and lower mortality risk. Conclusions: Among the Chinese oldest old, a higher LDL-C level was associated with a lower risk of all-cause mortality. Our findings suggest the necessity of re-evaluating the optimal level of LDL-C among the oldest old.
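Hazard ratios of this kind combine multiplicatively: under the proportional-hazards model's log-linear assumption, an HR of 0.81 per 1 mmol/L implies an HR of 0.81^k for a difference of k mmol/L. A one-line sketch:

```python
def scaled_hazard_ratio(hr_per_unit, delta):
    """HR implied for a `delta`-unit difference in the covariate,
    assuming the log-hazard is linear in the covariate."""
    return hr_per_unit ** delta

# With HR = 0.81 per 1 mmol/L, a 2 mmol/L higher LDL-C implies
# an HR of about 0.81 ** 2 = 0.6561 under this assumption.
hr_2mmol = scaled_hazard_ratio(0.81, 2)
```

This is how the single per-unit estimate in the abstract translates into comparisons across wider LDL-C ranges; the same logic does not extend outside the range of the observed data.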

    Projecting the effects of long-term care policy on the labor market participation of primary informal family caregivers of elderly with disability: insights from a dynamic simulation model

    Background: Using Singapore as a case study, this paper aims to understand the effects of the current long-term care policy and various alternative policy options on the labor market participation of primary informal family caregivers of elderly with disability. Methods: A model of the long-term care system in Singapore was developed using the System Dynamics methodology. Results: Under the current long-term care policy, by 2030, 6.9 percent of primary informal family caregivers (0.34 percent of the domestic labor supply) are expected to withdraw from the labor market. Alternative policy options reduce primary informal family caregiver labor market withdrawal; however, the number of workers required to scale up long-term care services is greater than the number of caregivers who can be expected to return to the labor market. Conclusions: Policymakers may face a dilemma between admitting more foreign workers to provide long-term care services and depending on primary informal family caregivers.

    Sample size calculations for the design of cluster randomized trials: A summary of methodology.

    Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas, and have stimulated parallel statistical developments in the design and analysis of these trials. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated, and inadequacies remain in, for example, describing how the trial size was determined and how the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on methods of calculating sample sizes. We focus on designs whose primary purpose is to compare two interventions with respect to continuous, binary, ordered categorical, incidence rate, and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size, and the effect size are detailed. The problem of establishing the anticipated magnitude of the between- and within-cluster variation, to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation, is also described. Illustrative examples of trial size calculations for each endpoint type are included.
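The core adjustment underlying these calculations is the design effect, 1 + (m − 1) × ICC for average cluster size m and intra-cluster correlation coefficient ICC, which inflates the sample size of an equivalent individually randomised trial. A minimal sketch with illustrative numbers:

```python
import math

def cluster_trial_size(n_individual, cluster_size, icc):
    """Inflate an individually randomised sample size by the design effect.

    Returns (subjects per arm, whole clusters per arm) after applying
    design effect = 1 + (cluster_size - 1) * icc.
    """
    design_effect = 1 + (cluster_size - 1) * icc
    # round before ceil to guard against floating-point noise
    n_arm = math.ceil(round(n_individual * design_effect, 6))
    clusters = math.ceil(n_arm / cluster_size)
    return n_arm, clusters

# Example: 200 subjects per arm under individual randomisation,
# clusters of 20, ICC = 0.05: design effect = 1 + 19 * 0.05 = 1.95,
# giving 390 subjects and 20 clusters per arm.
n_arm, clusters = cluster_trial_size(200, 20, 0.05)
```

Even a small ICC inflates the trial substantially once clusters are large, which is why establishing a defensible planning value for the ICC is treated at length in the paper.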