
    Contribution of Nepal’s Free Delivery Care Policies in Improving Utilisation of Maternal Health Services

    Background: Nepal has made remarkable improvements in maternal health outcomes. The implementation of demand- and supply-side strategies has often been credited with the observed increase in utilisation of maternal healthcare services. In 2005, the Free Delivery Care (FDC) policy was implemented under the name of the Maternity Incentive Scheme (MIS), with the intention of reducing the transport costs associated with giving birth in a health facility. In 2009, the MIS was expanded to include free delivery services. The expanded programme was named the "Aama" programme, and additionally provided a cash incentive for attending four or more antenatal visits. This article analyses the influence of FDC policies and of individual- and community-level factors on the utilisation of four antenatal care (4 ANC) visits and institutional deliveries in Nepal. Methods: Demographic and Health Survey data from 1996, 2001, 2006 and 2011 were used, and a multilevel analysis was employed to determine the effect of the FDC policy intervention and of individual- and community-level factors on the utilisation of 4 ANC visits and institutional delivery services. Results: Multivariate analysis suggests that FDC policy had the largest effect on the utilisation of 4 ANC visits and institutional delivery compared with individual and community factors. After the implementation of the MIS in 2005, women were three times more likely (adjusted odds ratio [AOR]=3.020, P<.001) to attend 4 ANC visits than when there was no FDC policy. After the implementation of the Aama programme in 2009, the likelihood of attending 4 ANC visits increased six-fold (AOR=6.006, P<.001) compared with the period before the FDC policy. Similarly, institutional deliveries doubled after the implementation of the MIS (AOR=2.117, P<.001) compared with no FDC policy, and increased five-fold (AOR=5.116, P<.001) after the implementation of Aama.
Conclusion: Results from this study suggest that the MIS and Aama policies have had a strong positive influence on the utilisation of 4 ANC visits and institutional deliveries in Nepal. Nevertheless, the results also show that FDC policies may not be sufficient to raise demand for maternal health services without adequately considering individual- and community-level factors.

    Calculating the power of a planned individual participant data meta‐analysis of randomised trials to examine a treatment‐covariate interaction with a time‐to‐event outcome

    Before embarking on an individual participant data meta-analysis (IPDMA) project, researchers should consider the power of their planned IPDMA, conditional on the studies promising their IPD and on those studies' characteristics. Such power estimates help inform whether the IPDMA project is worth the time and funding investment before IPD are collected. Here, we suggest how to estimate the power of a planned IPDMA of randomised trials aiming to examine treatment-covariate interactions at the participant level (i.e., treatment effect modifiers). We focus on a time-to-event (survival) outcome with a binary or continuous covariate, and propose an approximate analytic power calculation that conditions on the actual characteristics of the trials, for example their sample sizes and covariate distributions. The proposed method has five steps: (i) extracting the following aggregate data for each group in each trial: the number of participants and events, the mean and SD of each continuous covariate, and the proportion of participants in each category of each binary covariate; (ii) specifying a minimally important interaction size; (iii) deriving an approximate estimate of Fisher's information matrix for each trial, and the corresponding variance of the interaction estimate per trial, based on an assumed exponential survival distribution; (iv) deriving the estimated variance of the summary interaction estimate from the planned IPDMA under a common-effect assumption; and (v) calculating the power of the IPDMA based on a two-sided Wald test. Stata and R code are provided, and a real example is given for illustration. Further evaluation in real examples and simulations is needed.
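
    As a rough sketch of steps (iv) and (v) only, the common-effect summary variance and the Wald-test power can be computed in a few lines. The paper itself supplies Stata and R code; the Python function and example numbers below are illustrative assumptions, and steps (i)-(iii), which derive each trial's interaction variance from its aggregate data, are taken as already done.

```python
from statistics import NormalDist

def ipdma_interaction_power(trial_variances, interaction, alpha=0.05):
    """Steps (iv)-(v): combine per-trial variances of the treatment-covariate
    interaction estimate under a common-effect model, then return the power
    of a two-sided Wald test for a minimally important interaction size."""
    # (iv) inverse-variance (common-effect) variance of the summary estimate
    var_summary = 1.0 / sum(1.0 / v for v in trial_variances)
    se = var_summary ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    # (v) power of the two-sided Wald test at significance level alpha
    u = abs(interaction) / se
    return NormalDist().cdf(u - z) + NormalDist().cdf(-u - z)

# e.g. two trials with interaction-variance estimates 0.04 and 0.09,
# and a minimally important interaction of 0.5 on the log hazard ratio scale
power = ipdma_interaction_power([0.04, 0.09], 0.5)
```

    As a sanity check, with an interaction of zero the function returns the significance level itself, as expected for a Wald test.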

    Calculating the power of a planned individual participant data meta‐analysis to examine prognostic factor effects for a binary outcome

    Collecting data for an individual participant data meta-analysis (IPDMA) project can be time-consuming and resource-intensive, and the resulting analysis could still have insufficient power to answer the question of interest. Therefore, researchers should consider the power of their planned IPDMA before collecting IPD. Here we propose a method to estimate the power of a planned IPDMA project aiming to synthesise multiple cohort studies to investigate the (unadjusted or adjusted) effects of potential prognostic factors for a binary outcome. We consider both binary and continuous factors and provide a three-step approach to estimating the power in advance of collecting IPD, under an assumed true prognostic effect for each factor of interest. The first step uses routinely available (published) aggregate data for each study to approximate Fisher's information matrix and thereby estimate the anticipated variance of the unadjusted prognostic factor effect in each study. These variances are then used in step 2 to estimate the anticipated variance of the summary prognostic effect from the IPDMA. Finally, step 3 uses this variance to estimate the corresponding IPDMA power, based on a two-sided Wald test and the assumed true effect. Extensions are provided to adjust the power calculation for the presence of additional covariates correlated with the prognostic factor of interest (by using a variance inflation factor) and to allow for between-study heterogeneity in prognostic effects. An example is provided for illustration, and Stata code is supplied to enable researchers to implement the method.
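
    To make the three steps concrete for a binary prognostic factor, here is a hypothetical Python sketch (the paper supplies Stata code; the function names and numbers below are assumptions). Step 1 uses the standard sum-of-reciprocal-cell-counts approximation to the variance of an unadjusted log odds ratio from each study's aggregate data; steps 2-3 then combine the variances and compute Wald-test power, with the variance inflation factor and heterogeneity extensions included as simple options.

```python
from math import log
from statistics import NormalDist

def var_log_or(events_f1, n_f1, events_f0, n_f0):
    """Step 1 (binary factor): approximate the variance of the unadjusted
    log odds ratio in one study from aggregate data alone: events and
    sample size in the factor-present (f1) and factor-absent (f0) groups."""
    return (1 / events_f1 + 1 / (n_f1 - events_f1)
            + 1 / events_f0 + 1 / (n_f0 - events_f0))

def ipdma_prognostic_power(study_variances, true_log_or,
                           vif=1.0, tau2=0.0, alpha=0.05):
    """Steps 2-3: variance of the summary prognostic effect (with an
    optional variance inflation factor for correlated adjustment covariates
    and tau2 for between-study heterogeneity), then the power of a
    two-sided Wald test at the assumed true effect."""
    var_summary = 1.0 / sum(1.0 / (v * vif + tau2) for v in study_variances)
    se = var_summary ** 0.5
    z = NormalDist().inv_cdf(1 - alpha / 2)
    u = abs(true_log_or) / se
    return NormalDist().cdf(u - z) + NormalDist().cdf(-u - z)

# e.g. three cohorts with identical 2x2 aggregate data and a true OR of 2
v = var_log_or(30, 100, 20, 100)
power = ipdma_prognostic_power([v] * 3, log(2))
```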

    From Rags to Riches: Assessing poverty and vulnerability in urban Nepal

    Urbanisation brings with it rapid socio-economic change, with volatile livelihoods and unstable ownership of assets. Yet current measures of wealth are based predominantly on the static livelihoods found in rural areas. We sought to assess the extent to which seven common measures of wealth appropriately capture vulnerability to poverty in urban areas, and then to develop a measure that captures the characteristics of one urban area in Nepal. We collected and analysed data from 1,180 households in a survey conducted between November 2017 and January 2018 and designed to be representative of the Kathmandu valley. A separate survey of a subset of households was conducted using participatory qualitative methods in slum and non-slum neighbourhoods. A series of currently used indices of deprivation were calculated from the questionnaire data. We used bivariate statistical methods to examine the association between each index and to identify characteristics of the poor and non-poor. Qualitative data were used to identify characteristics of poverty from the perspective of urban poor communities. These characteristics were used to construct an Urban Poverty Index combining asset-focused and consumption-focused, context-specific measures of poverty that could be proxied by easily measured indicators, as assessed through multivariate modelling. We found a strong but not perfect association between the measures of poverty. The consumption and deprivation indices disagreed on the classification of 19% of the sample; the choice between short-term monetary and longer-term capital approaches accounted for much of the difference. Those who reported migrating out of economic necessity were the most likely to be categorised as poor. A combined index was developed to capture these dimensions of poverty and understand urban vulnerability, and a second version of the index was constructed that can be computed from a smaller range of variables to identify those in poverty.
Current measures may hide important aspects of urban poverty. Those who migrate out of economic necessity are particularly vulnerable. A composite index of socioeconomic status helps to capture the complex nature of economic vulnerability.

    The prognostic utility of tests of platelet function for the detection of 'aspirin resistance' in patients with established cardiovascular or cerebrovascular disease: a systematic review and economic evaluation.

    BACKGROUND: The use of aspirin is well established for secondary prevention of cardiovascular disease. However, a proportion of patients suffer repeat cardiovascular events despite being prescribed aspirin treatment. It is uncertain whether or not this is due to an inherent inability of aspirin to sufficiently modify platelet activity. This report aims to investigate whether or not insufficient platelet function inhibition by aspirin ('aspirin resistance'), as defined using platelet function tests (PFTs), is linked to the occurrence of adverse clinical outcomes, and further, whether or not patients at risk of future adverse clinical events can be identified through PFTs. OBJECTIVES: To review systematically the clinical effectiveness and cost-effectiveness evidence regarding the association between PFT designation of 'aspirin resistance' and the risk of adverse clinical outcome(s) in patients prescribed aspirin therapy. To undertake exploratory model-based cost-effectiveness analysis on the use of PFTs. DATA SOURCES: Bibliographic databases (e.g. MEDLINE from inception and EMBASE from 1980), conference proceedings and ongoing trial registries up to April 2012. METHODS: Standard systematic review methods were used for identifying clinical and cost studies. A risk-of-bias assessment tool was adapted from checklists for prognostic and diagnostic studies. (Un)adjusted odds and hazard ratios for the association between 'aspirin resistance', for different PFTs, and clinical outcomes are presented; however, heterogeneity between studies precluded pooling of results. A speculative economic model of a PFT and change of therapy strategy was developed. RESULTS: One hundred and eight relevant studies using a variety of PFTs, 58 in patients on aspirin monotherapy, were analysed in detail. Results indicated that some PFTs may have some prognostic utility, i.e. a trend for more clinical events to be associated with groups classified as 'aspirin resistant'. 
Methodological and clinical heterogeneity prevented a quantitative summary of prognostic effect. Study-level effect sizes were generally small, and absolute outcome risk was not substantially different between 'aspirin resistant' and 'aspirin sensitive' designations. No studies on the cost-effectiveness of PFTs for 'aspirin resistance' were identified. Based on the assumptions that PFTs can accurately identify patients at high risk of clinical events and that such patients benefit from treatment modification, the economic model found that a test-treat strategy was likely to be cost-effective. However, neither assumption is currently evidence based. LIMITATIONS: Poor or incomplete reporting of studies suggests a potentially large volume of inaccessible data. Analyses were confined to studies of patients prescribed aspirin as sole antiplatelet therapy at the time of PFT. Clinical and methodological heterogeneity across studies precluded meta-analysis. Given the lack of robust data, the economic modelling was speculative. CONCLUSIONS: Although the evidence indicates that some PFTs may have some prognostic value, methodological and clinical heterogeneity between studies and differing approaches to analysis create confusion and inconsistency in prognostic results, and prevented a quantitative summary of their prognostic effect. Protocol-driven and adequately powered primary studies are needed, using standardised methods of measurement to evaluate the prognostic ability of each test in the same population(s), and ideally presenting individual patient data. For any PFT to inform individual risk prediction, it will likely need to be considered in combination with other prognostic factors, within a prognostic model. STUDY REGISTRATION: This study is registered as PROSPERO 2012:CRD42012002151. FUNDING: The National Institute for Health Research Health Technology Assessment programme.

    Variation in perception of environmental changes in nine Solomon Islands communities: implications for securing fairness in community-based adaptation

    Community-based approaches are pursued in recognition of the need for place-based responses to environmental change that integrate local understandings of risk and vulnerability. Yet the potential for fair adaptation is intimately linked to how variations in perceptions of environmental change and risk are treated. There is, however, little empirical evidence of the extent and nature of variations in risk perception within and between multiple community settings. Here, we rely on data from 231 semi-structured interviews conducted in nine communities in Western Province, Solomon Islands, to statistically model differential perceptions of risk and change within and between communities. Overall, people were less likely to perceive changes in the marine environment than in terrestrial systems. Distance to the nearest market town (which may be a proxy for exposure to commercial logging and degree of involvement with the market economy) and gender had the greatest overall statistical effects on perceptions of risk. Yet we also find that significant environmental change is under-reported in communities, while variations in perception are not always easily related to commonly assumed fault lines of vulnerability. The findings suggest an urgent need for methods that engage with the drivers of perceptions as part of community-based approaches. In particular, it is important to explicitly account for place, complexity and diversity of environmental risk perceptions, and we reinforce calls to engage seriously with the underlying questions of power, culture, identity and practice that influence adaptive capacity and risk perception.

    Budgeting based on need: a model to determine sub-national allocation of resources for health services in Indonesia

    BACKGROUND: Allocating national resources to regions based on need is a key policy issue in most health systems. Many systems utilise proxy measures of need as the basis for allocation formulae, increasingly underpinned by complex statistical methods to separate need from supplier-induced utilisation. The assessment of need is then used to allocate existing global budgets to geographic areas. Many low- and middle-income countries are beginning to use formula methods for funding; however, these attempts are often hampered by a lack of information on utilisation, relative needs and whether the budgets allocated bear any relationship to cost. An alternative is to develop bottom-up estimates of the cost of providing for local need. This method is viable where public funding is focused on a relatively small number of targeted services. We describe a bottom-up approach to developing a formula for the allocation of resources, illustrated in the context of the state minimum service package mandated to be provided by the Indonesian public health system. METHODS: A standardised costing methodology was developed that is sensitive to the main expected drivers of local cost variation, including demographic structure, epidemiology and location. Essential package costing is often undertaken at a country level; it is less usual to apply the methods across different parts of a country in a way that takes account of variation in population needs and location. Costing was based on best clinical practice in Indonesia and on province-specific data on the distribution and costs of facilities. The resulting model was used to estimate essential package costs in a representative district in each province of the country. FINDINGS: Substantial differences were found in the costs of providing basic services, ranging from USD 15 in urban Yogyakarta to USD 48 in sparsely populated North Maluku.
These costs are driven largely by the structure of the population, particularly the numbers of births, infants and children, and by key diseases with high cost/prevalence and variation, most notably the level of malnutrition. The approach to resource allocation was implemented using existing data sources and permitted the rapid construction of a needs-based formula that is highly specific to the package mandated across the country. Refinement could focus on the resources required to finance demand-side costs and on expansion of the service package to include priority non-communicable services.
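
    The skeleton of such a bottom-up, needs-driven formula can be sketched as an accumulation over package services, with need driven by demographic structure and a multiplier for remote locations. This is a hypothetical illustration only; the field names, rates and costs below are assumptions, not values from the Indonesian model.

```python
def district_package_cost(population, services, remoteness_multiplier=1.0):
    """Bottom-up estimate of an essential-package budget for one district.
    Each service's cost is its target-group size times the expected need
    rate, planned coverage, and unit cost; a district-level multiplier
    captures higher delivery costs in sparsely populated areas."""
    total = 0.0
    for s in services:
        target = population[s["target_group"]]  # e.g. births, under-fives
        total += target * s["need_rate"] * s["coverage"] * s["unit_cost"]
    return total * remoteness_multiplier

# illustrative numbers only
population = {"births": 1_000, "under_fives": 5_000}
services = [
    {"target_group": "births", "need_rate": 1.0,
     "coverage": 0.9, "unit_cost": 30.0},
    {"target_group": "under_fives", "need_rate": 0.2,
     "coverage": 0.8, "unit_cost": 10.0},
]
budget = district_package_cost(population, services, remoteness_multiplier=1.2)
```

    Keeping the demographic drivers and remoteness adjustment as explicit inputs is what allows the same standardised costing to produce different, needs-specific budgets across districts.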

    Minimum sample size for developing a multivariable prediction model using multinomial logistic regression

    Aims: Multinomial logistic regression models allow one to predict the risk of a categorical outcome with more than two categories. When developing such a model, researchers should ensure the number of participants (n) is appropriate relative to the number of events (E_k) and the number of predictor parameters (p_k) for each category k. We propose three criteria to determine the minimum n required, in light of existing criteria developed for binary outcomes. Proposed criteria: The first criterion aims to minimise model overfitting. The second aims to minimise the difference between the observed and adjusted Nagelkerke R2. The third aims to ensure the overall risk is estimated precisely. For criterion (i), we show the sample size must be based on the anticipated Cox-Snell R2 of the distinct 'one-to-one' logistic regression models corresponding to the sub-models of the multinomial logistic regression, rather than on the overall Cox-Snell R2 of the multinomial logistic regression. Evaluation of criteria: We tested the performance of criterion (i) through a simulation study and found that it resulted in the desired level of overfitting. Criteria (ii) and (iii) were natural extensions of previously proposed criteria for binary outcomes and did not require evaluation through simulation. Summary: We illustrate how to implement the sample size criteria through a worked example considering the development of a multinomial risk prediction model for tumour type when presented with an ovarian mass. Code is provided for the simulation and the worked example. We will embed our proposed criteria within the pmsampsize R library and Stata modules.
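
    Criterion (i) can be sketched by applying the existing binary-outcome shrinkage-based formula to each one-to-one sub-model and taking the largest requirement. This is a hypothetical Python illustration under that simplification; the paper's own code and the pmsampsize implementation may differ (for example, in how category-specific sample fractions are handled), and the numbers below are assumptions.

```python
from math import ceil, log

def n_overfit_binary(p, r2_cs, shrinkage=0.9):
    """Binary-outcome form of criterion (i): minimum sample size for a
    logistic model with p predictor parameters and anticipated Cox-Snell
    R-squared r2_cs, targeting an expected shrinkage factor (default 0.9)."""
    return ceil(p / ((shrinkage - 1) * log(1 - r2_cs / shrinkage)))

def n_multinomial_overfit(submodels, shrinkage=0.9):
    """Apply the criterion to each distinct 'one-to-one' logistic sub-model,
    given as (p_k, anticipated Cox-Snell R2) pairs, and take the largest
    requirement as the overall minimum n."""
    return max(n_overfit_binary(p, r2, shrinkage) for p, r2 in submodels)

# e.g. a three-category outcome with two sub-models of 10 parameters each
n = n_multinomial_overfit([(10, 0.20), (10, 0.15)])
```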

    Evaluation of clinical prediction models (part 2): how to undertake an external validation study

    External validation studies are an important but often neglected part of prediction model research. In this article, the second in a series on model evaluation, Riley and colleagues explain what an external validation study entails and describe the key steps involved, from establishing a high-quality dataset to evaluating a model's predictive performance and clinical usefulness.