    A Microsoft-Excel-based tool for running and critically appraising network meta-analyses--an overview and application of NetMetaXL.

    BACKGROUND: The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software for conducting network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that graphical output often relies on software packages other than the one used for the analyses themselves. METHODS: We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. The tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. RESULTS: We demonstrate the application of NetMetaXL using data from a previously published network meta-analysis comparing combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating the result summaries generated by the software. CONCLUSIONS: The freely available NetMetaXL makes running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses, enhanced standardization of reporting, and integration with health economic evaluations, which are frequently Excel-based.
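
    As a purely illustrative aside (not NetMetaXL's own VBA/WinBUGS code), the sketch below shows the Bucher-style indirect comparison and inverse-variance pooling that underlie a simple network meta-analysis; the log odds ratios and standard errors are invented for demonstration.

        # Minimal sketch of the arithmetic behind an indirect comparison in a
        # network meta-analysis; not NetMetaXL code, and all numbers are made up.
        import math

        def indirect_estimate(d_ab, se_ab, d_cb, se_cb):
            """Indirect A-vs-C estimate via common comparator B: d_AC = d_AB - d_CB."""
            d_ac = d_ab - d_cb
            se_ac = math.sqrt(se_ab**2 + se_cb**2)  # variances add for independent estimates
            return d_ac, se_ac

        def pooled_estimate(estimates):
            """Fixed-effect inverse-variance pooling of (estimate, standard error) pairs."""
            weights = [1.0 / se**2 for _, se in estimates]
            pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
            return pooled, math.sqrt(1.0 / sum(weights))

        # Hypothetical log odds ratios: A vs B and C vs B from separate trials.
        d_ac, se_ac = indirect_estimate(-0.35, 0.12, -0.10, 0.15)
        direct = (-0.30, 0.20)  # hypothetical direct A-vs-C estimate
        print(pooled_estimate([(d_ac, se_ac), direct]))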

    Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review

    Background: Rigorous, informative meta-analyses rely on the availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited-reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement it, any underlying assumptions, and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary-statistic-level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD from other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that, when replacing a missing SD, the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios omitting trials gave superior results. Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
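
    The sketch below illustrates the kind of summary-statistic approximations the review discusses, using commonly quoted rules of thumb (SD from the range or interquartile range, mean from the quartiles); the exact formulas evaluated in the paper may differ, and all values are invented.

        # Commonly quoted approximations for a missing SD or mean; illustrative only,
        # and they assume the outcome is roughly symmetric.

        def sd_from_range(minimum, maximum):
            """Rough SD approximation from the range (often quoted as range/4)."""
            return (maximum - minimum) / 4.0

        def sd_from_iqr(q1, q3):
            """SD approximation from the interquartile range under near-normality."""
            return (q3 - q1) / 1.35

        def mean_from_quartiles(q1, median, q3):
            """Mean approximation from the median and quartiles."""
            return (q1 + median + q3) / 3.0

        print(sd_from_range(2.0, 18.0))             # -> 4.0
        print(sd_from_iqr(4.0, 12.0))               # -> ~5.93
        print(mean_from_quartiles(4.0, 7.0, 12.0))  # -> ~7.67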

    Stress degradation studies and development of stability-indicating TLC-densitometry method for determination of prednisolone acetate and chloramphenicol in their individual and combined pharmaceutical formulations

    A rapid and reproducible stability-indicating TLC method was developed for the determination of prednisolone acetate and chloramphenicol in the presence of their degradation products. Uniform degradation conditions were maintained by refluxing sixteen reaction mixtures for two hours at 80°C in a parallel synthesizer, covering acidic, alkaline and neutral hydrolysis, oxidation and wet-heat degradation. Oxidation at room temperature, photochemical degradation and dry-heat degradation studies were also carried out. Separation was performed on TLC glass plates pre-coated with silica gel 60F-254 using chloroform:methanol (14:1 v/v). Spots at Rf 0.21 ± 0.02 and Rf 0.41 ± 0.03 were identified as chloramphenicol and prednisolone acetate, respectively. Quantitative analysis was performed by densitometric measurement at two wavelengths simultaneously (243 nm, λmax of prednisolone acetate, and 278 nm, λmax of chloramphenicol). The developed method was optimized and validated as per ICH guidelines. The method was linear over the concentration range of 200-6000 ng/spot, with correlation coefficients (r2 ± S.D.) of 0.9976 ± 3.5 and 0.9920 ± 2.5 for prednisolone acetate and chloramphenicol, respectively. The developed TLC method can be applied for routine analysis of prednisolone acetate and chloramphenicol in the presence of their degradation products in their individual and combined pharmaceutical formulations.
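
    As an illustration of the linearity assessment mentioned above, the sketch below fits a calibration line and computes r² for a set of invented concentration/response pairs; it is not the authors' data or software.

        # Hypothetical calibration-linearity check of the kind used in ICH-style
        # validation; the concentrations and densitometric responses are invented.
        import numpy as np

        conc = np.array([200, 1000, 2000, 3000, 4500, 6000], dtype=float)  # ng/spot
        area = np.array([310, 1490, 2980, 4420, 6700, 8950], dtype=float)  # peak area

        slope, intercept = np.polyfit(conc, area, 1)
        predicted = slope * conc + intercept
        ss_res = np.sum((area - predicted) ** 2)
        ss_tot = np.sum((area - area.mean()) ** 2)
        r_squared = 1 - ss_res / ss_tot

        print(f"slope={slope:.3f}, intercept={intercept:.1f}, r^2={r_squared:.4f}")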

    Bayesian Hierarchical Models Combining Different Study Types and Adjusting for Covariate Imbalances: A Simulation Study to Assess Model Performance

    BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient-level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and in the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for within-study differences gave results that were unbiased and closest to the true values, compared with the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method, adjusted for differences in patient characteristics between study arms, may facilitate the optimal use of all available evidence, leading to unbiased results compared with unadjusted analyses.
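
    A minimal simulation in the spirit of the study design (not the authors' Bayesian hierarchical model) is sketched below: a covariate influences both treatment allocation and outcome in a non-randomised study, and an unadjusted difference in means is compared with a covariate-adjusted regression estimate; all parameter values are invented.

        # Minimal sketch: bias from a covariate imbalance in a non-randomised study,
        # comparing an unadjusted with a covariate-adjusted estimate. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        true_effect = 0.5
        n = 2000

        # Baseline severity drives both treatment allocation and outcome (confounding).
        severity = rng.normal(size=n)
        p_treat = 1 / (1 + np.exp(-1.5 * severity))
        treated = rng.binomial(1, p_treat)
        outcome = true_effect * treated - 0.8 * severity + rng.normal(size=n)

        # Unadjusted estimate: difference in means (biased by the imbalance).
        unadjusted = outcome[treated == 1].mean() - outcome[treated == 0].mean()

        # Adjusted estimate: least-squares regression on treatment and the covariate.
        X = np.column_stack([np.ones(n), treated, severity])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

        print(f"true={true_effect}, unadjusted={unadjusted:.2f}, adjusted={beta[1]:.2f}")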

    Multi-parallel qPCR provides increased sensitivity and diagnostic breadth for gastrointestinal parasites of humans: field-based inferences on the impact of mass deworming

    BACKGROUND: Although chronic morbidity in humans from soil-transmitted helminth (STH) infections can be reduced by anthelmintic treatment, inconsistent diagnostic tools make it difficult to reliably measure the impact of deworming programs, and light helminth infections are often missed. METHODS: Cryopreserved stool samples from 796 people (aged 2-81 years) in four villages in Bungoma County, western Kenya, were assessed using multi-parallel qPCR for 8 parasites and compared to point-of-contact assessments of the same stools by the 2-stool, 2-slide Kato-Katz (KK) method. All subjects were treated with albendazole and all Ascaris lumbricoides expelled post-treatment were collected. Three months later, 633 of these people were re-assessed by both qPCR and KK, re-treated with albendazole, and the expelled worms again collected. RESULTS: Baseline prevalence by qPCR (n = 796) was 17% for A. lumbricoides, 18% for Necator americanus, 41% for Giardia lamblia and 15% for Entamoeba histolytica. The prevalence was <1% for Trichuris trichiura, Ancylostoma duodenale, Strongyloides stercoralis and Cryptosporidium parvum. The sensitivity of qPCR was 98% for both A. lumbricoides and N. americanus, whereas KK sensitivity was 70% and 32%, respectively. Furthermore, qPCR detected infections with T. trichiura and S. stercoralis that were missed by KK, and infections with G. lamblia and E. histolytica that cannot be detected by KK. Infection intensities measured by qPCR and by KK were correlated for A. lumbricoides (r = 0.83, p < 0.0001) and N. americanus (r = 0.55, p < 0.0001). The number of A. lumbricoides worms expelled was correlated (p < 0.0001) with both the KK (r = 0.63) and qPCR intensity measurements (r = 0.60). CONCLUSIONS: KK may be an inadequate tool for stool-based surveillance in areas where hookworm or Strongyloides are common or where the intensity of helminth infection is low after repeated rounds of chemotherapy. Because deworming programs need to distinguish between populations where parasitic infection is controlled and those where further treatment is required, multi-parallel qPCR (or similar high-throughput molecular diagnostics) may provide new and important diagnostic information.
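
    The sketch below shows, with invented toy data, how the headline comparisons above can be computed: sensitivity of Kato-Katz against qPCR as the reference standard, and a rank correlation between the two intensity measures (the paper's correlation coefficient may have been computed differently).

        # Illustrative computation of diagnostic sensitivity and intensity correlation;
        # the arrays are invented toy data, not the study's results.
        import numpy as np
        from scipy.stats import spearmanr

        qpcr_positive = np.array([1, 1, 1, 1, 1, 0, 0, 1, 1, 0], dtype=bool)
        kk_positive   = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0], dtype=bool)

        true_pos = np.sum(qpcr_positive & kk_positive)
        false_neg = np.sum(qpcr_positive & ~kk_positive)
        sensitivity = true_pos / (true_pos + false_neg)   # KK vs qPCR as reference

        # Correlation of infection-intensity measures (e.g. qPCR-derived vs eggs per gram).
        qpcr_intensity = np.array([35.1, 12.4, 80.2, 55.0, 8.9, 0.0, 0.0, 60.3, 20.1, 0.0])
        kk_epg         = np.array([1200, 0, 2400, 1800, 0, 0, 0, 2100, 0, 0])
        rho, p_value = spearmanr(qpcr_intensity, kk_epg)

        print(f"KK sensitivity vs qPCR: {sensitivity:.2f}, Spearman rho: {rho:.2f}")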

    Ecological Modeling of Aedes aegypti (L.) Pupal Production in Rural Kamphaeng Phet, Thailand

    Background - Aedes aegypti (L.) is the primary vector of dengue, the most important arboviral infection globally. Until an effective vaccine is licensed and rigorously administered, Ae. aegypti control remains the principal tool in preventing and curtailing dengue transmission. Accurate predictions of vector populations are required to assess control methods and develop effective population reduction strategies. Ae. aegypti develops primarily in artificial water-holding containers. Release-recapture studies indicate that most adult Ae. aegypti do not disperse over long distances. We expect, therefore, that containers in an area of high development-site density are more likely to be oviposition sites, and to be used as oviposition sites more frequently, than containers that are relatively isolated from other development sites. After accounting for individual container characteristics, containers more frequently used as oviposition sites are likely to produce adult mosquitoes more consistently and at a higher rate. To date, most studies of Ae. aegypti populations have ignored the spatial density of larval development sites. Methodology - Pupal surveys were carried out from 2004 to 2007 in rural Kamphaeng Phet, Thailand. In total, 84,840 samples of water-holding containers were used to estimate model parameters. Regression modeling was used to assess the effect of larval development-site density, access to piped water, and seasonal variation on container productivity. A varying-coefficients model was employed to account for the large differences in productivity between container types. A two-part modeling structure, called a hurdle model, accounts for the large number of zeroes and the overdispersion present in pupal counts. Findings - The number of suitable larval development sites and their density in the environment were the primary determinants of the distribution and abundance of Ae. aegypti pupae. The productivity of most container types increased significantly as habitat density increased. An ecological approach that accounts for development-site density is appropriate for predicting Ae. aegypti population levels and for developing efficient vector control programs.
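
    A minimal two-part (hurdle) regression sketch with simulated data is given below: a logistic model for whether a container holds any pupae, and a count model for positive containers. It uses statsmodels with a plain Poisson for the positive part as a simplification (a zero-truncated count model would be more faithful) and omits the varying-coefficients structure used in the paper; all data are simulated.

        # Minimal hurdle-model sketch with simulated container data; not the paper's model.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        habitat_density = rng.uniform(0, 10, n)   # nearby larval development sites
        piped_water = rng.binomial(1, 0.4, n)
        X = sm.add_constant(np.column_stack([habitat_density, piped_water]))

        # Simulated pupal counts with excess zeros.
        p_occupied = 1 / (1 + np.exp(-(-2.0 + 0.3 * habitat_density)))
        occupied = rng.binomial(1, p_occupied)
        counts = occupied * rng.poisson(np.exp(0.5 + 0.1 * habitat_density))

        # Part 1: does the container hold any pupae at all?
        logit_fit = sm.Logit((counts > 0).astype(int), X).fit(disp=False)

        # Part 2: counts among positive containers (plain Poisson as a simplification;
        # a zero-truncated count model would match a true hurdle specification).
        pos = counts > 0
        poisson_fit = sm.GLM(counts[pos], X[pos], family=sm.families.Poisson()).fit()

        print(logit_fit.params, poisson_fit.params)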

    Medication administration errors for older people in long-term residential care

    Background Older people in long-term residential care are at increased risk of medication errors. The purpose of this study was to evaluate a computerised barcode medication management system designed to improve drug administration in residential and nursing homes, including comparison of error rates and staff awareness in both settings. Methods All medication administrations were recorded prospectively for 345 older residents in thirteen care homes during a 3-month period using the computerised system. Staff were surveyed to identify their awareness of administration errors prior to system introduction. Overall, 188,249 attempts to administer medication were analysed to determine the prevalence of potential medication administration errors (MAEs). Error classifications included attempts to administer medication at the wrong time, to the wrong person, or after it had been discontinued. Analyses compared data between residential and nursing homes and between care and nursing staff groups. Results Typically, each resident was exposed to 206 medication administration episodes every month and received nine different drugs. Administration episodes were more numerous (p < 0.01) in nursing homes (226.7 per resident) than in residential homes (198.7). Prior to technology introduction, only 12% of staff administering drugs reported that they were aware of administration errors being averted in their care home. Following technology introduction, 2,289 potential MAEs were recorded over three months. The most common MAE was attempting to give medication at the wrong time. On average, each resident was exposed to 6.6 potential errors. In total, 90% of residents were exposed to at least one MAE, with over half (52%) exposed to serious errors such as attempts to give medication to the wrong resident. MAE rates were significantly lower (p < 0.01) in residential homes than in nursing homes. The level of non-compliance with system alerts was low in both settings (0.075% of administrations), demonstrating virtually complete error avoidance. Conclusion Potentially inappropriate administration of medication is a serious problem in long-term residential care. A computerised barcode system can accurately and automatically detect inappropriate attempts to administer drugs to residents. This tool can be used reliably by care staff as well as nurses to improve quality of care and patient safety.
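
    As a purely hypothetical illustration of the kinds of checks such a system performs at the point of administration (the evaluated system's actual rules are not described here), the sketch below flags the three MAE classes mentioned above: wrong resident, discontinued medication, and wrong time.

        # Hypothetical point-of-administration checks; not the evaluated system's logic.
        from datetime import datetime, timedelta

        def check_administration(scanned_resident_id, now, prescription):
            """Return any potential medication administration errors (MAEs) detected."""
            errors = []
            if scanned_resident_id != prescription["resident_id"]:
                errors.append("wrong resident")
            if prescription.get("discontinued", False):
                errors.append("discontinued medication")
            if abs(now - prescription["scheduled_time"]) > timedelta(hours=1):
                errors.append("wrong time")
            return errors

        prescription = {
            "resident_id": "R-102",                      # hypothetical identifiers
            "drug": "ramipril 2.5 mg",
            "scheduled_time": datetime(2024, 5, 1, 8, 0),
            "discontinued": False,
        }
        print(check_administration("R-102", datetime(2024, 5, 1, 9, 30), prescription))
        # -> ['wrong time']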

    XRCC1 gene polymorphisms in a population sample and in women with a family history of breast cancer from Rio de Janeiro (Brazil)

    The X-ray repair cross-complementing group 1 (XRCC1) gene has been defined as essential in the base excision repair (BER) and single-strand break repair processes. This gene is highly polymorphic, and the most extensively studied genetic changes are in exon 6 (Arg194Trp) and exon 10 (Arg399Gln). These changes, at conserved protein sites, may alter base excision repair capacity, increasing susceptibility to adverse health conditions, including cancer. In the present study, we estimated the frequencies of the XRCC1 polymorphisms Arg194Trp and Arg399Gln in healthy individuals from Rio de Janeiro and in women at risk of breast cancer due to family history. The common genotypes at both positions (194 and 399) were the most frequent in this Brazilian sample. Although the 194Trp variant was over-represented in women reporting familial cases of breast cancer, no statistically significant differences in genotype distribution or intragenic interactions were found between this group and the controls. Thus, in the population analyzed here, the Arg194Trp and Arg399Gln variants did not appear to have any impact on breast cancer susceptibility.
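
    The genotype-distribution comparison described above can be illustrated with a chi-square test on a contingency table of genotype counts; the counts below are invented and do not come from the study.

        # Illustrative chi-square test of codon 194 genotype counts between groups;
        # the counts are invented, not the study's data.
        import numpy as np
        from scipy.stats import chi2_contingency

        # Rows: controls, women with a family history of breast cancer.
        # Columns: Arg/Arg, Arg/Trp, Trp/Trp genotypes at codon 194.
        counts = np.array([
            [150, 28, 2],
            [70,  20, 2],
        ])

        chi2, p_value, dof, expected = chi2_contingency(counts)
        print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}")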
