
    Disappeared persons and homicide in El Salvador

    During 2012–2013, the homicide rate in El Salvador fell from 69.9 to 42.2 per 100,000 population following a government-brokered truce between the leaders of the two major gangs, Mara Salvatrucha and Barrio 18. Despite the apparent successes of the truce, it was speculated that the drop in murders could have been due to killers simply hiding the bodies of their victims. This paper aims to determine whether gangs disappeared their victims to lower the official murder counts, or whether they committed these crimes for other reasons. The results from this study suggest that Salvadoran gangs have been using disappearance as a method of sustaining social control over residents of areas they already dominate. Together with homicide, disappearance is part of a process of territorial spread and strategic strengthening through which these groups enhance their capacity to take part in the alliances between Mexican drug trafficking organizations and Central American criminal organizations specializing in the trans-shipment of drugs and in providing access to the local markets where drugs are distributed and sold. Our findings show that the risk of disappearance was high even before the truce was in place, and that it remains so and is undergoing geographic expansion.

    Extending DerSimonian and Laird's methodology to perform network meta-analyses with random inconsistency effects.

    Network meta-analysis is becoming more popular as a way to compare multiple treatments simultaneously. Here, we develop a new estimation method for fitting models for network meta-analysis with random inconsistency effects. This method is an extension of the procedure originally proposed by DerSimonian and Laird. Our methodology allows for inconsistency within the network. The proposed procedure is semi-parametric, non-iterative, fast and highly accessible to applied researchers. The methodology is found to perform satisfactorily in a simulation study provided that the sample size is large enough and the extent of the inconsistency is not very severe. We apply our approach to two real examples.

    DJ, RT and IRW are employed by the UK Medical Research Council (code U105260558). JB is supported by the UK MRC grant numbers G0902100 and MR/K014811/1. This is the final version of the article. It first appeared from Wiley via http://dx.doi.org/10.1002/sim.675
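The classic DerSimonian and Laird procedure that this paper extends can be sketched for a single pairwise comparison. The following is a minimal illustrative implementation of the original moment estimator only; the paper's network extension with random inconsistency effects is not reproduced here.

```python
def dersimonian_laird(effects, variances):
    """Classic DerSimonian-Laird random-effects meta-analysis
    for a single pairwise comparison.

    effects   : per-study effect estimates (e.g. log odds ratios)
    variances : per-study within-study variances
    Returns (pooled effect, its variance, tau-squared).
    """
    # Fixed-effect (inverse-variance) weights
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw

    # Cochran's Q statistic measures excess heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)

    # Method-of-moments estimate of the between-study variance,
    # truncated at zero
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights incorporate tau-squared
    w_re = [1.0 / (v + tau2) for v in variances]
    sw_re = sum(w_re)
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sw_re
    return pooled, 1.0 / sw_re, tau2
```

The method is non-iterative, which is what makes the extended procedure fast and accessible compared with likelihood-based alternatives.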

    Measurement delay associated with the Guardian RT continuous glucose monitoring system.

    AIMS: Using compartment modelling, we assessed the time delay between blood glucose and sensor glucose measured by the Guardian RT continuous glucose monitoring system in young subjects with Type 1 diabetes (T1D). METHODS: Twelve children and adolescents with T1D treated by continuous subcutaneous insulin infusion (male/female 7/5; age 13.1 +/- 4.2 years; body mass index 21.9 +/- 4.3 kg/m(2); mean +/- sd) were studied over 19 h in a Clinical Research Facility. Guardian RT was calibrated every 6 h and sensor glucose measured every 5 min. Reference blood glucose was measured every 15 min using a YSI 2300 STAT Plus Analyser. A population compartment model of sensor glucose-blood glucose kinetics was adopted to estimate the time delay, the calibration scale and the calibration shift. RESULTS: The population median of the time delay was 15.8 (interquartile range 15.2, 16.5) min, which was corroborated by correlation analysis between blood glucose and 15-min delayed sensor glucose. The delay had relatively low intersubject variability, with 95% of individuals predicted to have delays between 10.4 and 24.3 min. Population medians (interquartile range) for the scale and shift were 0.800 (0.777, 0.823) (unitless) and 1.66 (1.47, 1.84) mmol/l, respectively. CONCLUSIONS: In young subjects with T1D, the total time delay associated with the Guardian RT system was approximately 15 min. This is twice that expected on physiological grounds, suggesting a 5- to 10-min delay due to data processing. Delays above 25 min are expected to be observed only rarely.

    A Microsoft-Excel-based tool for running and critically appraising network meta-analyses--an overview and application of NetMetaXL.

    This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

    BACKGROUND: The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software to conduct network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that generating graphical output from network meta-analyses often relies on software packages other than the one used for the analyses themselves. METHODS: We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. This tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, thereby increasing the efficiency of knowledge transfer and manuscript preparation. RESULTS: We demonstrate the application of NetMetaXL using data from a previously published network meta-analysis comparing combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate results from the previous publication while demonstrating result summaries generated by the software. CONCLUSIONS: Use of the freely available NetMetaXL successfully demonstrated its ability to make running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses, enhanced standardization of reporting, and integration with health economic evaluations, which are frequently Excel-based.

    CC is a recipient of a Vanier Canada Graduate Scholarship from the Canadian Institutes of Health Research (funding reference number CGV 121171) and is a trainee on the Canadian Institutes of Health Research Drug Safety and Effectiveness Network team grant (funding reference number 116573). BH is funded by a New Investigator award from the Canadian Institutes of Health Research and the Drug Safety and Effectiveness Network. This research was partly supported by funding from CADTH as part of a project to develop Excel-based tools to support the conduct of health technology assessments. This research was also supported by Cornerstone Research Group.
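The league tables of pairwise comparisons mentioned above rest on a simple identity of a consistent network meta-analysis: the effect of treatment B relative to treatment A is the difference of their basic parameters against a common reference. A minimal sketch of that tabulation step, with illustrative treatment names (NetMetaXL itself implements this in VBA on WinBUGS output):

```python
def league_table(effects):
    """Build all pairwise contrasts from basic parameters.

    effects : dict mapping treatment name -> pooled effect versus a
              common reference treatment (reference itself maps to 0).
    Returns a dict (row, col) -> effect of col relative to row.
    """
    return {(a, b): eb - ea
            for a, ea in effects.items()
            for b, eb in effects.items()
            if a != b}
```

For example, with reference-based effects {"placebo": 0.0, "A": -0.5, "B": -0.8}, the table entry for ("A", "B") is -0.3, i.e. B improves on A by the difference of their effects versus placebo.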

    Exploring variations in childhood stunting in Nigeria using league table, control chart and spatial analysis

    Background: Stunting (linear growth retardation) is the best measure of child health inequalities, as it captures multiple dimensions of children’s health, development and the environment where they live. Data on developmental priorities and socially acceptable health norms and practices across the regions and states of Nigeria remain disaggregated; with this comes the challenge of ascertaining which regions and states have high or low childhood stunting, so that risk factors can be further investigated and recommendations made for action-oriented policy decisions. Methods: We used data from the birth histories included in the 2008 Nigeria Demographic and Health Survey (DHS) to estimate childhood stunting. Stunting was defined as height-for-age below minus two standard deviations from the median height-for-age of the standard World Health Organization reference population. We plotted control charts of the proportion of childhood stunting for the 37 states (including the federal capital, Abuja) in Nigeria. Local Indicators of Spatial Association (LISA) were used as a measure of overall clustering, assessed by testing a null hypothesis of no spatial clustering. Results: Childhood stunting is high in Nigeria, with an average of about 39%. The percentage of stunted children ranged from 11.5% in Anambra State to as high as 60% in Kebbi State. Ranking the states by childhood stunting, Anambra and Lagos States had the lowest proportions (11.5% and 16.8%, respectively), while Yobe, Zamfara, Katsina, Plateau and Kebbi had the highest (more than 50% of their under-fives had stunted growth). Conclusions: Childhood stunting is high in Nigeria and varies significantly across the states, with the northern states having a higher proportion than the southern states. There is an urgent need for studies to explore the factors that may be responsible for these special-cause variations in childhood stunting in Nigeria.
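The control-chart step described above can be sketched as a p-chart: a state shows special-cause variation when its stunting proportion falls outside limits set a chosen number of standard errors around the national average. This is an illustrative sketch under assumed counts and sample sizes, not the DHS data; the 39% average is taken from the abstract.

```python
import math

def p_chart_limits(overall_p, n, sigma=3.0):
    """Control limits for a proportion (p-chart) with sample size n.
    Proportions outside these limits indicate special-cause variation."""
    se = math.sqrt(overall_p * (1.0 - overall_p) / n)
    lower = max(0.0, overall_p - sigma * se)
    upper = min(1.0, overall_p + sigma * se)
    return lower, upper

def flag_states(states, overall_p):
    """states: dict name -> (stunted_count, sample_size).
    Returns states outside the control limits, labelled 'below'/'above'."""
    flagged = {}
    for name, (x, n) in states.items():
        lo, hi = p_chart_limits(overall_p, n)
        p = x / n
        if p < lo:
            flagged[name] = "below"
        elif p > hi:
            flagged[name] = "above"
    return flagged
```

With hypothetical samples of 400 children per state, a state at 11.5% stunting falls below the lower limit around the 39% national average, and one at 60% falls above the upper limit, mirroring the Anambra/Kebbi contrast in the abstract.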

    A spatio‑temporal model of homicide in El Salvador

    This paper examines the spatio-temporal evolution of homicide across the municipalities of El Salvador. It aims to identify both temporal trends and spatial clusters that may contribute to the formation of time-stable corridors lying behind a historically recurrent high homicide rate. The results from this study reveal the presence of significant clusters of high-homicide municipalities in the western part of the country that have remained stable over time, and a process of formation of high-homicide clusters in the eastern region. The results show an increasing homicide trend from 2002 to 2013, with significant municipality-specific differential trends across the country. The data suggest that links may exist between the dynamics of homicide rates, drug trafficking and organized crime.

    Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study

    BACKGROUND: Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. RESULTS: The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a Bayesian mixed-model approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. CONCLUSIONS: Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to substantially improve the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
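The traditional calibration the paper contrasts against can be sketched as fitting a standard curve, an ordinary least-squares line of assay signal on known log density, and then inverting it to estimate the density of unknown samples. This is an illustrative sketch only; the function names and the assumption of a single linear curve per assay are mine, and the paper's Bayesian mixed model is not reproduced here.

```python
def fit_standard_curve(log_density, signal):
    """Ordinary least-squares fit: signal = a + b * log10(density),
    using standards of known pathogen density. Returns (a, b)."""
    n = len(signal)
    mx = sum(log_density) / n
    my = sum(signal) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(log_density, signal))
         / sum((x - mx) ** 2 for x in log_density))
    a = my - b * mx
    return a, b

def estimate_density(a, b, signal):
    """Invert the fitted standard curve to estimate the density of an
    unknown sample from its assay signal."""
    return 10 ** ((signal - a) / b)
```

Because each assay's curve is fitted independently, estimates from replicate assays of the same sample can disagree, which is exactly the inter-assay variability the Bayesian mixed model is designed to absorb.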

    Cerebrospinal fluid metallomics in cerebral amyloid angiopathy: an exploratory analysis.

    INTRODUCTION: Cerebral amyloid angiopathy (CAA) is associated with symptomatic intracerebral haemorrhage. Biomarkers of clinically silent bleeding events, such as cerebrospinal fluid (CSF) ferritin and iron, might provide novel measures of disease presence and severity. METHODS: We performed an exploratory study comparing CSF iron, ferritin and other metal levels in patients with CAA, control subjects (CS) and patients with Alzheimer's disease (AD). Ferritin was measured using a latex fixation test; metal analyses were performed using inductively coupled plasma mass spectrometry. RESULTS: CAA patients (n = 10) had higher levels of CSF iron than the AD (n = 20) and CS (n = 10) groups (medians 23.42, 15.48 and 17.71 μg/L, respectively; p = 0.0015); the difference between the CAA and AD groups was significant in unadjusted and age-adjusted analyses. We observed a difference in CSF ferritin (medians 10.10, 7.77 and 8.01 ng/ml for the CAA, AD and CS groups, respectively; p = 0.01); the difference between the CAA and AD groups was significant in unadjusted, but not age-adjusted, analyses. We also observed differences between the CAA and AD groups in CSF nickel and cobalt (unadjusted analyses). CONCLUSIONS: In this exploratory study, we provide preliminary evidence for a distinct CSF metallomic profile in patients with CAA. Replication and validation of these results in larger cohorts are needed.

    Bayesian Hierarchical Models Combining Different Study Types and Adjusting for Covariate Imbalances: A Simulation Study to Assess Model Performance

    BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient-level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave unbiased results that were closest to the true value among the models compared. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence, leading to unbiased results compared with unadjusted analyses.
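Why covariate imbalance biases a naive arm comparison, and why adjustment removes the bias, can be shown with a deterministic toy example. This is an illustrative sketch of the general principle, not the paper's Bayesian hierarchical model: outcomes are generated as treatment effect plus a known covariate contribution, the treated arm is given systematically higher covariate values, and the covariate slope is assumed known.

```python
def naive_effect(treat_outcomes, control_outcomes):
    """Unadjusted difference in arm means (biased under imbalance)."""
    return (sum(treat_outcomes) / len(treat_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

def adjusted_effect(treat, control, beta):
    """Covariate-adjusted arm comparison.

    treat, control : lists of (outcome, covariate) pairs
    beta           : known prognostic slope of the covariate
    Subtracting beta * covariate from each outcome removes the
    covariate's contribution before comparing arm means."""
    t = [y - beta * x for y, x in treat]
    c = [y - beta * x for y, x in control]
    return sum(t) / len(t) - sum(c) / len(c)
```

With a true treatment effect of 1.0, a covariate slope of 2.0, and treated patients whose covariate values are one unit higher on average, the naive estimate is 3.0 while the adjusted estimate recovers 1.0, which is the phenomenon the simulation study quantifies across its six scenarios.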