
    Occupational lung diseases among former goldminers in two labour sending areas

    Objectives. To compare and contrast the prevalence of pneumoconiosis in two groups of former migrant mineworkers in southern Africa, and to examine the effectiveness of the South African compensation system for occupational lung diseases. Design. Comparison of two cross-sectional studies and follow-up data on compensation results. Setting. The village of Thamaga, Botswana, and the rural area of Libode, Eastern Cape, South Africa. Subjects. Two hundred and thirty-four former underground mineworkers in Thamaga, and 238 in Libode. Main outcome measures. Prevalence and severity of pneumoconiosis, prevalence of radiological signs of tuberculosis (TB), Medical Bureau for Occupational Diseases (MBOD) certification committee decisions, and compensation results. Results. Prevalence of pneumoconiosis ≥ 2/1 was 15.4% in Libode and 13.6% in Thamaga. Significantly more Libode than Thamaga subjects (51.1% versus 29.0%) reported past TB treatment. Radiological signs of pulmonary TB were also more prevalent in Libode (33.3% v. 23.9%). Twenty-six per cent of Libode men and 16.1% of Thamaga men were certified with compensable disease. Libode payments were finalised within 30 months, whereas Thamaga cases only began receiving payments 52 months after medical examination, with 11 cases still pending 66 months after medical examination. Conclusion. There was a high prevalence of pneumoconiosis in both study groups. Many men were eligible for compensation but were previously uncompensated. The higher rate of compensable disease in the Libode group may relate to the higher prevalence of TB, as well as to more active follow-up by the study group, including a large number of appeals. Socio-political changes in South Africa between 1994 and 1996 may also have influenced compensation results.
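
    A rough illustration of the kind of between-group comparison reported above, using counts back-calculated from the quoted percentages (past TB treatment in 51.1% of 238 Libode men versus 29.0% of 234 Thamaga men). The chi-squared test and the reconstructed counts are assumptions for illustration, not the study's own analysis.

```python
# Illustrative two-group comparison; counts are back-calculated from the
# percentages quoted in the abstract, so small rounding errors are possible.
from scipy.stats import chi2_contingency

libode_tb, libode_n = round(0.511 * 238), 238      # ~122 of 238
thamaga_tb, thamaga_n = round(0.290 * 234), 234    # ~68 of 234

table = [[libode_tb, libode_n - libode_tb],
         [thamaga_tb, thamaga_n - thamaga_tb]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")
```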

    Estimating the burden of disease attributable to four selected environmental risk factors in South Africa

    The first South African National Burden of Disease study quantified the underlying causes of premature mortality and morbidity experienced in South Africa in the year 2000. This was followed by a Comparative Risk Assessment to estimate the contributions of 17 selected risk factors to the burden of disease in South Africa. This paper describes the health impact of exposure to four selected environmental risk factors: unsafe water, sanitation and hygiene; indoor air pollution from household use of solid fuels; urban outdoor air pollution; and lead exposure. The study followed World Health Organization comparative risk assessment methodology. Population-attributable fractions were calculated and applied to revised burden of disease estimates (deaths and disability-adjusted life years [DALYs]) from the South African Burden of Disease study to obtain the attributable burden for each selected risk factor. The burden attributable to the joint effect of the four environmental risk factors was also estimated, taking into account competing risks and common pathways. Monte Carlo simulation-modeling techniques were used to quantify sampling uncertainty. Almost 24 000 deaths were attributable to the joint effect of these four environmental risk factors, accounting for 4.6% (95% uncertainty interval 3.8-5.3%) of all deaths in South Africa in 2000. Overall, the burden due to these environmental risks was equivalent to 3.7% (95% uncertainty interval 3.4-4.0%) of the total disease burden for South Africa, with unsafe water, sanitation and hygiene the main contributor to the joint burden. The joint attributable burden was especially high in children under 5 years of age, accounting for 10.8% of total deaths in this age group and 9.7% of the burden of disease. This study highlights the public health impact of exposure to environmental risks and the significant burden of preventable disease attributable to exposure to these four major environmental risk factors in South Africa. Evidence-based policies and programs must be developed and implemented to address these risk factors at individual, household, and community levels.
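
    A minimal sketch of the comparative-risk-assessment arithmetic described above: a population-attributable fraction (PAF) applied to an outcome's burden, and a joint effect across factors. The prevalences, relative risks and per-factor PAFs below are hypothetical placeholders, not figures from the study, and the joint-PAF formula assumes independent effects (the study additionally handled competing risks and common pathways).

```python
import math

def paf(prevalence_rr_pairs):
    """Population-attributable fraction for a multi-level exposure:
    PAF = sum(p_i * (RR_i - 1)) / (1 + sum(p_i * (RR_i - 1)))."""
    excess = sum(p * (rr - 1.0) for p, rr in prevalence_rr_pairs)
    return excess / (1.0 + excess)

# Attributable burden = PAF x outcome burden (deaths or DALYs).
total_dalys = 100_000                                # hypothetical outcome burden
paf_one_factor = paf([(0.20, 3.0), (0.30, 1.8)])     # hypothetical exposure levels and RRs
print(f"Attributable DALYs: {paf_one_factor * total_dalys:,.0f}")

# Joint PAF of several factors, assuming independent effects.
per_factor_pafs = [0.10, 0.04, 0.03, 0.02]           # hypothetical per-factor PAFs
joint_paf = 1.0 - math.prod(1.0 - p for p in per_factor_pafs)
print(f"Joint PAF: {joint_paf:.3f}")
```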

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) than in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than in emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
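
    A minimal sketch of the kind of adjusted analysis described in the Methods (multivariable logistic regression with bootstrapped uncertainty), assuming a hypothetical patient-level table and column names such as mortality_30d, checklist_used, asa_grade and hdi_tertile; this is not the study's code or model specification.

```python
# Sketch only: adjusted odds ratio for checklist use, with a simple
# nonparametric bootstrap for the confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("laparotomy_cohort.csv")   # hypothetical dataset

formula = "mortality_30d ~ checklist_used + age + asa_grade + C(hdi_tertile)"
model = smf.logit(formula, data=df).fit(disp=0)
or_checklist = np.exp(model.params["checklist_used"])

rng = np.random.default_rng(0)
boot_ors = []
for _ in range(1000):
    sample = df.sample(frac=1.0, replace=True,
                       random_state=int(rng.integers(1_000_000_000)))
    fit = smf.logit(formula, data=sample).fit(disp=0)
    boot_ors.append(np.exp(fit.params["checklist_used"]))

ci_low, ci_high = np.percentile(boot_ors, [2.5, 97.5])
print(f"OR {or_checklist:.2f} (bootstrap 95% CI {ci_low:.2f} to {ci_high:.2f})")
```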

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone
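
    A worked example of the crude odds ratio implied by the colostomy proportions quoted above (52.2 per cent in low-HDI versus 18.9 per cent in high-HDI settings). The adjusted OR of 3.20 reported in the abstract additionally controls for malignancy, emergency surgery, delay to operation and perforation within a multilevel model, which this simple arithmetic does not reproduce.

```python
# Crude (unadjusted) odds ratio from the quoted proportions.
p_low, p_high = 0.522, 0.189                 # end colostomy, low- vs high-HDI

odds_low = p_low / (1 - p_low)
odds_high = p_high / (1 - p_high)
crude_or = odds_low / odds_high

print(f"Crude OR (low vs high HDI): {crude_or:.2f}")   # ~4.7, before risk adjustment
```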

    Meta-analysis of the effects of predation on animal prey abundance: evidence from UK vertebrates

    Background: Controlling vertebrate predators is one of the most widespread forms of wildlife management, and it continues to cause conflict between stakeholders worldwide. It is important for managers and policy-makers to make decisions on this issue that are based on the best available scientific evidence. It is therefore important first to establish whether vertebrate predators do have an impact on prey, and then to quantify that impact. Methodology/Principal Findings: Using the UK as a case study, we use a meta-analytical approach to review the available evidence and assess the effect of vertebrate predation on animal prey abundance. We find a significant effect of predators on prey abundance across our studies: on average, there is a 1.6-fold increase in prey abundance in the absence of predation. However, we show significant heterogeneity in effect sizes, and discuss how the method of predator control, whether the predator is native or non-native, and aspects of study design may be potential causes. Conclusions/Significance: Our results allow some cautious policy recommendations to be made regarding the management of predator and prey populations. Meta-analysis is an important tool for understanding general patterns in the effect of predators on prey abundance across studies. Such an approach is especially valuable where management decisions need to be made in the absence of site-specific information.
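
    A minimal sketch of a random-effects meta-analysis of log response ratios (prey abundance without versus with predation), the kind of pooling that yields a fold-change like the one quoted above. The per-study effect sizes and variances are hypothetical, and DerSimonian-Laird estimation is an assumed choice rather than the authors' stated method.

```python
import numpy as np

# Hypothetical ln(response ratio) per study and its sampling variance.
yi = np.array([0.8, 0.3, 0.5, 0.1, 0.7])
vi = np.array([0.05, 0.02, 0.04, 0.03, 0.06])

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = 1.0 / vi
y_fixed = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(yi) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a fold change.
w_re = 1.0 / (vi + tau2)
pooled = np.sum(w_re * yi) / np.sum(w_re)
print(f"Pooled fold increase in prey abundance: {np.exp(pooled):.2f}")
```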