2,384 research outputs found
Killed in action (KIA): an analysis of military personnel who died of their injuries before reaching a definitive medical treatment facility in Afghanistan (2004-2014).
INTRODUCTION: The majority of combat deaths occur before arrival at a medical treatment facility, but no previous study has comprehensively examined this phase of care. METHODS: The UK Joint Theatre Trauma Registry was used to identify all UK military personnel who died in Afghanistan (2004-2014). These data were linked to non-medical tactical and operational records to provide an accurate timeline of events. Cause of death was determined from records taken at postmortem review. The primary objective was to report time between injury and death in those killed in action (KIA); secondary objectives included: reporting mortality at key North Atlantic Treaty Organisation timelines (0, 10, 60, 120 min), comparison of temporal lethality for different anatomical injuries and analysing trends in the case fatality rate (CFR). RESULTS: 2413 UK personnel were injured in Afghanistan from 2004 to 2014; 448 died, with a CFR of 18.6%. 390 (87.1%) of these died prehospital (n=348 KIA, n=42 killed in non-enemy action). Complete data were available for n=303 (87.1%) KIA: median Injury Severity Score 75.0 (IQR 55.5-75.0). The predominant mechanisms were improvised explosive device (n=166, 54.8%) and gunshot wound (n=96, 31.7%). In the KIA cohort, the median time to death was 0.0 (IQR 0.0-21.8) min; 173 (57.1%) died immediately (0 min). At 10, 60 and 120 min post injury, 205 (67.7%), 277 (91.4%) and 300 (99.0%) casualties were dead, respectively. Whole body primary injury had the fastest mortality. Overall prehospital CFR improved throughout the period while in-hospital CFR remained constant. CONCLUSION: Over two-thirds of KIA deaths occurred within 10 min of injury. Improvement in the CFR in Afghanistan was predominantly in the prehospital phase.
Risk of Hip Fracture in Meat Eaters, Pescatarians, and Vegetarians: A Prospective Cohort Study of 413,914 UK Biobank Participants
Background: Meat-free diets may be associated with a higher risk of hip fracture, but prospective evidence is limited. We aimed to investigate the risk of hip fracture in occasional meat-eaters, pescatarians, and vegetarians compared to regular meat-eaters in the UK Biobank, and to explore the role of potential mediators of any observed risk differences.
Methods: Middle-aged UK adults were classified as regular meat-eaters (n=258,765), occasional meat-eaters (n=137,954), pescatarians (n=9557), or vegetarians (n=7638) based on dietary and lifestyle information at recruitment (2006-2010). Incident hip fractures were identified by record linkage to Hospital Episode Statistics up to September 2021. Multivariable Cox regression models were used to estimate associations between each diet group and hip fracture risk, with regular meat-eaters as the reference group, over a median follow-up time of 12.5 years.
Findings: Among 413,914 men and women, 3503 hip fractures were observed. After adjustment for confounders, vegetarians (HR (95% CI): 1·50 (1·18, 1·91)) but not occasional meat-eaters (0·99 (0·93, 1·07)) or pescatarians (1·08 (0·86, 1·35)) had a greater risk of hip fracture than regular meat-eaters. This is equivalent to an adjusted absolute risk difference of 3·2 (1·2, 5·8) more hip fractures per 1000 people over 10 years in vegetarians. There was limited evidence of effect modification by BMI on hip fracture risk across diet groups (pinteraction = 0·08), and no clear evidence of effect modification by age or sex (pinteraction = 0·9 and 0·3, respectively). Mediation analyses suggest that BMI explained 28% of the observed risk difference between vegetarians and regular meat-eaters (95% CI: 1·1%, 69·8%).
Interpretation: Vegetarian men and women had a higher risk of hip fracture than regular meat-eaters, and this was partly explained by their lower BMI. Ensuring adequate nutrient intake and weight management is therefore particularly important for vegetarians in the context of hip fracture prevention.
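The conversion from a hazard ratio to an absolute risk difference reported in the findings above can be sketched as follows. The HR of 1.50 is taken from the abstract, but the baseline 10-year risk used here is an assumed illustrative value, not a figure from the paper:

```python
def abs_risk_diff_per_1000(baseline_risk, hazard_ratio):
    # Under proportional hazards, S_exposed(t) = S_ref(t) ** HR,
    # so the exposed group's cumulative risk is 1 - (1 - r0) ** HR.
    exposed_risk = 1 - (1 - baseline_risk) ** hazard_ratio
    return 1000 * (exposed_risk - baseline_risk)

# HR 1.50 is from the abstract; the 0.64% baseline 10-year risk
# is an assumed illustrative value, not a figure from the paper.
print(round(abs_risk_diff_per_1000(0.0064, 1.50), 1))  # ≈ 3.2 per 1000
```

For rare outcomes such as hip fracture, this is close to the simpler approximation ARD ≈ r0 × (HR − 1), which is why a modest relative hazard translates into only a few extra fractures per 1000 people.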
A Case of Urogenital Human Schistosomiasis from a Non-endemic Area
© 2015 Calvo-Cano et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. The attached file is the published version of the article.
Labetalol Versus Nifedipine as Antihypertensive Treatment for Chronic Hypertension in Pregnancy: A Randomized Controlled Trial
Data from randomized controlled trials to guide antihypertensive agent choice for chronic hypertension in pregnancy are limited; this study aimed to compare labetalol and nifedipine, additionally assessing the impact of ethnicity on treatment efficacy. Pregnant women with chronic hypertension (12+0-27+6 weeks' gestation) were enrolled at 4 UK centers (August 2014 to October 2015). Open-label first-line antihypertensive treatment was randomly assigned: labetalol (200-1800 mg/d) or modified-release nifedipine (20-80 mg/d). Analysis included 112 women (98%) who completed the study (labetalol n=55, nifedipine n=57). Maximum blood pressure after randomization was 161/101 mm Hg with labetalol versus 163/105 mm Hg with nifedipine (mean difference systolic: 1.2 mm Hg [-4.9 to 7.2 mm Hg], diastolic: 3.3 mm Hg [-0.6 to 7.3 mm Hg]). Mean blood pressure was 134/84 mm Hg with labetalol and 134/85 mm Hg with nifedipine (mean difference systolic: 0.3 mm Hg [-2.8 to 3.4 mm Hg], and diastolic: -1.9 mm Hg [-4.1 to 0.3 mm Hg]). Nifedipine use was associated with a 7.4-mm Hg reduction (-14.4 to -0.4 mm Hg) in central aortic pressure, measured by pulse wave analysis. No difference in treatment effect was observed in black women (n=63), but a mean 4 mm Hg reduction (-6.6 to -0.8 mm Hg; P=0.015) in brachial diastolic blood pressure was observed with labetalol compared with nifedipine in non-black women (n=49). Labetalol and nifedipine control mean blood pressure to target in pregnant women with chronic hypertension. This study provides support for a larger definitive trial scrutinizing the benefits and side effects of first-line antihypertensive treatment. CLINICAL TRIAL REGISTRATION: URL: https://www.isrctn.com. Unique identifier: ISRCTN40973936.
Identification of the factors associated with outcomes in a condition management programme
<p>Background: A requirement of the Government’s Pathways to Work (PtW) agenda was to introduce a Condition Management Programme (CMP). The aim of the present study was to identify differences in health outcomes between clients who engaged and made progress in this telephone-based biopsychosocial intervention and those who did not, and to determine the client and practitioner characteristics and programme elements associated with success.</p>
<p>Methods: Data were obtained from the CMP electronic spreadsheets and clients’ paper-based case records. CMP standard practice was that questionnaires were administered over the telephone during the pre- and post-assessment phases. Each client’s record contains their socio-demographic data, their primary health condition, and the pre- and post-intervention scores of the health assessment tool administered. Univariate and multivariate statistical analyses were used to investigate the relationships between the database variables. Clients were included in the study if their records were available for analysis from July 2006 to December 2007.</p>
<p>Results: On average there were 112 referrals per month, totalling 2016 referrals during the evaluation period. The majority (62.8%) of clients had a mental-health condition. The successful completion rate was 28.5% (575 “completers”; 144 “discharges”). Several factors, such as age, health condition, mode of contact, and practitioner characteristics, were significant determinants of participation in and completion of the programme. Completion of the CMP was associated with better mental-health status: the proportion of clients who were anxious, depressed or both fell from 74% before the programme to 32.5% afterwards.</p>
<p>Conclusions: Our findings showed that an individual's characteristics are associated with success in the programme, defined as completing the intervention and demonstrating an improved health status. This study provides some evidence that the systematic evaluation of such programmes and interventions could identify ways in which they could be improved.</p>
Managing change in the nursing handover from traditional to bedside handover – a case study from Mauritius
BACKGROUND: The shift handover forms an important part of the communication process that takes place twice within the nurses' working day in the gynaecological ward. This paper addresses the topic of implementing a new system of bedside handover, which puts patients central to the whole process of managing care and also addresses some of the shortcomings of the traditional handover system. METHODS: A force field analysis in terms of the driving forces showed that there was dissatisfaction with the traditional method of handover, which had led to an increase in the number of critical incidents and complaints from patients, relatives and doctors. The restraining forces identified were a fear of accountability, a lack of confidence, and a concern that this change would lead to more work. A 3-step planned change model consisting of unfreezing, moving and refreezing was used to guide us through the change process. Resistance to change was managed by creating a climate of open communication in which stakeholders were allowed to voice opinions and share concerns, insights and ideas, thereby actively participating in decision making. RESULTS: An evaluation showed that this process was successfully implemented to the satisfaction of patients and staff in general. CONCLUSION: This successful change should encourage other nurses to become more proactive in identifying areas for change management in order to improve our health care system.
Wanted dead or alive: high diversity of macroinvertebrates associated with living and ‘dead’ Posidonia oceanica matte
The Mediterranean endemic seagrass Posidonia oceanica forms beds characterised by a dense leaf canopy and a thick root-rhizome ‘matte’. Death of P. oceanica shoots leads to exposure of the underlying matte, which can persist for many years and is termed ‘dead’ matte. Traditionally, dead matte has been regarded as a degraded habitat. To test whether this assumption was true, the motile macroinvertebrates of adjacent living (with shoots) and dead (without shoots) matte of P. oceanica were sampled in four different plots located at the same depth (5–6 m) in Mellieha Bay, Malta (central Mediterranean). The total number of species and abundance were significantly higher (ANOVA; P<0.05 and P<0.01, respectively) in the dead matte than in living P. oceanica matte, despite the presence of the foliar canopy in the latter. Multivariate analysis (MDS) clearly showed two main groups of assemblages, corresponding to the two matte types. The amphipods Leptocheirus guttatus and Maera grossimana, and the polychaete Nereis rava contributed most to the dissimilarity between the two matte types. Several unique properties of the dead matte contributing to the unexpectedly higher number of species and abundance of motile macroinvertebrates associated with this habitat are discussed. The findings have important implications for the conservation of bare P. oceanica matte, which has been generally viewed as a habitat of low ecological value.
Multi-parallel qPCR provides increased sensitivity and diagnostic breadth for gastrointestinal parasites of humans: field-based inferences on the impact of mass deworming
BACKGROUND: Although chronic morbidity in humans from soil-transmitted helminth (STH) infections can be reduced by anthelmintic treatment, inconsistent diagnostic tools make it difficult to reliably measure the impact of deworming programs and often miss light helminth infections. METHODS: Cryopreserved stool samples from 796 people (aged 2-81 years) in four villages in Bungoma County, western Kenya, were assessed using multi-parallel qPCR for 8 parasites and compared to point-of-contact assessments of the same stools by the 2-stool 2-slide Kato-Katz (KK) method. All subjects were treated with albendazole and all Ascaris lumbricoides expelled post-treatment were collected. Three months later, samples from 633 of these people were re-assessed by both qPCR and KK, re-treated with albendazole and the expelled worms collected. RESULTS: Baseline prevalence by qPCR (n = 796) was 17% for A. lumbricoides, 18% for Necator americanus, 41% for Giardia lamblia and 15% for Entamoeba histolytica. The prevalence was <1% for Trichuris trichiura, Ancylostoma duodenale, Strongyloides stercoralis and Cryptosporidium parvum. The sensitivity of qPCR was 98% for A. lumbricoides and N. americanus, whereas KK sensitivity was 70% and 32%, respectively. Furthermore, qPCR detected infections with T. trichiura and S. stercoralis that were missed by KK, and infections with G. lamblia and E. histolytica that cannot be detected by KK. Infection intensities measured by qPCR and by KK were correlated for A. lumbricoides (r = 0.83, p < 0.0001) and N. americanus (r = 0.55, p < 0.0001). The number of A. lumbricoides worms expelled was correlated (p < 0.0001) with both the KK (r = 0.63) and qPCR intensity measurements (r = 0.60). CONCLUSIONS: KK may be an inadequate tool for stool-based surveillance in areas where hookworm or Strongyloides are common or where the intensity of helminth infection is low after repeated rounds of chemotherapy. Because deworming programs need to distinguish between populations where parasitic infection is controlled and those where further treatment is required, multi-parallel qPCR (or similar high-throughput molecular diagnostics) may provide new and important diagnostic information.
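The sensitivity figures compared above are simple proportions of infected individuals that a test correctly flags, with the expelled-worm and qPCR results serving as the reference standard. A minimal sketch; the counts below are hypothetical, chosen only to reproduce the reported 32% KK sensitivity for N. americanus, not taken from the paper:

```python
def sensitivity(true_pos, false_neg):
    # Sensitivity = TP / (TP + FN): the share of truly infected
    # people that the test detects.
    return true_pos / (true_pos + false_neg)

# Hypothetical counts (46 detected, 97 missed) chosen to match the
# reported 32% Kato-Katz sensitivity for N. americanus.
print(round(sensitivity(46, 97), 2))  # → 0.32
```

Framing the comparison this way makes clear why KK under-performs at low infection intensity: light infections shed few eggs per slide, inflating the false-negative count in the denominator.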
Physically active academic lessons: acceptance, barriers and facilitators for implementation
Background
To improve health and academic learning in schoolchildren, the Active School programme in Stavanger, Norway has introduced physically active academic lessons. This is a teaching method combining physical activity with academic content. The purpose of this paper was to evaluate the response to the physically active lessons and identify facilitators and barriers for implementation of such an intervention.
Methods
Five school leaders (principals or vice-principals), 13 teachers and 30 children from the five intervention schools were interviewed about their experiences with the 10-month intervention, which consisted of a minimum of 2 × 45 minutes of physically active academic lessons per week, and about the factors affecting its implementation. All interviews were transcribed and analysed using the qualitative data analysis program NVivo 10 (QSR International, London, UK). In addition, weekly teachers’ intervention delivery logs were collected and analysed.
Results
Teacher logs reported delivery of physically active academic lessons in, on average, 18 of the 34 weeks (53%). The number of delivered physically active academic lessons covered 73% of the schools’ planned activity. Physically active lessons were well received among school leaders, teachers and children. The main facilitators for implementation of the physically active lessons were active leadership and teacher support, high self-efficacy regarding mastering the intervention, ease of organizing physically active lessons, inclusion of physically active lessons into the lesson curricula, and children’s positive reception of the intervention. The main barriers were unclear expectations, lack of knowledge and time to plan the physically active lessons, and the length of the physically active lessons (15–20 min lessons were preferred over the 45 min lessons).
Conclusion
Physically active academic lessons were considered an appropriate pedagogical method for creating positive variation, and were highly appreciated among both teachers and children. Both the principal and the teachers should be actively involved in the implementation, which could be strengthened by including physical activity in the school’s strategy. Barriers to implementing physically active lessons in schools could be lowered by increasing implementation clarity and introducing the teachers to high-quality and easily organized lessons.