A comparison of penetration and damage caused by different types of arrowheads on loose and tight fit clothing
Bows and arrows are used mainly for recreation, sport and hunting in the Western world and tend not to be as popular a weapon as firearms or knives. Yet these low-velocity weapons still cause injuries and fatalities, owing to their availability to the public and the fact that no licence is required to own them. This study aimed to characterise the penetration capabilities of aluminium arrows into soft tissue and bone in the presence of clothing and, further, how the type and fit of clothing as well as the arrowhead type contribute to penetration capacity. Ballistic gelatine blocks (non-clothed, loose-fit clothed or tight-fit clothed) were shot at a range of 10 m using a 24 lb draw weight recurve bow and aluminium arrows fitted with four different arrowheads (bullet, judo, blunt and broadhead). The depth of penetration was found to depend on the type of arrowhead used as well as on the type and fit, or absence, of the clothing covering the block. Loose-fit clothing, used on half of the samples, reduced penetration capacity by between 0% and 98.33%, while the tight clothing covering the remaining half reduced penetration by between 14.06% and 94.12%. The damage to the clothing and the gelatine (puncturing, cutting and tearing) was affected by the shape of the arrowhead, with the least damage caused by the blunt arrowheads and the most by the broadheads. Clothing fibres were also at times found within the projectile tract in the gelatine, indicating the potential for subsequent infection of an individual with an arrow wound. Ribs, femurs and spinal columns encased in some of the gelatine blocks all showed varying levels of damage, with the most extensive and obvious damage exhibited by the ribs and spinal columns. The information gleaned from the damage to clothing, gelatine blocks and bones could be useful to forensic investigators, for example when a body is discovered with no weapon or gunshot residue present.
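As a rough illustration of how the reduction figures above are derived, the sketch below computes a percentage reduction in penetration depth. This is a minimal Python sketch, not the study's analysis code; the depths used are hypothetical, since the abstract reports only the resulting percentages.

```python
# Illustrative only: the abstract reports percentage reductions but not the raw
# penetration depths, so the numbers below are hypothetical.

def percent_reduction(bare_depth_mm: float, clothed_depth_mm: float) -> float:
    """Reduction in penetration depth caused by clothing, as a percentage."""
    return (bare_depth_mm - clothed_depth_mm) / bare_depth_mm * 100

# Hypothetical depths for one arrowhead type at a range of 10 m:
bare = 180.0          # penetration into unclothed gelatine (mm)
loose_clothed = 3.0   # penetration with loose-fit clothing (mm)

print(f"{percent_reduction(bare, loose_clothed):.2f}% reduction")  # 98.33%
```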
Bariatric surgery, lifestyle interventions and orlistat for severe obesity : the REBALANCE mixed-methods systematic review and economic evaluation
Funding: The National Institute for Health Research Health Technology Assessment programme. The Health Services Research Unit and Health Economics Research Unit are core funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorate. Corrigendum: Bariatric surgery, lifestyle interventions and orlistat for severe obesity: the REBALANCE mixed-methods systematic review and economic evaluation. Alison Avenell, Clare Robertson, Zoë Skea, Elisabet Jacobsen, Dwayne Boyers, David Cooper, Magaly Aceves-Martins, Lise Retat, Cynthia Fraser, Paul Aveyard, Fiona Stewart, Graeme MacLennan, Laura Webber, Emily Corbould, Benshuai Xu, Abbygail Jaccard, Bonnie Boyle, Eilidh Duncan, Michal Shimonovich, Marijn de Bruin. Health Technology Assessment (Winchester, England), 2020, vol. 22, issue 68, p. 247-250. DOI: http://dx.doi.org/10.3310/hta22680-c202005
Physical function endpoints in cancer cachexia clinical trials: Systematic Review 1 of the cachexia endpoints series
In cancer cachexia trials, measures of physical function are commonly used as endpoints. For drug trials to obtain regulatory approval, efficacy in physical function endpoints may be needed alongside other measures. However, it is not clear which physical function endpoints should be used. The aim of this systematic review was to assess the frequency and diversity of physical function endpoints in cancer cachexia trials. Following a comprehensive electronic literature search of MEDLINE, Embase and Cochrane (1990-2021), records were retrieved. Eligible trials met the following criteria: adults (≥18 years), controlled design, more than 40 participants, use of a cachexia intervention for more than 14 days and use of a physical function endpoint. Physical function measures were classified as an objective measure (hand grip strength [HGS], stair climb power [SCP], timed up and go [TUG] test, 6-min walking test [6MWT] and short physical performance battery [SPPB]), clinician assessment of function (Karnofsky Performance Status [KPS] or Eastern Cooperative Oncology Group-Performance Status [ECOG-PS]) or patient-reported outcomes (physical function subscale of the European Organisation for the Research and Treatment of Cancer Quality of Life Questionnaires [EORTC QLQ-C30 or C15]). Data extraction was performed using Covidence and followed PRISMA guidance (PROSPERO registration: CRD42022276710). A total of 5975 potential studies were examined and 71 were eligible. Pharmacological interventions were assessed in 38 trials (54%); of these, 11 (29%, n = 1184) examined megestrol and 5 (13%, n = 1928) examined anamorelin. Nutritional interventions were assessed in 21 trials (30%), exercise-based interventions in 6 trials (8%) and multimodal interventions in the remaining 6 trials (8%). Among the objective measures of physical function (assessed as primary or secondary endpoints), HGS was most commonly examined (33 trials, n = 5081) and demonstrated a statistically significant finding in 12 (36%) trials (n = 2091). The 6MWT was assessed in 12 trials (n = 1074) and was statistically significant in 4 (33%) trials (n = 403), whereas SCP, TUG and SPPB were each assessed in 3 trials. KPS was more commonly assessed than the newer ECOG-PS (16 vs. 9 trials), and patient-reported EORTC QLQ-C30 physical function was reported in 25 trials. HGS is the most commonly used physical function endpoint in cancer cachexia clinical trials. However, heterogeneity in study design, populations, interventions and endpoint selection makes it difficult to comment on the optimal endpoint and how to measure it. We offer several recommendations and considerations to improve the design of future clinical trials in cancer cachexia.
Appetite and dietary intake endpoints in cancer cachexia clinical trials: Systematic Review 2 of the cachexia endpoints series
There is no consensus on the optimal endpoint(s) in cancer cachexia trials. Endpoint variation is an obstacle when comparing interventions and their clinical value. The aim of this systematic review was to summarize and evaluate endpoints used to assess appetite and dietary intake in cancer cachexia clinical trials. A search for studies published from 1 January 1990 until 2 June 2021 was conducted using MEDLINE, Embase and the Cochrane Central Register of Controlled Trials. Eligible studies examined cancer cachexia treatment versus a comparator in adults, with assessments of appetite and/or dietary intake as study endpoints, a sample size ≥40 and an intervention lasting ≥14 days. Reporting was in line with PRISMA guidance, and a protocol was published in PROSPERO (2022, CRD42022276710). This review is part of a series of systematic reviews examining cachexia endpoints. Of the 5975 articles identified, 116 were eligible for the wider review series and 80 specifically examined endpoints of appetite (65 studies) and/or dietary intake (21 studies). Six trials assessed both appetite and dietary intake. Appetite was the primary outcome in 15 trials and dietary intake in 7 trials. Median sample size was 101 patients (range 40–628). Forty-nine studies included multiple primary tumour sites, while 31 studies involved single primary tumour sites (15 gastrointestinal, 7 lung, 7 head and neck and 2 female reproductive organs). The most frequently reported appetite endpoints were the visual analogue scale (VAS) and numerical rating scale (NRS) (40%). The appetite item from the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ) C30/C15 PAL (38%) and the appetite question from the North Central Cancer Treatment Group anorexia questionnaire (17%) were also frequently applied. Of the studies that assessed dietary intake, 13 (62%) used food records (prospective registration) and 10 (48%) used retrospective methods (24-h recall or dietary history). For VAS/NRS, a mean change of 1.3 corresponded to a Hedges' g of 0.5 and can be considered a moderate change. For food records, a mean change of 231 kcal/day or 11 g of protein/day corresponded to a moderate change. The choice of endpoint in cachexia trials will depend on factors pertinent to the trial to be conducted. Nevertheless, from the trials assessed and the available literature, the NRS or EORTC QLQ C30/C15 PAL seems suitable for appetite assessments. Appetite and dietary intake endpoints are rarely used as primary outcomes in cancer cachexia trials. Dietary intake assessments were used mainly to monitor compliance and are not validated in cachexia populations. Given their importance to cachexia studies, dietary intake endpoints must be validated before they are used as endpoints in clinical trials.
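For context on the effect-size anchor above, the sketch below computes Hedges' g from group summary statistics. This is a minimal Python sketch, not the review's code; the standard deviation of 2.6 is back-calculated from the reported correspondence (1.3 / 0.5), and the sample sizes are hypothetical.

```python
import math

# Hedges' g is the standardized effect size used above to anchor a "moderate"
# change (g = 0.5). The SD of 2.6 is an assumption back-calculated from the
# abstract (1.3 / 0.5); it is not a reported value.

def hedges_g(mean_diff: float, sd1: float, sd2: float, n1: int, n2: int) -> float:
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = mean_diff / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' correction factor
    return d * correction

# A 1.3-point change on a 0-10 appetite NRS, assuming SD = 2.6 in both arms
# and 50 patients per arm (hypothetical sizes):
print(round(hedges_g(1.3, 2.6, 2.6, 50, 50), 2))  # ~0.5
```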
Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study
Purpose:
Delirium is a neuropsychiatric disorder delineated by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how the severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom.
Methods:
Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS score, delirium status, and 30-day outcomes were recorded.
Results:
The overall prevalence of delirium was 16.3% (n = 483). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) in CFS 4 vs 1–3; OR 12.4 (6.2–24.5) in CFS 8 vs 1–3]. Higher CFS was also associated with reduced recognition of delirium (OR 0.7 (0.3–1.9) in CFS 4 compared with 0.2 (0.1–0.7) in CFS 8). Both associations were independent of age and dementia.
Conclusion:
We have demonstrated an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the most frail patients are the least likely to have their delirium diagnosed, and there remains a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
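To make the reported measure concrete, here is a minimal Python sketch of how an odds ratio and its 95% Wald confidence interval are computed from a 2x2 table. The counts are hypothetical and, unlike the study's estimates, the calculation is unadjusted for age and dementia.

```python
import math

# Hedged sketch: the study's ORs come from a model adjusted for age and
# dementia; this shows only the basic unadjusted calculation behind an odds
# ratio and its 95% Wald CI. All counts below are hypothetical.

def odds_ratio_ci(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """OR and 95% CI for a 2x2 table:
    a = delirium/exposed, b = no delirium/exposed,
    c = delirium/unexposed, d = no delirium/unexposed."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical counts: delirium among CFS 8 vs CFS 1-3 patients
print(odds_ratio_ci(a=40, b=60, c=30, d=500))  # OR ~11.1 (6.4-19.1)
```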
Individual and neighbourhood-level deprivation, kidney disease, and long-term mortality in the “Core determinants and Equity Grampian Laboratory Outcomes Morbidity and Mortality Study” (GLOMMS-CORE)
Prospective cohort studies of kidney equity are limited by a focus on advanced rather than early disease, and by selective recruitment. Whole population studies frequently rely on area-level measures of deprivation as opposed to individual measures of social disadvantage. We linked kidney health and individual census records in the North of Scotland, 2011-2021 (GLOMMS-CORE). We identified incident kidney presentations at thresholds of estimated glomerular filtration rate (eGFR) <60 (mild/early), <45 (moderate) and <30 ml/min/1.73 m² (advanced), and acute kidney disease (AKD). We compared household and neighbourhood socioeconomic measures, living circumstances, and long-term mortality, using case-mix adjusted multivariable logistic regression (for living circumstances) and Cox models (for mortality) incorporating an interaction between household and neighbourhood. Among 458,897 census respondents, there were 48,546, 29,081, 16,116 and 28,097 incident presentations of eGFR <60, <45, <30 and AKD respectively; mean ages 70-77; 52-55% female. Classifications of socioeconomic position by household and by neighbourhood were related but complex, and frequently did not match. Compared with households of “professionals”, people with early kidney disease in “unskilled” or “unemployed” households had increased mortality (adjusted hazard ratios, HR, and 95% confidence intervals, CI: 1.26, 1.19-1.32 and 1.77, 1.60-1.96). Those with either a deprived household or a deprived neighbourhood experienced greater mortality, but those with both had the poorest outcomes. “Unskilled” and “unemployed” households more frequently reported being “limited a lot” by ill health, adverse mental health, living alone, basic accommodation, no car, English language difficulties, and visual and hearing impairments. The impacts of deprivation on kidney health are spread throughout society, complex, serious, and not confined to those living in deprived neighbourhoods.
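As a sketch of the modelling approach described above (not the authors' code), the following simulates survival data and fits a Cox model with a household x neighbourhood deprivation interaction using the lifelines package. All variable names, coefficients and data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated data standing in for the linked census/kidney records; the
# deprivation indicators and effect sizes below are assumptions.
rng = np.random.default_rng(0)
n = 500
household = rng.integers(0, 2, n)       # 1 = deprived household
neighbourhood = rng.integers(0, 2, n)   # 1 = deprived neighbourhood
age = rng.normal(74, 8, n)
# Hazard rises with each deprivation dimension; worst when both are present.
risk = 0.3 * household + 0.2 * neighbourhood + 0.3 * household * neighbourhood
time = rng.exponential(10 * np.exp(-risk))
event = (time < 15).astype(int)         # administrative censoring at 15 years
time = np.minimum(time, 15)

df = pd.DataFrame({"time": time, "event": event, "age": age,
                   "household": household, "neighbourhood": neighbourhood})

cph = CoxPHFitter()
# '*' expands to both main effects plus their interaction term.
cph.fit(df, duration_col="time", event_col="event",
        formula="age + household * neighbourhood")
cph.print_summary()
```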
Individual and neighborhood-level social and deprivation factors impact kidney health in the GLOMMS-CORE study
Prospective cohort studies of kidney equity are limited by a focus on advanced rather than early disease and by selective recruitment. Whole population studies frequently rely on area-level measures of deprivation as opposed to individual measures of social disadvantage. Here, we linked kidney health and individual census records in the North of Scotland (Grampian area), 2011-2021 (GLOMMS-CORE) and identified incident kidney presentations at thresholds of estimated glomerular filtration rate (eGFR) under 60 (mild/early), under 45 (moderate), under 30 ml/min/1.73 m² (advanced), and acute kidney disease (AKD). Household and neighborhood socioeconomic measures, living circumstances, and long-term mortality were compared. Case-mix adjusted multivariable logistic regression (living circumstances) and Cox models (mortality) incorporating an interaction between the household and the neighborhood were used. Among census respondents, there were 48,546, 29,081, 16,116 and 28,097 incident presentations of each respective eGFR cohort and AKD. Classifications of socioeconomic position by household and neighborhood were related but complex, and frequently did not match. Compared to households of professionals, people with early kidney disease in unskilled or unemployed households had increased mortality (adjusted hazard ratios and 95% confidence intervals of 1.26, 1.19-1.32 and 1.77, 1.60-1.96, respectively), with adjustment for neighborhood indices making little difference. Those within either a deprived household or a deprived neighborhood experienced greater mortality, but those within both had the poorest outcomes. Unskilled and unemployed households frequently reported being limited by illness, adverse mental health, living alone, basic accommodation, lack of car ownership, language difficulties, and visual and hearing impairments. Thus, impacts of deprivation on kidney health are spread throughout society, complex, serious, and not confined to those living in deprived neighborhoods.
Large-scale migration into Britain during the Middle to Late Bronze Age
Present-day people from England and Wales harbour more ancestry derived from Early European Farmers (EEF) than people of the Early Bronze Age. To understand this, we generated genome-wide data from 793 individuals, increasing data from the Middle to Late Bronze and Iron Age in Britain by 12-fold, and from Western and Central Europe by 3.5-fold. Between 1000 and 875 BC, EEF ancestry increased in southern Britain (England and Wales) but not northern Britain (Scotland), due to the incorporation of migrants who arrived at this time and over previous centuries, and who were genetically most similar to ancient individuals from France. These migrants contributed about half the ancestry of Iron Age people of England and Wales, thereby creating a plausible vector for the spread of early Celtic languages into Britain. These patterns are part of a broader trend of EEF ancestry becoming more similar across central and western Europe in the Middle to Late Bronze Age, coincident with archaeological evidence of intensified cultural exchange. There was comparatively less gene flow from continental Europe during the Iron Age, and Britain's independent genetic trajectory is also reflected in the rise of the allele conferring lactase persistence to ~50% by this time, compared to ~7% in central Europe, where it rose rapidly in frequency only a millennium later. This suggests that dairy products were used in qualitatively different ways in Britain and in central Europe over this period. [Abstract copyright: © 2021. The Author(s), under exclusive licence to Springer Nature Limited.]
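As a back-of-envelope illustration of the allele-frequency dynamics described above (not a calculation from the paper), the sketch below estimates the per-generation selection coefficient implied by a rise from ~7% to ~50%, assuming simple genic selection, a 25-year generation time and a span of roughly a millennium; all three assumptions are mine, not the authors'.

```python
import math

# Back-of-envelope sketch: under simple genic selection the allele's odds
# p/(1-p) grow by a factor (1 + s) each generation, so
# s = (odds_end / odds_start)^(1/generations) - 1.
# The selection model, 25-year generations and 1000-year span are assumptions.

def implied_s(p_start: float, p_end: float, years: float, gen_time: float = 25) -> float:
    generations = years / gen_time
    odds_ratio = (p_end / (1 - p_end)) / (p_start / (1 - p_start))
    return math.exp(math.log(odds_ratio) / generations) - 1

# Lactase persistence rising from ~7% to ~50% over ~1000 years:
print(f"s = {implied_s(0.07, 0.50, years=1000):.3f}")  # roughly 0.07 per generation
```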