GPs' willingness to prescribe aspirin for cancer preventive therapy in Lynch syndrome: a factorial randomised trial investigating factors influencing decisions.
BACKGROUND: The National Institute for Health and Care Excellence (NICE) 2020 guidelines recommend aspirin for colorectal cancer prevention in people with Lynch syndrome. Strategies to change practice should be informed by an understanding of the factors influencing prescribing. AIM: To investigate the optimal type and level of information to communicate to GPs to increase willingness to prescribe aspirin. DESIGN AND SETTING: GPs in England and Wales (n = 672) were recruited to an online survey with a 2×2×2 factorial design. GPs were randomised to one of eight vignettes describing a hypothetical patient with Lynch syndrome recommended to take aspirin by a clinical geneticist. METHOD: Across the vignettes, the presence or absence of three types of information was manipulated: 1) existence of NICE guidance; 2) results from the CAPP2 trial; 3) information comparing the risks and benefits of aspirin. The main effects and all interactions on the primary outcome (willingness to prescribe) and secondary outcome (comfort discussing aspirin) were estimated. RESULTS: There were no statistically significant main effects or interactions of the three information components on willingness to prescribe aspirin or comfort discussing harms and benefits. In total, 80.4% (540/672) of GPs were willing to prescribe and 19.6% (132/672) were unwilling. GPs with prior awareness of aspirin for preventive therapy were more comfortable discussing the medication than those unaware (P = 0.031). CONCLUSION: Providing information on clinical guidance, trial results, and comparative benefits and harms is unlikely to increase aspirin prescribing for Lynch syndrome in primary care. Alternative multilevel strategies to support informed prescribing may be warranted.
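As a concrete illustration of the analysis a 2×2×2 factorial vignette trial implies, the sketch below fits a logistic model with all main effects and interactions on simulated data. The variable names (nice, capp2, riskinfo, willing) are illustrative assumptions, not the trial's actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 672  # one row per randomised GP

# Simulated 2x2x2 allocation: each information component present (1) or absent (0)
df = pd.DataFrame({
    "nice": rng.integers(0, 2, n),      # NICE guidance shown
    "capp2": rng.integers(0, 2, n),     # CAPP2 trial results shown
    "riskinfo": rng.integers(0, 2, n),  # risk/benefit comparison shown
})
# Simulate a null result: ~80% willing to prescribe regardless of arm
df["willing"] = (rng.random(n) < 0.80).astype(int)

# Logistic regression with all main effects and two- and three-way interactions
model = smf.logit("willing ~ nice * capp2 * riskinfo", data=df).fit(disp=0)
print(model.summary())
```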
Re-evaluating and recalibrating predictors of bacterial infection in children with cancer and febrile neutropenia
Background
Numerous paediatric febrile neutropenia (FN) clinical decision rules (CDRs) have been derived. Validation studies show reduced performance in external settings. We evaluated the association between variables common across published FN CDRs and bacterial infection and recalibrated existing CDRs using these data.
Methods
Prospective data from the Australian PICNICC study, which enrolled 858 FN episodes in children with cancer, were used. Variables shown to be significant predictors of infection or adverse outcome in >1 CDR were analysed using multivariable logistic regression. Recalibration included re-estimation of beta-coefficients (logistic models) or recursive partitioning analysis (tree-based models).
Findings
Twenty-five unique variables were identified across 17 FN CDRs. Fourteen were included in >1 CDR and 10 were analysed in our dataset. On univariate analysis, location, temperature, hypotension, rigors, appearing severely unwell, and decreasing platelet, white cell, neutrophil and monocyte counts were significantly associated with bacterial infection. On multivariable analysis, decreasing platelets, increasing temperature and the appearance of being clinically unwell remained significantly associated. Five rules were recalibrated. Across all rules, recalibration increased the AUC-ROC and low-risk yield compared with the non-recalibrated rules. For the SPOG adverse event CDR, recalibration also increased sensitivity and specificity, and external validation showed reproducibility.
Interpretation
Degree of marrow suppression (low platelets), features of inflammation (temperature) and clinical judgement (severely unwell appearance) have consistently been shown to predict infection in children with FN. Recalibration of existing CDRs is a novel way to improve their diagnostic performance and maintain relevance over time.
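A minimal sketch of what recalibrating a logistic CDR can look like: a published rule's coefficients are applied to local data and compared, by AUC-ROC, with coefficients refitted on the same predictors. All coefficients and data here are simulated assumptions, not values from PICNICC or any published rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 858  # matches the PICNICC cohort size; the data itself is simulated

# Illustrative predictors: platelet count, temperature, severely-unwell flag
platelets = rng.normal(150, 80, n).clip(5, None)   # x10^9/L
temp = rng.normal(38.5, 0.6, n)                    # degrees C
unwell = rng.integers(0, 2, n)
X = np.column_stack([platelets, temp, unwell])

# Simulate bacterial infection driven by these predictors
logit_true = -2.0 - 0.008 * platelets + 0.6 * (temp - 38.5) + 1.0 * unwell
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(int)

# Hypothetical "published" coefficients from a derivation cohort elsewhere;
# not proportional to the true ones, so their risk ranking is degraded
b0, b = -1.0, np.array([-0.002, 0.2, 0.5])
published_risk = 1 / (1 + np.exp(-(b0 + X @ b)))

# Recalibrate: refit the same predictors on local data
recal = LogisticRegression(max_iter=1000).fit(X, y)
local_risk = recal.predict_proba(X)[:, 1]

print("published AUC:   ", roc_auc_score(y, published_risk))
print("recalibrated AUC:", roc_auc_score(y, local_risk))
```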
Risk stratification in children with cancer and febrile neutropenia: A national, prospective, multicentre validation of nine clinical decision rules
Background: Reduced-intensity treatment of low-risk febrile neutropenia (FN) in children with cancer is safe and improves quality of life. Identifying children with low-risk FN using a validated risk stratification strategy is recommended. This study prospectively validated nine FN clinical decision rules (CDRs) designed to predict infection or adverse outcome. Methods: Data were collected on consecutive FN episodes in this multicentre, prospective validation study. The reproducibility and discriminatory ability of each CDR in the validation cohort were compared to the derivation dataset, and details of missed outcomes were reported. Findings: There were 858 FN episodes in 462 patients from eight hospitals. Bacteraemia occurred in 111 (12.9%) and a non-bacteraemic microbiologically documented infection in 185 (21.6%). Eight CDRs exhibited reproducibility, and sensitivity ranged from 64% to 96%. Rules with >85% sensitivity in predicting outcomes classified few patients (<20%) as low risk. For three CDRs predicting a composite outcome of any bacterial or viral infection, sensitivity and discriminatory ability improved for prediction of bacterial infection alone. Across all CDRs designed to be applied at FN presentation, sensitivity improved at day 2 assessment. Interpretation: While reproducibility was observed in eight of the nine CDRs, no rule perfectly differentiated between children with FN at high or low risk of infection. This is in keeping with other validation studies and highlights the need for additional safeguards against missed infections or adverse outcomes before implementation can be considered.
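The trade-off reported here, that highly sensitive rules classify few patients as low risk, is simple arithmetic on a rule's 2x2 table. A small sketch with invented counts (not the study's data):

```python
def cdr_performance(tp, fn, tn, fp):
    """Sensitivity and low-risk yield for a CDR's 2x2 table.

    tp/fn: outcome-positive episodes flagged high/low risk;
    tn/fp: outcome-negative episodes flagged low/high risk.
    """
    total = tp + fn + tn + fp
    sensitivity = tp / (tp + fn)        # outcomes caught by "high risk"
    low_risk_yield = (fn + tn) / total  # episodes the rule would de-escalate
    return sensitivity, low_risk_yield

# Invented example: a cautious rule catches 96% of infections
# but labels only ~15% of 858 episodes as low risk.
sens, yld = cdr_performance(tp=106, fn=4, tn=125, fp=623)
print(f"sensitivity={sens:.0%}, low-risk yield={yld:.0%}")
```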
Importance of patient bed pathways and length of stay differences in predicting COVID-19 hospital bed occupancy in England
BACKGROUND: Predicting bed occupancy for hospitalised patients with COVID-19 requires understanding of length of stay (LoS) in particular bed types. LoS can vary depending on the patient's "bed pathway" - the sequence of transfers of individual patients between bed types during a hospital stay. In this study, we characterise these pathways and their impact on predicted hospital bed occupancy. METHODS: We obtained data from University College Hospital (UCH) and the ISARIC4C COVID-19 Clinical Information Network (CO-CIN) on hospitalised patients with COVID-19 who required care in general ward or critical care (CC) beds to determine possible bed pathways and LoS. We developed a discrete-time model to examine the implications of using either bed pathways or only average LoS by bed type to forecast bed occupancy. We compared model-predicted bed occupancy to publicly available bed occupancy data on COVID-19 in England between March and August 2020. RESULTS: In both the UCH and CO-CIN datasets, 82% of hospitalised patients with COVID-19 only received care in general ward beds. We identified four other bed pathways, present in both datasets: "Ward, CC, Ward", "Ward, CC", "CC" and "CC, Ward". Mean LoS varied by bed type, pathway, and dataset, between 1.78 and 13.53 days. For UCH, we found that using bed pathways improved the accuracy of bed occupancy predictions, whereas using only an average LoS for each bed type underestimated true bed occupancy. However, using the CO-CIN LoS dataset we were not able to replicate past data on bed occupancy in England, suggesting regional heterogeneity in LoS. CONCLUSIONS: We identified five bed pathways, with substantial variation in LoS by bed type, pathway, and geography. This might be caused by local differences in patient characteristics, clinical care strategies, or resource availability, and suggests that national LoS averages may not be appropriate for local forecasts of bed occupancy for COVID-19. TRIAL REGISTRATION: The ISARIC WHO CCP-UK study ISRCTN66726260 was retrospectively registered on 21/04/2020 and designated an Urgent Public Health Research Study by NIHR.
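A minimal sketch of the discrete-time bookkeeping such a pathway model implies: each day's admissions are pushed through an ordered sequence of (bed type, LoS) stages, and the stays are accumulated into per-bed-type occupancy. Deterministic stage durations are an assumption made for brevity; this does not reproduce the study's actual model.

```python
import numpy as np

def pathway_occupancy(admissions, stages, horizon):
    """Expected daily occupancy by bed type for one bed pathway.

    admissions: daily admission counts entering this pathway
    stages: ordered list of (bed_type, los_days) for the pathway
    horizon: number of days to track
    """
    occ = {bed_type: np.zeros(horizon) for bed_type, _ in stages}
    for day, n in enumerate(admissions):
        start = day
        for bed_type, los in stages:
            occ[bed_type][start:min(start + los, horizon)] += n
            start += los
    return occ

# Illustrative: 10 admissions/day for a week on a "Ward, CC, Ward" pathway
occ = pathway_occupancy([10] * 7, [("Ward", 5), ("CC", 9), ("Ward", 4)], horizon=30)
print("CC beds occupied on day 10:", occ["CC"][10])
```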
Are there gender differences in the geography of alcohol-related mortality in Scotland? An ecological study
Background
There is growing concern about alcohol-related harm, particularly within Scotland, which has some of the highest rates of alcohol-related death in western Europe. There are large gender differences in alcohol-related mortality rates in Scotland and in other countries, but the reasons for these differences are not clearly understood. In this paper, we aimed to address calls in the literature for further research on gender differences in the causes, contexts and consequences of alcohol-related harm. Our primary research question was whether the kind of social environment which tends to produce higher or lower rates of alcohol-related mortality is the same for both men and women across Scotland.
Methods
Cross-sectional, ecological design. A comparison was made between spatial variation in men's and women's age-standardised alcohol-related mortality rates in Scotland using maps, Moran's I, linear regression and spatial analysis of residuals. Directly standardised mortality rates were derived from individual-level records of death registration, 2000–2005 (n = 8685).
Results
As expected, men's alcohol-related mortality rates substantially exceeded women's, and there was substantial spatial variation in these rates for both men and women within Scotland. However, there was little spatial variation in the relationship between men's and women's alcohol-related mortality rates (r² = 0.73); areas with relatively high rates of alcohol-related mortality for men tended also to have relatively high rates for women. In a small number of areas (8 out of 144), the relationship between men's and women's alcohol-related mortality rates was significantly different.
Conclusion
In so far as geographic location captures exposure to the social and economic environment, our results suggest that the relationship between social and economic environment and alcohol-related harm is very similar for men and women. The existence of a small number of areas in which men's and women's alcohol-related mortality had a different relationship suggests that some places may have unusual drinking cultures. These might prove useful for further investigation of the factors which influence drinking behaviour in men and women.
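Moran's I, the spatial autocorrelation statistic used in this study, is compact enough to restate directly. A plain numpy sketch (the adjacency weights matrix W is an illustrative input, not the study's actual weighting scheme):

```python
import numpy as np

def morans_i(x, W):
    """Moran's I: spatial autocorrelation of values x under weights W.

    x: one value per area (e.g. an age-standardised mortality rate)
    W: n x n spatial weights matrix (e.g. 1 where areas are adjacent)
    """
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    z = x - x.mean()            # deviations from the mean
    n, s0 = len(x), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy example: four areas in a line, values trending upward
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(morans_i([1.0, 2.0, 3.0, 4.0], W))  # positive: neighbours are alike
```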
Evidence-based care of older people with suspected cognitive impairment in general practice: protocol for the IRIS cluster randomised trial
Background: Dementia is a common and complex condition. Evidence-based guidelines for the management of people with dementia in general practice exist; however, detection, diagnosis and disclosure of dementia have been identified as potential evidence-practice gaps. Interventions to implement guidelines into practice have had varying success. The use of theory in designing implementation interventions has been limited, but is advocated because of its potential to yield more effective interventions and aid understanding of factors modifying the magnitude of intervention effects across trials. This protocol describes the methods of a randomised trial that tests a theory-informed implementation intervention which, if effective, may provide benefits for patients with dementia and their carers.
Aims: This trial aims to estimate the effectiveness of a theory-informed intervention to increase GPs' (in Victoria, Australia) adherence to a clinical guideline for the detection, diagnosis, and management of dementia in general practice, compared with providing GPs with a printed copy of the guideline. Primary objectives include testing whether the intervention is effective in increasing the percentage of patients with suspected cognitive impairment who receive care consistent with two key guideline recommendations: receipt of i) a formal cognitive assessment and ii) a depression assessment using a validated scale (the primary outcomes for the trial).
Methods: The design is a parallel cluster randomised trial, with clusters being general practices. We aim to recruit 60 practices per group. Practices will be randomised to the intervention and control groups using restricted randomisation. Patients meeting the inclusion criteria, and GPs’ detection and diagnosis behaviours directed toward these patients, will be identified and measured via an electronic search of the medical records nine months after the start of the intervention. Practitioners in the control group will receive a printed copy of the guideline. In addition to receipt of the printed guideline, practitioners in the intervention group will be invited to participate in an interactive, opinion leader-led, educational face-to-face workshop. The theory-informed intervention aims to address identified barriers to and enablers of implementation of recommendations. Researchers responsible for identifying the cohort of patients with suspected cognitive impairment, and their detection and diagnosis outcomes, will be blind to group allocation.
Trial registration: Australian New Zealand Clinical Trials Registry: ACTRN12611001032943 (date registered: 28 September 2011)
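The protocol specifies restricted randomisation of practices to arms. One common way to implement this, sketched below, is rejection sampling: draw candidate 1:1 allocations and keep one that balances a cluster-level covariate. The balancing covariate (practice list size) and tolerance are illustrative assumptions, not the trial's actual restriction criteria.

```python
import numpy as np

rng = np.random.default_rng(7)

def restricted_allocation(sizes, tol, max_tries=100_000):
    """Randomise clusters 1:1, rejecting allocations whose arms
    differ in mean practice size by more than tol."""
    sizes = np.asarray(sizes, dtype=float)
    n = len(sizes)
    base = np.array([0] * (n // 2) + [1] * (n - n // 2))
    for _ in range(max_tries):
        arms = rng.permutation(base)
        if abs(sizes[arms == 0].mean() - sizes[arms == 1].mean()) <= tol:
            return arms
    raise RuntimeError("no acceptable allocation found; loosen tol")

# 120 practices (60 per group, as targeted) with varying list sizes
sizes = rng.normal(1500, 400, 120).clip(300, None)
arms = restricted_allocation(sizes, tol=50.0)
print("arm sizes:", np.bincount(arms))
```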
Eliciting a predatory response in the eastern corn snake (Pantherophis guttatus) using live and inanimate sensory stimuli: implications for managing invasive populations
North America's eastern corn snake (Pantherophis guttatus) has been introduced to several islands throughout the Caribbean and Australasia, where it poses a significant threat to native wildlife. Invasive snake control programs often involve trapping with live bait, a practice that, as well as being costly and labour intensive, raises welfare and ethical concerns. This study assessed corn snake responses to live and inanimate sensory stimuli to inform possible future trapping of the species and the development of alternative trap lures. We exposed nine individuals to sensory cues in the form of odour, visual, vibration and combined stimuli and measured the response (rate of tongue-flick [RTF]). RTF was significantly higher in the odour and combined-cue treatments, and there was no significant difference in RTF between live and inanimate cues during odour treatments. Our findings suggest chemical cues are of primary importance in initiating predation and that an inanimate odour stimulus, absent simultaneous visual and vibratory cues, is a potential low-cost alternative trap lure for the control of invasive corn snake populations.
Implementation and evaluation of a multisite drug usage evaluation program across Australian hospitals - a quality improvement initiative
Background: With the use of medicines being a broad and extensive part of health management, mechanisms to ensure quality use of medicines are essential. Drug usage evaluation (DUE) is an evidence-based quality improvement methodology designed to improve the quality, safety and cost-effectiveness of drug use. The purpose of this paper is to describe a national DUE methodology used to improve health care delivery across the continuum through a multifaceted intervention involving audit and feedback, academic detailing and system change, together with a qualitative assessment of the methodology, as illustrated by the Acute Postoperative Pain Management (APOP) project. Methods: An established methodology, consisting of a baseline audit of inpatient medical records, structured patient interviews and general practitioner surveys, followed by an educational intervention and follow-up audit, is used. Australian hospitals, including private, public, metropolitan and regional, are invited to participate on a voluntary basis. De-identified data collected by hospitals are collated and evaluated nationally to provide descriptive comparative analyses. Hospitals benchmark their practices against state and national results to facilitate change. The educational intervention consists of academic detailing, group education, audit and feedback, point-of-prescribing prompts and system changes. A repeat data collection is undertaken to assess changes in practice. An online qualitative survey was undertaken to evaluate the APOP program, eliciting hospitals' perceptions of the effectiveness of the overall DUE methodology and of the changes in procedure, prescribing, policy and clinical practice which resulted from participation. Results: 62 hospitals participated in the APOP project. Among 23 respondents to the evaluation survey, 18 (78%) reported improvements in the documentation of pain scores at their hospital, and 15 (65%) strongly agreed or agreed that participation in APOP directly resulted in increased prescribing of multimodal analgesia for pain relief in postoperative patients. Conclusions: This national DUE program has facilitated the engagement and participation of a number of acute health care facilities in addressing issues relating to the quality use of medicines. The approach has been perceived as effective in helping them achieve improvements in patient care.
Treatment effect of idebenone on inspiratory function in patients with Duchenne muscular dystrophy
Assessment of dynamic inspiratory function may provide valuable information about the degree and progression of pulmonary involvement in patients with Duchenne muscular dystrophy (DMD). The aims of this study were to characterize inspiratory function and to assess the efficacy of idebenone on this pulmonary function outcome in a large and well-characterized cohort of 10–18-year-old DMD patients not taking glucocorticoid steroids (GCs) enrolled in the phase 3 randomized controlled DELOS trial. We evaluated the effect of idebenone on the highest flow generated during an inspiratory FVC maneuver (maximum inspiratory flow; V'I,max(FVC)) and on the ratio between the largest inspiratory flow during tidal breathing (tidal inspiratory flow; V'I,max(t)) and the V'I,max(FVC). The fraction of the maximum flow that is not used during tidal breathing has been termed the inspiratory flow reserve (IFR). DMD patients in both treatment groups of DELOS (idebenone: n = 31; placebo: n = 33) had comparable and abnormally low V'I,max(FVC) at baseline. During the study period, V'I,max(FVC) declined by a further 0.29 L/sec in patients on placebo (95% CI: −0.51, −0.08; P = 0.008 at week 52), whereas it remained stable in patients on idebenone (change from baseline to week 52: 0.01 L/sec; 95% CI: −0.22, 0.24; P = 0.950). The between-group difference favoring idebenone was 0.27 L/sec (P = 0.043) at week 26 and 0.30 L/sec (P = 0.061) at week 52. In addition, during the study period, IFR improved by 2.8% in patients receiving idebenone and worsened by 3.0% among patients on placebo (between-group difference 5.8% at week 52; P = 0.040). Although the clinical interpretation of these data is currently limited by the scarcity of routine clinical practice experience with dynamic inspiratory function outcomes in DMD, these findings from a randomized controlled study nevertheless suggest that idebenone preserved inspiratory muscle function as assessed by V'I,max(FVC) and IFR in patients with DMD.
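From the abstract's own definitions, the inspiratory flow reserve is the fraction of maximal inspiratory flow not used during tidal breathing, which can be written out as follows (our restatement of the stated definition, not a formula quoted from the paper):

```latex
\mathrm{IFR} = \left(1 - \frac{V'_{I,\mathrm{max}(t)}}{V'_{I,\mathrm{max}(\mathrm{FVC})}}\right) \times 100\%
```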