The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study
AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only, and the impact of longer delays was explored in a sensitivity analysis. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male and more comorbid, and to have a higher body mass index, rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.
Implementing new health interventions in developing countries: why do we lose a decade or more?
BACKGROUND: It is unclear how long it takes for health interventions to transition from research and development (R&D) to being used against diseases prevalent in resource-poor countries. We undertook an analysis of the time required to begin implementation of four vaccines and three malaria interventions. We evaluated five milestones for each intervention and assessed whether the milestones were associated with beginning implementation. METHODS: The authors screened WHO databases to determine the number of years between first regulatory approval of interventions and countries beginning implementation. Descriptive analyses of temporal patterns and statistical analyses using logistic regression and Cox proportional hazards models were used to evaluate associations between five milestones and the beginning of implementation for each intervention. The milestones were: (A) presence of a coordinating group focused on the intervention; (B) availability of an intervention tailored to developing country health systems; (C) an international financing commitment; and (D) initial and (E) comprehensive WHO recommendations. Countries were categorized by World Bank income criteria. RESULTS: Five years after regulatory approval, no low-income countries (LICs) had begun implementing any of the vaccines, increasing to an average of only 4% of LICs after 10 years. Each malaria intervention was used by an average of 7% of LICs after five years and 37% after 10 years. Four of the interventions had implementation rates similar to HepB, while one was slower and one was faster than HepB. A financing commitment and an initial WHO recommendation appeared to be temporally associated with the beginning of implementation. The initial recommendation from WHO was the only milestone associated in all statistical analyses with countries beginning implementation (relative rate = 1.97, P < 0.001).
CONCLUSIONS: Although it is possible that four of the milestones were genuinely not associated with countries beginning implementation, we propose an alternative interpretation: that the milestones were not realized early enough in each intervention's development to shorten the time to beginning implementation. We discuss a framework, built upon existing literature, for consideration during the development of future interventions. Identifying critical milestones and their timing relative to R&D promises to help new interventions realize their intended public health impact more rapidly.
Redefining the stress cortisol response to surgery.
BACKGROUND: Cortisol levels rise with the physiological stress of surgery. Previous studies have used older, less-specific assays, have not differentiated by severity, or have studied only procedures of a defined type. The aim of this study was to examine this phenomenon in surgeries of varying severity using a widely used cortisol immunoassay. METHODS: Euadrenal patients undergoing elective surgery were enrolled prospectively. Serum samples were taken at 8 a.m. on the day of surgery, at induction, and at 1, 2, 4 and 8 hours after induction. Subsequent samples were taken daily at 8 a.m. until postoperative day 5 or hospital discharge. Total cortisol was measured using an Abbott Architect immunoassay, and cortisol-binding globulin (CBG) using a radioimmunoassay. Surgical severity was classified by POSSUM operative severity score. RESULTS: Ninety-three patients underwent surgery: Major/Major+ (n = 37), Moderate (n = 33) and Minor (n = 23). Peak cortisol correlated positively with severity: Major/Major+ median 680 [range 375-1452], Moderate 581 [270-1009] and Minor 574 [272-1066] nmol/L (Kruskal-Wallis test, P = .0031). CBG fell by 23%, and the magnitude of the drop correlated positively with severity. CONCLUSIONS: The range in baseline and peak cortisol response to surgery is wide, and peak cortisol levels are lower than previously appreciated. Improvements in surgery, anaesthetic techniques and cortisol assays might explain the lower peak cortisol levels we observed. The criteria for dynamic testing of the cortisol response may need to be lowered to take account of these factors. Our data also support a lower-dose, stratified approach to steroid replacement dosing in hypoadrenal patients, to minimize the deleterious effects of over-replacement.