60 research outputs found
Toward a Framework for Outcome-Based Analytical Performance Specifications: A Methodology Review of Indirect Methods for Evaluating the Impact of Measurement Uncertainty on Clinical Outcomes
Background: For medical tests that have a central role in clinical decision-making, current guidelines advocate outcome-based analytical performance specifications. Given that empirical (clinical-trial style) analyses are often impractical or infeasible in this context, the ability to set such specifications is expected to rely on indirect studies to calculate the impact of test measurement uncertainty on downstream clinical, operational and economic outcomes. Currently, however, a lack of awareness and guidance concerning available alternative indirect methods is limiting the production of outcome-based specifications. Our aim therefore was to review available indirect methods and present an analytical framework to inform future outcome-based performance goals.
Content: A methodology review consisting of database searches and extensive citation tracking was conducted to identify studies using indirect methods to incorporate or evaluate the impact of test measurement uncertainty on downstream outcomes (including clinical accuracy, clinical utility and/or costs). Eighty-two studies were identified, most of which evaluated the impact of imprecision and/or bias on clinical accuracy. A common analytical framework underpinning the various methods was identified, consisting of three key steps: (1) calculation of “true” test values; (2) calculation of measured test values (incorporating uncertainty); and (3) calculation of the impact of discrepancies between (1) and (2) on specified outcomes. A summary of the methods adopted is provided, and key considerations discussed.
Conclusions: Various approaches are available for conducting indirect assessments to inform outcome-based performance specifications. This study provides an overview of methods and key considerations to inform future studies and research in this area.
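The three-step framework identified in the review lends itself to simulation. The following is a minimal Monte Carlo sketch, not a method from the review itself; the analyte, cutoff, bias and imprecision values are entirely hypothetical.

```python
import random

def simulate_misclassification(n, true_mean, true_sd, cutoff, bias, cv, seed=0):
    """Three-step indirect assessment (sketch):
    (1) simulate 'true' test values;
    (2) derive measured values by adding measurement uncertainty
        (systematic bias plus proportional imprecision);
    (3) count discordant classifications around a clinical decision cutoff."""
    rng = random.Random(seed)
    discordant = 0
    for _ in range(n):
        true_val = rng.gauss(true_mean, true_sd)                        # step 1
        measured = true_val + bias + rng.gauss(0, cv * abs(true_val))   # step 2
        if (true_val >= cutoff) != (measured >= cutoff):                # step 3
            discordant += 1
    return discordant / n

# Hypothetical example: analyte with mean 46, SD 6, decision cutoff 48,
# +1 unit bias and 3% analytical CV.
rate = simulate_misclassification(100_000, 46, 6, 48, 1.0, 0.03)
```

In a real study, step 3 would map misclassification on to the specified clinical, operational or economic outcome rather than stopping at a discordance rate.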
Evidence synthesis and linkage for modelling the cost-effectiveness of diagnostic tests: preliminary good practice recommendations
Objectives: To develop preliminary good practice recommendations for synthesising and linking evidence of treatment effectiveness when modelling the cost-effectiveness of diagnostic tests. Methods: We conducted a targeted review of guidance from key Health Technology Assessment (HTA) bodies to summarise current recommendations on synthesis and linkage of treatment effectiveness evidence within economic evaluations of diagnostic tests. We then focused on a specific case study, the cost-effectiveness of troponin for the diagnosis of myocardial infarction, and reviewed the approach taken to synthesise and link treatment effectiveness evidence in different modelling studies. Results: The Australian and UK HTA bodies provided advice for synthesising and linking treatment effectiveness in diagnostic models, acknowledging that linking test results to treatment options and their outcomes is common. Across all reviewed models for the case study, uniform test-directed treatment decision making was assumed, i.e., all those who tested positive were treated. Treatment outcome data from a variety of sources, including expert opinion, were utilised for linked clinical outcomes. Preliminary good practice recommendations for data identification, integration and description are proposed. Conclusion: Modelling the cost-effectiveness of diagnostic tests poses unique challenges in linking evidence on test accuracy to treatment effectiveness data to understand how a test impacts patient outcomes and costs. Upfront consideration of how a test and its results will likely be incorporated into patient diagnostic pathways is key to exploring the optimal design of such models. We propose some preliminary good practice recommendations to improve the quality of cost-effectiveness evaluations of diagnostic tests going forward.
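The linkage structure described above (uniform test-directed treatment: everyone testing positive is treated) can be sketched as a small decision-tree calculation. All probabilities below are illustrative placeholders, not values from the reviewed troponin models.

```python
def linked_expected_outcome(prev, sens, spec, p_bad_treated, p_bad_untreated):
    """Link test accuracy to treatment effectiveness (sketch).
    Assumes uniform test-directed treatment: all test-positives are treated.
    Diseased patients are treated if true positive, untreated if false
    negative; non-diseased patients are assumed unaffected by treatment here."""
    tp = prev * sens
    fn = prev * (1 - sens)
    fp = (1 - prev) * (1 - spec)
    tn = (1 - prev) * spec
    p_bad = tp * p_bad_treated + fn * p_bad_untreated
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn, "p_bad_outcome": p_bad}

# Hypothetical inputs: 15% prevalence, 95% sensitivity, 90% specificity,
# 5% adverse-outcome risk if treated vs 20% if disease goes untreated.
res = linked_expected_outcome(0.15, 0.95, 0.90, 0.05, 0.20)
```

A full cost-effectiveness model would attach costs and QALYs to each branch; the point of the sketch is that the test accuracy parameters and the treatment effectiveness parameters enter the same expected-value calculation, which is why their synthesis and linkage need explicit justification.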
Unmet clinical needs for COVID-19 tests in UK health and social care settings
There is an urgent requirement to identify which clinical settings are in most need of COVID-19 tests and the priority role(s) for tests in these settings to accelerate the development of tests fit for purpose in health and social care across the UK. This study sought to identify and prioritize unmet clinical needs for COVID-19 tests across different settings within the UK health and social care sector via an online survey of health and social care professionals and policymakers. Four hundred and forty-seven responses were received between 22nd May and 15th June 2020. Hospitals and care homes were recognized as the settings with the greatest unmet clinical need for COVID-19 diagnostics, despite reporting more access to laboratory molecular testing than other settings. Hospital staff identified a need for diagnostic tests for symptomatic workers and patients. In contrast, care home staff expressed an urgency for screening at the front door to protect high-risk residents and limit transmission. The length of time to test result was considered a widespread problem with current testing across all settings. Rapid tests for staff were regarded as an area of need across general practice and dental settings, alongside tests to limit antibiotic use.
Variation in hospital cost trajectories at the end of life by age, multimorbidity and cancer type
Background
Approximately thirty thousand people in Scotland are diagnosed with cancer annually, of whom a third survive less than one year. The timing, nature and value of hospital-based healthcare for patients with advanced cancer are not well understood. The study's aim was to describe the timing and nature of hospital-based healthcare use and associated costs in the last year of life for patients with a cancer diagnosis.
Methods
We undertook a Scottish population-wide administrative data linkage study of hospital-based healthcare use for individuals with a cancer diagnosis, who died aged 60 and over between 2012 and 2017. Hospital admissions and length of stay (LOS), as well as the number and nature of outpatient and day case appointments were analysed. Generalised linear models were used to adjust costs for age, gender, socioeconomic deprivation status, rural-urban (RU) status and comorbidity.
Results
The study included 85,732 decedents with a cancer diagnosis. For 64,553 (75.3%) of them, cancer was the primary cause of death. Mean age at death was 80.01 (SD 8.15) years. The mean number of inpatient stays in the last year of life was 5.88 (SD 5.68), with a mean LOS of 7 days. Admission rates rose sharply in the last month of life. One-year adjusted and unadjusted costs decreased with increasing age. A higher comorbidity burden was associated with higher costs. Major cost differences were present between cancer types.
Conclusions
People in Scotland in their last year of life with cancer are high users of secondary care. Hospitalisation accounts for a high proportion of costs, particularly in the last month of life. Further research is needed to examine triggers for hospitalisations and to identify modifiable reasons for unwarranted variation in hospital use among different cancer cohorts.
Software using artificial intelligence for nodule and cancer detection in CT lung cancer screening: systematic review of test accuracy studies
Objectives: To examine the accuracy and impact of artificial intelligence (AI) software assistance in lung cancer screening using computed tomography (CT). Methods: A systematic review of CE-marked, AI-based software for automated detection and analysis of nodules in CT lung cancer screening was conducted. Multiple databases including Medline, Embase and Cochrane CENTRAL were searched from 2012 to March 2023. Primary research reporting test accuracy or impact on reading time or clinical management was included. QUADAS-2/QUADAS-C were used to assess risk of bias. We undertook narrative synthesis. Results: Eleven studies evaluating six different AI-based programs and reporting on 19,770 patients were eligible. All were at high risk of bias with multiple applicability concerns. Compared to unaided reading, AI-assisted reading was faster and generally improved sensitivity (+5% to +20% for detecting/categorising actionable nodules; +3% to +15% for detecting/categorising malignant nodules), with lower specificity (-7% to -3% for detecting/categorising actionable nodules; -8% to -2% for detecting/categorising malignant nodules). AI assistance tended to increase the proportion of nodules allocated to higher risk categories. Assuming 0.5% cancer prevalence, these results would translate into an additional 150 to 750 cancers detected per million participants, but lead to an additional 59,700 to 79,600 participants without cancer receiving unnecessary CT surveillance. Conclusions: AI assistance in lung cancer screening may improve sensitivity but increases the number of false positive results and unnecessary surveillance. Future research needs to increase the specificity of AI-assisted reading and minimise risk of bias and applicability concerns through improved study design.
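The per-million figures above follow from simple arithmetic on the accuracy changes. A back-of-envelope check, assuming 0.5% prevalence (5,000 cancers and 995,000 cancer-free participants per million): the detection range matches sensitivity gains of 3 to 15 percentage points, and the quoted surveillance range corresponds to specificity reductions of 6 to 8 percentage points.

```python
def per_million(prevalence, delta_sens, delta_spec):
    """Translate changes in sensitivity/specificity (as fractions)
    into per-million additional detections and additional false-positive
    surveillance referrals."""
    cancers = 1_000_000 * prevalence
    non_cancers = 1_000_000 - cancers
    extra_detected = cancers * delta_sens
    extra_surveillance = non_cancers * delta_spec
    return extra_detected, extra_surveillance

# +3 to +15 points sensitivity for malignant nodules at 0.5% prevalence
lo_det, _ = per_million(0.005, 0.03, 0.0)   # 150 extra cancers detected
hi_det, _ = per_million(0.005, 0.15, 0.0)   # 750 extra cancers detected
# Specificity drops of 6 and 8 points on 995,000 cancer-free participants
_, lo_fp = per_million(0.005, 0.0, 0.06)    # 59,700 extra under surveillance
_, hi_fp = per_million(0.005, 0.0, 0.08)    # 79,600 extra under surveillance
```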
Target Product Profile for a Machine Learning–Automated Retinal Imaging Analysis Software for Use in English Diabetic Eye Screening: Protocol for a Mixed Methods Study
Background:
Diabetic eye screening (DES) represents a significant opportunity for the application of machine learning (ML) technologies, which may improve clinical and service outcomes. However, successful integration of ML into DES requires careful product development, evaluation, and implementation. Target product profiles (TPPs) summarize the requirements necessary for successful implementation so these can guide product development and evaluation.
//
Objective:
This study aims to produce a TPP for an ML-automated retinal imaging analysis software (ML-ARIAS) system for use in DES in England.
//
Methods:
This work will consist of 3 phases. Phase 1 will establish the characteristics to be addressed in the TPP. A list of candidate characteristics will be generated from the following sources: an overview of systematic reviews of diagnostic test TPPs; a systematic review of digital health TPPs; and the National Institute for Health and Care Excellence’s Evidence Standards Framework for Digital Health Technologies. The list of characteristics will be refined and validated by a study advisory group (SAG) made up of representatives from key stakeholders in DES. This includes people with diabetes; health care professionals; health care managers and leaders; and regulators and policy makers. In phase 2, specifications for these characteristics will be drafted following a series of semistructured interviews with participants from these stakeholder groups. Data collected from these interviews will be analyzed using the shortlist of characteristics as a framework, after which specifications will be drafted to create a draft TPP. Following approval by the SAG, in phase 3, the draft will enter an internet-based Delphi consensus study with participants sought from the groups previously identified, as well as ML-ARIAS developers, to ensure feasibility. Participants will be invited to score characteristic and specification pairs on a scale from “definitely exclude” to “definitely include,” and suggest edits. The document will be iterated between rounds based on participants’ feedback. Feedback on the draft document will be sought from a group of ML-ARIAS developers before its final contents are agreed upon in an in-person consensus meeting. At this meeting, representatives from the stakeholder groups previously identified (minus ML-ARIAS developers, to avoid bias) will be presented with the Delphi results and feedback of the user group and asked to agree on the final contents by vote.
//
Results:
Phase 1 was completed in November 2023. Phase 2 is underway and expected to finish in March 2024. Phase 3 is expected to be complete in July 2024.
//
Conclusions:
The multistakeholder development of a TPP for an ML-ARIAS for use in DES in England will help developers produce tools that serve the needs of patients, health care providers, and their staff. The TPP development process will also provide methods and a template to produce similar documents in other disease areas.
//
International Registered Report Identifier (IRRID):
DERR1-10.2196/5056
Measuring spirometry in a lung cancer screening cohort highlights possible underdiagnosis and misdiagnosis of Chronic Obstructive Pulmonary Disease
Introduction:
Chronic Obstructive Pulmonary Disease (COPD) is underdiagnosed, and measurement of spirometry alongside low-dose computed tomography (LDCT) screening for lung cancer is one strategy to increase earlier diagnosis of this disease. //
Methods:
Ever-smokers at high risk of lung cancer were invited to the Yorkshire Lung Screening Trial for a Lung Health Check (LHC) comprising LDCT screening, pre-bronchodilator spirometry and a smoking cessation service. In this cross-sectional study we present data on participant demographics, respiratory symptoms, lung function, emphysema on imaging and both self-reported and primary care diagnoses of COPD. Multivariable logistic regression analysis identified factors associated with possible underdiagnosis and misdiagnosis of COPD in this population, with airflow obstruction (AO) defined as FEV1/FVC ratio <0.70. //
Results:
Of 3,920 LHC attendees undergoing spirometry, 17% had undiagnosed AO with respiratory symptoms, representing potentially undiagnosed COPD. Compared to those with a primary care COPD code, this population had milder symptoms, better lung function, and were more likely to be current smokers (p≤0.001 for all comparisons). Of 836 attendees with a primary care COPD code who underwent spirometry, 19% did not have AO, potentially representing misdiagnosed COPD, although symptom burden was high. //
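The study's case definitions combine the fixed-ratio spirometric criterion (FEV1/FVC < 0.70) with the presence or absence of a primary care COPD code. A minimal sketch of that classification logic, with illustrative field names not taken from the study:

```python
def classify(fev1, fvc, has_copd_code, symptomatic):
    """Cross spirometric airflow obstruction (AO), defined by the fixed
    FEV1/FVC < 0.70 ratio used in the study, against an existing primary
    care COPD code. Field names and thresholds other than 0.70 are
    illustrative."""
    ao = (fev1 / fvc) < 0.70
    if ao and symptomatic and not has_copd_code:
        return "possible undiagnosed COPD"
    if not ao and has_copd_code:
        return "possible misdiagnosed COPD"
    return "concordant"

# FEV1 2.0 L / FVC 3.2 L (ratio 0.625), symptomatic, no COPD code:
label = classify(2.0, 3.2, has_copd_code=False, symptomatic=True)
```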
Discussion:
Spirometry offered alongside LDCT screening can potentially identify cases of undiagnosed and misdiagnosed COPD. Future research should assess the downstream impact of these findings to determine whether any meaningful changes to treatment and outcomes occur, and also to assess the impact of co-delivering spirometry on other parameters of LDCT screening performance, such as participation and adherence. Additionally, work is needed to better understand the aetiology of respiratory symptoms in those with misdiagnosed COPD, to ensure this highly symptomatic group receives evidence-based interventions.
Six versus 12 months' adjuvant trastuzumab in patients with HER2-positive early breast cancer: the PERSEPHONE non-inferiority RCT
Background
The addition of adjuvant trastuzumab to chemotherapy has significantly improved outcomes in human epidermal growth factor receptor 2 (HER2) positive early, potentially curable breast cancer. Twelve months' trastuzumab, tested in the registration trials, was adopted as standard adjuvant treatment in 2006. Subsequently, similar outcomes were demonstrated using 9 weeks' trastuzumab. Shorter durations were therefore tested for non-inferiority.
Objectives
To establish whether 6 months’ adjuvant trastuzumab is non-inferior to 12 months in HER2-positive early breast cancer using a primary endpoint of 4-year disease-free-survival (DFS).
Design
Phase III randomised, controlled, non-inferiority trial.
Setting
152 NHS Hospitals.
Participants
4088 patients with HER2-positive early breast cancer planned to receive both chemotherapy and trastuzumab.
Intervention
Randomisation (1:1) between six and twelve months' trastuzumab.
Main outcomes
Primary endpoint was DFS four years after diagnosis. Secondary endpoints were overall survival (OS), cost-effectiveness, and cardiac function during trastuzumab. Assuming a 4-year DFS rate of 80% with 12 months, 4000 patients were required to demonstrate non-inferiority of 6 months (5% 1-sided significance, 85% power), defining the non-inferiority limit as no worse than 3% below the standard arm. Costs and quality-adjusted life years (QALYs) were estimated by within-trial analysis and a lifetime decision-analytic model.
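An absolute margin on a survival proportion, like the 3% below an 80% 4-year DFS rate used here, can be expressed as an implied hazard-ratio margin under a proportional-hazards assumption. This is an illustrative conversion only, not the trial's prespecified analysis:

```python
import math

def implied_hr_margin(control_surv, margin_abs):
    """Convert an absolute non-inferiority margin on a survival proportion
    into the implied hazard-ratio margin, assuming proportional hazards:
    S_exp(t) = S_ctrl(t)**HR, so HR = ln(S_exp) / ln(S_ctrl)."""
    return math.log(control_surv - margin_abs) / math.log(control_surv)

# 80% 4-year DFS in the control arm, 3-percentage-point absolute margin
hr = implied_hr_margin(0.80, 0.03)  # about 1.17
```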
Results
Between 4th October 2007 and 31st July 2015, 2045 patients were randomised to 12 months' trastuzumab and 2043 to 6 months'. Sixty-nine percent had ER-positive disease; 90% received anthracyclines (49% with taxanes; 41% without taxanes); 10% received taxanes without anthracyclines; 54% had trastuzumab sequentially after chemotherapy; 85% received adjuvant chemotherapy (58% were node negative). At 6.1 years' median follow-up, with 389 (10%) deaths and 566 (14%) DFS events, 4-year DFS rates for the 4088 patients were 89.5% (95% CI 88.1-90.8) in the 6-month group and 90.3% (95% CI 88.9-91.5) in the 12-month group (hazard ratio 1.10; 90% CI 0.96-1.26; non-inferiority p=0.01), demonstrating non-inferiority of 6 months' trastuzumab. Congruent results were found for OS (non-inferiority p=0.0003) and landmark analyses 6 months from starting trastuzumab (non-inferiority p=0.03 (DFS) and p=0.006 (OS)). Six months' trastuzumab resulted in fewer patients reporting adverse events of severe grade (365/1929 (19%) versus 460/1935 (24%) of 12-month patients, p=0.0003) or stopping early because of cardiotoxicity (61/1977 (3%) versus 146/1941 (8%) of 12-month patients, p<0.0001). Health economic analysis showed significantly lower lifetime costs and similar lifetime QALYs, and thus a high probability that 6 months is cost-effective compared to 12 months. Patient-reported experiences on the trial most frequently highlighted fatigue and aches and pains.
Limitations
The type of chemotherapy and timing of trastuzumab changed through the recruitment phase of the study as standard practice altered.
Conclusions
PERSEPHONE demonstrated that in HER2-positive early breast cancer 6 months’ adjuvant trastuzumab was non-inferior to 12 months. There was significantly less cardiac toxicity and fewer severe adverse events with 6 months’ treatment.
Future work
On-going translational work investigates patient and tumour genetic determinants of toxicity, and trastuzumab efficacy. An individual patient data meta-analysis with PHARE and other trastuzumab duration trials is planned.
Trial registration
ISRCTN 52968807
Funding
National Institute for Health Research, Health Technology Assessment Programme (HTA Project: 06/303/98).
Pre-hospital risk factors for inpatient death from severe febrile illness in Malian children.
BACKGROUND: Inpatient case fatality from severe malaria remains high in much of sub-Saharan Africa. The majority of these deaths occur within 24 hours of admission, suggesting that pre-hospital management may have an impact on the risk of case fatality. METHODS: Prospective cohort study, including a questionnaire about pre-hospital treatment, of all 437 patients admitted with severe febrile illness (presumed to be severe malaria) to the paediatric ward in Sikasso Regional Hospital, Mali, in a two-month period. FINDINGS: The case fatality rate was 17.4%. Coma, hypoglycaemia and respiratory distress at admission were associated with significantly higher mortality. In multiple logistic regression models and in a survival analysis to examine pre-admission risk factors for case fatality, the only consistent and significant risk factor was sex. Girls were twice as likely to die as boys (AOR 2.00, 95% CI 1.08-3.70). There was a wide variety of pre-hospital treatments used, both modern and traditional. None had a consistent impact on the risk of death across different analyses. Reported use of traditional treatments was not associated with post-admission outcome. INTERPRETATION: Aside from well-recognised markers of severity, the main risk factor for death in this study was female sex, although this study cannot determine the reason why. Differences in pre-hospital treatments were not associated with case fatality.
- …