Biomarkers of liver fibrosis
Currently the only accepted method (gold standard) for the diagnosis of the fibrotic stages of chronic liver disease (CLD) is liver biopsy, which allows histological assessment. Liver biopsy is an invasive investigation associated with a range of adverse events (e.g. pain, haemorrhage), limiting its serial use in clinical practice. Its utility is further limited by sampling error and by the fact that histology is, in effect, a surrogate for clinical outcomes.
Over recent years, alternative non-invasive biomarkers for the diagnosis of liver fibrosis have been developed. Initially developed in chronic viral hepatitis, these have since seen their use expanded to include all aetiologies of CLD. Such markers can be divided into indirect ‘simple’ markers (e.g. transaminases, gamma-glutamyl transferase, platelet count), direct ‘complex’ markers (e.g. procollagen peptides I/III, Type IV collagen), cytokines (e.g. interleukin-10, transforming growth factor alpha) and imaging. Here, we discuss the clinical utility, limitations and development of non-invasive biomarkers and their use as diagnostic and prognostic tests.
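Several of the indirect ‘simple’ markers mentioned above are combined into widely used composite indices. As an illustration only (these particular indices are not necessarily those evaluated in the work above), a minimal Python sketch of the AST:ALT ratio, APRI and FIB-4 calculations is shown below; the AST upper limit of normal of 40 U/L in the worked example is an assumption.

```python
import math

def ast_alt_ratio(ast_u_l: float, alt_u_l: float) -> float:
    """AST:ALT ratio, a simple indirect marker of fibrosis."""
    return ast_u_l / alt_u_l

def apri(ast_u_l: float, ast_uln_u_l: float, platelets_10e9_l: float) -> float:
    """AST-to-platelet ratio index (APRI)."""
    return (ast_u_l / ast_uln_u_l) * 100 / platelets_10e9_l

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """FIB-4 index, combining age, transaminases and platelet count."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Worked example: 55-year-old, AST 62 U/L, ALT 48 U/L, platelets 150 x 10^9/L
print(round(ast_alt_ratio(62, 48), 2))   # 1.29
print(round(apri(62, 40, 150), 2))       # 1.03 (assumes AST ULN of 40 U/L)
print(round(fib4(55, 62, 48, 150), 2))   # 3.28
```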
Health related quality of life in individuals at high risk of chronic liver disease: Impact of a community diagnostic pathway
Objectives: There is a lack of understanding of health related quality of life (HRQoL) in chronic liver disease (CLD). With the rising prevalence of alcohol and obesity driven CLD, and the increasing ability to screen for fibrosis, it is important to understand the impact of the diagnostic process for patients. Study design: Prospective cohort study. Methods: A cohort study was conducted utilising the Nottingham Adult Liver Disease Stratification Pathway, UK. All patients referred as high risk for CLD (due to metabolic risk factors, alcohol or abnormal liver enzymes) completed the EQ-5D before diagnosis and at three and 12 months after. HRQoL was investigated by domain, CLD severity (transient elastography) and temporally. Results: 493 patients participated, with 300 (60.9%) completing at least one follow-up HRQoL assessment. Pre-diagnosis, the median (IQR) utility index was 0.75 (0.61-0.85) and the visual analogue scale score was 75/100 (60-90). The median utility index was significantly lower amongst those with advanced liver disease compared to those without at all time points (baseline 0.68 vs 0.77, three-months 0.65 vs 0.79, 12-months 0.69 vs 0.84, all
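The abstract compares median EQ-5D utility between severity groups at each time point. A minimal sketch of that kind of comparison is given below, assuming a long-format table with hypothetical column names ('utility', 'advanced', 'timepoint') and using a Mann-Whitney U test; the abstract does not state which statistical test was actually used.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_utility_by_severity(df: pd.DataFrame) -> pd.DataFrame:
    """Compare EQ-5D utility between advanced and non-advanced liver disease
    at each follow-up time point (hypothetical column names)."""
    rows = []
    for tp, grp in df.groupby("timepoint"):
        adv = grp.loc[grp["advanced"], "utility"]
        not_adv = grp.loc[~grp["advanced"], "utility"]
        _, p = mannwhitneyu(adv, not_adv, alternative="two-sided")
        rows.append({
            "timepoint": tp,
            "median_advanced": adv.median(),
            "median_not_advanced": not_adv.median(),
            "p_value": p,
        })
    return pd.DataFrame(rows)
```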
Reliable computational quantification of liver fibrosis is compromised by inherent staining variation
Biopsy remains the gold standard measure for staging liver disease, both to inform prognosis and to assess the response to a given treatment. Semiquantitative scores such as the Ishak fibrosis score are used for evaluation. These scores are utilised in clinical trials, with the US Food and Drug Administration mandating particular scores as inclusion criteria for participants and using the change in score as evidence of treatment efficacy. There is an urgent need for improved, quantitative assessment of liver biopsies to detect small incremental changes in liver architecture over the course of a clinical trial. Artificial intelligence (AI) methods have been proposed as a way to increase the amount of information extracted from a biopsy and to potentially remove bias introduced by manual scoring. We have trained and evaluated an AI tool for measuring the amount of scarring in sections of picrosirius red-stained liver. The AI methodology was compared with both manual scoring and widely available colour space thresholding. Four sequential sections from each case were stained on two separate occasions by two independent clinical laboratories using routine protocols to study the effect of inter- and intra-laboratory staining variation on these tools. Finally, we compared these methods to second harmonic generation (SHG) imaging, a stain-free quantitative measure of collagen. Although AI methods provided a modest improvement over simpler computer-assisted measures, staining variation both within and between labs had a dramatic effect on quantitation, with manual assignment of scar proportion the most consistent. Manual assessment also correlated the most strongly with collagen measured by SHG. In conclusion, results suggest that computational measures of liver scarring from stained sections are compromised by inter- and intra-laboratory staining. Stain-free quantitative measurement using SHG avoids staining-related variation and may prove more accurate in detecting small changes in scarring that may occur in therapeutic trials
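The "widely available colour space thresholding" compared against the AI tool typically amounts to classifying pixels by hue to estimate a collagen (scar) proportionate area. A minimal sketch of such a threshold, assuming scikit-image and illustrative HSV cut-offs (not the study's tuned values), is shown below.

```python
from skimage import color, io, util

def collagen_proportionate_area(image_path: str,
                                hue_window=(0.95, 0.05),
                                sat_min: float = 0.25) -> float:
    """Estimate the proportion of tissue staining red (collagen) in a
    picrosirius red-stained section via a simple HSV colour threshold."""
    rgb = util.img_as_float(io.imread(image_path))[..., :3]
    hsv = color.rgb2hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    background = (s < 0.1) & (v > 0.85)        # near-white, unstained regions
    tissue = ~background
    # red hue wraps around 0, hence the two-sided hue window
    red = ((h >= hue_window[0]) | (h <= hue_window[1])) & (s > sat_min) & tissue
    return float(red.sum()) / max(int(tissue.sum()), 1)
```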
The XL probe: a luxury or a necessity? Risk stratification in an obese community cohort using transient elastography
Background: Transient elastography is a non-invasive tool which can stratify patients at risk of chronic liver disease. However, a raised body mass index has been independently associated with a failed or unreliable examination. Objective: The purpose of this study was to analyse the performance of two probes (M/XL) on a portable transient elastography device within an obese community population. Method: A prospective study with recruitment from a primary care practice. Patients identified with a risk factor for chronic liver disease were invited to a community-based risk stratification pathway for transient elastography readings with both probes. A threshold of ≥8.0 kPa defined elevated liver stiffness. Results: A total of 477 patients attended the pathway. Of these, 21% had no valid measurements with the M probe. There was a significant difference between the probes in the proportion achieving ≥10 valid readings (M versus XL probe: 66.2% versus 90.2%; p ≤ 0.001) and in their reliability (M versus XL probe: 77.4% versus 98.5%; p = 0.028). Unreliable readings with the M probe increased as body mass index increased. The XL probe re-stratified 5.2% of patients to have a normal reading. Conclusion: The XL probe on a portable device significantly improves the applicability of transient elastography within a community-based risk stratification pathway
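The abstract distinguishes examinations with ≥10 valid readings from "reliable" examinations. A minimal sketch of how such criteria are commonly operationalised is given below, assuming the widely used IQR/median ≤ 0.30 reliability rule and the ≥8.0 kPa stiffness threshold from the study; the exact reliability definition used in the study is not stated in the abstract.

```python
import statistics

def te_examination_summary(valid_readings_kpa: list[float]) -> dict:
    """Summarise a set of valid transient elastography readings (kPa)."""
    n = len(valid_readings_kpa)
    median = statistics.median(valid_readings_kpa)
    q1, _, q3 = statistics.quantiles(valid_readings_kpa, n=4)
    iqr_over_median = (q3 - q1) / median
    return {
        "n_valid": n,
        "median_kpa": median,
        "iqr_over_median": iqr_over_median,
        "sufficient_readings": n >= 10,                   # >=10 valid readings
        "reliable": n >= 10 and iqr_over_median <= 0.30,  # assumed IQR/median rule
        "elevated_stiffness": median >= 8.0,              # threshold used in the study
    }

print(te_examination_summary([6.1, 6.4, 6.8, 7.0, 7.2, 7.5, 7.9, 8.2, 8.4, 8.8]))
```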
MRI assessment of altered dynamic changes in liver haemodynamics following a meal challenge in compensated cirrhosis
Background: To use magnetic resonance imaging (MRI) to dynamically assess postprandial changes in hepatic and collateral blood flow, liver perfusion and oxygenation in healthy participants and in patients with compensated cirrhosis (CC). Methods: We evaluated blood flow in the portal vein, hepatic artery and azygos vein (using phase-contrast MRI), liver perfusion (using arterial spin labelling) and blood oxygenation (using transverse relaxation time [T2*] mapping). Measures were collected at baseline and at 6-7 min intervals from 20 to 65 min following a test meal (440 ml; 660 kcal) in 10 healthy participants and 10 CC patients. Results: In healthy participants, we observed a significant postprandial increase in portal vein flow from baseline (+137 ± 26% [mean ± standard deviation], p < 0.001) coupled with a reduction in hepatic artery flow from baseline (-30 ± 18%, p = 0.008), reflecting the hepatic artery buffer response. In CC patients, a lower but still significant increase in portal vein flow (+67 ± 50%, p = 0.014) was observed, without a clear hepatic artery buffer response. Healthy participants showed a significant increase in postprandial liver perfusion (+138 ± 75%, p < 0.001), which was not observed in CC patients. There was no change in liver T2* for either group. Conclusions: Postprandial changes in liver perfusion, oxygenation, and hepatic and collateral circulation can be measured non-invasively using MRI. Differences between healthy participants and CC patients were shown, which may help stratify liver cirrhosis in patients
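The flow and perfusion results above are expressed as percentage change from baseline with a within-group comparison. A minimal sketch of that calculation is shown below, assuming paired per-subject measurements and a paired t-test; the abstract does not state which test was used, and the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import ttest_rel

def postprandial_change(baseline: np.ndarray, postprandial: np.ndarray):
    """Per-subject percentage change from baseline in a flow/perfusion measure,
    with a paired comparison of postprandial vs baseline values."""
    change_pct = 100.0 * (postprandial - baseline) / baseline
    _, p_value = ttest_rel(postprandial, baseline)
    return change_pct.mean(), change_pct.std(ddof=1), p_value

# Illustrative values only (not the study data): portal vein flow in ml/min
baseline = np.array([850.0, 920.0, 780.0, 1010.0, 890.0])
postprandial = np.array([1900.0, 2100.0, 1750.0, 2300.0, 2050.0])
print(postprandial_change(baseline, postprandial))
```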
Economic evaluation of a community-based diagnostic pathway to stratify adults for non-alcoholic fatty liver disease: a Markov model informed by a feasibility study
Objectives: To assess the long-term cost-effectiveness of a risk stratification pathway, compared with standard care, for detecting non-alcoholic fatty liver disease (NAFLD) in primary care.
Setting: Primary care general practices in England.
Participants: Adults who have been identified in primary care to have a risk factor for developing NAFLD, that is, type 2 diabetes without a history of excessive alcohol use.
Intervention: A community-based pathway, which utilises transient elastography and hepatologists to stratify patients at risk of NAFLD, has been implemented and demonstrated to be feasible (NCT02037867). Earlier identification could mean earlier treatment, referral to a specialist and enrolment into surveillance programmes.
Design: The impact of earlier detection and treatment with the risk stratification pathway on progression to later stages of liver disease was examined using decision modelling with Markov chains to estimate lifetime health and economic effects of the two comparators.
Data sources: Data from a prospective cross-sectional feasibility study, indicating the diagnostic accuracy of the risk stratification pathway and of standard care, were combined with a Markov model comprising the following states: no/mild liver disease, significant liver disease, compensated cirrhosis, decompensated cirrhosis, hepatocellular carcinoma, liver transplant and death. The model data were drawn from up-to-date UK sources, published literature and an expert panel.
Outcome measure: An incremental cost-effectiveness ratio (ICER), indicating the cost per quality-adjusted life year (QALY) of the risk stratification pathway compared with standard care, was estimated.
Results: The risk stratification pathway was more effective than standard care, and cost £2,138 per QALY gained. The ICER was most sensitive to estimates of the rate of fibrosis progression and the effect of treatment on reducing this, and ranged from -£1,895 to £7,032/QALY. The risk stratification pathway demonstrated an 85% probability of cost-effectiveness at the UK willingness-to-pay threshold of £20,000/QALY.
Conclusions: Implementation of a community-based risk stratification pathway is likely to be cost effective
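The evaluation above combines a Markov cohort model over liver disease states with an ICER calculation. A minimal sketch of that structure is given below, assuming hypothetical state names, a 3.5% discount rate and placeholder inputs; none of the transition probabilities, costs or utilities from the actual model are reproduced here.

```python
import numpy as np

# Hypothetical state ordering mirroring the states named in the abstract
STATES = ["no_mild", "significant", "comp_cirrhosis",
          "decomp_cirrhosis", "hcc", "transplant", "dead"]

def run_markov(transition: np.ndarray, start: np.ndarray,
               state_costs: np.ndarray, state_qalys: np.ndarray,
               cycles: int = 40, discount: float = 0.035):
    """Cohort Markov model: returns discounted lifetime cost and QALYs."""
    occupancy = start.astype(float)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        weight = 1.0 / (1.0 + discount) ** t
        total_cost += weight * occupancy @ state_costs
        total_qaly += weight * occupancy @ state_qalys
        occupancy = occupancy @ transition   # advance the cohort one annual cycle
    return total_cost, total_qaly

def icer(cost_new: float, qaly_new: float,
         cost_standard: float, qaly_standard: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_standard) / (qaly_new - qaly_standard)
```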
Acceptability of chronic liver disease screening in a UK primary care setting: a qualitative evaluation
OBJECTIVES: The increasing incidence of chronic liver disease (CLD) in the UK may be attributed to a rise in preventable risk factors, including hazardous alcohol use and type 2 diabetes. Transient elastography (TE) can rapidly stratify risk of CLD in primary care populations and provide an opportunity to raise patient awareness of risk factors. This study explores patients' experiences of TE screening in a primary care setting. In addition, patient awareness of CLD risk is explored. STUDY DESIGN AND SETTING: This study used a qualitative process evaluation of a community screening pathway for CLD (Nottingham, UK). Participants completed semistructured interviews, which were audio-recorded, transcribed verbatim and analysed thematically. PARTICIPANTS: Twenty adults were purposively recruited 6 months to 2 years after TE screening. Inclusion criteria included (1) hazardous alcohol use, (2) type 2 diabetes and/or (3) persistently elevated liver enzymes without known cause. RESULTS: Undergoing TE in primary care was seen as acceptable to most participants. Hazardous alcohol use was identified as the primary cause of CLD; no participants were aware of metabolic risk factors. TE improved understanding of personal risk factors and prompted contemplation of lifestyle changes across all TE stratifications. However, participants' perceptions of risk were altered by the healthcare providers' communication of TE scores. CONCLUSIONS: High acceptability of TE, regardless of the risk factor, provides strong support for inclusion of TE stratification in primary care. Findings highlight the positive impact of receiving TE on risk awareness. Future clinical iterations should improve the structure and communication of TE results to patients
Validation of a Model for Identification of Patients With Compensated Cirrhosis at High Risk of Decompensation
Background & Aims: It is important to rapidly identify patients with advanced liver disease. Routine tests to assess liver function and fibrosis provide data that can be used to determine patients’ prognoses. We tested and validated the ability of combined data from the ALBI and FIB-4 scoring systems to identify patients with compensated cirrhosis at highest risk for decompensation. Methods: We collected data from 145 patients with compensated cirrhosis (91% Child A cirrhosis and median MELD scores below 8) from a cohort in Nottingham, United Kingdom, followed for a median 4.59 years (development cohort). We collected baseline clinical features and recorded decompensation events. We used these data to develop a model based on liver function (assessed by the ALBI score) and extent of fibrosis (assessed by the FIB-4 index) to determine risk of decompensation. We validated the model in 2 independent external cohorts (1 in Dublin, Ireland and 1 in Menoufia, Egypt) comprising 234 patients. Results: In the development cohort, 19.3% of the patients developed decompensated cirrhosis. Using a combination of ALBI and FIB-4 scores, we developed a model that identified patients at low vs high risk of decompensation (hazard ratio [HR] for decompensation in patients with a high-risk score was 7.10). When we tested the scoring system in the validation cohorts, the HR for decompensation in patients with a high-risk score was 12.54 in the Ireland cohort and 5.10 in the Egypt cohort. Conclusion: We developed a scoring system, based on a combination of ALBI and FIB-4 scores, that identifies patients at risk for liver decompensation. We validated the scoring system in 2 independent international cohorts (Europe and the Middle East), so it appears to apply to diverse populations
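The combined model above is built from two published indices. A minimal sketch of the ALBI and FIB-4 calculations is shown below; the combined high-risk rule and its cut-offs (the ALBI grade 1/2 boundary of -2.60 and a FIB-4 threshold of 3.25) are illustrative assumptions, as the abstract does not report the cut-offs actually used in the model.

```python
import math

def albi(bilirubin_umol_l: float, albumin_g_l: float) -> float:
    """Albumin-bilirubin (ALBI) score: 0.66*log10(bilirubin) - 0.085*albumin."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 fibrosis index."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def high_risk(albi_score: float, fib4_score: float,
              albi_cut: float = -2.60, fib4_cut: float = 3.25) -> bool:
    """Hypothetical combined rule: flag patients whose liver function (ALBI)
    and fibrosis (FIB-4) both exceed illustrative cut-offs."""
    return albi_score > albi_cut and fib4_score > fib4_cut

# Worked example with illustrative values
a = albi(bilirubin_umol_l=21.0, albumin_g_l=33.0)                      # ~ -1.93
f = fib4(age_years=62, ast_u_l=70, alt_u_l=45, platelets_10e9_l=110)   # ~ 5.88
print(a, f, high_risk(a, f))
```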
Development and implementation of a commissioned pathway for the identification and stratification of liver disease in the community
Objective: To describe the development of the Nottingham liver disease stratification pathway, present a 12-month evaluation of uptake and stratification results, and compare the pathway with current British Society of Gastroenterology (BSG) guidelines. Design: A referral pathway between primary and secondary care for the detection and risk stratification of liver disease. Setting: Four Nottinghamshire clinical commissioning groups (700 000 population). Patients: Patients are referred to the pathway with (1) a raised aspartate aminotransferase to alanine aminotransferase (AST:ALT) ratio, (2) harmful alcohol use or (3) risk or presence of non-alcoholic fatty liver disease (NAFLD). Interventions: We report on clinic attendance within secondary care for transient elastography (TE) and brief lifestyle intervention. The TE result is reported back to the general practitioner with advice on interpretation and referral guidance. Main outcome measures: Pathway uptake, patient characteristics, liver disease stratification results and stakeholder feedback. Results: Over the first 12 months, 968 patients attended a TE clinic appointment, with a raised AST:ALT ratio being the most common single reason for referral (36.9%). Of the total, 222 (22.9%) patients had elevated liver stiffness (≥8 kPa), of whom 57 (25.7%) had a liver stiffness indicative of advanced chronic liver disease. If a traditional approach based on raised liver enzymes (BSG guidance) had been followed, 38.7% of those with significant liver disease (≥8 kPa) would have gone undetected among those referred for either NAFLD or raised AST:ALT ratio. Conclusions: Targeting patients with risk factors for chronic liver disease and stratifying them using TE can detect significant chronic liver disease above and beyond the approach based on liver enzyme elevation
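The pathway's core decision step is mapping a TE result back to the general practitioner with referral advice. A minimal sketch of that triage logic is given below, using the ≥8 kPa threshold reported above; the 15 kPa cut-off for suspected advanced chronic liver disease and the wording of the advice are assumptions, not taken from the pathway documentation.

```python
def stratify_te_result(stiffness_kpa: float) -> str:
    """Map a transient elastography (TE) result to illustrative pathway advice."""
    if stiffness_kpa < 8.0:
        # Below the pathway's elevated-stiffness threshold
        return "normal stiffness: report to GP with lifestyle advice, no referral"
    if stiffness_kpa < 15.0:   # assumed cut-off for advanced disease
        return "elevated stiffness: significant liver disease, consider hepatology referral"
    return "high stiffness: suspected advanced chronic liver disease, refer to hepatology"

for reading in (5.5, 9.2, 18.4):
    print(reading, "kPa ->", stratify_te_result(reading))
```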
Using a machine learning model to risk stratify for the presence of significant liver disease in a primary care population
Background: Current strategies for detecting significant chronic liver disease (CLD) in the community are based on the extrapolation of diagnostic tests used in secondary care settings. Whilst this approach provides clinical utility, it has limitations related to diagnostic accuracy being predicated on disease prevalence and spectrum bias, which will differ in the community. Machine learning (ML) techniques provide a novel way of identifying significant variables without preconceived bias. As a proof-of-concept study, we wanted to examine the performance of nine different ML models based on both risk factors and abnormal liver enzyme tests in a large community cohort. Methods: Routine demographic and laboratory data were collected on 1,453 patients with risk factors for CLD, including high alcohol consumption, diabetes and obesity, in a community setting in Nottingham (UK) as part of the Scarred Liver project. A total of 87 variables were extracted. Transient elastography (TE) was used to define clinically significant liver fibrosis. The data were split into a training and a hold-out set. The median age of the cohort was 59 years, mean body mass index (BMI) 29.7 kg/m2, median TE 5.5 kPa, 49.2% had type 2 diabetes and 20.3% had a TE >8 kPa. Results: The nine ML models, which included a Random Forest classifier, Support Vector classification and a Gradient Boosting classifier, had area under the curve (AUC) statistics ranging from 0.5 to 0.75. The ensemble stacker model showed the best performance, and this was replicated in the testing dataset (AUC 0.72). Recursive feature elimination found that eight variables had a significant impact on model output. The model had superior sensitivity (74%) compared to specificity (60%). Conclusions: ML shows encouraging performance and highlights variables that may have bespoke value for diagnosing community liver disease. Optimising how ML algorithms are integrated into clinical pathways of care and exploring new biomarkers will further enhance diagnostic utility
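A minimal sketch of the type of workflow described above (train/hold-out split, recursive feature elimination to eight variables, a stacking ensemble over random forest, support vector and gradient boosting base learners, evaluated by AUC) is shown below using scikit-learn; the estimator settings and the logistic-regression meta-learner are illustrative defaults, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_and_evaluate(X: np.ndarray, y: np.ndarray, n_features: int = 8) -> float:
    """Train a stacking ensemble on routinely collected variables and return
    the hold-out AUC for predicting significant fibrosis (e.g. TE >8 kPa)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # Recursive feature elimination to keep the most informative variables
    selector = RFE(GradientBoostingClassifier(random_state=0),
                   n_features_to_select=n_features)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("svc", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    model = make_pipeline(selector, stack)
    model.fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```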