108 research outputs found

    Feasibility and outcome of reproducible clinical interpretation of high-dimensional molecular data: a comparison of two molecular tumor boards

    BACKGROUND: Structured and harmonized implementation of molecular tumor boards (MTB) for the clinical interpretation of molecular data is a current challenge for precision oncology. Heterogeneity in the interpretation of molecular data has been shown even for patients with a limited number of molecular alterations. Integration of high-dimensional molecular data, including RNA sequencing (RNA-Seq) and whole-exome sequencing (WES), is expected to further complicate clinical application. To analyze challenges for MTB harmonization based on complex molecular datasets, we retrospectively compared the clinical interpretation of WES and RNA-Seq data by two independent molecular tumor boards. METHODS: High-dimensional molecular cancer profiling including WES and RNA-Seq was performed for patients with advanced solid tumors, no available standard therapy, an ECOG performance status of 0-1, and available fresh-frozen tissue within the DKTK-MASTER Program from 2016 to 2018. Identical molecular profiling data of 40 patients were independently discussed by two molecular tumor boards (MTB) after prior annotation by specialized physicians, following independent but similar workflows. Identified biomarkers and resulting treatment options were compared between the MTBs, and patients were followed up clinically. RESULTS: A median of 309 molecular aberrations from WES and RNA-Seq (n = 38) and 82 molecular aberrations from WES only (n = 3) were considered for clinical interpretation for 40 patients (one patient sequenced twice). A median of 3 and 2 targeted treatment options were identified per patient, respectively. Most treatment options were identified for receptor tyrosine kinase, PARP, and mTOR inhibitors, as well as immunotherapy. The mean overlap coefficient between the two MTBs was 66%. The highest agreement rates were observed for the interpretation of single nucleotide variants, clinical evidence levels 1 and 2, and monotherapy, whereas the interpretation of gene expression changes, preclinical evidence levels 3 and 4, and combination therapy yielded lower agreement rates. Patients receiving treatment following concordant MTB recommendations had significantly longer overall survival than patients receiving treatment following discrepant recommendations or physician's choice. CONCLUSIONS: Reproducible clinical interpretation of high-dimensional molecular data is feasible, and agreement rates are encouraging when compared to previous reports. The interpretation of molecular aberrations beyond single nucleotide variants and preclinically validated biomarkers, as well as combination therapies, were identified as additional difficulties for ongoing harmonization efforts.
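The overlap coefficient reported in this abstract is commonly defined (Szymkiewicz-Simpson) as the size of the intersection of two sets divided by the size of the smaller set. A minimal sketch of how such agreement between two boards' recommendation sets could be computed; the treatment-option sets below are hypothetical illustrations, not the study's data:

```python
def overlap_coefficient(a, b):
    """Szymkiewicz-Simpson overlap: |A & B| / min(|A|, |B|)."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical options identified by two boards for the same patient
mtb_1 = {"PARP inhibitor", "mTOR inhibitor", "immunotherapy"}
mtb_2 = {"PARP inhibitor", "RTK inhibitor"}
score = overlap_coefficient(mtb_1, mtb_2)  # 1 shared option / min(3, 2) = 0.5
```

Averaging this score over all patients would give a mean overlap coefficient of the kind reported (66% in the study).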

    Diagnostic accuracy of 1p/19q codeletion tests in oligodendroglioma: a comprehensive meta-analysis based on a Cochrane Systematic Review

    Codeletion of chromosomal arms 1p and 19q, in conjunction with a mutation in the isocitrate dehydrogenase 1 or 2 gene, is the molecular diagnostic criterion for oligodendroglioma, IDH-mutant and 1p/19q-codeleted. 1p/19q codeletion is a diagnostic marker and allows prognostication and prediction of the best drug response within IDH-mutant tumours. We performed a Cochrane review and simple economic analysis to establish the most sensitive, specific, and cost-effective techniques for determining 1p/19q codeletion status. Fluorescent in situ hybridisation (FISH) and polymerase chain reaction (PCR)-based loss of heterozygosity (LOH) test methods were considered as reference standards. Most techniques (FISH, chromogenic in situ hybridisation [CISH], PCR, real-time PCR, multiplex ligation-dependent probe amplification [MLPA], single nucleotide polymorphism [SNP] array, comparative genomic hybridisation [CGH], array CGH, next-generation sequencing [NGS], mass spectrometry, and NanoString) showed good sensitivity (few false negatives) for detection of 1p/19q codeletions in glioma, irrespective of whether FISH or PCR-based LOH was used as the reference standard. Both NGS and SNP array had high specificity (few false positives) for 1p/19q codeletion when assessed against FISH as the reference standard. Our findings suggest that G banding is not a suitable test for 1p/19q analysis. Within these limits, considering cost per diagnosis and using FISH as a reference, MLPA was marginally more cost-effective than other tests, although these economic analyses were limited by the range of available parameters, the time horizon, and data from multiple healthcare organisations.
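The sensitivity and specificity compared across tests in this review are computed from a 2x2 table of index-test results against the reference standard. A minimal sketch with illustrative counts (hypothetical numbers, not the review's data):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Diagnostic accuracy from a 2x2 table against a reference standard."""
    sensitivity = tp / (tp + fn)  # codeleted tumours correctly detected
    specificity = tn / (tn + fp)  # non-codeleted tumours correctly excluded
    return sensitivity, specificity

# Illustrative counts for a hypothetical index test versus FISH as reference
sens, spec = sensitivity_specificity(tp=45, fp=2, fn=5, tn=48)  # 0.90, 0.96
```

"Good sensitivity (few false negatives)" in the abstract corresponds to a small `fn` relative to `tp`; "high specificity (few false positives)" corresponds to a small `fp` relative to `tn`.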

    Evidence to support inclusion of pharmacogenetic biomarkers in randomised controlled trials

    Pharmacogenetics and biomarkers are becoming normalised as important technologies to improve drug efficacy rates, reduce the incidence of adverse drug reactions, and make informed choices for targeted therapies. However, their wider clinical implementation has been limited by a lack of robust evidence. Suitable evidence is required before a biomarker’s clinical use, and also before its use in a clinical trial. We have undertaken a review of five pharmacogenetic biomarker-guided randomised controlled trials (RCTs) and evaluated the evidence used by these trials to justify biomarker inclusion. We assessed and quantified the evidence cited in published rationale papers, or where these were not available, obtained protocols from trial authors. Very different levels of evidence were provided by the trials. We used these observations to write recommendations for future justifications of biomarker use in RCTs and encourage regulatory authorities to write clear guidelines

    Consensus recommendations of three-dimensional visualization for diagnosis and management of liver diseases

    Three-dimensional (3D) visualization involves feature extraction and 3D reconstruction of CT images using computer processing technology. It is a tool for displaying, describing, and interpreting the 3D anatomy and morphological features of organs, thus providing intuitive, stereoscopic, and accurate methods for clinical decision-making. It has played an increasingly significant role in the diagnosis and management of liver diseases. Over the last decade, it has been proven safe and effective to use 3D simulation software for pre-hepatectomy assessment, virtual hepatectomy, and measurement of liver volumes in blood flow areas of the portal vein; meanwhile, the use of 3D models in combination with hydrodynamic analysis has become a novel non-invasive method for the diagnosis and detection of portal hypertension. We herein describe the progress of research on 3D visualization, its workflow, current situation, challenges, opportunities, and its capacity to improve clinical decision-making, emphasizing its utility for patients with liver diseases. Current advances in modern imaging technologies promise a further increase in the diagnostic efficacy for liver diseases. For example, the complex internal anatomy of the liver and detailed morphological features of liver lesions can be reflected in CT-based 3D models. A meta-analysis reported that the application of 3D visualization technology in the diagnosis and management of primary hepatocellular carcinoma yielded significant or highly significant improvements over control groups in intraoperative blood loss, postoperative complications, recovery of postoperative liver function, operation time, hospitalization time, and tumor recurrence on short-term follow-up. However, the acquisition of high-quality CT images and the use of these images for 3D visualization processing lack a unified standard, a quality control system, and homogeneity, which may hinder the evaluation of application efficacy across clinical centers and cause considerable inconvenience to clinical practice and scientific research. Therefore, rigorous operating guidelines and quality control systems need to be established to develop 3D visualization of the liver into a mature technology. Herein, we provide recommendations for research on 3D visualization in the diagnosis and management of liver diseases to meet this urgent need.

    Point estimation for adaptive trial designs II: Practical considerations and guidance

    In adaptive clinical trials, the conventional end-of-trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. This article is the second in a two-part series that studies the issue of bias in point estimation for adaptive trials. Part I provided a methodological review of approaches to remove or reduce the potential bias in point estimation for adaptive designs. In part II, we discuss how bias can affect standard estimators and assess the negative impact this can have. We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example (including code), which we use as a basis for a simulation study. We show that while on average the values of these estimators can be similar, for a particular trial realization they can give noticeably different values for the estimated treatment effect. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy prespecified in the statistical analysis plan. When available, unbiased or bias-reduced estimates are to be preferred
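The bias discussed in this abstract can be illustrated with a small Monte Carlo sketch of a hypothetical two-stage design that stops early for efficacy: because stopping selects for extreme interim results, the conventional (naive) estimate is positively biased even when the true effect is zero. This is an illustrative simulation under assumed settings, not the trial example from the paper:

```python
import math
import random

def naive_two_stage_estimate(true_effect, n_per_stage, sd, stop_z, rng):
    """Naive end-of-trial mean in a two-stage design with early stopping."""
    se = sd / math.sqrt(n_per_stage)
    mean1 = rng.gauss(true_effect, se)   # interim stage-1 mean
    if mean1 / se > stop_z:              # stop early for efficacy
        return mean1                     # estimate uses stage-1 data only
    mean2 = rng.gauss(true_effect, se)   # otherwise run stage 2
    return (mean1 + mean2) / 2           # naive pooled estimate

rng = random.Random(1)
reps = 200_000
estimates = [naive_two_stage_estimate(0.0, 100, 1.0, 2.0, rng)
             for _ in range(reps)]
bias = sum(estimates) / reps  # true effect is 0, so the mean estimate is the bias
```

With these settings the simulated bias is small but reliably positive; this systematic deviation is exactly what the bias-reduced and unbiased estimators reviewed in Part I aim to remove.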

    Predicting cardiovascular risk in diabetic patients: are we all on the same side?

    Cardiovascular diseases are the main cause of morbidity and mortality in diabetic patients, and cardiovascular risk is increased at least twofold in men and at least fourfold in women with diabetes compared to non-diabetic populations. Predictive medicine is of the utmost importance in the clinical care of diabetic patients, since predicting cardiovascular risk is essential for the modification of risk factors aimed at preventing or delaying future cardiovascular events. The prediction of cardiovascular risk is a valuable tool within the context of patient-centered care, as it includes active participation of diabetic patients in the decision-making process, resulting in higher compliance with the agreed treatments. However, there are differences among the current guidelines of various international authorities, such as the International Diabetes Federation (IDF), European Society of Cardiology (ESC) / European Association for the Study of Diabetes (EASD), American College of Cardiology (ACC) / American Heart Association (AHA), American Diabetes Association (ADA), and National Institute for Health and Care Excellence (NICE), for the prediction of cardiovascular risk in diabetic patients. Furthermore, the clinical use of models with classic risk factors and novel biomarkers that would predict cardiovascular risk in diabetic patients from various populations with acceptable precision poses a challenge. Taking into consideration the global diabetes pandemic and its close association with cardiovascular diseases, there is an urgent need for streamlining of current guidelines on the prediction of cardiovascular risk and their use in clinical practice.

    The implementation of pharmacogenetics: evidence and preferences

    Pharmacogenetics has huge potential to transform the field of medicine and deliver personalised treatments to patients. However, its wider use is limited by many factors, particularly a lack of suitable evidence of efficacy or safety for regulatory approval and clinical use. The evidence required can be difficult to ascertain, presenting three main problems. The first issue is that regulatory guidance for the evidence required is complex and varies greatly between different authorities and contexts. Guidance from the UK Medicines and Healthcare products Regulatory Agency (MHRA) and the US Food and Drug Administration (FDA) was reviewed along with criteria formulated by other industry and academic groups. It was found that there is a clear need for a unified set of standards for evidence gathering in pharmacogenetics. This was strengthened by an analysis of the evidence used by five different randomised controlled trials to justify the inclusion of their pharmacogenetic biomarker. Large variation in the quality and type of this evidence was found. These findings were used to make recommendations for future evidence gathering for trials, regulators, and journals. Additionally, the evidence required for clinical implementation has traditionally been the prospective randomised controlled trial. Gathering information from two novel systematic reviews and meta-analyses of carbamazepine-induced Stevens-Johnson syndrome, it was shown how these sources of observational evidence can produce effect estimates and measures of clinical validity of greater precision than those of a prospective trial. Finally, the level of evidence for a pharmacogenetic test that would be acceptable to the general public is not known. A discrete choice experiment (DCE) was designed to quantify these views. The first step was a systematic review of existing DCEs in this area, to extract useful information from these to inform the work.
An extensive programme of qualitative work with healthcare professionals, patients, and the general public then further informed the design of this novel DCE. Participants were randomised to complete one of eight DCEs in different disease areas, with either a ‘high’ evidence scenario or a ‘low’ evidence scenario described. Launched in May 2021, over 2,000 responses were collected and the results were analysed in preference-weighted utility models. Although there was no difference in utility between ‘high’ and ‘low’ evidence tests, several important insights were generated (particularly in regard to data sharing and privacy) that will potentially have large impacts on policy in this area

    Predicting patient outcome using radioclinical features selected with RENT for patients with colorectal cancer

    Colorectal cancer remains a problem in medicine, costing countless lives each year. The growing amount of data available about these patients has piqued the interest of researchers, who try to use machine learning to aid diagnosis, decision making, and treatment for these patients. Unfortunately, as the data sets grow, the risk of creating unstable and non-generalizable models increases. The research in this thesis has aimed at investigating how to implement a novel technique called RENT (Repeated Elastic Net Technique) for feature selection. The predictive problem was a binary classification problem on colorectal cancer patients to predict overall survival. The analysis applied repeated stratified k-fold cross-validation with four folds and five repeats to reduce the risk of random subsets causing non-generalizable results. Further, the analysis created 25 000 different RENT models to search through the hyperparameters for high-performance parameter combinations. Each of the 25 000 models was trained with six different Random Forest (RF) hyperparameter combinations and twelve logistic regression hyperparameter combinations, resulting in 450 000 different models. A high-performing group of models was collected for one unique combination of hyperparameters. These models had the highest average test performance: accuracy 0.76 ± 0.07, MCC 0.47 ± 0.16, F1 (positive class) 0.57 ± 0.13, F1 (negative class) 0.83 ± 0.05, and AUC 0.69 ± 0.08. The results have also shown that the generalization error is lower for a RENT-based RF model than for a non-RENT-based RF model. The RENT analysis revealed that patients who died were overrepresented in the group of patients most frequently predicted incorrectly. Finally, the RENT analysis has resulted in a distribution of features that were most frequently selected for high predictive ability. Most of the clinical features in this group have previously been reported as relevant in the medical literature. The research and the corresponding framework show promising results for implementing a brute-force approach to the RENT analysis, to ensure low generalization error and predictive interpretability. Further research with this framework can support medicine in validating feature importance for patient outcome. The framework could also prove useful in research fields other than medicine, given predictive problems with similar challenges.
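The resampling scheme described in this abstract, stratified k-fold cross-validation with four folds and five repeats, can be sketched in plain Python: each repeat reshuffles the indices within each class before splitting, and stratification keeps the class ratio roughly constant across folds. A minimal illustration under assumed class counts, not the thesis code (which additionally runs RENT feature selection and model training inside each split):

```python
import random
from collections import defaultdict

def repeated_stratified_kfold(y, n_splits=4, n_repeats=5, seed=0):
    """Yield (train_idx, test_idx) pairs, stratified by class label."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    for _ in range(n_repeats):
        folds = [[] for _ in range(n_splits)]
        for indices in by_class.values():
            shuffled = indices[:]
            rng.shuffle(shuffled)
            for j, idx in enumerate(shuffled):  # deal each class round-robin
                folds[j % n_splits].append(idx)
        for k in range(n_splits):
            test = sorted(folds[k])
            train = sorted(i for j in range(n_splits) if j != k
                           for i in folds[j])
            yield train, test

# Hypothetical labels: 12 survivors (0) and 8 non-survivors (1)
y = [0] * 12 + [1] * 8
splits = list(repeated_stratified_kfold(y))  # 4 folds x 5 repeats = 20 splits
```

Every test fold here holds 3 class-0 and 2 class-1 samples, so each repeat preserves the 60/40 class balance that plain (unstratified) k-fold could distort on a small data set.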

    British Society of Gastroenterology guidelines for the diagnosis and management of cholangiocarcinoma

    These guidelines for the diagnosis and management of cholangiocarcinoma (CCA) were commissioned by the British Society of Gastroenterology liver section. The guideline writing committee included a multidisciplinary team of experts from various specialties involved in the management of CCA, as well as patient/public representatives from AMMF (the Cholangiocarcinoma Charity) and PSC Support. Quality of evidence is presented using the Appraisal of Guidelines for Research and Evaluation (AGREE II) format. The recommendations arising are to be used as guidance rather than as a strict protocol-based reference, as the management of patients with CCA is often complex and always requires individual patient-centred considerations