Implications of using whole genome sequencing to test unselected populations for high risk breast cancer genes: a modelling study.
BACKGROUND: The decision to test for high risk breast cancer gene mutations is traditionally based on risk scores derived from age, family and personal cancer history. Next generation sequencing technologies such as whole genome sequencing (WGS) make wider population testing more feasible. In the UK's 100,000 Genomes Project, mutations in 16 genes including BRCA1 and BRCA2 are to be actively sought regardless of clinical presentation. The implications of deploying this approach at scale for patients and clinical services are unclear. In this study we aimed to model the effect of using WGS to test an unselected UK population for high risk BRCA1 and BRCA2 gene variants, to inform the debate around approaches to secondary genomic findings. METHODS: We modelled the test performance of WGS for identifying pathogenic BRCA1 and BRCA2 mutations in an unselected hypothetical population of 100,000 UK women, using published literature to derive model input parameters. We calculated analytic and clinical validity, described potential health outcomes and highlighted current areas of uncertainty. We also performed a sensitivity analysis in which we re-ran the model 100,000 times to investigate the effect of varying input parameters. RESULTS: In our models WGS was predicted to correctly identify 93 pathogenic BRCA1 mutations and 151 BRCA2 mutations in 120 and 200 women respectively, resulting in an analytic sensitivity of 75.5-77.5%. Of 244 women with identified pathogenic mutations, we estimated that 132 (range 121-198) would develop breast cancer and so could potentially be helped by intervention. We also predicted that breast cancer would occur in 41 women (range 36-62) incorrectly identified as having no pathogenic mutations and in 12,460 women without BRCA1 or BRCA2 mutations. There was considerable uncertainty about the penetrance of mutations in people without a family history of disease and about the appropriate threshold of absolute disease risk for clinical action, which affects judgements about the clinical utility of intervention. CONCLUSIONS: This simple model demonstrates the need for robust processes to support testing for secondary genomic findings in unselected populations, processes that acknowledge the levels of uncertainty about the clinical validity and clinical utility of testing positive for a cancer risk gene.
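As a rough cross-check of the figures quoted in this abstract, the analytic sensitivity can be reproduced from the detection counts. The short Python sketch below does that arithmetic; the variable names are illustrative and this is not the authors' model code.

```python
# Back-of-envelope check of the figures quoted in the abstract
# (illustrative variable names; not the authors' actual model).

brca1_detected, brca1_carriers = 93, 120    # correctly identified / true carriers
brca2_detected, brca2_carriers = 151, 200

analytic_sensitivity = (brca1_detected + brca2_detected) / (brca1_carriers + brca2_carriers)
print(f"Analytic sensitivity: {analytic_sensitivity:.1%}")  # about 76%, within the reported 75.5-77.5%

# Implied breast cancer risk among the 244 identified carriers (132 predicted cases)
identified_carriers, predicted_cancers = 244, 132
print(f"Predicted cancer risk in identified carriers: {predicted_cancers / identified_carriers:.1%}")  # ~54%
```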
Time to revisit Geoffrey Rose: strategies for prevention in the genomic era?
Geoffrey Rose, in his article “Sick individuals and sick populations”, highlighted the need to distinguish between prevention for populations and prevention for high-risk individuals. In this article we revisit some of these concepts in light of the burgeoning literature on “personalised medicine” and of findings from our investigations into personalised cancer prevention as part of an EU gene-environment research study on hormone-related cancers, the Collaborative Oncological Gene-environment Study (COGS). We suggest that Rose’s high-risk strategy may be modified by segmenting the population by risk (in our example, genetic risk) into a number of strata, to each of which differential interventions may be applied. We call this “stratified prevention”, and argue that such an approach will lead to consequential advantages in efficiency, effectiveness and harm minimisation.
Cost-effectiveness and Benefit-to-Harm Ratio of Risk-Stratified Screening for Breast Cancer: A Life-Table Model.
IMPORTANCE: The age-based or "one-size-fits-all" breast screening approach does not take into account individual variation in risk. Mammography screening reduces death from breast cancer at the cost of overdiagnosis. Identifying risk-stratified screening strategies with a more favorable ratio of overdiagnoses to breast cancer deaths prevented would improve the quality of life of women and save resources. OBJECTIVE: To assess the benefit-to-harm ratio and the cost-effectiveness of risk-stratified breast screening programs compared with a standard age-based screening program and no screening. DESIGN, SETTING, AND POPULATION: A life-table model was created of a hypothetical cohort of 364 500 women in the United Kingdom, aged 50 years, with follow-up to age 85 years, using (1) findings of the Independent UK Panel on Breast Cancer Screening and (2) a risk distribution based on polygenic risk profile. The analysis was undertaken from the National Health Service perspective. INTERVENTIONS: The modeled interventions were (1) no screening, (2) age-based screening (mammography screening every 3 years from age 50 to 69 years), and (3) risk-stratified screening (a proportion of women aged 50 years with a risk score greater than a threshold risk were offered screening every 3 years until age 69 years), considering each percentile of the risk distribution. All analyses took place between July 2016 and September 2017. MAIN OUTCOMES AND MEASURES: Overdiagnoses, breast cancer deaths averted, quality-adjusted life-years (QALYs) gained, costs in British pounds, and net monetary benefit (NMB). Probabilistic sensitivity analyses were used to assess uncertainty around parameter estimates. Future costs and benefits were discounted at 3.5% per year. RESULTS: The risk-stratified analysis of this life-table model included a hypothetical cohort of 364 500 women followed up from age 50 to 85 years. As the risk threshold was lowered, the incremental cost of the program increased linearly compared with no screening, with no additional QALYs gained below the 35th percentile risk threshold. Of the 3 screening scenarios, the risk-stratified scenario with the risk threshold at the 70th percentile had the highest NMB, at a willingness to pay of £20 000 (US $26 888) vs £537 985 (US $720 900) less, would have 26.7% vs 71.4% fewer overdiagnoses, and would avert 2.9% vs 9.6% fewer breast cancer deaths, respectively. CONCLUSIONS AND RELEVANCE: Not offering breast cancer screening to women at lower risk could improve the cost-effectiveness of the screening program, reduce overdiagnosis, and maintain the benefits of screening.
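For readers unfamiliar with the net monetary benefit metric used as the decision criterion above, the sketch below shows the standard NMB calculation together with discounting at 3.5% per year. The inputs are hypothetical illustrative values, not figures from the study, and the function names are my own.

```python
# Minimal sketch of a net monetary benefit (NMB) calculation of the kind described above.
# Inputs are hypothetical; the authors' life-table model is far more detailed.

WTP = 20_000           # willingness to pay per QALY in GBP, as quoted in the abstract
DISCOUNT_RATE = 0.035  # annual discount rate applied to future costs and benefits

def discount(amount, years_ahead, rate=DISCOUNT_RATE):
    """Present value of a cost or benefit accruing a given number of years in the future."""
    return amount / (1 + rate) ** years_ahead

def net_monetary_benefit(qalys_gained, incremental_cost, wtp=WTP):
    """NMB = monetised health gain minus the incremental cost of the programme."""
    return qalys_gained * wtp - incremental_cost

# Hypothetical strategy: 1,500 discounted QALYs gained for roughly 25m GBP of extra spend,
# of which 1m GBP falls 10 years in the future and must be discounted first.
future_cost_today = discount(1_000_000, years_ahead=10)          # roughly 709,000 GBP
print(net_monetary_benefit(qalys_gained=1_500,
                           incremental_cost=24_000_000 + future_cost_today))  # positive => cost-effective at this WTP
```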
Meta-analysis confirms BCL2 is an independent prognostic marker in breast cancer.
BACKGROUND: A number of protein markers have been investigated as prognostic adjuncts in breast cancer, but their translation into clinical practice has been impeded by a lack of appropriate validation. Recently, we showed that BCL2 protein expression had prognostic power independent of currently used standards. Here, we present the results of a meta-analysis of the association between BCL2 expression and both disease free survival (DFS) and overall survival (OS) in female breast cancer. METHODS: Reports published in 1994-2006 were selected for the meta-analysis using a search of PubMed. Studies that investigated the role of BCL2 expression by immunohistochemistry with a sample size greater than 100 were included. Seventeen papers reported the results of 18 different series including 5,892 cases with an average median follow-up of 92.1 months. RESULTS: Eight studies investigated DFS unadjusted for other variables in 2,285 cases. The relative hazard estimates ranged from 0.85-3.03 with a combined random effects estimate of 1.66 (95% CI 1.25-2.22). The effect of BCL2 on DFS adjusted for other prognostic factors was reported in 11 studies, and the pooled random effects hazard ratio estimate was 1.58 (95% CI 1.29-1.94). OS was investigated unadjusted for other variables in eight studies incorporating 3,910 cases. The hazard estimates ranged from 0.99-4.31 with a pooled estimate of 1.64 (95% CI 1.36-2.00). OS adjusted for other parameters was evaluated in nine series comprising 3,624 cases, and the estimates for these studies ranged from 1.10 to 2.49 with a pooled estimate of 1.37 (95% CI 1.19-1.58). CONCLUSION: The meta-analysis strongly supports the prognostic role of BCL2 as assessed by immunohistochemistry in breast cancer and shows that this effect is independent of lymph node status, tumour size and tumour grade, as well as a range of other biological variables, on multivariate analysis. Large prospective studies are now needed to establish the clinical utility of BCL2 as an independent prognostic marker.
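The pooled hazard ratios quoted above come from random-effects meta-analysis. As an illustration of how study-level estimates are combined, the sketch below implements the standard DerSimonian-Laird estimator on made-up inputs; it is not the authors' analysis code and the example data are hypothetical.

```python
# Illustrative DerSimonian-Laird random-effects pooling of log hazard ratios.
import math

def dersimonian_laird(log_hr, se):
    """Pool study-level log hazard ratios with their standard errors; return HR and 95% CI."""
    w = [1 / s**2 for s in se]                                    # inverse-variance (fixed-effect) weights
    fixed = sum(wi * y for wi, y in zip(w, log_hr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hr))    # Cochran's Q heterogeneity statistic
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_hr) - 1)) / c)                  # between-study variance estimate
    w_star = [1 / (s**2 + tau2) for s in se]                      # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_hr)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Hypothetical study-level hazard ratios and standard errors of log(HR)
hrs = [1.2, 1.8, 1.5, 2.1]
pooled_hr, ci = dersimonian_laird([math.log(h) for h in hrs], se=[0.2, 0.25, 0.15, 0.3])
print(pooled_hr, ci)
```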
The effect of rare variants on inflation of the test statistics in case-control analyses.
BACKGROUND: The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests - the likelihood ratio test, the Wald test and the score test - when testing rare variants for association using simulated data. RESULTS: We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with fewer than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. CONCLUSIONS: In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic, which may mask the presence of population structure. This work was supported by a grant from Cancer Research UK (C490/A16561). AP is funded by a Medical Research Council studentship.
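The inflation check described above divides the observed median test statistic by its expected value under the null distribution. A minimal sketch of that calculation, using simulated null chi-squared statistics rather than the paper's data, is shown below.

```python
# Small sketch of the median-based inflation factor check described above
# (illustrative only; the paper's simulations are far more extensive).
import numpy as np
from scipy.stats import chi2

def inflation_factor(test_statistics):
    """Lambda = median observed chi-squared statistic / expected median under the null."""
    expected_median = chi2.ppf(0.5, df=1)          # ~0.455 for 1 degree of freedom
    return np.median(test_statistics) / expected_median

# Under the null with no population structure, lambda should be close to 1
null_stats = chi2.rvs(df=1, size=10_000, random_state=0)
print(round(inflation_factor(null_stats), 3))
```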
The admixture maximum likelihood test to test for association between rare variants and disease phenotypes.
BACKGROUND: The development of genotyping arrays containing hundreds of thousands of rare variants across the genome and advances in high-throughput sequencing technologies have made feasible empirical genetic association studies to search for rare disease susceptibility alleles. As single variant testing is underpowered to detect associations, the development of statistical methods to combine analysis across variants - so-called "burden tests" - is an area of active research interest. We previously developed a method, the admixture maximum likelihood test, to test multiple common variants for association with a trait of interest. We have extended this method to the analysis of rare variants; the extended method is called the rare admixture maximum likelihood test (RAML). In this paper we compare the performance of RAML with six other burden tests designed to test for association of rare variants. RESULTS: We used simulation testing over a range of scenarios to compare the power of RAML with that of the other rare variant association testing methods. These scenarios modelled differences in effect variability, the average direction of effect and the proportion of associated variants. We evaluated the power for all the different scenarios. RAML tended to have the greatest power for most scenarios where the proportion of associated variants was small, whereas SKAT-O performed a little better for the scenarios with a higher proportion of associated variants. CONCLUSIONS: The RAML method makes no assumptions about the proportion of variants that are associated with the phenotype of interest or the magnitude and direction of their effect. The method is flexible and can be applied to both dichotomous and quantitative traits, and it allows for the inclusion of covariates in the underlying regression model. The RAML method performed well compared to the other methods over a wide range of scenarios. Generally, power was moderate in most of the scenarios, underlining the need for large sample sizes in any form of association testing.
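To make the idea of a "burden test" concrete, the sketch below implements a simple collapsing-style test (carrier of any rare allele vs non-carrier, compared between cases and controls). This is deliberately the simplest member of the family and is not RAML or SKAT-O; the data are randomly generated for illustration.

```python
# Minimal collapsing-style burden test: collapse rare variants into a single carrier
# indicator and test it against case-control status (illustrative, not RAML itself).
import numpy as np
from scipy.stats import chi2_contingency

def collapsing_burden_test(genotypes, is_case):
    """genotypes: (n_samples, n_rare_variants) minor-allele counts; is_case: boolean array."""
    carrier = genotypes.sum(axis=1) > 0            # collapse all rare variants into one indicator
    table = np.array([
        [np.sum(carrier & is_case),  np.sum(~carrier & is_case)],
        [np.sum(carrier & ~is_case), np.sum(~carrier & ~is_case)],
    ])
    chi2_stat, p_value, _, _ = chi2_contingency(table)
    return chi2_stat, p_value

# Toy data: 2,000 samples, 30 rare variants, random genotypes with no true association
rng = np.random.default_rng(1)
geno = rng.binomial(2, 0.005, size=(2000, 30))
case = rng.random(2000) < 0.5
print(collapsing_burden_test(geno, case))
```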
Understanding of prognosis in non-metastatic prostate cancer: a randomised comparative study of clinician estimates measured against the PREDICT prostate prognostic model
Abstract: PREDICT Prostate is an individualised prognostic model that provides long-term survival estimates for men diagnosed with non-metastatic prostate cancer (www.prostate.predict.nhs.uk). In this study, clinician estimates of survival were compared against model predictions and the model's potential value as a clinical tool was assessed. Prostate cancer (PCa) specialists were invited to participate in the study. In total, 190 clinicians (63% urologists, 17% oncologists, 20% other) were randomised into two groups and shown 12 clinical vignettes through an online portal. Each group viewed opposing vignettes with clinical information alone or alongside PREDICT Prostate estimates. Fifteen-year clinician survival estimates were compared against model predictions, and treatment recommendations reported with and without seeing PREDICT estimates were compared. Of the 190 clinicians, 155 respondents (81.6%) reported counselling new PCa patients at least weekly. Clinician estimates of PCa-specific mortality exceeded PREDICT estimates in 10/12 vignettes. Their estimates of treatment survival benefit at 15 years were over-optimistic in every vignette, with mean clinician estimates more than 5-fold higher than PREDICT Prostate estimates. Concomitantly seeing PREDICT Prostate estimates led to significantly lower reported likelihoods of recommending radical treatment in 7/12 (58%) vignettes, particularly in older patients. These data suggest that clinicians overestimate cancer-related mortality and radical treatment benefit. Using an individualised prognostic tool may help reduce overtreatment.
Evidence of a Causal Association Between Cancer and Alzheimer’s Disease: a Mendelian Randomization Analysis
Abstract: While limited observational evidence suggests that cancer survivors have a decreased risk of developing Alzheimer’s disease (AD), and vice versa, it is not clear whether this relationship is causal. Using a Mendelian randomization approach that can provide evidence of causality, we found that genetically predicted lung cancer (OR 0.91, 95% CI 0.84–0.99, p = 0.019), leukemia (OR 0.98, 95% CI 0.96–0.995, p = 0.012), and breast cancer (OR 0.94, 95% CI 0.89–0.99, p = 0.028) were associated with 9.0%, 2.4%, and 5.9% lower odds of AD, respectively, per 1-unit higher log odds of cancer. When genetic predictors of all cancers were pooled, cancer was associated with 2.5% lower odds of AD (OR 0.98, 95% CI 0.96–0.988, p = 0.00027) per 1-unit higher log odds of cancer. Finally, genetically predicted smoking-related cancers showed a more robust inverse association with AD than non-smoking-related cancers (OR 0.95, 95% CI 0.92–0.98, p = 0.0026, vs. OR 0.98, 95% CI 0.97–0.995, p = 0.0091).
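For context on how estimates of this kind are obtained, the sketch below shows the standard inverse-variance-weighted (IVW) Mendelian randomization estimator, which combines per-SNP Wald ratios. The summary statistics in the example are made up for illustration; this is not the authors' data or pipeline, and the actual study may have used additional MR estimators.

```python
# Minimal sketch of inverse-variance-weighted (IVW) Mendelian randomization
# from SNP-level summary statistics (hypothetical inputs).
import math

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Combine per-SNP Wald ratios (beta_outcome / beta_exposure) by inverse variance."""
    ratios = [bo / be for bo, be in zip(beta_outcome, beta_exposure)]
    weights = [(be / so) ** 2 for be, so in zip(beta_exposure, se_outcome)]  # 1 / var of each ratio
    estimate = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return math.exp(estimate), (math.exp(estimate - 1.96 * se), math.exp(estimate + 1.96 * se))

# Hypothetical SNP-level effects on the log-odds scale
odds_ratio, ci = ivw_mr(beta_exposure=[0.12, 0.08, 0.15],
                        beta_outcome=[-0.010, -0.006, -0.014],
                        se_outcome=[0.004, 0.003, 0.005])
print(odds_ratio, ci)   # OR for the outcome per 1-unit higher log odds of the exposure
```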
Models predicting survival to guide treatment decision-making in newly diagnosed primary non-metastatic prostate cancer: a systematic review.
OBJECTIVES: Men diagnosed with non-metastatic prostate cancer require standardised and robust long-term prognostic information to help them decide on management. Most currently used tools rely on short-term and surrogate outcomes. We explored the evidence base in the literature on available pre-treatment prognostic models built around long-term survival, and assessed the accuracy, generalisability and clinical availability of these models. DESIGN: Systematic literature review, pre-specified and registered on PROSPERO (CRD42018086394). DATA SOURCES: MEDLINE, Embase and The Cochrane Library were searched from January 2000 through February 2018, using previously tested search terms. ELIGIBILITY CRITERIA: Inclusion required a multivariable prognostic model for non-metastatic prostate cancer that used long-term survival data (defined as ≥5 years), was not treatment-specific and was usable at the point of diagnosis. DATA EXTRACTION AND SYNTHESIS: Title, abstract and full-text screening were performed sequentially by three reviewers. Data extraction was performed for items in the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) checklist. Individual studies were assessed using the new Prediction model Risk Of Bias ASsessment Tool (PROBAST). RESULTS: Database searches yielded 6581 studies after deduplication. Twelve studies were included in the final review. Nine were model development studies using data from over 231 888 men. However, only six of the nine studies included any conservatively managed cases, and only three of the nine included treatment as a predictor variable. Every included study had at least one parameter for which there was a high risk of bias, with failure to report accuracy and inadequate reporting of missing data being common failings. Three external validation studies were included, reporting two available models: the University of California San Francisco (UCSF) Cancer of the Prostate Risk Assessment score and the Cambridge Prognostic Groups. Neither included treatment effect, and both had potential flaws in design, but they represent the most robust and usable prognostic models currently available. CONCLUSION: Few long-term prognostic models exist to inform decision-making at diagnosis of non-metastatic prostate cancer. Improved models are required to inform management and avoid undertreatment and overtreatment of non-metastatic prostate cancer. Funding: The Urology Foundation - Research Scholarship.