
    Understanding the Continual Reassessment Method for Dose Finding Studies: An Overview for Non-Statisticians

    The Continual Reassessment Method (CRM) has gained popularity since its proposal by O’Quigley et al. [1]. Many variations have been published and discussed in the statistical literature, but little attention has been paid to making the design considerations accessible to non-statisticians. As a result, some clinicians or reviewers of clinical trials tend to be wary of the CRM due to safety concerns. This paper presents the CRM in a non-technical way, describing the original CRM along with some of its modified versions. It also describes the specifications that define a CRM design, along with two simulated examples of CRMs for illustration.
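    The dose-assignment logic of the original CRM can be sketched in a few lines. The code below is a minimal illustration, not the paper's design: it uses the common one-parameter power model with a normal prior on the model parameter, approximated on a grid, and the skeleton, prior standard deviation, and target rate are illustrative placeholders.

```python
import math

def crm_recommend(skeleton, doses_given, tox_outcomes, target=0.25,
                  prior_sd=1.34, grid_width=4.0, grid_points=201):
    """Grid-approximation posterior for the one-parameter power-model CRM.

    skeleton     : prior guesses of the toxicity probability at each dose
    doses_given  : dose indices already administered, one per patient
    tox_outcomes : 1 = dose-limiting toxicity observed, 0 = none
    Returns the index of the dose whose posterior-mean toxicity
    probability is closest to the target rate.
    """
    step = 2 * grid_width / (grid_points - 1)
    grid = [-grid_width + i * step for i in range(grid_points)]

    # Unnormalised posterior of the model parameter a on the grid.
    post = []
    for a in grid:
        logp = -0.5 * (a / prior_sd) ** 2          # normal prior, mean 0
        for d, y in zip(doses_given, tox_outcomes):
            p = skeleton[d] ** math.exp(a)         # power model
            logp += math.log(p) if y else math.log(1 - p)
        post.append(math.exp(logp))
    total = sum(post)
    post = [w / total for w in post]

    # Posterior-mean toxicity probability at each dose.
    est = [sum(w * (s ** math.exp(a)) for w, a in zip(post, grid))
           for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

After each cohort the model is refit and the next patient is assigned the dose currently estimated to be closest to the target toxicity rate, which is the "continual reassessment" the title refers to.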

    Clustering and Classification Methods for Gene Expression Data Analysis

    Efficient use of the large data sets generated by gene expression microarray experiments requires computerized data analysis approaches. In this chapter we briefly describe and illustrate two broad families of commonly used data analysis methods: class discovery and class prediction methods. A wide range of alternative approaches for clustering and classification of gene expression data are available. While differences in efficiency do exist, none of the well-established approaches is uniformly superior to the others. Choosing an approach requires consideration of the goals of the analysis, the background knowledge, and the specific experimental constraints. The quality of an algorithm is important, but is not in itself a guarantee of the quality of a specific data analysis. Uncertainty assessment, sensitivity analysis, and, in the case of classifiers, external validation or cross-validation should be used to support the legitimacy of the results of microarray data analyses.
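    As one concrete member of the class-prediction family, paired with the cross-validation the chapter recommends, the sketch below implements a nearest-centroid classifier with leave-one-out cross-validation. The method choice is illustrative only; the chapter surveys many alternatives and does not prescribe this one.

```python
import math

def nearest_centroid_loocv(X, y):
    """Leave-one-out cross-validated accuracy of a nearest-centroid
    classifier (X: list of samples, each a list of expression values;
    y: class label per sample)."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    correct = 0
    for i in range(len(X)):
        # Train on all samples except the held-out sample i.
        centroids = {}
        for label in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == label]
            centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        # Predict the held-out sample from the nearest class centroid.
        pred = min(centroids, key=lambda c: dist(X[i], centroids[c]))
        correct += (pred == y[i])
    return correct / len(X)
```

The cross-validated accuracy, rather than the training accuracy, is what supports claims about a classifier's quality, which is the chapter's closing point.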

    A Likelihood-Based Approach to Early Stopping in Single Arm Phase II Clinical Trials

    Phase II studies in oncology have evolved over the past several decades. Currently, the number of drugs in phase II development has increased, and patient eligibility has narrowed due to targeted agents, competing trials, and curative therapies in the first-line setting. As a result of these changes, more attention needs to be focused on conducting more efficient phase II trials. Given the increased difficulty of accruing patients to phase II studies and the ethical concern of treating patients with agents that are ineffective, there is significant motivation to stop a single-arm trial early when the investigational agent shows evidence of a low response rate.
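    A likelihood-based stopping check of the kind described here can be sketched as a binomial likelihood ratio between two simple hypotheses about the response rate. The rates p0 (worth pursuing) and p1 (ineffective) and the evidence threshold k below are illustrative placeholders, not values taken from the paper.

```python
def stop_for_futility(responses, n, p0=0.20, p1=0.05, k=8.0):
    """Likelihood-ratio futility check for a single-arm phase II trial.

    responses : number of responses observed in n patients so far
    p0        : response rate considered worth pursuing
    p1        : response rate considered ineffective
    k         : required strength of evidence
    Stop when the data are at least k times more likely under the
    ineffective rate p1 than under the promising rate p0.
    """
    lik = lambda p: p ** responses * (1 - p) ** (n - responses)
    return lik(p1) / lik(p0) >= k
```

Because the likelihood ratio can be evaluated after every patient, the rule supports the flexible interim monitoring the abstract motivates, without being tied to fixed cohort sizes.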

    Latent Variable Approach to Elicit Continuous Toxicity Scores and Severity Weights for Multiple Toxicities in Dose-Finding Oncology Trials

    Most dose-finding clinical trials in oncology aim to find the highest dose yielding an acceptable toxicity profile for patients. The conventional dose-finding framework utilizes a binary toxicity endpoint that treats low to moderate toxicities as irrelevant, ignoring potentially harmful combinations of such toxicities. A handful of novel dose-finding methods have been introduced that combine multiple toxicities across varying grades into a composite toxicity severity score. Toxicity scores provide the advantage of accounting for all toxicity information in a patient profile, but calculation of such scores requires prior specification of toxicity severity weights to represent the relative toxicity burden that each toxicity type of each grade adds to a toxicity profile if observed. Elicitation of severity weights generally relies on subjective specification, and the resulting continuous scores may be confusing in clinical settings. In a statistical framework, we propose a novel method of estimating toxicity weights via a cumulative logit model, assuming there to be a latent continuous toxicity score characterized by the set of observed toxicity types and grades a patient exhibits. Toxicity scores are directly associated with an ordinal outcome assigned to toxicity profiles by clinicians, which corresponds to simple dose-escalation decisions. The toxicity score elicitation method (TSEM) produces an accurate toxicity scoring system through evaluation of a balanced subset of toxicity profiles in terms of severity, and we present an adaptive weight-finding algorithm to facilitate this. This approach bridges the gap between continuous toxicity scores and clinically logical ordinal outcomes akin to traditional toxicity grades, and provides an objective method for determining toxicity weights and scores.
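    To make the composite-score idea concrete, the sketch below combines a patient's observed toxicity types and grades into a single severity score using a table of severity weights. The weights in the example are made-up placeholders; in the paper's TSEM the weights are estimated from clinician-assigned ordinal outcomes via a cumulative logit model rather than specified by hand.

```python
def toxicity_score(profile, weights, normalize=True):
    """Combine a patient's observed toxicities into one severity score.

    profile : dict mapping toxicity type -> worst observed grade
    weights : dict mapping (type, grade) -> severity weight
              (illustrative placeholders, not estimated TSEM weights)
    """
    total = sum(weights[(t, g)] for t, g in profile.items())
    if not normalize:
        return total
    # Scale relative to the worst possible score for the observed types,
    # giving a 0..1 burden that is easier to read off in clinic.
    worst = sum(max(w for (t2, g), w in weights.items() if t2 == t)
                for t in profile)
    return total / worst
```

A binary dose-limiting-toxicity endpoint would discard everything in `profile` except whether any single grade crossed a cutoff; the composite score keeps the lower-grade information the abstract argues should not be ignored.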

    A Likelihood-Based Approach for Computing the Operating Characteristics of the Standard Phase I Clinical Trial Design

    In phase I clinical trials, the standard ‘3+3’ design has passed the test of time and survived various sample size adjustments and other dose-escalation dynamics. The objective of this study is to provide probabilistic support for analyzing the heuristic performance of the ‘3+3’ design. Our likelihood method is based on the evidential paradigm, which uses the likelihood ratio to measure the strength of statistical evidence for one simple hypothesis over the other. We compute the operating characteristics and compare the behavior of the standard algorithm under different hypotheses, levels of evidence, and true (or best-guessed) toxicity rates. Given the observed toxicities per dose level, the likelihood ratio is evaluated against a threshold k (level of evidence). Under an assumed true toxicity scenario, the following statistical characteristics are computed and compared: i) probability of weak evidence, ii) probability of favoring H1 under H1 (analogous to 1-α), iii) probability of favoring H2 under H2 (analogous to 1-β). This likelihood method allows consistent inferences to be made and evidence to be quantified regardless of cohort size. Moreover, this approach can be extended and used in phase I designs for identifying the highest acceptably safe dose, and is akin to the sequential probability ratio test.
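    The core evidential step described above can be sketched directly: at a given dose level, the binomial likelihood ratio between two simple toxicity hypotheses is compared to the threshold k, yielding one of the three outcomes whose probabilities the paper tabulates. The rates p1, p2 and the threshold below are illustrative, not the paper's settings.

```python
def evidence_at_dose(tox, n, p1=0.15, p2=0.40, k=4.0):
    """Evidential classification of toxicity data at one dose level.

    tox : dose-limiting toxicities observed among n patients
    H1: toxicity rate = p1 (acceptable); H2: toxicity rate = p2 (too
    toxic); k is the required level of evidence.  Returns which simple
    hypothesis the likelihood ratio favors, or 'weak evidence' when
    the ratio lies strictly between 1/k and k.
    """
    lik = lambda p: p ** tox * (1 - p) ** (n - tox)
    lr = lik(p1) / lik(p2)
    if lr >= k:
        return "favors H1"
    if lr <= 1 / k:
        return "favors H2"
    return "weak evidence"
```

Because the ratio is well defined for any n, the same rule applies whether a cohort holds three patients or thirty, which is the cohort-size-free consistency the abstract emphasizes.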

    The Proportional Odds Model for Assessing Rater Agreement with Multiple Modalities

    In this paper, we develop a model for evaluating ordinal rating systems where we assume that the true underlying disease state is continuous in nature. Our approach is motivated by a dataset of 35 microscopic slides containing representative duct lesions of the pancreas. Each of the slides was evaluated by eight raters using two novel rating systems (PanIN illustrations and PanIN nomenclature), where each rater used each system to rate the slide with slide identity masked between evaluations. We find that the two methods perform equally well, but that differentiation of higher-grade lesions is more consistent across raters than differentiation of lower-grade lesions. A proportional odds model is assumed, which allows us to estimate rater-specific thresholds for comparing agreement. In this situation, where we have two methods of rating, we can determine whether the two methods have the same thresholds and whether or not raters perform equivalently across methods. Unlike some other model-based approaches for measuring agreement, we focus on the interpretation of the model parameters and their scientific relevance. We compare posterior estimates of rater-specific parameters across raters to see if they are implementing the intended rating system in the same manner. Estimated standard deviation distributions are used to make inferences as to whether raters are consistent and whether there are differences in rating behaviors in the two rating systems under comparison.
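    The role of the rater-specific thresholds can be illustrated with the basic proportional-odds machinery: given a slide's latent continuous severity and one rater's cutpoints, the model assigns a probability to each ordinal rating. This is a generic sketch of the model class, not the paper's fitted model or its estimation procedure.

```python
import math

def category_probs(latent, thresholds):
    """Proportional-odds category probabilities for one slide.

    latent     : continuous underlying disease severity
    thresholds : increasing rater-specific cutpoints c_1 < ... < c_{K-1}
    Under the model, P(rating <= k) = logistic(c_k - latent);
    differencing the cumulative probabilities gives the probability
    of each of the K ordinal categories.
    """
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(c - latent) for c in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
```

Comparing the estimated cutpoints across raters, and across the two rating systems, is what lets the authors ask whether raters implement the intended system in the same manner.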

    MergeMaid: R Tools for Merging and Cross-Study Validation of Gene Expression Data

    Cross-study validation of gene expression investigations is critical in genomic analysis. We developed an R package and associated object definitions to merge and visualize multiple gene expression datasets. Our merging functions use arbitrary character IDs and generate objects that can efficiently support a variety of joint analyses. Visualization tools support exploration and cross-study validation of the data, without requiring normalization across platforms. Tools include “integrative correlation” plots, that is, scatterplots of all pairwise correlations in one study against the corresponding pairwise correlations of another, both for individual genes and for all genes combined. Gene-specific plots can be used to identify genes whose changes are reliably measured across studies. Visualizations also include scatterplots of gene-specific statistics quantifying relationships between expression and phenotypes of interest, using linear, logistic and Cox regression. Availability: Free open source from http://www.bioconductor.org. Contact: Xiaogang Zhong [email protected]. Supplementary information: Documentation available with the package.
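    The integrative-correlation quantity behind those plots can be computed in a few lines. MergeMaid itself is an R package; the sketch below is a language-neutral illustration in Python of the statistic as the abstract defines it, computing all within-study pairwise gene-gene correlations and then correlating the two resulting vectors.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def integrative_correlation(study1, study2):
    """Overall integrative correlation across two studies.

    studyN : dict mapping gene -> expression vector over that study's
             own samples (sample counts may differ between studies).
    Genes whose co-expression pattern is preserved across studies
    drive this value toward 1; no cross-platform normalization of the
    raw expression values is needed.
    """
    genes = sorted(set(study1) & set(study2))
    pairs = [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]]
    c1 = [pearson(study1[g], study1[h]) for g, h in pairs]
    c2 = [pearson(study2[g], study2[h]) for g, h in pairs]
    return pearson(c1, c2)
```

Restricting the pairs to those involving one fixed gene gives the gene-specific version used to flag genes that are reliably measured across studies.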

    Cross-study Validation and Combined Analysis of Gene Expression Microarray Data

    Investigations of transcript levels on a genomic scale using hybridization-based arrays have led to formidable advances in our understanding of the biology of many human illnesses. At the same time, these investigations have generated controversy because of the probabilistic nature of the conclusions and the surfacing of noticeable discrepancies between the results of studies addressing the same biological question. In this article we present simple and effective data analysis and visualization tools for gauging the degree to which the findings of one study are reproduced by others, and for integrating multiple studies in a single analysis. We describe these approaches in the context of studies of breast cancer, and illustrate that it is possible to identify a substantial, biologically relevant subset of the human genome within which hybridization results are reproducible. The subset generally varies with the platforms used, the tissues studied, and the populations being sampled. Despite important differences, it is also possible to develop simple expression measures that allow comparison across platforms, studies, labs and populations. Important biological signal is often preserved or enhanced. Cross-study validation and combination of microarray results require careful, but not overly complex, statistical thinking, and can become a routine component of genomic analysis.

    Optimized Cross-Study Analysis of Microarray-Based Predictors

    Background: Microarray-based gene expression analysis is widely used in cancer research to discover molecular signatures for cancer classification and prediction. In addition to numerous independent profiling projects, a number of investigators have analyzed multiple published data sets for purposes of cross-study validation. However, the diverse microarray platforms and technical approaches make direct comparisons across studies difficult and, without means to identify aberrant data patterns, less than optimal. To address this issue, we previously developed an integrative correlation approach to systematically assess agreement of gene expression measurements across studies, providing a basis for cross-study validation analysis. Here we generalize this methodology to provide a metric for evaluating the overall efficacy of preprocessing and cross-referencing, and explore optimal combinations of filtering and cross-referencing strategies. We operate in the context of validating prognostic breast cancer gene expression signatures on data reported by three different groups, each using a different platform. Results: To evaluate overall cross-platform reproducibility in the context of a specific prediction problem, we suggest integrative association, that is, the cross-study correlation of a gene-specific measure of association with the predicted phenotype. Specifically, in this paper we use the correlation among the Cox proportional hazards coefficients for the association of gene expression with relapse-free survival (RFS). Gene filtering by integrative correlation to select reproducible genes emerged as the key factor in increasing the integrative association, while alternative methods of gene cross-referencing and gene filtering proved to only modestly improve the overall reproducibility. Patient selection was another major factor affecting the validation process.
In particular, in one of the studies considered, gene expression association with RFS varied across subsets of patients that differed in their ascertainment criteria. One of the subsets proved to be highly consistent with other studies, while others showed significantly lower consistency. Third, as expected, use of cluster-specific mean expression profiles in the Cox model yielded more generalizable results than expression data from individual genes. Finally, using our approach, we were able to validate the association between the breast cancer molecular classes proposed by Sorlie et al. and RFS. Conclusions: This paper provides a simple, practical and comprehensive technique for measuring the consistency of molecular classification results across microarray platforms, without requiring subjective judgments about the membership of samples in putative clusters. This methodology will be of value in consistently typing breast and other cancers across different studies and platforms in the future. Although the tumor subtypes considered here have been previously validated by their proponents, this is the first independent validation, and the first to include the Affymetrix platform.
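    The integrative-association metric itself is just a correlation across studies of per-gene association statistics. The sketch below illustrates it with a generic association measure plugged in; the paper specifically uses Cox proportional-hazards coefficients for relapse-free survival, whose estimation is outside the scope of this illustration.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def integrative_association(assoc1, assoc2, genes):
    """Integrative association between two studies.

    assocN : dict mapping gene -> that study's association statistic
             with the phenotype (the paper uses Cox coefficients for
             RFS; any per-gene association measure can stand in here).
    A value near 1 means genes related to the phenotype in one study
    tend to be related the same way in the other.
    """
    return pearson([assoc1[g] for g in genes],
                   [assoc2[g] for g in genes])
```

Filtering the gene list by integrative correlation before computing this quantity is the step the Results section identifies as the main driver of reproducibility.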

    Cross-platform Comparison of Two Pancreatic Cancer Phenotypes

    Model-based approaches for combining gene expression data from multiple high-throughput platforms can be sensitive to technological artifacts when the number of samples in each platform is small. This paper proposes simple tools for quantifying concordance in a small study of pancreatic cancer cell lines, with an emphasis on visualizations that uncover intra- and inter-platform variation. Using this approach, we identify several transcripts from the integrative analysis whose over- or under-expression in pancreatic cancer cell lines was validated by qPCR.