
    Treatment Effect Quantification for Time-to-event Endpoints -- Estimands, Analysis Strategies, and beyond

    A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization that the objectives of a clinical trial as stated in the protocol are often not aligned with the quantification of the "treatment effect" reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to them. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods for analyzing time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. For a large randomized clinical trial, we discuss how the analyses of the primary and secondary endpoints, as well as the sensitivity analyses actually performed in the trial, can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will contribute to developing a common framework, based on the final version of the addendum, that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.
    Comment: 37 pages
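    The dependence of the estimand on the censoring distribution can be illustrated numerically. The sketch below uses hypothetical hazards (all values chosen for illustration, not taken from the paper): a constant control hazard and a treatment hazard with a delayed effect, i.e. non-proportional hazards. It computes the event-weighted average of the time-varying log hazard ratio under two administrative censoring times; this weighted average is only a rough stand-in for the Cox/logrank estimand, but it shows the same qualitative behavior: the summary measure changes with the length of follow-up.

    ```python
    import math

    def avg_log_hr(c_admin, step=1e-4):
        """Event-weighted average log hazard ratio up to an administrative
        censoring time c_admin, under hypothetical piecewise-constant hazards:
        control hazard 1.0 throughout; treatment hazard 1.0 before t = 0.5
        (delayed effect) and 0.4 afterwards."""
        def lam1(t):
            return 1.0 if t < 0.5 else 0.4
        def surv1(t):
            # treatment-arm survival function for the piecewise hazard above
            return math.exp(-t) if t < 0.5 else math.exp(-0.5 - 0.4 * (t - 0.5))
        num = den = 0.0
        t = 0.0
        while t < c_admin:
            # pooled event density under 1:1 randomization (constant factor cancels)
            w = math.exp(-t) * 1.0 + surv1(t) * lam1(t)
            num += math.log(lam1(t)) * w * step  # log HR(t) = log(lam1(t) / 1.0)
            den += w * step
            t += step
        return num / den
    ```

    With short follow-up most events occur while the hazard ratio is still 1, so the average is close to zero; with longer follow-up the late benefit dominates and the same "one-number" summary roughly halves, even though the underlying hazards are unchanged.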

    Maximum likelihood estimation of a log-concave density and its distribution function: Basic properties and uniform consistency

    We study nonparametric maximum likelihood estimation of a log-concave probability density and its distribution and hazard function. Some general properties of these estimators are derived from two characterizations. It is shown that the rate of convergence with respect to the supremum norm on a compact interval for the density and hazard rate estimators is at least $(\log(n)/n)^{1/3}$ and typically $(\log(n)/n)^{2/5}$, whereas the difference between the empirical and estimated distribution functions vanishes with rate $o_{\mathrm{p}}(n^{-1/2})$ under certain regularity assumptions.
    Comment: Published at http://dx.doi.org/10.3150/08-BEJ141 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). Version 3 is the extended technical report cited in version

    Assessment of paired binary data


    logcondens: Computations Related to Univariate Log-Concave Density Estimation

    Maximum likelihood estimation of a log-concave density has attracted considerable attention over the last few years. Several algorithms have been proposed to estimate such a density. Two of those algorithms, an iterative convex minorant and an active set algorithm, are implemented in the R package logcondens. While these algorithms are discussed elsewhere, we describe in this paper the use of the logcondens package and discuss functions and datasets related to log-concave density estimation contained in the package. In particular, we provide functions to (1) compute the maximum likelihood estimate (MLE) as well as a smoothed log-concave density estimator derived from the MLE, (2) evaluate the estimated density, distribution and quantile functions at arbitrary points, (3) compute the characterizing functions of the MLE, (4) sample from the estimated distribution, and finally (5) perform a two-sample permutation test using a modified Kolmogorov-Smirnov test statistic. In addition, logcondens makes two datasets available that have been used to illustrate log-concave density estimation.
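    The two-sample permutation test mentioned in item (5) can be sketched generically. The Python below is not the logcondens implementation (which uses a modified Kolmogorov-Smirnov statistic; here a plain KS statistic stands in for it) and is included only to illustrate the permutation mechanics: compute the observed statistic, then recompute it under random relabelings of the pooled sample.

    ```python
    import random

    def ks_stat(x, y):
        # two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs
        xs, ys = sorted(x), sorted(y)
        def ecdf(s, t):
            # fraction of s that is <= t, via binary search
            lo, hi = 0, len(s)
            while lo < hi:
                mid = (lo + hi) // 2
                if s[mid] <= t:
                    lo = mid + 1
                else:
                    hi = mid
            return lo / len(s)
        return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in sorted(set(xs + ys)))

    def perm_test(x, y, n_perm=500, seed=1):
        """Permutation p-value: how often a random relabeling of the pooled
        sample yields a statistic at least as large as the observed one."""
        rng = random.Random(seed)
        obs = ks_stat(x, y)
        pooled = list(x) + list(y)
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if ks_stat(pooled[:len(x)], pooled[len(x):]) >= obs:
                count += 1
        return obs, (count + 1) / (n_perm + 1)  # add-one correction
    ```

    Completely separated samples give the maximal statistic of 1 and a small p-value, while two copies of the same sample give a statistic of 0 and a p-value of 1.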

    Integrating Phase 2 into Phase 3 based on an Intermediate Endpoint While Accounting for a Cure Proportion -- with an Application to the Design of a Clinical Trial in Acute Myeloid Leukemia

    For a trial with a primary endpoint of overall survival for a molecule with curative potential, statistical methods that rely on the proportional hazards assumption may underestimate the power and the time to final analysis. We show how a cure proportion model can be used to obtain the necessary number of events and the appropriate timing via simulation. If Phase 1 results for the new drug are exceptional and/or the medical need in the target population is high, a Phase 3 trial might be initiated directly after Phase 1. Building a futility interim analysis into such a pivotal trial may mitigate the uncertainty of moving directly to Phase 3. However, if cure is possible, overall survival might not be mature enough at the interim to support a futility decision. We propose to base this decision on an intermediate endpoint that is sufficiently associated with survival. Planning for such an interim can be interpreted as making a randomized Phase 2 trial part of the pivotal trial: if the trial is stopped at the interim, its data would be analyzed and a decision on a subsequent Phase 3 trial made; if it continues past the interim, the Phase 3 trial is already underway. To select a futility boundary, we propose a mechanistic simulation model that connects the intermediate endpoint and survival. We illustrate how this approach was used to design a pivotal randomized trial in acute myeloid leukemia, discuss the historical data that informed the simulation model, and describe operational challenges encountered when implementing it.
    Comment: 23 pages, 3 figures, 3 tables. All code is available on github: https://github.com/numbersman77/integratePhase2.gi
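    Why a cure proportion changes event-driven timing can be sketched with a deterministic calculation rather than simulation. The Python below uses hypothetical values (cure proportion, exponential hazard for non-cured patients, uniform accrual; none are taken from the trial): because a cure fraction caps the expected number of events at roughly n·(1 − cure), an event target above that cap is never reached, and projections that ignore the cure fraction overstate how fast events accrue.

    ```python
    import math

    def expected_events(tau, n=500, cure=0.3, lam=0.5, accrual=2.0, step=1e-3):
        """Expected number of events by calendar time tau under a cure-rate model:
        with probability `cure` a patient never has the event; otherwise the
        event time is Exp(lam). Enrollment is uniform over [0, accrual]."""
        total = 0.0
        u = 0.0  # enrollment time
        while u < min(accrual, tau):
            follow = tau - u  # follow-up for a patient enrolled at u
            total += (1 - cure) * (1 - math.exp(-lam * follow)) * step
            u += step
        return n * total / accrual
    ```

    With these illustrative numbers the expected event count increases with follow-up but plateaus near 500 × 0.7 = 350; a design that asked for, say, 400 events would wait forever, which is why the paper's simulation-based approach determines both the achievable number of events and its timing jointly.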

    Marshall's lemma for convex density estimation

    Marshall's [Nonparametric Techniques in Statistical Inference (1970) 174--176] lemma is an analytical result which implies $\sqrt{n}$-consistency of the distribution function corresponding to the Grenander [Skand. Aktuarietidskr. 39 (1956) 125--153] estimator of a non-increasing probability density. The present paper derives analogous results for the setting of convex densities on $[0,\infty)$.
    Comment: Published at http://dx.doi.org/10.1214/074921707000000292 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
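    The Grenander estimator at the heart of the lemma is easy to compute: for a monotone (non-increasing) density on [0, ∞) it is the left derivative of the least concave majorant (LCM) of the empirical distribution function. A minimal pure-Python sketch (a generic illustration, not tied to any package):

    ```python
    def grenander(data):
        """Grenander estimator: piecewise-constant density given by the slopes
        of the least concave majorant of the empirical CDF.
        Returns a list of (left, right, density) pieces."""
        xs = sorted(data)
        n = len(xs)
        pts = [(0.0, 0.0)] + [(x, (i + 1) / n) for i, x in enumerate(xs)]
        hull = [pts[0]]
        for p in pts[1:]:
            while len(hull) >= 2:
                (ox, oy), (ax, ay) = hull[-2], hull[-1]
                # pop the middle point if it lies on or below the chord,
                # which keeps the hull concave (monotone-chain construction)
                if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                    hull.pop()
                else:
                    break
            hull.append(p)
        # the density is constant between consecutive LCM knots
        return [(x0, x1, (y1 - y0) / (x1 - x0))
                for (x0, y0), (x1, y1) in zip(hull, hull[1:])]
    ```

    By construction the slopes are non-increasing and integrate to one; Marshall's lemma concerns the LCM itself, i.e. the distribution function obtained by integrating these pieces.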