
    Point and interval estimation in two-stage adaptive designs with time to event data and biomarker-driven subpopulation selection

    In personalized medicine, it is often desired to determine whether all patients or only a subset of them benefit from a treatment. We consider estimation in two-stage adaptive designs that in stage 1 recruit patients from the full population. In stage 2, patient recruitment is restricted to the part of the population which, based on stage 1 data, benefits from the experimental treatment. Existing estimators that adjust for the use of stage 1 data, both for selecting the part of the population from which stage 2 patients are recruited and for the confirmatory analysis after stage 2, do not consider time-to-event patient outcomes. In this work, for time-to-event data, we derive a new asymptotically unbiased estimator for the log hazard ratio and a new interval estimator with good coverage probabilities and good probabilities that the upper bounds are below the true values. The estimators are appropriate for several selection rules based on a single biomarker or multiple biomarkers, which can be categorical or continuous.
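
    To see why a new estimator is needed at all, the sketch below simulates the selection bias the abstract describes, under simplifying assumptions not taken from the paper: uncensored exponential survival times, a single binary biomarker defining two subgroups, and a rule that continues with whichever subgroup shows the smaller stage-1 log hazard ratio. All sample sizes and rates are hypothetical.

```python
# Simulation sketch of selection bias in a two-stage design with
# time-to-event data. Assumptions (not from the paper): exponential
# survival, no censoring, two subgroups with an identical true effect,
# and selection of the subgroup with the smaller stage-1 estimate.
import numpy as np

rng = np.random.default_rng(0)

def log_hr_exponential(t_trt, t_ctl):
    """MLE of the log hazard ratio for uncensored exponential data:
    log(n_trt / sum(t_trt)) - log(n_ctl / sum(t_ctl))."""
    return np.log(len(t_trt) / t_trt.sum()) - np.log(len(t_ctl) / t_ctl.sum())

true_log_hr = {"biomarker+": -0.3, "biomarker-": -0.3}  # same effect in both
n_stage = 100  # hypothetical patients per arm per subgroup per stage
naive = []
for _ in range(5000):
    est, stage1 = {}, {}
    for g, lhr in true_log_hr.items():
        ctl = rng.exponential(1.0, n_stage)                 # control rate 1
        trt = rng.exponential(1.0 / np.exp(lhr), n_stage)   # rate exp(lhr)
        stage1[g] = (trt, ctl)
        est[g] = log_hr_exponential(trt, ctl)
    sel = min(est, key=est.get)          # continue with the "better" subgroup
    ctl2 = rng.exponential(1.0, n_stage)
    trt2 = rng.exponential(1.0 / np.exp(true_log_hr[sel]), n_stage)
    naive.append(log_hr_exponential(np.concatenate([stage1[sel][0], trt2]),
                                    np.concatenate([stage1[sel][1], ctl2])))

print(f"true log HR: {true_log_hr['biomarker+']:.3f}, "
      f"mean naive estimate after selection: {np.mean(naive):.3f}")
```

    Because the stage-1 data are used both for selection and in the final analysis, the naive pooled estimate is systematically more extreme than the true log hazard ratio, which is the bias the paper's adjusted estimator is designed to remove.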

    Methods of sample size calculation for clinical trials

    Sample size calculations should be an important part of the design of a trial, but are researchers choosing sensible trial sizes? This thesis looks at ways of determining appropriate sample sizes for Normal, binary and ordinal data. The inadequacies of existing sample size and power calculation software and methods are considered, and new software is offered that will be of more use to researchers planning randomised clinical trials. The software includes the capacity to assess the power and required sample size for incomplete block crossover trial designs for Normal data. Following on from this, the differences between the calculated power of published trials and their actual results are investigated. As a result, the appropriateness of the standard equations for determining a sample size is questioned; in particular, the effect of using a variance estimate based on a sample variance from a pilot study is considered. Taking into account the distribution of this statistic, alternative approaches beyond conventional power are considered that allow for the uncertainty in the sample variance. Software is also presented that allows these new types of sample size and Expected Power calculations to be carried out.
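
    A minimal sketch of the issue the thesis raises, using the textbook two-sample normal approximation rather than the thesis's own software or formulas: if the trial is sized from a pilot variance estimate, the chosen sample size is itself random, and the power actually realised varies around the nominal target. The effect size, SD and pilot size below are hypothetical.

```python
# Sketch: sizing a two-arm trial from a pilot variance estimate.
# Standard normal-approximation formulas; all numbers are hypothetical.
import numpy as np
from scipy import stats

alpha, target_power = 0.05, 0.80
delta, sigma_true = 5.0, 10.0            # hypothetical effect size and true SD
z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(target_power)

def n_per_group(sd):
    # Standard formula: n = 2 * sd^2 * (z_{1-a/2} + z_{1-b})^2 / delta^2
    return int(np.ceil(2 * sd**2 * (z_a + z_b) ** 2 / delta**2))

def power(n, sd):
    # Normal-approximation power of the two-sample z-test
    return stats.norm.cdf(delta / (sd * np.sqrt(2 / n)) - z_a)

# Plug-in calculation, treating the true SD as known.
print("n per group at the true SD:", n_per_group(sigma_true))

# Uncertainty: a pilot variance s^2 ~ sigma^2 * chi2_{m-1} / (m-1) makes the
# chosen sample size random, so the power realised at the true SD varies and,
# averaged over pilots, tends to fall short of the nominal 80% target.
rng = np.random.default_rng(1)
m = 20                                   # hypothetical pilot study size
s2 = sigma_true**2 * rng.chisquare(m - 1, size=5000) / (m - 1)
realised = [power(n_per_group(np.sqrt(v)), sigma_true) for v in s2]
print(f"mean realised power when sizing from pilots of {m}: {np.mean(realised):.3f}")
```

    Averaging the realised power over the sampling distribution of the pilot variance is the idea behind the Expected Power calculations the thesis's software supports.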

    Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log, log(x+1) and square-root transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions of commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects, allowing sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1) and square-root transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
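
    An illustrative sketch of the kind of calculation this paper enables: sample size per arm to detect a relative (percentage) treatment difference in C-peptide AUC analysed on a log-type scale. The residual SDs below are hypothetical placeholders, not the TrialNet estimates reported in the paper, and the mapping of a p% relative difference to a shift of log(1+p) on the log scale is an approximation.

```python
# Sketch: sample size for a relative difference analysed on the log scale.
# Residual SDs are hypothetical, not the paper's TrialNet estimates.
import numpy as np
from scipy import stats

def n_per_arm(rel_diff, resid_sd, alpha=0.05, power=0.80):
    # A p% relative difference is roughly a shift of log(1 + p) on the log
    # scale, so the standard two-sample formula applies to transformed data.
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (resid_sd * z / np.log1p(rel_diff)) ** 2))

# Hypothetical residual SDs on the transformed scale, echoing the paper's
# point that 13-17 year olds show greater residual variation.
for group, sd in [("children 8-12", 0.45), ("adolescents 13-17", 0.60),
                  ("adults 18+", 0.50)]:
    print(f"{group}: {n_per_arm(rel_diff=0.30, resid_sd=sd)} per arm "
          "for a 30% difference")
```

    The larger residual SD assumed for the adolescent group directly inflates its required sample size, which mirrors the paper's finding for the 13–17 year age category.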

    Statistical considerations of noninferiority, bioequivalence and equivalence testing in biosimilars studies

    In recent years, the development of follow-on biological products (biosimilars) has received increasing attention. This dissertation covers statistical methods related to three topics in demonstrating biosimilarity: non-inferiority (NI), bioequivalence (BE) and equivalence. For NI, one of the key requirements is the constancy assumption, that is, that the effect of the reference treatment is the same in current NI trials as in the historical superiority trials. However, if a covariate interacts with treatment, then a change in the distribution of this covariate will violate the constancy assumption. We propose a modified covariate-adjusted fixed-margin method, and recommend it based on its performance characteristics in comparison with other methods. The second topic concerns BE inference for log-normally distributed data. Two drugs are bioequivalent if the difference in a pharmacokinetic (PK) parameter between the two products falls within prespecified margins. In the presence of unspecified variances, existing methods such as the two one-sided tests and Bayesian analysis in the BE setting limit our knowledge of the extent to which BE inference is affected by the variability of the PK parameter. We propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for the difference in the PK parameter. The advantage of the proposed method over existing methods is that it helps identify the range of variances in which BE is more likely to be achieved. In the third topic, we extend the proposed likelihood method to equivalence inference, where the data are often normally distributed. Here we demonstrate an additional advantage of the proposed method over current analysis methods such as the likelihood ratio test and Bayesian analysis in the equivalence setting. The proposed likelihood method produces results that are the same as or comparable to those of current methods in the general case where the model parameters are independent. However, it yields better results in special cases where the model parameters are dependent, for example when the ratio of the variances is directly proportional to the ratio of the means. Our results suggest that the proposed likelihood method is a better alternative to the current analysis methods for BE/equivalence inference.
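
    For context, a minimal sketch of the standard two one-sided tests (TOST) procedure that the abstract names as an existing BE method, not the dissertation's likelihood approach. It uses a parallel-group design with pooled variance and hypothetical data; the usual BE limits of 0.80–1.25 become ±log(1.25) on the log scale.

```python
# Sketch of the standard TOST procedure for bioequivalence on
# log-transformed PK data. Margins and data below are hypothetical.
import numpy as np
from scipy import stats

def tost_log_scale(log_test, log_ref, margin=np.log(1.25), alpha=0.05):
    """Two one-sided t-tests of -margin < mu_test - mu_ref < margin
    (parallel-group version with pooled variance)."""
    n1, n2 = len(log_test), len(log_ref)
    diff = log_test.mean() - log_ref.mean()
    sp2 = ((n1 - 1) * log_test.var(ddof=1)
           + (n2 - 1) * log_ref.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return diff, max(p_lower, p_upper)   # BE declared if max p-value < alpha

rng = np.random.default_rng(2)
log_ref = np.log(rng.lognormal(mean=4.00, sigma=0.25, size=24))
log_test = np.log(rng.lognormal(mean=4.05, sigma=0.25, size=24))
diff, p = tost_log_scale(log_test, log_ref)
print(f"estimated log-difference {diff:.3f}, TOST p-value {p:.4f}")
```

    TOST treats the unknown variances as nuisance parameters to be estimated and plugged in; the dissertation's point is that a partitioned likelihood instead makes explicit how the BE conclusion varies across the range of plausible variances.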