
    A false sense of security? Can tiered approach be trusted to accurately classify immunogenicity samples?

    Detecting and characterizing anti-drug antibodies (ADA) against a protein therapeutic is crucially important for monitoring the unwanted immune response. Patient samples are usually tested for ADA activity with a multi-tiered approach that first rapidly screens for positive samples, which are subsequently confirmed in a separate assay. In this manuscript we evaluate the ability of different methods to classify subjects using screening and competition-based confirmatory assays. We find that, for the overall performance of the multi-stage process, the method used for confirmation is most important, with a t-test performing best when differences are moderate to large. Moreover, we find that when differences between positive and negative samples are not sufficiently large, a competition-based confirmation step yields poor classification of positive samples.
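
    As a hedged illustration of what such a multi-tiered decision rule looks like in practice, the sketch below implements a generic two-tier classification in R: tier 1 screens against a cut point derived from negative-control samples, and tier 2 confirms screen-positives with a t-test on drug-spiked (competition) replicates. All signals, the cut-point convention and the classify_sample() helper are hypothetical, not the authors' exact procedure.

```r
## Illustrative two-tier ADA classification (hypothetical data and thresholds).
set.seed(1)
negative_controls <- rnorm(50, mean = 1.0, sd = 0.1)  # signals of ADA-negative donors
screen_cut <- quantile(negative_controls, 0.95)       # screening cut point (~5% false positives)

classify_sample <- function(unspiked, spiked, alpha = 0.05) {
  if (mean(unspiked) <= screen_cut) return("negative")   # tier 1: screening assay
  ## tier 2: confirm only if drug spiking significantly suppresses the signal
  p <- t.test(unspiked, spiked, alternative = "greater")$p.value
  if (p < alpha) "positive" else "negative"
}

## a hypothetical ADA-positive sample: competition suppresses the signal
classify_sample(unspiked = rnorm(6, 1.6, 0.1), spiked = rnorm(6, 1.1, 0.1))
```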

    Extrapolation of efficacy and other data to support the development of new medicines for children: a systematic review of methods

    Objective When developing new medicines for children, the potential to extrapolate from adult data to reduce the experimental burden in children is well recognised. However, significant assumptions about the similarity of adults and children are needed for extrapolations to be biologically plausible. We reviewed the literature to identify statistical methods that could be used to optimise extrapolations in paediatric drug development programmes. Methods Web of Science was used to identify papers proposing methods relevant for using data from a ‘source population’ to support inferences for a ‘target population’. Four key areas of methods development were targeted: paediatric clinical trials, trials extrapolating efficacy across ethnic groups or geographic regions, the use of historical data in contemporary clinical trials, and the use of short-term endpoints to support inferences about long-term outcomes. Results Searches identified 626 papers, of which 52 met our inclusion criteria. From these we identified 102 methods comprising 58 Bayesian and 44 frequentist approaches. Most Bayesian methods (n = 54) sought to use existing data in the source population to create an informative prior distribution for a future clinical trial. Of these, 46 allowed the source data to be down-weighted to account for potential differences between populations. Bayesian and frequentist versions of methods were found for assessing whether key parameters of source and target populations are commensurate (n = 34). Fourteen frequentist methods synthesised data from different populations using a joint model or a weighted test statistic. Conclusions Several methods were identified as potentially applicable to paediatric drug development. Methods which can accommodate a heterogeneous target population and which allow data from a source population to be down-weighted are preferred. Methods assessing the commensurability of parameters may be used to determine whether it is appropriate to pool data across age groups to estimate treatment effects.
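
    One of the down-weighting devices the review describes is the fixed-weight power prior, which raises the source-population likelihood to a power a0 in [0, 1]. For a binomial response rate with a Beta prior the calculation stays conjugate, as the sketch below shows; the counts and the choice a0 = 0.5 are hypothetical.

```r
## Fixed-weight power prior for a response rate (hypothetical numbers).
## Source (adult) data: 40/100 responders; target (paediatric) data: 8/25.
a0 <- 0.5                     # down-weighting factor: 0 = discard source, 1 = pool
src_x <- 40; src_n <- 100     # source successes / sample size
tgt_x <- 8;  tgt_n <- 25      # target successes / sample size

## Beta(1, 1) initial prior; the source likelihood raised to a0 keeps conjugacy
post_a <- 1 + a0 * src_x + tgt_x
post_b <- 1 + a0 * (src_n - src_x) + (tgt_n - tgt_x)
c(mean  = post_a / (post_a + post_b),
  lower = qbeta(0.025, post_a, post_b),
  upper = qbeta(0.975, post_a, post_b))
```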

    An information-theoretic approach for selecting arms in clinical trials

    The question of selecting the ‘best’ among different choices is a common problem in statistics. In drug development, our motivating setting, the question becomes, for example, which treatment gives the best response rate. Motivated by recent developments in the theory of context-dependent information measures, we propose a flexible response-adaptive experimental design based on a novel criterion governing treatment arm selections, which can be used in adaptive experiments with simple (e.g. binary) and complex (e.g. co-primary, ordinal or nested) end points. It was found that, for specific choices of the context-dependent measure, the criterion leads to a reliable selection of the correct arm without any parametric or monotonicity assumptions and provides noticeable gains in settings with costly observations. The asymptotic properties of the design are studied for different allocation rules, and the small-sample behaviour is evaluated in simulations in the context of phase II clinical trials with different end points. We compare the proposed design with currently used alternatives and discuss its practical implementation.
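
    The paper's context-dependent measures are not reproduced here, but the flavour of information-based arm selection can be sketched with a generic criterion: allocate the next patient to the arm whose response is expected to reduce the differential entropy of its Beta posterior the most. This is an illustrative stand-in under that assumption, not the authors' criterion.

```r
## Generic information-gain allocation for binary endpoints (illustrative only).
beta_entropy <- function(a, b) {            # differential entropy of Beta(a, b)
  lbeta(a, b) - (a - 1) * digamma(a) - (b - 1) * digamma(b) +
    (a + b - 2) * digamma(a + b)
}
expected_gain <- function(a, b) {           # expected entropy reduction from one obs.
  p <- a / (a + b)                          # posterior-predictive P(response)
  beta_entropy(a, b) -
    (p * beta_entropy(a + 1, b) + (1 - p) * beta_entropy(a, b + 1))
}
## three arms with hypothetical (successes, failures) so far and Beta(1, 1) priors
arms  <- list(c(3, 7), c(5, 5), c(1, 2))
gains <- sapply(arms, function(sf) expected_gain(1 + sf[1], 1 + sf[2]))
which.max(gains)                            # arm to allocate the next patient to
```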

    Dose-escalation strategies which utilise subgroup information

    Dose-escalation trials commonly assume a homogeneous trial population to identify a single recommended dose of the experimental treatment for use in future trials. Wrongly assuming a homogeneous population can lead to a diluted treatment effect. Equally, exclusion of a subgroup that could in fact benefit from the treatment can cause a beneficial treatment effect to be missed. Accounting for a potential subgroup effect (i.e., a difference in reaction to the treatment between subgroups) in dose-escalation can increase the chance of finding the treatment efficacious in a larger patient population. A standard Bayesian model-based method of dose-escalation is extended to account for a subgroup effect by including covariates for subgroup membership in the dose-toxicity model. A stratified design performs well but uses the available data inefficiently and makes no inference concerning the presence of a subgroup effect. A hypothesis test could potentially rectify this problem, but the small sample sizes result in a low-powered test. As an alternative, the use of spike and slab priors for variable selection is proposed. This method continually assesses the presence of a subgroup effect, enabling efficient use of the available trial data throughout escalation and in identifying the recommended dose(s). A simulation study, based on real trial data, was conducted and the design was found to be both promising and feasible.
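
    A minimal sketch of the spike-and-slab idea in this setting: a logistic dose-toxicity model with a subgroup coefficient beta whose prior mixes a point mass at zero (spike) with a normal distribution (slab), so that the posterior weight on the slab continually measures the evidence for a subgroup effect. The crude grid approximation, the data and all hyper-parameters below are hypothetical.

```r
## Spike-and-slab assessment of a subgroup effect in a logistic dose-toxicity
## model (illustrative grid approximation; hypothetical data and priors).
set.seed(2)
loglik <- function(a, b, beta, d) {
  eta <- a + b * d$dose + beta * d$sub
  sum(dbinom(d$tox, 1, plogis(eta), log = TRUE))
}
marginal <- function(d, slab) {                    # grid-based marginal likelihood
  g <- expand.grid(a = seq(-4, 0, 0.2), b = seq(0, 2, 0.1),
                   beta = if (slab) seq(-3, 3, 0.2) else 0)
  vol <- 0.2 * 0.1 * (if (slab) 0.2 else 1)        # grid cell volume
  w <- exp(mapply(loglik, g$a, g$b, g$beta, MoreArgs = list(d = d))) *
    dnorm(g$a, -2, 2) * dnorm(g$b, 1, 1) *
    (if (slab) dnorm(g$beta, 0, 2) else 1)         # N(0, 2^2) slab
  sum(w) * vol
}
d <- data.frame(dose = rep(1:4, each = 6) / 4,     # four dose levels
                sub  = rep(0:1, 12),               # subgroup membership
                tox  = rbinom(24, 1, plogis(-2 + 2 * rep(1:4, each = 6) / 4)))
m1 <- marginal(d, slab = TRUE)                     # subgroup effect included
m0 <- marginal(d, slab = FALSE)                    # spike: no subgroup effect
0.5 * m1 / (0.5 * m1 + 0.5 * m0)                   # posterior P(subgroup effect)
```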

    Sample size reassessment and hypothesis testing in adaptive survival trials

    Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.
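
    The standard device underlying such designs is the weighted inverse-normal combination of stage-wise statistics: because the weights are fixed at their pre-planned values, the combined statistic remains standard normal under the null even if the second stage is redesigned at the interim. A minimal sketch with hypothetical log-rank z-scores follows; the paper's refined statistic, which incorporates all event times, is more involved.

```r
## Weighted inverse-normal combination of stage-wise log-rank z-statistics
## (generic sketch; z-scores and weights are hypothetical).
combination_z <- function(z1, z2, w1 = sqrt(0.5)) {
  w2 <- sqrt(1 - w1^2)          # weights fixed at the design stage
  w1 * z1 + w2 * z2             # ~ N(0, 1) under the null hypothesis
}
z1 <- 1.4                       # interim log-rank z-score
z2 <- 1.8                       # z-score of the independent post-interim increment
pnorm(combination_z(z1, z2), lower.tail = FALSE)   # one-sided p-value
```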

    Tilting the lasso by knowledge-based post-processing

    Background It is useful to incorporate biological knowledge on the role of genetic determinants when predicting an outcome. It is, however, not always feasible to fully elicit this information when the number of determinants is large. We present an approach to overcome this difficulty. First, using half of the available data, a shortlist of potentially interesting determinants is generated. Second, binary indications of biological importance are elicited for this much smaller number of determinants. Third, an analysis is carried out on this shortlist using the second half of the data. Results We show through simulations that, compared with the adaptive lasso, this approach leads to models containing more biologically relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced. We also apply our approach to bone mineral density data; again the final models contain more biologically relevant variables and have reduced PMSEs. Conclusion Our method feasibly incorporates biological knowledge into predictive models, yielding comparable or improved predictive performance together with greater face validity and interpretability.
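
    The three steps can be sketched with the glmnet package, letting per-variable penalty factors play the role of the elicited importance indicators (a plausible mechanisation of the idea, not necessarily the authors' exact algorithm; the data and the 0.5 tilt are hypothetical).

```r
## Knowledge-tilted lasso in three steps (illustrative; hypothetical data).
library(glmnet)
set.seed(3)
n <- 200; p <- 50
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] - 0.8 * x[, 2] + rnorm(n)
half1 <- 1:(n / 2); half2 <- (n / 2 + 1):n

## Step 1: generate a shortlist on the first half of the data
fit1 <- cv.glmnet(x[half1, ], y[half1])
shortlist <- which(coef(fit1, s = "lambda.1se")[-1] != 0)

## Step 2: elicit binary biological importance for the shortlist (hypothetical)
important <- shortlist %in% c(1, 2)

## Step 3: refit on the second half, penalising 'important' variables less
fit2 <- cv.glmnet(x[half2, shortlist, drop = FALSE], y[half2],
                  penalty.factor = ifelse(important, 0.5, 1))
coef(fit2, s = "lambda.min")
```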

    Loss Functions in Restricted Parameter Spaces and Their Bayesian Applications

    Squared error loss remains the most commonly used loss function for constructing a Bayes estimator of the parameter of interest. However, it can lead to suboptimal solutions when a parameter is defined on a restricted space. It can also be an inappropriate choice in contexts where extreme overestimation and/or underestimation has severe consequences and a more conservative estimator is preferred. We advocate a class of loss functions for parameters defined on restricted spaces which infinitely penalize boundary decisions, as the squared error loss does on the real line. We also recall several properties of loss functions, such as symmetry, convexity and invariance. We propose generalizations of the squared error loss function for parameters defined on the positive real line and on an interval. We provide explicit solutions for the corresponding Bayes estimators and discuss multivariate extensions. Four well-known Bayesian estimation problems are used to demonstrate the inferential benefits the novel Bayes estimators can provide in the context of restricted estimation.
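
    As a hedged illustration of what such generalizations can look like (our notation, not necessarily the paper's exact forms), one can rescale the squared error so that the penalty diverges at the boundary of the restricted space, just as squared error itself diverges at plus/minus infinity on the real line:

```latex
% Illustrative boundary-penalizing rescalings of squared error loss
\[
  \theta \in (0,\infty):\quad
  L(\theta,\delta)=\frac{(\delta-\theta)^2}{\delta\,\theta},
  \qquad L\to\infty \text{ as } \delta\downarrow 0,
\]
\[
  \theta \in (0,1):\quad
  L(\theta,\delta)=\frac{(\delta-\theta)^2}{\delta\,(1-\delta)},
  \qquad L\to\infty \text{ as } \delta\downarrow 0 \text{ or } \delta\uparrow 1.
\]
```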

    A review of statistical updating methods for clinical prediction models

    A clinical prediction model (CPM) is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new CPM for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing CPMs already developed for use in similar contexts or populations. In addition, CPMs commonly become miscalibrated over time and need replacing or updating. In this paper we review a range of approaches for re-using and updating CPMs; these fall into three main categories: simple coefficient updating, combining multiple previous CPMs in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the UK. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, a breadth of complementary statistical tools exists for updating existing CPMs for a new population or context, and these should be considered before developing a new CPM from scratch.
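
    The simplest of the coefficient-updating strategies is logistic recalibration: the existing CPM's linear predictor is carried over and only an intercept (calibration-in-the-large) or an intercept and slope are re-estimated in the new data. A minimal sketch with hypothetical data and old coefficients:

```r
## Logistic recalibration of an existing CPM (hypothetical data/coefficients).
set.seed(4)
new_data <- data.frame(age = rnorm(500, 65, 10), ef = rnorm(500, 50, 8))
new_data$lp <- with(new_data, -3 + 0.04 * (age - 65) - 0.03 * (ef - 50))  # old CPM
new_data$died <- rbinom(500, 1, plogis(new_data$lp + 0.5))  # model is now miscalibrated

## intercept-only update (calibration-in-the-large):
upd1 <- glm(died ~ offset(lp), family = binomial, data = new_data)
## intercept + slope update (full logistic recalibration):
upd2 <- glm(died ~ lp, family = binomial, data = new_data)
coef(upd1); coef(upd2)
```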

    Modeling predictors of latent classes in regression mixture models

    The purpose of this study is to provide guidance on a process for including latent class predictors in regression mixture models. We first examine the performance of current practice for the 1-step and 3-step approaches where the direct covariate effect on the outcome is omitted. Neither approach shows adequate estimates of model parameters. Given that Step 1 of the 3-step approach shows adequate results in class enumeration, we suggest an alternative approach: (a) decide the number of latent classes without predictors of latent classes, and (b) bring the latent class predictors into the model with the inclusion of hypothesized direct covariate effects. Our simulations show that this approach leads to good estimates for all model parameters. The proposed approach is demonstrated using empirical data to examine the differential effects of family resources on students’ academic achievement. Implications of the study are discussed.
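
    The two-step process can be sketched with the flexmix package, assuming its documented stepFlexmix()/getModel() interface; the simulated data, the concomitant model for w and the direct effect of w on y are all hypothetical choices made for illustration.

```r
## Two-step fitting of a regression mixture with a latent-class predictor w
## (illustrative; assumes flexmix's documented interface).
library(flexmix)
set.seed(5)
n  <- 500
w  <- rbinom(n, 1, 0.5)                          # predictor of class membership
cl <- rbinom(n, 1, plogis(-0.5 + 1.5 * w)) + 1   # true latent class depends on w
x  <- rnorm(n)
y  <- ifelse(cl == 1, 1 + 0.2 * x, -1 + 1.0 * x) + 0.4 * w + rnorm(n, sd = 0.5)

## (a) enumerate classes with no predictors of class membership
enum   <- stepFlexmix(y ~ x, k = 1:3, nrep = 3)
k_best <- getModel(enum, which = "BIC")@k

## (b) refit with w predicting class membership AND a direct effect on y
fit <- flexmix(y ~ x + w, k = k_best, concomitant = FLXPmultinom(~ w))
summary(fit)
```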

    The R package MAMS for designing multi-arm multi-stage clinical trials

    In the early stages of drug development there is often uncertainty about the most promising among a set of different treatments, different doses of the same treatment, or combinations of treatments. Multi-arm multi-stage (MAMS) clinical studies provide an efficient solution for determining which intervention is most promising. In this paper we discuss the R package MAMS, which allows such studies to be designed within the group-sequential framework. The package implements MAMS studies with normal, binary, ordinal, or time-to-event endpoints in which either the single best treatment or all promising treatments are continued at the interim analyses. Additionally, unexpected design modifications can be accommodated via the conditional error approach. We provide illustrative examples of the use of the package based on real trial designs.
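
    A typical design call, assuming the package's documented mams() interface (argument names as we recall them from the package manual): four experimental arms against control over two stages, with interesting and uninteresting effects expressed on the P(X > Y) scale.

```r
## Two-stage, four-arm MAMS design (assumes the documented mams() interface).
library(MAMS)
des <- mams(K = 4, J = 2,               # 4 experimental arms, 2 stages
            alpha = 0.05, power = 0.9,  # familywise error / power requirements
            p = 0.75, p0 = 0.5,         # interesting / uninteresting effect sizes
            r = 1:2, r0 = 1:2,          # cumulative allocation ratios per stage
            ushape = "obf",             # O'Brien-Fleming-type upper boundary
            lshape = "fixed", lfix = 0) # futility boundary fixed at zero
print(des)                              # stopping boundaries and sample sizes
```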