3 research outputs found

    Applicability and added value of novel methods to improve drug development in rare diseases

    The ASTERIX project developed a number of novel methods suited to studying small populations. The objective of this exercise was to evaluate the applicability and added value of these novel methods for improving drug development in small populations, using real-world drug development programmes as reported in European Public Assessment Reports (EPARs). The applicability and added value of thirteen novel methods developed within ASTERIX were evaluated using data from 26 EPARs for orphan medicinal products, representative of rare medical conditions as predefined through six clusters. The novel methods covered 'innovative trial designs' (six methods), 'level of evidence' (one method), 'study endpoints and statistical analysis' (four methods), and 'meta-analysis' (two methods); they were selected from the methods developed within ASTERIX on the basis of their novelty, and methods describing strategies that are already available and applied were not included in this validation exercise. Prerequisites for application in a study were systematized for each method, and for each main study in the selected EPARs it was assessed whether all prerequisites were met. Direct applicability, using the actual study design, was assessed first. Second, applicability and added value were explored while allowing changes to study objectives and design, without deviating from the context of the drug development plan. We also evaluated whether differences in applicability and added value could be observed between the six predefined condition clusters. Direct applicability of the novel methods appeared to be limited to specific selected cases. The applicability and added value of the novel methods increased substantially when changes to the study setting within the context of drug development were allowed. In this setting, novel methods for extrapolation, sample size re-assessment, multi-armed trials, optimal sequential design for small sample sizes, Bayesian sample size re-estimation, dynamic borrowing through power priors, and fall-back tests for co-primary endpoints showed the most promise, each being applicable in more than 40% of the evaluated EPARs in all clusters. Most of the novel methods were applicable to conditions in the cluster of chronic and progressive conditions involving multiple systems/organs, whereas relatively few methods were applicable to acute conditions with single episodes. Goal Attainment Scaling was found to be particularly applicable in the chronic clusters, as opposed to the other (non-chronic) clusters. Novel methods as developed in ASTERIX can improve drug development programmes. Achieving their optimal added value often requires consideration of the entire drug development programme rather than reconsidering methods for a specific trial. The novel methods tested were mostly applicable to chronic conditions and to acute conditions with recurrent episodes. The online version of this article (10.1186/s13023-018-0925-0) contains supplementary material, which is available to authorized users.
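One of the method classes named above, dynamic borrowing through power priors, can be illustrated with a minimal sketch. The Python example below is not taken from the ASTERIX project or from the EPAR evaluation; the response counts, prior parameters, and fixed borrowing weight a0 are hypothetical, and the sketch only shows the basic mechanics of down-weighting a historical trial when forming a beta posterior for a binomial response rate.

```python
# Minimal sketch of dynamic borrowing for a binomial response rate via a
# fixed-weight power prior. Illustrative only: the counts and the weight a0
# below are hypothetical, and this is not the ASTERIX implementation.
from scipy.stats import beta

def power_prior_posterior(x_hist, n_hist, x_curr, n_curr, a0, a=1.0, b=1.0):
    """Beta posterior after down-weighting historical data by a0 in [0, 1].

    a0 = 0 ignores the historical trial, a0 = 1 pools it fully with the new data.
    """
    post_a = a + a0 * x_hist + x_curr
    post_b = b + a0 * (n_hist - x_hist) + (n_curr - x_curr)
    return beta(post_a, post_b)

# Hypothetical data: 18/40 responders in a historical trial, 9/20 in the new small trial.
post = power_prior_posterior(x_hist=18, n_hist=40, x_curr=9, n_curr=20, a0=0.5)
print("posterior mean response rate:", round(post.mean(), 3))
print("P(response rate > 0.30):", round(1 - post.cdf(0.30), 3))
```

Setting a0 = 0 discards the historical trial entirely and a0 = 1 pools it fully with the new data; more elaborate dynamic-borrowing approaches choose or adapt the weight based on the agreement between the two data sources, which this sketch does not attempt.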

    Optimal exact tests for multiple binary endpoints

    There is increasing interest in studying cures for rare diseases and in investigating the effect of medical treatments in specific small populations. In this setting, the number of patients that can be recruited to an individual trial is inherently small. The major challenges in the statistical analysis of data from such small trials are the overall low amount of information and the potentially insufficient accuracy of asymptotic approximations, which are nevertheless the foundation of many standard inference methods. Further, in many trials more than one outcome variable is observed to assess the effect of a treatment. This can provide an advantageous increase in information; at the same time, the family-wise type I error rate needs to be controlled, meaning that the statistical analysis must account for the increased probability of observing a seemingly strong effect in at least one outcome variable in the absence of a true treatment effect. The aim of this work is to provide exact hypothesis tests for the comparison of two treatment groups with respect to the success rates of multiple binary endpoints that satisfy certain optimality criteria while strictly controlling the family-wise type I error rate. In a first approach, the joint discrete permutation distribution of the marginal Fisher's exact test statistics is used to define multivariate rejection regions such that either the exhaustion of the nominal significance level, the number of elements in the rejection region, or the power under a specified alternative is maximized. In contrast to similar proposals in the literature, an additional constraint is introduced that ensures the rejection regions are contiguous sets. This constraint is required for an unambiguous interpretation of the test results, and it preserves the generally high power of the proposed tests in the multiple testing procedures derived from them via the closed testing principle. The underlying optimization problems are solved by a branch-and-bound algorithm or, alternatively, by formulation as binary integer linear programs. In a second approach, the marginal discrete distributions of the test statistics are used to define optimally weighted Bonferroni tests that meet optimality criteria analogous to those for the multivariate rejection regions. In addition, greedy algorithms are derived that, based on either the joint or the marginal discrete distributions of the test statistics, yield tests with close to optimal exhaustion of the nominal level and robust power properties. A numerical study of the power of the proposed methods under a wide range of scenarios with two or three binary endpoints shows that they outperform existing procedures such as the Bonferroni-Holm adjustment and the minP test. The application of the optimal exact tests developed here is therefore recommended to make the best use of the information contained in multiple binary endpoints in clinical trials with small sample sizes.
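As a rough illustration of the weighted-Bonferroni idea for discrete test statistics, the Python sketch below conditions on hypothetical margins, enumerates the attainable one-sided exact p-values for each of two binary endpoints, and splits the significance level so that the total spent level comes as close to alpha as possible. Group sizes, margins, and function names are invented for the example, and the sketch does not implement the thesis's actual constructions (contiguous multivariate rejection regions, branch and bound, integer programming).

```python
# Simplified sketch of an "optimally weighted Bonferroni" split for two binary
# endpoints. Conditional on the margins, the treatment-arm success count is
# hypergeometric, so each endpoint has a discrete set of attainable one-sided
# exact (Fisher) p-values; we pick one attainable threshold per endpoint so that
# the total spent level is as close to alpha as possible. All numbers are
# hypothetical; this is not the thesis's branch-and-bound construction.
from itertools import product
from scipy.stats import hypergeom

def attainable_pvalues(n1, n2, total_successes):
    """Attainable one-sided Fisher p-values given the group sizes and the success margin."""
    lo = max(0, total_successes - n2)
    hi = min(n1, total_successes)
    N = n1 + n2
    # P(X1 >= x1) under the hypergeometric null, for every possible 2x2 table.
    return sorted({float(hypergeom.sf(x1 - 1, N, total_successes, n1)) for x1 in range(lo, hi + 1)})

def optimal_split(p_sets, alpha=0.05):
    """Choose one attainable threshold (or 0) per endpoint, maximizing the spent level <= alpha."""
    best, best_sum = None, -1.0
    for combo in product(*[[0.0] + ps for ps in p_sets]):
        spent = sum(combo)
        if spent <= alpha and spent > best_sum:
            best, best_sum = combo, spent
    return best, best_sum

# Hypothetical trial: 12 patients per arm, observed margins of 10 and 14 successes.
p_sets = [attainable_pvalues(12, 12, 10), attainable_pvalues(12, 12, 14)]
thresholds, spent = optimal_split(p_sets, alpha=0.05)
print("per-endpoint thresholds:", [round(t, 4) for t in thresholds], "spent level:", round(spent, 4))
# Reject the null for endpoint j if its observed one-sided p-value is <= thresholds[j];
# the thresholds sum to at most alpha, so the family-wise type I error rate is controlled.
```

Maximizing the spent level corresponds to the 'exhaustion of the nominal significance level' criterion mentioned above; a power-oriented criterion would instead weight the split towards the endpoint with the larger assumed effect.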