
    Evaluierung und Verbesserung von Fallzahlrekalkulation in adaptiven klinischen Studiendesigns

    Background: A valid sample size calculation is a key aspect of ethical medical research. While the sample size must be large enough to detect an existing relevant effect with sufficient power, it is at the same time crucial to include as few patients as possible to minimize exposure to study-related risks and the time to potential market approval. Calculating the sample size requires assumptions on several parameters, such as the expected effect size and the outcome's variance. However, even with extensive medical knowledge it is often difficult to make reasonable assumptions about these parameters: published results from the literature may vary or may not be comparable to the current situation. Adaptive designs offer a possible solution to these planning difficulties: at an interim analysis, the standardized treatment effect is estimated and used to adapt the sample size. A variety of recalculation strategies exists in the literature, but defining performance criteria for these strategies is complex because the second-stage sample size is a random variable. It has also long been known that most existing sample size recalculation strategies have major shortcomings, such as high variability in the recalculated sample size. Methods: In Thesis Article 1, my coauthors and I developed a new performance score for comparing different sample size recalculation rules in a fair and transparent manner. This performance score is the basis for developing improved sample size recalculation strategies in a second step. In Thesis Article 2, my supervisor and I propose smoothing corrections that can be combined with existing sample size recalculation rules to reduce this variability. Thesis Article 3 treats the second-stage sample size as the numerical solution of a constrained optimization problem, which is solved by a new R package named adoptr. To illustrate how the three Thesis Articles relate, all new approaches are applied to a clinical trial example and their benefits are shown in comparison with an established sample size recalculation strategy. Results: My work considerably advanced the global aim of defining high-performance sample size recalculation rules. The performance of adaptive designs with sample size recalculation can now be compared by means of a single comprehensive score. Moreover, our new smoothing corrections provide one way to improve an existing sample size recalculation rule with respect to this new performance score, and the new software makes it possible to directly determine an optimal second-stage sample size with respect to predefined optimality criteria. Conclusions: I was able to reduce methodological shortcomings in sample size recalculation in four respects, providing new methods for 1) performance evaluation, 2) performance improvement, 3) performance optimization, and 4) software solutions. In addition, I illustrate how these methods can be combined and applied to a clinical trial example.

    A new conditional performance score for the evaluation of adaptive group sequential designs with sample size recalculation

    In standard clinical trial designs, the required sample size is fixed at the planning stage based on initial parameter assumptions. Intuitively, the correct choice of sample size is of major importance for the ethical justification of the trial. The required parameter assumptions should be based on previously published results from the literature; in clinical practice, however, historical data often do not exist or show highly variable results. Adaptive group sequential designs allow the sample size to be recalculated after a planned unblinded interim analysis and thus adjusted during the ongoing trial. So far, no common standards exist for assessing the performance of sample size recalculation rules. The performance criteria most commonly reported are the power and the average sample size; the variability of the recalculated sample size and the conditional power distribution are usually ignored. The need for an adequate performance score combining these relevant criteria is therefore evident. To judge the performance of an adaptive design, two perspectives are possible and may also be combined: either the global performance of the design is addressed, averaging over all possible interim results, or the conditional performance is addressed, focusing on the remaining performance given a specific interim result. In this work, we give a compact overview of sample size recalculation rules and performance measures. Moreover, we propose a new conditional performance score and apply it to various standard recalculation rules by means of Monte Carlo simulations.
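
    As a rough illustration of the Monte Carlo structure such an evaluation can take (this is not the score defined in the article; the recalculation rule and all numerical settings below are assumptions), one can simulate the interim test statistic, apply a recalculation rule, and summarize the location and variability of the recalculated sample size:

        # Illustrative Monte Carlo evaluation of a sample size recalculation
        # rule; all numerical choices are assumptions for illustration only.
        set.seed(42)

        delta  <- 0.3      # assumed true standardized treatment effect
        n1     <- 50       # first-stage sample size per group
        alpha  <- 0.025    # one-sided significance level
        target <- 0.8      # targeted conditional power
        n2_max <- 200      # cap on the recalculated second-stage sample size

        B  <- 1e5
        z1 <- rnorm(B, mean = delta * sqrt(n1 / 2), sd = 1)  # interim test statistics

        # Observed conditional power rule: choose n2 (per group) so that the
        # conditional power at the observed interim effect reaches the target.
        recalc_n2 <- function(z1) {
          d_obs <- z1 / sqrt(n1 / 2)                         # interim effect estimate
          n2    <- 2 * ((qnorm(1 - alpha) + qnorm(target)) / d_obs)^2
          pmin(ceiling(n2), n2_max)
        }

        n2 <- ifelse(z1 > 0, recalc_n2(z1), 0)               # futility stop if z1 <= 0

        # Location and variability of the recalculated sample size
        c(mean_n2 = mean(n2), sd_n2 = sd(n2))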

    Regenbogenfamilien - Sind homosexuelle Paare Eltern zweiter Klasse?: Kurzfassung

    Universität Erfurt, short version of the bachelor's thesis, created 08/201

    Regenbogenfamilien - Sind homosexuelle Paare Eltern zweiter Klasse?

    Univ. Erfurt, bachelor's thesis. Alongside the conventional family model, today's society features a variety of alternative family concepts. While patchwork families and single parents, for example, already enjoy broad acceptance, the model of the rainbow family is still familiar to only a few and apparently continues to struggle for equal standing with traditional notions of the family. Rainbow families, consisting of one or two cohabiting homosexual parents, carry the reputation of not providing an adequate family environment for children, a judgment based solely on the parents' sexuality. In our bachelor's thesis we examined these prejudices and conducted a qualitative study on the topic.

    The adoptr Package: Adaptive Optimal Designs for Clinical Trials in R

    Even though adaptive two-stage designs with unblinded interim analyses are becoming increasingly popular in clinical trials, statistical software that makes their application straightforward has been lacking. The package adoptr fills this gap for the common case of two-stage one- or two-arm trials with (approximately) normally distributed outcomes. In contrast to previous approaches, adoptr optimizes the entire design upfront, which allows maximal efficiency. To facilitate experimentation with different objective functions, adoptr lets the user flexibly specify both (composite) objective scores and (conditional) constraints. Special emphasis was put on providing means to aid practitioners in validating the package.
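
    A minimal usage sketch in R, assuming the interface described in the package documentation (constructors such as Normal and PointMassPrior, scores such as Power and ExpectedSampleSize, and the minimize/subject_to pair; exact signatures may differ between versions, and all numerical values are illustrative):

        # Minimal adoptr sketch: minimize expected sample size subject to
        # power and type I error constraints. Numerical values are assumed.
        library(adoptr)

        datadist <- Normal(two_armed = FALSE)        # one-arm, normal outcome
        H_0      <- PointMassPrior(0.0, 1)           # prior mass at the null effect
        H_1      <- PointMassPrior(0.4, 1)           # assumed alternative effect

        ess   <- ExpectedSampleSize(datadist, H_1)   # objective score
        power <- Power(datadist, H_1)
        toer  <- Power(datadist, H_0)                # type I error = power under H_0

        initial <- get_initial_design(theta = 0.4, alpha = 0.025, beta = 0.2)

        opt <- minimize(ess, subject_to(power >= 0.8, toer <= 0.025), initial)
        summary(opt$design)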

    Improving sample size recalculation in adaptive clinical trials by resampling

    Sample size calculations in clinical trials need to be based on profound parameter assumptions. Wrong parameter choices may lead to sample sizes that are too small or too large and can have severe ethical and economic consequences. Adaptive group sequential study designs are one way to deal with planning uncertainties: the sample size can be updated during an ongoing trial based on the observed interim effect. However, the observed interim effect is a random variable and thus does not necessarily correspond to the true effect. One way of dealing with the uncertainty related to this random variable is to include resampling elements in the recalculation strategy. In this paper, we focus on clinical trials with a normally distributed endpoint. We consider resampling of the observed interim test statistic and apply this principle to several established sample size recalculation approaches. The resulting recalculation rules are smoother than the original ones, so the variability in the sample size is lower. In particular, we found that some resampling approaches mimic a group sequential design. In general, incorporating resampling of the interim test statistic into existing sample size recalculation rules yields a substantial performance improvement with respect to a recently published conditional performance score.
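
    One plausible reading of the resampling principle, sketched in R (an illustration under assumptions, not the exact procedure of the article): rather than plugging the observed interim test statistic directly into a recalculation rule, draw resampled statistics from its estimated distribution and average the recalculated sample sizes, which smooths the rule. The base rule below is the same illustrative observed conditional power rule as in the sketch further above.

        # Illustrative smoothing of a recalculation rule by resampling the
        # interim test statistic; constants and the base rule are assumptions.
        set.seed(1)

        n1     <- 50
        alpha  <- 0.025
        target <- 0.8
        n2_max <- 200

        # Base rule: observed conditional power, capped at n2_max
        base_rule <- function(z1) {
          d_obs <- z1 / sqrt(n1 / 2)
          n2 <- ifelse(d_obs > 0,
                       2 * ((qnorm(1 - alpha) + qnorm(target)) / d_obs)^2, 0)
          pmin(ceiling(n2), n2_max)
        }

        # Smoothed rule: average the base rule over resampled statistics
        # z* ~ N(z1, 1), the estimated distribution of the interim statistic
        smoothed_rule <- function(z1, B = 1e4) {
          z_star <- rnorm(B, mean = z1, sd = 1)
          round(mean(base_rule(z_star)))
        }

        z_grid <- seq(0, 3, by = 0.5)
        cbind(z1       = z_grid,
              original = base_rule(z_grid),
              smoothed = sapply(z_grid, smoothed_rule))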

    Statistical model building: Background “knowledge” based on inappropriate preselection causes misspecification

    Background: Statistical model building requires selecting variables for a model according to the model's aim. For descriptive and explanatory models, a common recommendation in the literature is to include all variables assumed or known to be associated with the outcome, independently of whether data-driven selection procedures identify them. An open question is how reliable this assumed "background knowledge" truly is. In fact, "known" predictors might be findings from preceding studies that themselves employed inappropriate model building strategies. Methods: We conducted a simulation study assessing the influence of treating variables as "known predictors" in model building when the knowledge from preceding studies might in fact be insufficient. Within randomly generated preceding-study data sets, model building with variable selection was conducted. A variable was subsequently considered a "known" predictor if a predefined number of preceding studies identified it as relevant. Results: Even if several preceding studies identified a variable as a "true" predictor, this classification is often a false positive. Moreover, variables that were not identified might still be truly predictive. This holds especially when the preceding studies employed inappropriate selection methods such as univariable selection. Conclusions: The source of "background knowledge" should be evaluated with care. Knowledge generated in preceding studies can cause misspecification.
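
    A compressed R sketch of this type of simulation (the data-generating model, the correlation between candidate predictors, and the univariable selection step are illustrative assumptions, not the article's exact setup):

        # Illustrative simulation: how often does univariable preselection
        # across several "preceding studies" label a truly null variable a
        # "known" predictor? All settings are assumptions for illustration.
        set.seed(7)

        p         <- 10     # candidate predictors; only the first two are truly predictive
        beta      <- c(0.5, 0.5, rep(0, p - 2))
        n         <- 100    # observations per preceding study
        n_studies <- 5      # number of preceding studies
        need_hits <- 3      # "known" if selected in at least need_hits studies
        rho       <- 0.5    # correlation among candidate predictors
        n_sim     <- 500

        known <- replicate(n_sim, {
          hits <- replicate(n_studies, {
            common <- rnorm(n)               # shared factor inducing correlation
            x <- sqrt(rho) * common + sqrt(1 - rho) * matrix(rnorm(n * p), n, p)
            y <- drop(x %*% beta) + rnorm(n)
            # univariable selection: keep variables with marginal p-value < 0.05
            sapply(1:p, function(j) summary(lm(y ~ x[, j]))$coefficients[2, 4] < 0.05)
          })
          rowSums(hits) >= need_hits
        })

        # Per-variable probability of being classified as a "known" predictor;
        # entries 3..p show how often null variables become false "knowledge"
        rowMeans(known)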

    Citizen science’s transformative impact on science, citizen empowerment and socio-political processes

    Citizen science (CS) can foster transformative impact for science, citizen empowerment and socio-political processes. To unleash this impact, a clearer understanding of its current status and of the challenges to its development is needed. Using quantitative indicators developed in a collaborative stakeholder process, our study provides a comprehensive overview of the current status of CS in Germany, Austria and Switzerland. Our online survey with 340 responses focused on CS impact through (1) scientific practices, (2) participant learning and empowerment, and (3) socio-political processes. With regard to scientific impact, we found that data quality control is an established component of CS practice, while publication of CS data and results has not yet been achieved by all project coordinators (55%). Key benefits for citizen scientists were the experience of collective impact (“making a difference together with others”) and gaining new knowledge. For citizen scientists’ learning outcomes, different forms of social learning, such as systematic feedback or personal mentoring, were essential. While the majority of respondents attributed an important value to CS for decision-making, only a few were confident that CS data were indeed used as evidence by decision-makers. Based on these results, we recommend (1) that project coordinators and researchers strengthen scientific impact by fostering data management and publication, (2) that project coordinators and citizen scientists enhance participant impact by promoting social learning opportunities, and (3) that project initiators and CS networks foster socio-political impact through early engagement with decision-makers and alignment with ongoing policy processes. In this way, CS can develop its transformative impact.