
    The group-based social skills training SOSTA-FRA in children and adolescents with high functioning autism spectrum disorder - study protocol of the randomised, multi-centre controlled SOSTA - net trial

    Background: Group-based social skills training (SST) has repeatedly been recommended as the treatment of choice in high-functioning autism spectrum disorder (HFASD). To date, no sufficiently powered randomised controlled trial has been performed to establish the efficacy and safety of SST in children and adolescents with HFASD. In this randomised, multi-centre, controlled trial with 220 children and adolescents with HFASD, it is hypothesised that add-on group-based SST using the 12-week manualised SOSTA-FRA program will result in improved social responsiveness (measured by the parent-rated Social Responsiveness Scale, SRS) compared with treatment as usual (TAU). It is further expected that parent- and self-reported anxiety and depressive symptoms will decline and pro-social behaviour will increase in the treatment group. A neurophysiological study in the Frankfurt HFASD subgroup will be performed pre- and post-treatment to assess changes in neural function induced by SST versus TAU. Methods/design: The SOSTA-net trial is designed as a prospective, randomised, multi-centre, controlled trial with two parallel groups. The primary outcome is the change in SRS score directly after the intervention and at 3-month follow-up. Several secondary outcome measures are also obtained. The target sample consists of 220 individuals with ASD, recruited at the six study centres. Discussion: This study is currently one of the largest trials on SST in children and adolescents with HFASD worldwide. Compared with recent randomised controlled studies, our study offers several advantages with regard to inclusion and exclusion criteria, study methods, and the therapeutic approach chosen, which can be easily implemented in non-university clinical settings. Trial registration: ISRCTN94863788 - SOSTA-net: Group-based social skills training in children and adolescents with high functioning autism spectrum disorder.

    OneArmPhaseTwoStudy: An R Package for Planning, Conducting, and Analysing Single-Arm Phase II Studies

    In clinical phase II studies, the efficacy of a promising therapy is tested in patients for the first time. Based on the results, it is decided whether the development programme should be stopped or whether the benefit-risk profile is promising enough to justify the initiation of large phase III studies. In oncology, phase II trials are commonly conducted as single-arm trials with planned interim analyses that allow early stopping for futility. The specification of an adequate study design that guarantees control of the type I and II error rates is a key task in the planning stage of such a trial. A variety of statistical methods exists to optimise the planning and analysis of such studies. However, no commercial or non-commercial software tools are currently available that comprehensively support the practical application of these methods. The R package OneArmPhaseTwoStudy was implemented to fill this gap. The package allows the user to determine an adequate study design for the situation at hand, to monitor the progress of the study, and to evaluate the results with valid and efficient analysis methods. This article describes the features of the R package and its application.
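
    The operating characteristics the package computes can be illustrated in a few lines. The sketch below (plain Python rather than the R package itself) evaluates the type I error, power, and early-termination probability of a generic two-stage single-arm design with a futility stop; the design parameters (r1 = 3, n1 = 13, r = 12, n = 43, p0 = 0.2, p1 = 0.4) are illustrative assumptions, not taken from the article.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def prob_early_stop(r1, n1, p):
    """Probability of stopping for futility after stage 1
    (at most r1 responses among the first n1 patients)."""
    return sum(binom_pmf(x, n1, p) for x in range(r1 + 1))

def prob_reject_h0(r1, n1, r, n, p):
    """Probability of declaring the therapy promising: continue past
    stage 1 only if responses exceed r1, then reject H0 if the total
    number of responses among all n patients exceeds r."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        for x2 in range(n - n1 + 1):
            if x1 + x2 > r:
                total += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return total

# Illustrative design: stop if <= 3/13 responses, reject H0 if > 12/43.
alpha = prob_reject_h0(3, 13, 12, 43, p=0.2)   # type I error under p0 = 0.2
power = prob_reject_h0(3, 13, 12, 43, p=0.4)   # power under p1 = 0.4
print(f"type I error ~ {alpha:.4f}, power ~ {power:.4f}")
```

    Searching over (r1, n1, r, n) to minimise the expected sample size subject to error-rate constraints is exactly the kind of planning task the package automates.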

    Confidence regions for treatment effects in subgroups in biomarker stratified designs

    Subgroup analysis has important applications in the analysis of controlled clinical trials. Sometimes the overall group fails to demonstrate that the new treatment is better than the control therapy, yet a treatment benefit may exist for a subgroup of patients; conversely, the new treatment may be better for the overall group but not for a subgroup. Hence we are interested in constructing simultaneous confidence intervals for the difference between the treatment effects in a subgroup and in the overall group. Subgroups are usually formed on the basis of a predictive biomarker such as age, sex, or some genetic marker. Whereas, for example, age can be determined precisely, biomarker status can often only be detected with a certain probability. Because patients classified as biomarker positive or negative may not truly be so, responses in the subgroups depend not only on the treatment but also on the sensitivity and specificity of the assay used to detect the biomarker. In this work, we show how (approximate) simultaneous confidence intervals and confidence ellipsoids for the treatment effects in subgroups can be constructed for biomarker-stratified clinical trials in a normal framework with normally distributed or binary data. We show via simulations that these intervals maintain the nominal confidence level.
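
    A much cruder alternative to the paper's approximate simultaneous intervals is a Bonferroni split of the error rate across the subgroup and overall comparisons. The sketch below is a minimal simulation of that idea under an assumed data model (normal outcomes, biomarker prevalence 0.3, true effects 0.8 in the subgroup and 0.2 outside it, perfect assay); none of these numbers come from the abstract, and the paper's own construction is tighter than Bonferroni.

```python
import random
import statistics
from statistics import NormalDist

def diff_ci(x, y, z):
    """Normal-approximation CI for mean(y) - mean(x)."""
    d = statistics.fmean(y) - statistics.fmean(x)
    se = (statistics.variance(x) / len(x) + statistics.variance(y) / len(y)) ** 0.5
    return d - z * se, d + z * se

def covers(ci, truth):
    return ci[0] <= truth <= ci[1]

rng = random.Random(1)
prev, d_pos, d_neg = 0.3, 0.8, 0.2            # assumed prevalence and true effects
truth_sub = d_pos                             # subgroup treatment effect
truth_all = prev * d_pos + (1 - prev) * d_neg # overall treatment effect
z = NormalDist().inv_cdf(1 - 0.05 / 4)        # Bonferroni split of alpha = 0.05

hits, reps = 0, 1000
for _ in range(reps):
    ctrl, trt, ctrl_pos, trt_pos = [], [], [], []
    for arm, overall, sub in ((0, ctrl, ctrl_pos), (1, trt, trt_pos)):
        for _ in range(200):                  # 200 patients per arm
            pos = rng.random() < prev
            y = rng.gauss(arm * (d_pos if pos else d_neg), 1)
            overall.append(y)
            if pos:
                sub.append(y)
    # joint coverage: both intervals must contain their true effect
    hits += covers(diff_ci(ctrl, trt, z), truth_all) and \
            covers(diff_ci(ctrl_pos, trt_pos, z), truth_sub)

coverage = hits / reps
print(f"joint coverage ~ {coverage:.3f}")
```

    Because the subgroup and overall estimates are positively correlated, Bonferroni is conservative here; exploiting that correlation is precisely what motivates the simultaneous intervals of the paper.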

    An alternative method to analyse the Biomarker-strategy design

    Recent developments in genomics and proteomics enable the discovery of biomarkers that allow identification of subgroups of patients responding well to a treatment. One currently used clinical trial design incorporating a predictive biomarker is the so-called biomarker-strategy design (or marker-based strategy design). Conventionally, the results from this design are analysed by comparing the mean of the biomarker-led arm with the mean of the randomised arm. Several problems regarding the analysis of data obtained from this design have been identified in the literature. In this paper, we show how these problems can be resolved if the sample sizes in the subgroups satisfy a specified orthogonality condition. We also propose a novel analysis strategy that allows test statistics to be defined for the biomarker-by-treatment interaction effect as well as for the classical treatment effect and the biomarker effect. We derive equations for the sample size calculation for the cases of perfect and imperfect biomarker assays. We also show that the often-used 1:1 randomisation does not necessarily lead to the smallest sample size. Application of the novel method is illustrated with a real data example.
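
    The claim that 1:1 randomisation need not minimise the sample size has a simple generic analogue (not the paper's derivation): for a two-arm comparison of means with unequal arm variances, the standard normal-approximation sample size formula is minimised by Neyman-style allocation proportional to the standard deviations. The variances and effect size below are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def total_n(f, sigma1, sigma2, delta, alpha=0.05, beta=0.2):
    """Total sample size for a two-arm comparison of means when a
    fraction f of patients is allocated to arm 1, using the standard
    normal-approximation formula for a two-sided test."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(1 - beta)
    return ceil((sigma1 ** 2 / f + sigma2 ** 2 / (1 - f)) * (z / delta) ** 2)

sigma1, sigma2, delta = 2.0, 1.0, 0.5       # hypothetical SDs and effect
n_equal = total_n(0.5, sigma1, sigma2, delta)   # 1:1 allocation
f_opt = sigma1 / (sigma1 + sigma2)              # Neyman allocation (2:1 here)
n_opt = total_n(f_opt, sigma1, sigma2, delta)
print(f"1:1 allocation needs {n_equal} patients, optimal allocation needs {n_opt}")
```

    With these numbers the 2:1 allocation needs roughly 10% fewer patients than 1:1; the paper's result concerns the randomisation ratio within the biomarker-strategy design specifically, but the intuition is the same.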

    Should the two‐trial paradigm still be the gold standard in drug assessment?

    Two significant pivotal trials are usually required for the approval of a new drug by a regulatory agency. This standard requirement is known as the two-trial paradigm. However, several authors have questioned why exactly two pivotal trials are needed, which statistical error the regulators are trying to protect against, and whether alternative approaches should be considered. It is therefore important to investigate these questions to better understand regulatory decision-making in the assessment of a drug's effectiveness. It is common for two identically designed trials to be run solely to adhere to the two-trial rule. Previous work showed that combining the data from the two trials into a single trial (the one-trial paradigm) would increase power while ensuring the same level of type I error protection as the two-trial paradigm. However, this holds only under a specific scenario, and the type I error protection over the whole null region has received little investigation. In this article, we compare the two paradigms under scenarios in which the two trials are conducted in identical or different populations, with equal or unequal sample sizes. With identical populations, the results show that a single trial provides better type I error protection and higher power. Conversely, with different populations, although the one-trial rule is more powerful in some cases, it does not always protect against the type I error. Hence there is a need for appropriate flexibility around the two-trial paradigm, and the approach should be chosen according to the questions of interest.
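
    The identical-populations comparison can be sketched numerically. Requiring two independent trials each significant at one-sided 0.025 gives an overall type I error of 0.025 squared; a single pooled trial calibrated to that same overall level rejects on the combined z-statistic. The drift value below is an illustrative assumption (chosen to give roughly 80% power per individual trial), not a figure from the article.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
alpha = 0.025                          # one-sided level per pivotal trial
c_two = nd.inv_cdf(1 - alpha)          # ~1.96, applied to each trial separately
overall_alpha = alpha ** 2             # both trials must succeed: 0.025^2
c_one = nd.inv_cdf(1 - overall_alpha)  # pooled-trial critical value, same type I error

def power_two_trials(mu):
    """P(both z-statistics exceed c_two) when each has mean mu."""
    return (1 - nd.cdf(c_two - mu)) ** 2

def power_one_trial(mu):
    """Pooled z-statistic (z1 + z2) / sqrt(2) has mean sqrt(2) * mu."""
    return 1 - nd.cdf(c_one - sqrt(2) * mu)

mu = 2.8   # illustrative drift per trial
print(f"two-trial power: {power_two_trials(mu):.3f}, "
      f"one-trial power: {power_one_trial(mu):.3f}")
```

    In this identical-populations setting the pooled rule dominates, which matches the abstract's conclusion; the abstract's caveat is that this ordering can fail once the two trials sample different populations.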

    Clinical trials impacted by the COVID-19 pandemic : adaptive designs to the rescue?

    Very recently, the new pathogen severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was identified and the coronavirus disease 2019 (COVID-19) was declared a pandemic by the World Health Organization. The pandemic has a number of consequences for ongoing clinical trials in non-COVID-19 conditions. Motivated by four current clinical trials in a variety of disease areas, we illustrate the challenges posed by the pandemic and sketch possible solutions, including adaptive designs. Guidance is provided on (i) where blinded adaptations can help; (ii) how to achieve type I error rate control, if required; (iii) how to deal with potential treatment effect heterogeneity; (iv) how to utilise early read-outs; and (v) how to utilise Bayesian techniques. In more detail, approaches to resizing a trial affected by the pandemic are developed, including considerations of stopping a trial early, the use of group-sequential designs, and sample size adjustment. All methods considered are implemented in a freely available R Shiny app. Furthermore, regulatory and operational issues, including the role of data monitoring committees, are discussed.
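
    One of the blinded adaptations mentioned, sample size re-estimation, can be sketched with the standard normal-approximation formula: at a blinded interim look, the pooled variance estimate replaces the planning assumption and the per-arm sample size is recomputed. The numbers below are hypothetical and the formula is the generic textbook one, not a method specific to the article.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, beta=0.2):
    """Per-arm sample size for a two-sided two-sample z-test,
    computed from an (possibly blinded interim) SD estimate sigma."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(1 - beta)
    return ceil(2 * (z * sigma / delta) ** 2)

planned = n_per_arm(sigma=1.0, delta=0.5)   # planning-stage assumption
# Blinded interim look: the pooled SD came out larger than assumed,
# e.g. because pandemic disruption added variability to the endpoint.
revised = n_per_arm(sigma=1.3, delta=0.5)
print(f"planned n/arm: {planned}, revised n/arm: {revised}")
```

    Because the re-estimation uses only pooled (blinded) data, it typically requires no type I error adjustment, which is why the abstract lists blinded adaptations first.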

    Evaluation of Presumably Disease Causing SCN1A Variants in a Cohort of Common Epilepsy Syndromes

    A. Palotie is a member of the working group. Objective: The SCN1A gene, coding for the voltage-gated Na+ channel alpha subunit NaV1.1, is the clinically most relevant epilepsy gene. With the advent of high-throughput next-generation sequencing, clinical laboratories are generating an ever-increasing catalogue of SCN1A variants. Variants are more likely to be classified as pathogenic if they have already been identified previously in a patient with epilepsy. Here, we critically re-evaluate the pathogenicity of this class of variants in a cohort of patients with common epilepsy syndromes and subsequently ask whether a significant fraction of benign variants have been misclassified as pathogenic. Methods: We screened a discovery cohort of 448 patients with a broad range of common genetic epilepsies and 734 controls for previously reported SCN1A mutations that were assumed to be disease causing. We re-evaluated the evidence for pathogenicity of the identified variants using in silico predictions, segregation, original reports, available functional data, and assessment of allele frequencies in healthy individuals as well as in a follow-up cohort of 777 patients. Results and Interpretation: We identified 8 known missense mutations, previously reported as pathogenic, in a total of 17 unrelated epilepsy patients (17/448; 3.80%). Our re-evaluation indicates that 7 of these 8 variants (p.R27T, p.R28C, p.R542Q, p.R604H, p.T1250M, p.E1308D, p.R1928G; NP_001159435.1) are not pathogenic. Only the p.T1174S mutation may be considered a genetic risk factor for epilepsy of small effect size, based on its enrichment in patients (P = 6.60 × 10^-4; OR = 0.32, Fisher's exact test) and previous functional studies, although with incomplete penetrance. Thus, incorporation of previous studies in genetic counselling of SCN1A sequencing results is challenging and may produce incorrect conclusions. Peer reviewed.

    Estimation of secondary endpoints in two-stage phase II oncology trials

    In the development of a new treatment in oncology, phase II trials play a key role. On the basis of the data obtained during phase II, it is decided whether the treatment should be studied further. The decision made on the basis of phase II data must therefore be as accurate as possible. For ethical and economic reasons, phase II trials are usually performed with a planned interim analysis. Furthermore, the decision about stopping or continuing the study is usually based on a short-term outcome like tumor response, whereas secondary endpoints comprise stable disease, progressive disease, toxicity, and/or overall survival. The data obtained in a phase II trial are often analysed and interpreted by applying the maximum likelihood estimator (MLE) without taking the sequential nature of the trial into account. However, this approach yields biased results and may therefore lead to wrong conclusions. Whereas unbiased estimators for two-stage designs have been derived for the primary endpoint, such estimators are currently not available for secondary endpoints. We present uniformly minimum variance unbiased estimators (UMVUEs) for secondary endpoints in two-stage designs that allow stopping for futility (and efficacy). We compare the mean squared error of the UMVUE and the MLE and investigate the efficiency of the UMVUE. A clinical trial example illustrates the application.
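
    The bias of the naive MLE under sequential sampling can be made concrete by exact enumeration. The sketch below computes the expectation of the response-rate MLE in a generic two-stage design with a futility stop (estimate x1/n1 if stopped, (x1 + x2)/n if continued); the design parameters and true rate are illustrative assumptions, and the example concerns the primary endpoint rather than the secondary-endpoint UMVUEs the paper develops.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def expected_mle(r1, n1, n, p):
    """Exact expectation of the naive response-rate MLE in a two-stage
    design that stops for futility when stage-1 responses are <= r1."""
    e = 0.0
    for x1 in range(n1 + 1):
        p1 = binom_pmf(x1, n1, p)
        if x1 <= r1:                          # stopped: estimate is x1/n1
            e += p1 * x1 / n1
        else:                                 # continued: estimate is (x1+x2)/n
            for x2 in range(n - n1 + 1):
                e += p1 * binom_pmf(x2, n - n1, p) * (x1 + x2) / n
    return e

p = 0.3
e = expected_mle(r1=3, n1=13, n=43, p=p)
print(f"true rate {p}, E[MLE] ~ {e:.4f}, bias ~ {e - p:+.4f}")
```

    With these parameters the naive estimator underestimates the true rate by several percentage points, illustrating why sequentially valid estimators such as the UMVUE are needed.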