8 research outputs found


    against later initiation of treatment. Most studies were of limited quality. We found indications of short-term positive effects of language therapies in children with SLI. Long-term outcomes were not investigated. No evidence supporting an advantage of earlier treatment initiation was identified. Conclusions: The benefit of population-based language screening of preschool children with SLI is not proven. Controlled screening studies are therefore necessary. For Germany, the accuracy of existing diagnostic instruments has not yet been sufficiently examined.

    On estimands and the analysis of adverse events in the presence of varying follow-up times within the benefit assessment of therapies

    The analysis of adverse events (AEs) is a key component in the assessment of a drug's safety profile. Inappropriate analysis methods may result in misleading conclusions about a therapy's safety and consequently its benefit-risk ratio. The statistical analysis of AEs is complicated by the fact that follow-up times can vary between the patients included in a clinical trial. This paper focuses on the analysis of AE data in the presence of varying follow-up times within the benefit assessment of therapeutic interventions. Instead of approaching this issue directly and solely from an analysis point of view, we first discuss what should be estimated in the context of safety data, leading to the concept of estimands. Although the current discussion on estimands is mainly related to efficacy evaluation, the concept is applicable to safety endpoints as well. Within the framework of estimands, we present statistical methods for analysing AEs, with the focus on the time to the occurrence of the first AE of a specific type. We give recommendations on which estimators should be used for the estimands described. Furthermore, we state practical implications of the analysis of AEs in clinical trials and give an overview of examples across different indications. We also provide a review of current practices of health technology assessment (HTA) agencies with respect to the evaluation of safety data. Finally, we describe problems with meta-analyses of AE data and sketch possible solutions.
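A minimal sketch, with entirely hypothetical numbers, of the core problem the abstract describes: when follow-up times vary, a crude incidence proportion and an exposure-adjusted incidence rate can tell different stories about the same AE data.

```python
# Hypothetical patient-level data: follow-up time in years and whether a
# first AE of a given type occurred during follow-up.
followup_years = [0.5, 1.0, 2.0, 2.0, 0.25, 1.5]
had_ae = [1, 0, 1, 0, 1, 0]

# Crude incidence proportion: ignores how long each patient was observed.
crude = sum(had_ae) / len(had_ae)

# Exposure-adjusted incidence rate (events per patient-year): accounts for
# differing observation times. For simplicity, time at risk is taken as the
# full follow-up time here, rather than censoring at the first AE.
rate = sum(had_ae) / sum(followup_years)

print(f"crude proportion: {crude:.3f}")
print(f"events per patient-year: {rate:.3f}")
```

The comparison illustrates why the estimand must be fixed first: the two numbers answer different questions about the same safety data.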

    Reporting of loss to follow-up information in randomised controlled trials with time-to-event outcomes: a literature survey

    Background: To assess the reporting of loss to follow-up (LTFU) information in articles on randomised controlled trials (RCTs) with time-to-event outcomes, and to assess whether discrepancies affect the validity of study results.

    Methods: Literature survey of all issues of the BMJ, Lancet, JAMA, and New England Journal of Medicine published between 2003 and 2005. Eligible articles were reports of RCTs including at least one Kaplan-Meier plot. Articles were classified as "assessable" if sufficient information was available to assess LTFU. In these articles, LTFU information was derived from Kaplan-Meier plots, extracted from the text, and compared. Articles were then classified as "consistent" or "not consistent". Sensitivity analyses were performed to assess the validity of study results.

    Results: 319 eligible articles were identified. 187 (59%) were classified as "assessable", as they included sufficient information for evaluation; 140 of 319 (44%) presented consistent LTFU information between the Kaplan-Meier plot and the text. 47 of 319 (15%) were classified as "not consistent". These 47 articles were included in sensitivity analyses. When various imputation methods were used, the results of a chi-squared test applied to the corresponding 2 × 2 table changed, and hence were not robust, in about half of the studies.

    Conclusions: Less than half of the articles on RCTs using Kaplan-Meier plots provide assessable and consistent LTFU information, which calls into question the validity of the results and conclusions of many studies presenting survival analyses. Authors should improve the presentation of both Kaplan-Meier plots and LTFU information, and reviewers of study publications and journal editors should critically appraise the validity of the information provided.
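A sketch of the kind of sensitivity analysis the abstract describes, using hypothetical counts: patients lost to follow-up are imputed under two extreme assumptions, and a chi-squared test on the resulting 2 × 2 table is repeated to see whether the conclusion survives.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical trial counts: observed events, non-events, and patients
# lost to follow-up in each arm.
events = {"treatment": 30, "control": 45}
no_event = {"treatment": 170, "control": 155}
lost = {"treatment": 20, "control": 5}

# One extreme: all lost treatment patients had the event, no lost
# control patients did.
worst = chi2_2x2(events["treatment"] + lost["treatment"], no_event["treatment"],
                 events["control"], no_event["control"] + lost["control"])
# The opposite extreme imputation.
best = chi2_2x2(events["treatment"], no_event["treatment"] + lost["treatment"],
                events["control"] + lost["control"], no_event["control"])

# With these counts the two extremes straddle the 5% critical value of 3.84,
# so the significance conclusion is not robust to the missing LTFU data.
print(f"chi2 under extreme imputations: {worst:.2f} vs {best:.2f}")
```

All counts are invented for illustration; the point is the mechanics of the robustness check, not the specific values.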

    Developing search strategies for clinical practice guidelines in SUMSearch and Google Scholar and assessing their retrieval performance

    Background: Information overload, increasing time constraints, and inappropriate search strategies complicate the detection of clinical practice guidelines (CPGs). The aim of this study was to provide clinicians with recommendations for search strategies to efficiently identify relevant CPGs in SUMSearch and Google Scholar.

    Methods: We compared the retrieval efficiency (retrieval performance) of search strategies for identifying CPGs in SUMSearch and Google Scholar. For this purpose, a two-term GLAD (GuideLine And Disease) strategy was developed, combining a defined CPG term with a specific disease term (MeSH term). We used three different CPG terms and nine MeSH terms for nine selected diseases to identify the most efficient GLAD strategy for each search engine. The retrievals for the nine diseases were pooled. To compare GLAD strategies, we used a manual review of all retrievals as the reference standard. The CPGs detected had to fulfil predefined criteria, e.g. the inclusion of therapeutic recommendations. Retrieval performance was evaluated by calculating diagnostic parameters (sensitivity, specificity, and the "Number Needed to Read" [NNR]) for the search strategies.

    Results: The search yielded a total of 2830 retrievals: 987 (34.9%) in Google Scholar and 1843 (65.1%) in SUMSearch. Altogether, we found 119 unique and relevant guidelines for the nine diseases (reference standard). Overall, the GLAD strategies showed better retrieval performance in SUMSearch than in Google Scholar. The performance pattern was similar between the search engines: search strategies including the term "guideline" yielded the highest sensitivity (SUMSearch: 81.5%; Google Scholar: 31.9%), whereas search strategies including the term "practice guideline" yielded the highest specificity (SUMSearch: 89.5%; Google Scholar: 95.7%) and the lowest NNR (SUMSearch: 7.0; Google Scholar: 9.3).

    Conclusion: SUMSearch is a useful tool for swiftly gaining an overview of available CPGs. Its retrieval performance is superior to that of Google Scholar, where a search is more time-consuming, as substantially more retrievals have to be reviewed to detect one relevant CPG. In both search engines, the CPG term "guideline" should be used to obtain a comprehensive overview of CPGs, and the term "practice guideline" should be used if a less time-consuming approach to the detection of CPGs is desired.
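The three retrieval-performance measures used above can be sketched as follows. The counts are hypothetical (only the 119 relevant guidelines and the SUMSearch sensitivity/NNR figures come from the abstract; the false-positive and true-negative counts are invented so the illustrative values line up).

```python
def retrieval_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and Number Needed to Read (NNR).

    tp: relevant guidelines retrieved; fp: irrelevant retrievals;
    fn: relevant guidelines missed; tn: irrelevant items correctly excluded.
    NNR is the reciprocal of precision: how many retrievals must be
    reviewed, on average, to find one relevant CPG.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    nnr = (tp + fp) / tp
    return sensitivity, specificity, nnr

# Hypothetical: 97 of the 119 relevant CPGs retrieved among 679 retrievals.
sens, spec, nnr = retrieval_performance(tp=97, fp=582, fn=22, tn=2000)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, NNR {nnr:.1f}")
```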

    The likelihood ratio test for ordered hypotheses in non-inferiority studies

    In clinical trials evaluating new therapies, the efficacy of a new therapy is usually demonstrated by its superiority over a placebo or a standard treatment. In recent years, a new type of trial has emerged that analyses the non-inferiority of a new therapy relative to an established standard. In this work, new statistical tests based on the likelihood ratio principle are developed, with a focus on the comparison of two and three binomial distributions. For smooth functions specifying the hypothesis boundaries, the asymptotic distributions are derived. In addition, an unconditional exact version of the likelihood ratio test is developed. This approach is compared numerically with various methods proposed in the literature. It turns out that, for the common hypotheses, this test has higher power than the usual procedures.
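A rough sketch of a likelihood ratio statistic for non-inferiority of two binomial proportions, the two-sample setting the thesis studies. This is not the thesis's own construction: the null hypothesis here is the simple risk-difference margin H0: p_new ≤ p_std − δ, and the boundary maximum is found by a crude grid search rather than any closed form.

```python
import math

def loglik(x, n, p):
    """Binomial log-likelihood (up to the constant binomial coefficient)."""
    p = min(max(p, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
    return x * math.log(p) + (n - x) * math.log(1 - p)

def lr_noninferiority(x_new, n_new, x_std, n_std, delta):
    """LR statistic 2*(l1 - l0) for H0: p_new <= p_std - delta.

    l1 is the unconstrained maximum; l0 the maximum on the null boundary
    p_new = p_std - delta, approximated by a grid search over p_std.
    """
    p_new_hat, p_std_hat = x_new / n_new, x_std / n_std
    if p_new_hat - p_std_hat + delta <= 0:
        return 0.0  # unconstrained MLE already lies inside H0
    l1 = loglik(x_new, n_new, p_new_hat) + loglik(x_std, n_std, p_std_hat)
    l0 = max(loglik(x_new, n_new, p - delta) + loglik(x_std, n_std, p)
             for p in (delta + i * (1 - delta) / 10000 for i in range(1, 10000)))
    return 2 * (l1 - l0)

# Equal observed response rates: the data lie outside H0, so the
# statistic is positive (evidence against inferiority by margin delta).
print(lr_noninferiority(80, 100, 80, 100, delta=0.1))
```

A large statistic (compared against the appropriate boundary null distribution) supports rejecting inferiority; the thesis's contribution lies in deriving those asymptotics and an exact unconditional version.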

    The Peto odds ratio viewed as a new effect measure

    Meta-analysis has generally been accepted as a fundamental tool for combining effect estimates from several studies. For binary studies with rare events, the Peto odds ratio (POR) method has become the relative effect estimator of choice. However, the POR leads to biased estimates for the OR when treatment effects are large or the group size ratio is not balanced. The aim of this work is to derive the limit of the POR estimator for increasing sample size, to investigate whether the POR limit is equal to the true OR and, if this is not the case, in which situations the POR limit is sufficiently close to the OR. It was found that the derived limit of the expected POR is not equivalent to the OR, because it depends on the group size ratio. Thus, the POR represents a different effect measure. We investigated in which situations the POR is reasonably close to the OR and found that this depends only slightly on the baseline risk within the range (0.001; 0.1) yet substantially on the group size ratio and the effect size itself. We derived the maximum effect size of the POR for different group size ratios and tolerated amounts of bias, for which the POR method results in an acceptable estimator of the OR. We conclude that the limit of the expected POR can be regarded as a new effect measure, which can be used in the presented situations as a valid estimate of the true OR. Copyright (C) 2014 John Wiley & Sons, Ltd
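For reference, the Peto (one-step) estimator discussed above can be written for a single 2 × 2 table as exp((O − E)/V), with O the observed events in the treatment group, E their expectation under the null, and V the hypergeometric variance. A minimal sketch with hypothetical counts, in the balanced rare-event setting where the POR is close to the ordinary OR:

```python
import math

def peto_or(a, b, c, d):
    """Peto odds ratio exp((O - E) / V) for events a, c and non-events
    b, d in the treatment and control groups of a 2x2 table."""
    n = a + b + c + d
    o = a                                  # observed events, treatment arm
    e = (a + b) * (a + c) / n              # expected events under the null
    v = (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return math.exp((o - e) / v)

# Hypothetical balanced groups with rare events: the POR tracks the
# ordinary odds ratio (a*d)/(b*c) closely, as the abstract indicates.
a, b, c, d = 5, 995, 10, 990
print(f"Peto OR: {peto_or(a, b, c, d):.3f}, ordinary OR: {a * d / (b * c):.3f}")
```

Repeating this with unbalanced group sizes or large effects shows the divergence between the two measures that motivates the paper.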