
    Methods for non-proportional hazards in clinical trials: A systematic review

    For the analysis of time-to-event data, frequently used methods such as the log-rank test or the Cox proportional hazards model are based on the proportional hazards assumption, which is often debatable. Although a wide range of parametric and non-parametric methods for non-proportional hazards (NPH) has been proposed, there is no consensus on the best approaches. To close this gap, we conducted a systematic literature search to identify statistical methods and software appropriate under NPH. Our literature search identified 907 abstracts, of which we included 211 articles, mostly methodological ones; review articles and applications were identified less frequently. The articles discuss effect measures, effect estimation and regression approaches, hypothesis tests, and sample size calculation approaches, which are often tailored to specific NPH situations. Using a unified notation, we provide an overview of the methods available and derive some guidance from the identified articles. We summarize the contents of the literature review concisely in the main text and provide more detailed explanations in the supplement.
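
    As a concrete illustration of two recurring ideas in this literature, the sketch below (Python with the lifelines package; not taken from the review, and the two-arm data are simulated purely for illustration) first checks the proportional hazards assumption via Schoenfeld-residual diagnostics and then computes a restricted mean survival time (RMST) difference, an effect measure that stays interpretable under NPH.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter, KaplanMeierFitter

        rng = np.random.default_rng(1)
        n = 150
        # Different Weibull shapes per arm give genuinely non-proportional hazards.
        t1 = 1.2 * rng.weibull(0.8, n)   # arm 1: decreasing hazard over time
        t0 = 1.0 * rng.weibull(1.6, n)   # arm 0: increasing hazard over time
        raw = np.concatenate([t1, t0])
        df = pd.DataFrame({
            "time": raw.clip(max=3.0),             # administrative censoring at t = 3
            "event": (raw < 3.0).astype(int),
            "arm": np.repeat([1, 0], n),
        })

        # 1) Fit a Cox model and run Schoenfeld-based proportional hazards checks.
        cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
        cph.check_assumptions(df, p_value_threshold=0.05)

        # 2) RMST up to a horizon tau: the area under each Kaplan-Meier step curve.
        def rmst(times, events, tau):
            sf = KaplanMeierFitter().fit(times, events).survival_function_.loc[:tau]
            grid = np.append(sf.index.values, tau)
            return float(np.sum(np.diff(grid) * sf.iloc[:, 0].values))

        tau = 2.5
        g1, g0 = df[df.arm == 1], df[df.arm == 0]
        print("RMST difference up to tau:",
              rmst(g1.time, g1.event, tau) - rmst(g0.time, g0.event, tau))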

    Maximization of power in randomized clinical trials using the minimization treatment allocation technique

    Generally, the primary goal of randomized clinical trials (RCTs) is to compare two or more treatments; clinical investigators therefore require the most appropriate treatment allocation procedure to yield reliable results, regardless of whether the ultimate data suggest a clinically important difference between the treatments being studied. Although recommended by many researchers, the use of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding its statistical efficiency in detecting treatment effects and its complexity in implementation. Methods: A SAS simulation program was designed to allocate patients into two treatment groups. Categorical prognostic factors were used together with multi-level response variables, and ordinal logistic regression models were used to demonstrate how simulated data can help determine the power of the minimization technique. Results: Several scenarios were simulated. Within the selected scenarios, increasing the sample size significantly increased the power to detect the treatment effect, whereas decreasing the probability of allocation reduced it. Power did not change when the probability of allocation under balanced treatment groups was increased. The probability of allocation P_k was the only parameter with a significant effect on treatment balance. Conclusion: Maximum power can be achieved with a sample size of 300, although a smaller sample of 200 can be adequate to attain at least 80% power. To maximize power, the probability of allocation should be fixed at 0.75, and set to 0.5 when the treatment groups are equally balanced.
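
    The minimization procedure studied here is easy to express in code. Below is a minimal Python sketch (not the authors' SAS program) of Pocock-Simon-style minimization for two arms with illustrative prognostic factors: the arm that minimizes total marginal imbalance is chosen with biased-coin probability p, here the 0.75 from the conclusion, falling back to a fair coin when the groups are balanced.

        import random

        def assign(patient, history, factors, p=0.75):
            """Pocock-Simon minimization for two arms.

            patient: dict mapping factor -> level; history: list of (patient, arm).
            With probability p, pick the arm that minimizes marginal imbalance.
            """
            def imbalance_if(arm):
                total = 0
                for f in factors:
                    counts = [0, 0]
                    for prev, a in history:
                        if prev[f] == patient[f]:
                            counts[a] += 1
                    counts[arm] += 1                     # hypothetically add the new patient
                    total += abs(counts[0] - counts[1])  # per-factor imbalance
                return total

            d0, d1 = imbalance_if(0), imbalance_if(1)
            if d0 == d1:
                return random.randint(0, 1)              # balanced: allocate 50/50
            preferred = 0 if d0 < d1 else 1
            return preferred if random.random() < p else 1 - preferred

        # Sequentially allocate simulated patients on two prognostic factors.
        random.seed(7)
        factors = ["sex", "age_group"]
        history = []
        for _ in range(20):
            pt = {"sex": random.choice(["M", "F"]),
                  "age_group": random.choice(["<50", ">=50"])}
            history.append((pt, assign(pt, history, factors, p=0.75)))
        print("arm sizes:", [sum(a == arm for _, a in history) for arm in (0, 1)])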

    Behavioural outcomes of treatment with selective serotonin reuptake inhibitors

    Mood and anxiety disorders are among the biggest contributors to morbidity worldwide, and may be lethal. Appropriate treatment is therefore paramount. Antidepressant medications constitute the primary pharmacological treatment for these disorders, with selective serotonin reuptake inhibitors (SSRIs) the most common type in several Western countries. While developed to treat disorders that increase the risk of violence and suicide, there is concern that SSRI treatment may itself increase the risk of these behavioural outcomes, especially among young people. The overarching aim of this thesis is therefore to contribute to the understanding of the risks and benefits of treatment with SSRIs in relation to severe behavioural outcomes in different age groups, including when SSRIs are combined with other central nervous system (CNS) drugs. We also document antidepressant prescription patterns in young individuals – the age group where the balance between benefits and risks of antidepressant treatment is least clear. In Study I, we described the prevalence of antidepressant use and polypharmacy of CNS drugs with antidepressants over time in children, adolescents, and young adults living in Sweden. We found an increasing trend over time in antidepressant use and in the co-prescription of antidepressants with other CNS drugs. We also found that antidepressant users were more likely than population controls to collect other CNS drug classes in addition to antidepressants. In Study II, we investigated the hazard of conviction for violent crimes during treatment with SSRIs, including across different time periods since the start and end of treatment. Over a follow-up of up to 8 years, we found that the hazard of violent crime was statistically significantly elevated throughout treatment periods, and for up to 12 weeks after the end of treatment. This was true in youths as well as older adults, which adds to prior research that found elevated risk of aggression outcomes during SSRI treatment in young adults but not in older individuals. In Study III, we explored the incidence rate of suicide attempts or deaths (suicidal behaviour) in time periods before and after initiation of SSRI treatment. We found that the month immediately prior to SSRI initiation had the greatest incidence rate of suicidal behaviour, that treatment periods up to one year after initiation had lower incidence rates than the month immediately before initiation, and that the incidence rate gradually decreased over treatment time. However, all treated periods had higher incidence rates than the month one year before treatment start. These patterns were consistent across age categories, including among children and young adults. In Study IV, we applied a data-driven screening approach to compare the incidence rate of suicidal behaviour in periods after and before initiation of additional CNS drugs during continuous SSRI treatment. We found several drugs associated with a statistically significantly reduced incidence rate of suicidal behaviour when initiated during SSRI treatment, and only two associated with an increased risk. We found no evidence of harmful effects of combining SSRIs with additional CNS drugs. Many of the signals of reduced suicidal behaviour correspond to prior evidence; novel signals could be investigated further to evaluate the use of these drugs concurrently with SSRI treatment.
In conclusion, this thesis has documented: the increasing prevalence of antidepressant use and of polypharmacy of antidepressants with other CNS drugs in young individuals resident in Sweden; the associations between SSRI use and violent crime and suicidal behaviour; and the impact of initiating other CNS drugs during SSRI treatment on the risk of suicidal behaviour. The findings are expected to help guide future research and clinical decision making.
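
    To make the rate comparisons in Study III concrete, the short sketch below computes incidence rates of an outcome in time windows relative to treatment initiation, expressed as events per 1,000 person-years. All counts and person-time values are hypothetical; this shows only the arithmetic behind a before-after rate comparison, not the thesis's data or code.

        # Hypothetical event counts and person-time in windows around initiation.
        windows      = ["-12 to -1 months", "-1 to 0 months", "0 to +1 months", "+1 to +12 months"]
        events       = [40,  90, 60, 35]
        person_years = [900, 80, 75, 800]

        for w, e, py in zip(windows, events, person_years):
            print(f"{w:>17}: {1000 * e / py:7.1f} events per 1,000 person-years")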

    Impact of gene expression profiling tests on breast cancer outcomes

    Prepared for the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services, by the Johns Hopkins University Evidence-based Practice Center; investigators: Luigi Marchionni ... [et al.]. Contract No. 290-02-0018; January 2008. "The Agency for Healthcare Research and Quality (AHRQ), through its Evidence-Based Practice Centers (EPCs), sponsors the development of evidence reports and technology assessments to assist public- and private-sector organizations in their efforts to improve the quality of health care in the United States. The Centers for Disease Control and Prevention (CDC) requested and provided funding for this report. The reports and assessments provide organizations with comprehensive, science-based information on common, costly medical conditions and new health care technologies. The EPCs systematically review the relevant scientific literature on topics assigned to them by AHRQ and conduct additional analyses when appropriate prior to developing their reports and assessments." (p. iii). Also available via the World Wide Web. Includes bibliographical references (p. 101-105). Suggested citation: Marchionni L, Wilson RF, Marinopoulos SS, Wolff AC, Parmigiani G, Bass EB, Goodman SN. Impact of Gene Expression Profiling Tests on Breast Cancer Outcomes. Evidence Report/Technology Assessment No. 160. (Prepared by the Johns Hopkins University Evidence-based Practice Center under Contract No. 290-02-0018.) AHRQ Publication No. 08-E002. Rockville, MD: Agency for Healthcare Research and Quality; January 2008.

    Clinical outcomes associated with time to antimicrobial therapy change from vancomycin to daptomycin in staphylococcal bacteremia

    Background: Staphylococcus aureus is an aerobic, Gram-positive commensal organism capable of causing a wide spectrum of disease. This study adds to the previously published literature on daptomycin versus vancomycin use in S. aureus bacteremia (SAB). Methods: Adult patients admitted between 2010 and 2014 who were billed for ICD-9 code V09.0, 038.11, 038.12, 041.11, or 041.12 and who received both vancomycin and daptomycin were included in this retrospective analysis. Patients were stratified by the time to the change in antibiotics from vancomycin to daptomycin into early switch (1-3 days), intermediate switch (4-7 days), or late switch (8 days or later) groups. The primary outcome was treatment failure, defined as a composite of 30-day recurrence, 60-day all-cause mortality, and 90-day all-cause readmission. Results: A total of 193 patients were included in the final cohort. The overall treatment failure rate was 18%, with no differences between the early, intermediate, and late switch groups (P=0.72). Independent predictors of treatment success were length of stay (OR=1.035) and time to positive culture (OR=0.961). Conclusions: This study did not demonstrate a difference in treatment failure based on time to switch from vancomycin to daptomycin. Future research should focus on optimizing the use of vancomycin and daptomycin and the medical management of SAB.
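
    The reported odds ratios come from multivariable logistic regression; a minimal sketch of that kind of model is below, fit to simulated data with assumed variable names (failure, los_days, ttp_hours, switch_group are illustrative, not the study's actual fields).

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 193
        df = pd.DataFrame({
            "failure": rng.binomial(1, 0.18, n),        # ~18% treatment failure
            "los_days": rng.gamma(2.0, 7.0, n),         # length of stay, days
            "ttp_hours": rng.gamma(2.0, 10.0, n),       # time to positive culture
            "switch_group": rng.choice(["early", "intermediate", "late"], n),
        })
        fit = smf.logit("failure ~ los_days + ttp_hours + C(switch_group)",
                        data=df).fit(disp=0)
        print(np.exp(fit.params))   # odds ratios per unit increase / vs. reference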

    Prediction-driven decision rules, RCT design and survival analysis

    Predictions are becoming an ever larger part of our lives, and they are becoming increasingly useful in medical science as the science evolves. Increased understanding of disease and its treatments allows us to use predictions based on predictive biomarker signatures to optimize treatment outcomes for increasingly granular subject groups. One potential use is in HIV treatment monitoring. In resource-limited regions where regular testing for HIV treatment failure is not always possible, pooled testing methods can reduce the burden of regularly testing everyone infected. Incorporating predictions to choose who is individually tested based on pooled test results is a way to increase the efficiency of such methods; here the "treatment" is individual testing versus pooled testing only. The use of biomarker-guided treatment decision rules, or prediction-driven decision rules, can be informal or formally well defined. For a well-defined prediction-driven decision rule to be implemented, it must first be rigorously tested for efficacy against the standard of care. The definition of the standard of care, and thus of clinical utility, depends heavily on the treatment setting; poorly defining clinical utility can result in substantial bias, potentially leading to the implementation of unnecessary prediction-driven decision rules. Formal prediction-driven decision rules are currently most often applied in cancer. Rigorous testing of these rules is often conducted through RCTs, specifically group sequential RCTs, with a survival endpoint, so it is important to understand survival analysis well enough to choose appropriate methods for such data. Confidence bands for survival estimates over time should be constructed to have nominal coverage rates, and methods such as the restricted mean survival time (RMST) should be understood to allow rigorous testing of differences when the proportional hazards assumption is not met. Developing prediction-driven decision rules in the form of pooled testing methods for HIV treatment failure, identifying RCT designs capable of rigorously evaluating these rules, and studying survival analysis methods capable of analyzing the data from such RCTs, whether proportional hazards holds or not, are the subjects of this dissertation.
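
    Because pooled testing is central to this dissertation, a compact Dorfman-style sketch follows: pools of size k are assayed once, and only members of positive pools are retested individually. Prevalence and pool sizes are hypothetical; the point is the assay-count arithmetic that makes pooling attractive when treatment failure is rare.

        import numpy as np

        rng = np.random.default_rng(3)

        def tests_used(statuses, k):
            """Assays needed under two-stage pooling of consecutive groups of size k."""
            total = 0
            for i in range(0, len(statuses), k):
                pool = statuses[i:i + k]
                total += 1                    # one assay for the pooled specimen
                if pool.any():
                    total += len(pool)        # retest each member of a positive pool
            return total

        statuses = rng.random(1000) < 0.05    # 5% prevalence of treatment failure
        print("individual testing: 1000 assays")
        for k in (5, 10, 20):
            print(f"pool size {k:2d}: {tests_used(statuses, k)} assays for 1000 patients")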

    Recent progresses in outcome-dependent sampling with failure time data

    An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which the primary exposure variables are observed with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest occurs, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data, including research on ODS-related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design, and interval sampling design.
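
    The sampling idea is illustrated below with a simplified case-cohort-style scheme (one of the ODS-related designs listed above) on simulated censored data: keep every subject with an observed event plus a 10% random subcohort, and record the known inverse selection probabilities for later weighted estimation. All numbers are illustrative.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(42)
        n = 10_000
        t = rng.exponential(10.0, n)                      # latent event times
        c = rng.uniform(0.0, 8.0, n)                      # censoring times
        cohort = pd.DataFrame({"time": np.minimum(t, c),
                               "event": (t <= c).astype(int)})

        alpha = 0.10                                      # subcohort sampling fraction
        in_subcohort = rng.random(n) < alpha
        selected = in_subcohort | (cohort["event"] == 1)  # oversample all events
        sample = cohort[selected].copy()
        # Inverse-probability weights: cases are sampled with certainty.
        sample["weight"] = np.where(sample["event"] == 1, 1.0, 1.0 / alpha)
        print(f"exposure measured on {len(sample)} of {n} subjects "
              f"({int(cohort['event'].sum())} events)")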

    Prevalence and clinical relevance of helminth co-infections among tuberculosis patients in urban Tanzania

    Helminth infections can impair immunologic host control, which may increase the risk of progression from latent Mycobacterium tuberculosis infection to tuberculosis (TB) disease and alter the clinical presentation of TB. We assessed the prevalence and clinical relevance of helminth co-infection among TB patients and household contact controls in urban Tanzania. Between November 2013 and October 2015, we enrolled adult (≥18 years) sputum smear-positive TB patients and household contact controls without TB during an ongoing TB cohort study in Dar es Salaam, Tanzania. We used Baermann, FLOTAC, Kato-Katz, point-of-care circulating cathodic antigen, and urine filtration techniques to diagnose helminth infections. Multivariable logistic regression models, with and without random effects for households, were used to assess associations between helminth infection and TB. A total of 597 TB patients and 375 household contact controls were included. The median age was 33 years and 60.2% (585/972) were men. The prevalence of any helminth infection was 31.8% (190/597) among TB patients and 25.9% (97/375) among controls. Strongyloides stercoralis was the predominant helminth species (16.6%, 161), followed by hookworm (9.0%, 87) and Schistosoma mansoni (5.7%, 55). Infection with any helminth was not associated with TB (adjusted odds ratio (aOR) 1.26, 95% confidence interval (CI): 0.88-1.80, p = 0.22), but S. mansoni infection was (aOR 2.15, 95% CI: 1.03-4.45, p = 0.040). Moreover, S. mansoni infection was associated with lower sputum bacterial load (aOR 2.63, 95% CI: 1.38-5.26, p = 0.004) and tended to be associated with fewer lung cavitations (aOR 0.41, 95% CI: 0.12-1.16, p = 0.088). S. mansoni infection was an independent risk factor for active TB and altered the clinical presentation of TB patients. These findings suggest a role for schistosomiasis in modulating the pathogenesis of human TB. Treatment of helminths should be considered in the clinical management of TB and in TB control programs.
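
    A hedged sketch of the type of model reported (multivariable logistic regression accounting for household clustering) is shown below; for simplicity it uses cluster-robust standard errors rather than the random-effects formulation, and all variable names and data are assumptions, not the study's.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 972
        df = pd.DataFrame({
            "tb": rng.binomial(1, 0.6, n),            # TB patient (1) vs control (0)
            "s_mansoni": rng.binomial(1, 0.06, n),
            "age": rng.integers(18, 70, n),
            "male": rng.binomial(1, 0.6, n),
            "household": rng.integers(0, 400, n),     # clustering unit
        })
        fit = smf.logit("tb ~ s_mansoni + age + male", data=df).fit(
            disp=0, cov_type="cluster", cov_kwds={"groups": df["household"]})
        print(np.exp(fit.params["s_mansoni"]))        # adjusted odds ratio for S. mansoni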

    Building reliable evidence from real-world data: methods, cautiousness and recommendations

    Routinely stored information on healthcare utilisation in everyday clinical practice has proliferated over the past several decades. There is, however, some reluctance on the part of many health professionals to use observational data to support healthcare decisions, especially when the data are derived from large databases. Challenges in conducting observational studies based on electronic databases include concern about the adequacy of study design and methods to minimise the effects of misclassification (in the absence of direct assessments of exposure and outcome validity) and confounding (in the absence of randomisation). This paper points out issues that may compromise the validity of such studies and approaches to managing the analytic challenges. First, strategies for sampling within a large cohort, as an alternative to analysing the full cohort, are presented. Second, methods for controlling outcome and exposure misclassification are described. Third, several techniques that take into account both measured and unmeasured confounders are presented. Fourth, some considerations regarding random uncertainty in observational studies using healthcare utilisation data are discussed. Finally, some recommendations for good research practice are listed. The aim is to provide researchers with a methodological framework, while commenting on the value of newer techniques for more advanced users.
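
    As one concrete example of the techniques for measured confounders that this kind of review covers, the sketch below derives inverse-probability-of-treatment weights from a fitted propensity score on simulated claims-like data. Every variable is illustrative, and unmeasured confounding is, by construction, not addressed by this step.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(11)
        n = 5000
        df = pd.DataFrame({"age": rng.normal(60, 10, n),
                           "comorbidity": rng.binomial(1, 0.3, n)})
        # Treatment assignment depends on the confounders (no randomisation).
        lin = -3.0 + 0.04 * df["age"] + 0.8 * df["comorbidity"]
        df["treated"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

        # Propensity score: probability of treatment given measured confounders.
        ps = smf.logit("treated ~ age + comorbidity", data=df).fit(disp=0).predict(df)
        df["iptw"] = np.where(df["treated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
        # Weighting by iptw balances measured confounders across treatment groups.
        print(df.groupby("treated")["iptw"].sum())    # ~n in each pseudo-population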

    Mortality, substance use disorder and cardiovascular health care in persons with severe mental illness

    Background and aims - In the Nordic countries, people with severe mental illness die 15-20 years younger than others. Substance use and higher levels of undiagnosed and untreated somatic disease contribute to this disparity. We aimed to investigate: (i) mortality among people with schizophrenia and/or substance use disorder, with emphasis on the impact of a dual diagnosis; (ii) whether people with schizophrenia or bipolar disorder had higher odds of not being diagnosed with cardiovascular disease (CVD) prior to cardiovascular death; and (iii) whether people with schizophrenia or bipolar disorder had the same prevalence of diagnostic testing and treatment for CVD prior to cardiovascular death as people without such disorders. Methods - We calculated standardized mortality ratios in a 7-year open cohort study including all residents of Norway aged 20-79 with schizophrenia and/or substance use disorders diagnosed in specialized care (i). We used multivariate logistic (ii) and log-binomial regression (iii) to study uptake of CVD-related health care services in residents aged 18 and above. Results - We found a four-fold (schizophrenia only) to seven-fold (substance use disorder with or without schizophrenia) increase in mortality compared to the general population, implying that five out of six persons with schizophrenia and/or substance use disorder died prematurely. We also found that people with schizophrenia and women with bipolar disorder were more likely to die from undiagnosed cardiovascular disease, and that they had a lower prevalence of specialized cardiovascular examinations and invasive cardiovascular treatment prior to cardiovascular death than individuals without schizophrenia or bipolar disorder. We found no difference in the uptake of invasive cardiovascular treatment among those diagnosed with cardiovascular disease prior to death. Conclusion - The large mortality gap between persons with severe mental illness and the general population highlights the need to secure proper access to specialized somatic care, and more effective prevention of deaths from unnatural causes, in this group.
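
    The standardized mortality ratios in (i) come from indirect standardization; the short sketch below shows the arithmetic with hypothetical strata: observed deaths divided by the deaths expected if general-population rates applied to the cohort's person-time.

        # Hypothetical age strata: cohort person-years and reference mortality
        # rates (deaths per person-year) from the general population.
        observed_deaths = 210
        person_years    = [4000, 3000, 1500]
        reference_rate  = [0.002, 0.008, 0.030]

        expected = sum(py * r for py, r in zip(person_years, reference_rate))
        print(f"expected {expected:.1f} deaths, observed {observed_deaths}, "
              f"SMR = {observed_deaths / expected:.1f}")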