
    Power analysis of longitudinal studies with piecewise linear growth and attrition

    In longitudinal research, the development of some outcome variable(s) over time (or age) is studied. Such relations are not necessarily smooth, and piecewise growth models may be used to account for differential growth rates before and after a turning point in time. Such models are well developed, but the literature on power analysis for them is scarce. This study investigates in further detail the power needed to detect differential growth in linear-linear piecewise growth models, taking into account the possibility of attrition. Attrition is modeled using the Weibull survival function, which allows for increasing, decreasing, or constant attrition across time. Furthermore, this work accounts for the realistic situation in which subjects do not necessarily share the same turning point. A multilevel mixed model is used to relate time to the outcome and to derive the relation between sample size and power. The sample size required to achieve a desired power is smallest when the turning points are located halfway through the study and when all subjects have the same turning point. Attrition reduces power, especially when the probability of attrition is largest at the beginning of the study. An example on alcohol use during middle and high school shows how to perform such a power analysis. The methodology has been implemented in a Shiny app to facilitate power calculations for future studies.
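    As a rough illustration of the Weibull attrition model mentioned above, the sketch below (in Python, with hypothetical shape, scale, and sample-size values) computes the expected number of subjects remaining at each measurement occasion under decreasing, constant, and increasing attrition hazards. It is not the paper's power calculation, which is implemented in the authors' Shiny app.

```python
import numpy as np

def retention(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t / scale)**shape).

    shape < 1: attrition hazard is highest early in the study,
    shape = 1: constant hazard (exponential dropout),
    shape > 1: attrition hazard increases over time.
    """
    t = np.asarray(t, dtype=float)
    return np.exp(-(t / scale) ** shape)

# Hypothetical design: baseline plus 5 equally spaced follow-up occasions.
occasions = np.arange(0, 6)          # 0 = baseline, everyone present
n_baseline = 200                     # hypothetical number of subjects at baseline

for shape in (0.5, 1.0, 2.0):        # decreasing, constant, increasing hazard
    retained = n_baseline * retention(occasions, shape=shape, scale=8.0)
    print(f"shape={shape}: expected n per occasion =",
          np.round(retained).astype(int))
```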

    Optimal allocation to treatment sequences in individually randomized stepped-wedge designs with attrition

    Background/aims: The stepped-wedge design has been studied extensively in the setting of the cluster randomized trial, but less so for the individually randomized trial. This article derives the optimal allocation of individuals to treatment sequences. The focus is on designs in which all individuals start in the control condition and, at the beginning of each time period, some of them cross over to the intervention, so that by the end of the trial all of them receive the intervention. Methods: The statistical model that accounts for the nesting of repeated measurements within subjects is presented, and it is shown how possible attrition is taken into account. The effect of the intervention is assumed to be sustained, so that it does not change after the treatment switch. An exponential decay correlation structure is assumed, implying that the correlation between any two time points decreases with the time lag. Matrix algebra is used to derive the relation between the allocation of individuals to treatment sequences and the variance of the treatment effect estimator. The optimal allocation is the one that results in the smallest variance. Results: Results are presented for three to six treatment sequences. The optimal allocation is shown to depend strongly on the correlation parameter and the attrition rate between any two adjacent time points. The uniform allocation, where each treatment sequence has the same number of individuals, is often not the most efficient; for the correlation and attrition rate values reported in the article, however, its efficiency relative to the optimal allocation is at least 0.8. It is furthermore shown how a constrained optimal allocation can be derived when the optimal allocation is not feasible from a practical point of view. Conclusion: This article provides the methodology for designing individually randomized stepped-wedge trials, taking into account the possibility of attrition, and as such helps researchers plan their trials efficiently. To use the methodology, prior estimates of the degree of attrition and the intraclass correlation coefficient are needed. Researchers are encouraged to report these estimates clearly to help facilitate the planning of future trials.
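    The variance calculation described in the Methods section can be sketched as follows. The Python code below builds the exponential-decay within-subject covariance matrix and computes the generalized-least-squares variance of the treatment effect estimator for a given allocation of individuals to treatment sequences. The number of sequences, the allocations, and the correlation value are hypothetical, and attrition is ignored here although the paper models it explicitly.

```python
import numpy as np

def treatment_variance(n_per_seq, rho, sigma2=1.0):
    """Variance of the GLS treatment effect estimator in an individually
    randomized stepped-wedge design with S sequences and S + 1 periods.

    n_per_seq : number of subjects allocated to each treatment sequence
    rho       : correlation between measurements one period apart
                (exponential decay: correlation at lag l is rho**l)
    Attrition is not modeled in this sketch.
    """
    S = len(n_per_seq)
    T = S + 1                                   # measurement periods
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    V = sigma2 * rho ** lags                    # within-subject covariance
    Vinv = np.linalg.inv(V)

    # Fixed effects: one mean per period plus the treatment effect.
    info = np.zeros((T + 1, T + 1))
    for s, n_s in enumerate(n_per_seq, start=1):
        X = np.hstack([np.eye(T),               # period effects
                       (np.arange(T) >= s).astype(float)[:, None]])  # treated from period s onward
        info += n_s * X.T @ Vinv @ X
    return np.linalg.inv(info)[-1, -1]          # variance of the treatment effect

uniform = [20, 20, 20, 20]                      # hypothetical: 4 sequences, 20 subjects each
skewed  = [30, 10, 10, 30]                      # more weight on the outer sequences
for alloc in (uniform, skewed):
    print(alloc, "->", round(treatment_variance(alloc, rho=0.5), 4))
```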

    Power and Management

    In modern conditions, increasing attention is being paid to the search for an optimal management structure. The article considers general and specific aspects of power and management; in particular, it examines problems of managing an enterprise's human resources.

    Optimal placebo-treatment comparisons in trials with an incomplete within-subject design and heterogeneous costs and variances

    The aim of a clinical trial is to compare placebo to one or more treatments. The within-subject design is known to be more efficient than the between-subject design. However, in some trials that implement a within-subject design it is not possible to evaluate the placebo and all treatments within each subject; the design then becomes an incomplete within-subject design. An important question is how many subjects should be allocated to each combination of placebo and treatments. This paper studies optimal allocations of subjects in trials with a placebo and two treatments under heterogeneous costs and variances. Two optimality criteria that consider the placebo-treatment contrasts simultaneously are used, and the design is derived under a budgetary constraint. More subjects are allocated to those combinations with higher variances and lower costs. The optimal allocation is compared to the uniform allocation, which assigns an equal number of subjects to each placebo and treatment combination, and to the complete within-subject design, in which placebo and all treatments are evaluated in each subject. The methodology is illustrated with an example on consultation time in primary care. A Shiny app is available to facilitate use of the methodology.
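    The qualitative conclusion that more subjects go to combinations with higher variances and lower costs can be illustrated with the classic square-root allocation rule under a budget constraint, which minimizes the sum of per-combination variance contributions sigma_j^2 / n_j. This is only a simplified stand-in for the paper's optimality criteria on the placebo-treatment contrasts, and all variances, costs, and the budget below are hypothetical.

```python
import numpy as np

# Hypothetical placebo/treatment combinations observed within subjects,
# with assumed outcome variances and per-subject costs (not from the paper).
combinations = ["placebo+A", "placebo+B", "A+B"]
variances    = np.array([4.0, 9.0, 6.0])     # assumed variance per combination
costs        = np.array([50.0, 80.0, 60.0])  # assumed cost per subject
budget       = 10_000.0

# Square-root rule: allocate proportionally to sqrt(variance / cost),
# then scale so that the total cost exhausts the budget.
weights = np.sqrt(variances / costs)
n = budget * weights / np.sum(weights * costs)

for combo, n_j in zip(combinations, n):
    print(f"{combo}: {n_j:.1f} subjects")
print("total cost:", float(np.sum(n * costs)))
```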

    Optimal allocation of clusters in stepped wedge designs with a decaying correlation structure

    The cluster randomized stepped wedge design is a multi-period, uni-directional switch design in which all clusters start in the control condition and, at the beginning of each new period, a random sample of clusters crosses over to the intervention condition. Such designs often use a uniform allocation, with an equal number of clusters at each treatment switch. However, the uniform allocation is not necessarily the most efficient. This study derives the optimal allocation of clusters to treatment sequences in the cluster randomized stepped wedge design, for both cohort and cross-sectional designs. The correlation structure is exponential decay, meaning that the correlation decreases with the time lag between two measurements. The optimal allocation is shown to depend on the intraclass correlation coefficient, the number of subjects per cluster-period, and the cluster and (in the case of a cohort design) individual autocorrelation coefficients. For small to medium values of these autocorrelations, the sequences that switch to the intervention earlier or later in the study are allocated a larger proportion of clusters than those that switch halfway through the study. As the autocorrelation coefficients increase, the clusters become more equally distributed across the treatment sequences. For the cohort design, the optimal allocation is almost equal to the uniform allocation when both autocorrelations approach the value 1. For almost all scenarios studied, the efficiency of the uniform allocation is 0.8 or higher. R code to derive the optimal allocation is available online.
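    In the same spirit as the sketch for the individually randomized design above, the following Python code (a rough stand-in for the authors' R code, with hypothetical parameter values) computes the variance of the treatment effect estimator for a cross-sectional cluster randomized stepped wedge design at the level of cluster-period means, with an exponentially decaying cluster autocorrelation, and enumerates all allocations of a fixed number of clusters to three treatment sequences.

```python
import itertools
import numpy as np

def var_treatment(alloc, icc=0.05, cac=0.8, m=20, sigma2_total=1.0):
    """Variance of the treatment effect estimator in a cross-sectional
    stepped-wedge CRT, working with cluster-period means.

    alloc : number of clusters assigned to each of the S treatment sequences
    icc   : intraclass correlation coefficient
    cac   : cluster autocorrelation; the cluster-level component decays
            as cac**lag (exponential decay)
    m     : subjects measured per cluster-period
    """
    S = len(alloc)
    T = S + 1
    sigma2_c = icc * sigma2_total
    sigma2_e = (1 - icc) * sigma2_total
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    V = sigma2_c * cac ** lags + np.eye(T) * sigma2_e / m   # cov of cluster-period means
    Vinv = np.linalg.inv(V)

    info = np.zeros((T + 1, T + 1))
    for s, n_s in enumerate(alloc, start=1):
        X = np.hstack([np.eye(T), (np.arange(T) >= s).astype(float)[:, None]])
        info += n_s * X.T @ Vinv @ X
    return np.linalg.inv(info)[-1, -1]

# Hypothetical example: 12 clusters over 3 sequences; enumerate allocations.
best = min((a for a in itertools.product(range(1, 11), repeat=3) if sum(a) == 12),
           key=var_treatment)
uniform = (4, 4, 4)
print("optimal allocation:", best)
print("relative efficiency of uniform:", var_treatment(best) / var_treatment(uniform))
```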

    Power analysis for cluster randomized trials with continuous co-primary endpoints

    Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Previous reviews have shown that co-primary endpoints are common in pragmatic trials but infrequently recognized in sample size or power calculations. While methods for power analysis based on K (K ≥ 2) binary co-primary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous co-primary endpoints are not yet available. Assuming a multivariate linear mixed model that accounts for multiple types of intraclass correlation coefficients (endpoint-specific ICCs, intra-subject ICCs, and inter-subject between-endpoint ICCs) among the observations in each cluster, we derive the closed-form joint distribution of the K treatment effect estimators to facilitate sample size and power determination for different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and the different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the K treatment effect estimators through the mean and coefficient of variation of the cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters of the multivariate linear mixed model are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
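    A minimal sketch of the power calculation for the case where success requires significance on all K endpoints (an intersection-union test): the K test statistics are assumed jointly normal, the per-endpoint variance uses the standard cluster-randomization design effect, and the correlation matrix of the test statistics is supplied directly rather than derived from the intra- and inter-subject between-endpoint ICCs as in the paper. All numerical inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def coprimary_power(deltas, sigmas, iccs, n_clusters_per_arm, m,
                    corr_stats, alpha=0.05):
    """Power to declare success on ALL K continuous co-primary endpoints
    in a two-arm cluster randomized trial (intersection-union test).

    deltas, sigmas, iccs : per-endpoint effect sizes, SDs, and ICCs
    corr_stats           : assumed K x K correlation matrix of the K test
                           statistics (in the paper this follows from the
                           intra- and inter-subject between-endpoint ICCs)
    """
    deltas, sigmas, iccs = map(np.asarray, (deltas, sigmas, iccs))
    design_effect = 1 + (m - 1) * iccs
    var_k = 2 * sigmas**2 * design_effect / (n_clusters_per_arm * m)
    mu = deltas / np.sqrt(var_k)                 # expected z-statistics
    crit = norm.ppf(1 - alpha / 2)               # rejection of each H0k in the positive direction
    # P(Z_k > crit for all k) for Z ~ MVN(mu, corr_stats)
    mvn = multivariate_normal(mean=-mu, cov=np.asarray(corr_stats))
    return mvn.cdf(np.full(len(mu), -crit))

# Hypothetical inputs for K = 2 endpoints.
print(coprimary_power(deltas=[0.3, 0.25], sigmas=[1.0, 1.0], iccs=[0.05, 0.05],
                      n_clusters_per_arm=15, m=30,
                      corr_stats=[[1.0, 0.4], [0.4, 1.0]]))
```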

    A comparison of the multilevel MIMIC model to the multilevel regression and mixed ANOVA model for the estimation and testing of a cross-level interaction effect: A simulation study

    When observing data on a patient-reported outcome measure in, for example, clinical trials, the variables observed are often correlated and intended to measure a latent variable. In addition, such data are often characterized by a hierarchical structure, meaning that the outcome is repeatedly measured within patients. To analyze such data, it is important to use an appropriate statistical model, such as structural equation modeling (SEM). However, researchers may rely on simpler statistical models applied to an aggregated data structure; for example, correlated variables are combined into one sum score that approximates the latent variable. This may have implications when, for example, the sum score consists of indicators that relate differently to the latent variable being measured. This study compares three models that can be applied to analyze such data: the multilevel multiple indicators multiple causes (ML-MIMIC) model, a univariate multilevel model, and a mixed analysis of variance (ANOVA) model. The focus is on the estimation of a cross-level interaction effect that represents the difference over time between two treatment groups on the patient-reported outcome. The ML-MIMIC model is an SEM-type model that considers the relationship between the indicators and the latent variable in a multilevel setting, whereas the univariate multilevel and mixed ANOVA models rely on sum scores to approximate the latent variable. In addition, the mixed ANOVA model uses aggregated second-level means as the outcome. This study showed that the ML-MIMIC model produced unbiased cross-level interaction effect estimates when the relationships between the indicators and the latent variable varied across indicators. In contrast, under similar conditions, the univariate multilevel and mixed ANOVA models underestimated the cross-level interaction effect.
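    The sum-score approach that the univariate multilevel model relies on can be illustrated with a small simulation. The sketch below (hypothetical loadings, effect sizes, and sample sizes) generates indicators with unequal loadings, forms a sum score, and fits a multilevel model with a time-by-group cross-level interaction using statsmodels; fitting the ML-MIMIC model itself would require SEM software and is not shown. Note that the sum-score estimate is on a different metric than the latent-variable effect, which is the kind of aggregation issue the study examines.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical setup: 200 patients, 5 repeated measures, two treatment groups.
n_pat, n_time = 200, 5
loadings = np.array([1.0, 0.8, 0.5, 0.3])   # unequal indicator loadings (assumed)
interaction = 0.4                            # true time-by-group effect on the latent variable

rows = []
for i in range(n_pat):
    group = i % 2
    u_i = rng.normal(0, 1.0)                 # patient-level random intercept
    for t in range(n_time):
        eta = u_i + 0.2 * t + interaction * t * group + rng.normal(0, 0.5)
        indicators = loadings * eta + rng.normal(0, 0.7, size=loadings.size)
        rows.append({"patient": i, "time": t, "group": group,
                     "score": indicators.sum()})   # sum score approximating the latent variable
df = pd.DataFrame(rows)

# Univariate multilevel model on the sum score with a cross-level interaction.
fit = smf.mixedlm("score ~ time * group", data=df, groups=df["patient"]).fit()
print("time x group estimate on the sum-score scale:", round(fit.params["time:group"], 3))
print("latent effect times the sum of the loadings :", interaction * loadings.sum())
```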

    Sample size determination for Bayesian ANOVAs with informative hypotheses

    Researchers can express their expectations with respect to the group means in an ANOVA model through equality- and order-constrained hypotheses. This paper introduces the R package SSDbain, which can be used to calculate the sample size required to evaluate (informative) hypotheses using the Approximate Adjusted Fractional Bayes Factor (AAFBF) for one-way ANOVA models as implemented in the R package bain. The sample size is determined such that the probability that the Bayes factor is larger than a threshold value is at least η when either of the hypotheses under consideration is true. The Bayesian ANOVA, Bayesian Welch's ANOVA, and Bayesian robust ANOVA are available. Using the R package SSDbain and/or the tables provided in this paper, researchers in the social and behavioral sciences can easily plan the sample size if they intend to use a Bayesian ANOVA.
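    The simulation logic behind such a sample size determination can be sketched as follows. The Python code below uses a BIC-based approximate Bayes factor for H1 (unequal means) against H0 (equal means) rather than the AAFBF implemented in bain/SSDbain, and the population means, threshold, and η value are hypothetical: the per-group n is increased until the probability that BF10 exceeds the threshold under H1 reaches η.

```python
import numpy as np

rng = np.random.default_rng(0)

def bf10_bic(groups):
    """Approximate Bayes factor for H1 (unequal means) vs H0 (equal means)
    in a one-way ANOVA, via the BIC approximation BF10 ~ exp((BIC0 - BIC1) / 2).
    (SSDbain uses the AAFBF from bain instead; this is only an illustration.)"""
    y = np.concatenate(groups)
    n = y.size
    rss1 = sum(((g - g.mean()) ** 2).sum() for g in groups)   # separate group means
    rss0 = ((y - y.mean()) ** 2).sum()                        # grand mean only
    bic1 = n * np.log(rss1 / n) + (len(groups) + 1) * np.log(n)
    bic0 = n * np.log(rss0 / n) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2)

def required_n(means, sd=1.0, threshold=3.0, eta=0.8, reps=1000):
    """Smallest per-group n such that P(BF10 > threshold | H1 true) >= eta."""
    for n in range(10, 500, 5):
        hits = 0
        for _ in range(reps):
            groups = [rng.normal(m, sd, n) for m in means]
            hits += bf10_bic(groups) > threshold
        if hits / reps >= eta:
            return n
    return None

# Hypothetical population means for three groups.
print(required_n(means=[0.0, 0.5, 0.8]))
```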

    Enhancing the effect of psychotherapy through systematic client feedback in outpatient mental healthcare: A cluster randomized trial

    Objective: Systematic client feedback (SCF), the regular monitoring of patients' progress during therapy and the feeding back of this information to patient and therapist, has been found to have effects on treatment outcomes varying from very positive to slightly negative. Several prior studies have been biased by researcher allegiance or the lack of an independent outcome measure. The current study takes this into account and aims to clarify the effects of SCF in outpatient psychological treatment. Method: Outpatients (n = 1733) of four centers offering brief psychological treatments were cluster randomized to either treatment as usual (TAU) or TAU with SCF based on the Partners for Change Outcome Management System (PCOMS). The primary outcome measure was the Outcome Questionnaire (OQ-45). Effects of the two treatment conditions on treatment outcome, patient satisfaction, dropout rate, costs, and treatment duration were assessed using a three-level multilevel analysis, with DSM classification, sex, and age of each patient included as covariates. Results: In both analyses, SCF significantly improved treatment outcome, particularly in the first three months. No significant effects were found on the other outcome variables. Conclusions: Adding systematic client feedback to treatment as usual is likely to have a beneficial impact in outpatient psychological treatment; implementation requires a careful plan of action. Clinical or methodological significance of this article: This study, with a large sample size and several independent outcome measures, provides strong evidence that adding systematic client feedback to outpatient psychological treatment can have a beneficial effect on treatment outcome (symptoms and well-being), particularly in the first three months. However, implementation requires a careful plan of action.

    Data Collection Expert Prior Elicitation in Survey Design: Two Case Studies

    Data collection staff involved in the sampling design, monitoring, and analysis of surveys often have a good sense of the response rate that can be expected in a survey, even when the survey is new or conducted at a relatively low frequency. They form expectations about response rates, and subsequently about costs, on an almost continuous basis. Rarely, however, are these expectations formally structured, and they usually are point estimates without any assessment of precision or uncertainty. In recent years, interest in adaptive survey designs has increased. These designs lean heavily on accurate estimates of response rates and costs. To account for inaccurate estimates, a Bayesian analysis of survey design parameters is very sensible, and combining it with the strong intrinsic knowledge of data collection staff is a natural next step. In this article, prior elicitation of design parameters is developed with the help of data collection staff. The elicitation is applied to two case studies in which surveys underwent a major redesign and direct historic survey data were unavailable.
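    One common way to formalize such elicited expectations is to fit a Beta prior to an expert's quantile judgments about a response rate. The sketch below uses hypothetical quantiles (not taken from the case studies, and not necessarily the elicitation procedure used in the article) and matches a Beta distribution to an elicited 5th percentile, median, and 95th percentile by numerical optimization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Hypothetical elicited judgments about a survey's response rate:
# the expert's 5th percentile, median, and 95th percentile.
elicited_probs  = np.array([0.05, 0.50, 0.95])
elicited_values = np.array([0.35, 0.45, 0.55])

def quantile_mismatch(log_params):
    """Squared distance between Beta quantiles and the elicited values."""
    a, b = np.exp(log_params)                  # keep both parameters positive
    return np.sum((beta.ppf(elicited_probs, a, b) - elicited_values) ** 2)

res = minimize(quantile_mismatch, x0=np.log([5.0, 5.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"elicited Beta prior: a={a_hat:.1f}, b={b_hat:.1f}, "
      f"prior mean={a_hat / (a_hat + b_hat):.3f}")
```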