
    Sample size calculation for a stepped wedge trial.

    BACKGROUND: Stepped wedge trials (SWTs) can be considered a variant of clustered randomised trials (CRTs), although in many ways they embed additional complications from the point of view of statistical design and analysis. While the literature is rich for standard parallel or clustered randomised clinical trials, it is much less so for SWTs. The specific features of SWTs need to be addressed properly in the sample size calculations to ensure valid estimates of the intervention effect.
    METHODS: We critically review the available literature on analytical methods to perform sample size and power calculations for a SWT. In particular, we highlight the specific assumptions underlying currently used methods and comment on their validity and potential for extensions. Finally, we propose the use of simulation-based methods to overcome some of the limitations of analytical formulae. We performed a simulation exercise in which we compared simulation-based sample size computations with analytical methods and assessed the impact of varying the basic parameters on the resulting sample size and power, for continuous and binary outcomes and assuming both cross-sectional data and a closed cohort design.
    RESULTS: We compared the sample size requirements for a SWT with those for CRTs based on a comparable number of measurements in each cluster. In line with the existing literature, we found that when the level of correlation within clusters is relatively high (for example, greater than 0.1), the SWT requires a smaller number of clusters. For low values of the intracluster correlation, the two designs produce more similar requirements in terms of the total number of clusters. We validated our simulation-based approach and compared the results of the sample size calculations with analytical methods; the simulation-based procedures perform well, producing results that are extremely similar to those of the analytical methods. We found that the SWT is usually relatively insensitive to variations in the intracluster correlation, and that failure to account for a potential time effect will artificially and grossly overestimate the power of a study.
    CONCLUSIONS: We provide a framework for handling the sample size and power calculations of a SWT and suggest that simulation-based procedures may be more effective, especially in dealing with the specific features of the study at hand. In selected situations, and depending on the level of intracluster correlation and the cluster size, SWTs may be more efficient than comparable CRTs. However, the decision about which design to implement will be based on a wide range of considerations, including the cost associated with the number of clusters, the number of measurements and the trial duration.
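
    The simulation-based approach described above can be illustrated with a small sketch: generate stepped wedge data under an assumed model (random cluster intercepts, a linear time trend and a constant intervention effect), fit a linear mixed model to each replicate, and take the rejection rate as the estimated power. The design, parameter values and the use of statsmodels' MixedLM below are illustrative assumptions, not the paper's code.

```python
# Illustrative simulation-based power calculation for a cross-sectional
# stepped wedge trial with a continuous outcome (not the paper's exact method).
# Assumed design: n_clusters clusters cross to the intervention one step at a
# time; m subjects are measured per cluster per period.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_swt_power(n_clusters=8, m=20, effect=0.3, icc=0.1,
                       time_trend=0.05, n_sims=100, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n_periods = n_clusters + 1                  # one cluster switches per step
    sigma2_c = icc                              # between-cluster variance (total variance 1)
    sigma2_e = 1.0 - sigma2_c                   # residual variance
    switch_period = rng.permutation(n_clusters) + 1
    hits = 0
    for _ in range(n_sims):
        rows = []
        cluster_effects = rng.normal(0.0, np.sqrt(sigma2_c), n_clusters)
        for c in range(n_clusters):
            for t in range(n_periods):
                treat = int(t >= switch_period[c])
                y = (cluster_effects[c] + time_trend * t + effect * treat
                     + rng.normal(0.0, np.sqrt(sigma2_e), m))
                rows += [{"y": v, "cluster": c, "period": t, "treat": treat}
                         for v in y]
        df = pd.DataFrame(rows)
        # Linear mixed model: random cluster intercept, fixed period effects
        fit = smf.mixedlm("y ~ treat + C(period)", df,
                          groups=df["cluster"]).fit(reml=True)
        hits += fit.pvalues["treat"] < alpha
    return hits / n_sims

print(simulate_swt_power())   # estimated power under the assumed parameters
```

    Re-running the function while varying `icc`, `effect` or `n_clusters` is the kind of sensitivity exercise the abstract describes; `n_sims` is kept small here purely for illustration.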

    Statistical analysis of the primary outcome in acute stroke trials

    Common outcome scales in acute stroke trials are ordered categorical or pseudocontinuous in structure, but most have been analyzed as binary measures. The use of a fixed dichotomous analysis of ordered categorical outcomes after stroke (such as the modified Rankin Scale) is rarely the most statistically efficient approach and usually requires a larger sample size to demonstrate efficacy than other approaches. Preferred statistical approaches include sliding dichotomous, ordinal, or continuous analyses. Because there is no single best approach that will work for all acute stroke trials, it is vital that studies are designed with a full understanding of the type of patients to be enrolled (in particular their case mix, which will be critically dependent on their age and severity), the potential mechanism by which the intervention works (i.e., will it tend to move all patients somewhat, or some patients a lot, and is a common hazard present), a realistic assessment of the likely effect size and therefore the necessary sample size, and an understanding of what the intervention will cost if implemented in clinical practice. If these approaches are followed, then the risk of missing useful treatment effects for acute stroke will diminish.
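
    A hedged sketch of the kind of comparison discussed above: simulate modified Rankin Scale outcomes under an assumed shift toward better outcomes, then estimate how often a fixed dichotomy (mRS 0-2 versus 3-6) and a whole-scale ordinal test each reach significance. The outcome distributions, sample size and choice of tests below are illustrative assumptions, not trial data.

```python
# Toy power comparison: fixed dichotomous analysis versus an ordinal analysis
# of simulated modified Rankin Scale (mRS, 0-6) outcomes. All probabilities
# and sample sizes are invented for illustration.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
mrs_levels = np.arange(7)
p_control = np.array([0.08, 0.12, 0.15, 0.18, 0.20, 0.12, 0.15])
# Assumed treatment effect: probability mass shifted toward lower (better) mRS.
p_treat = np.array([0.12, 0.15, 0.17, 0.18, 0.17, 0.09, 0.12])

def estimated_power(n_per_arm=300, n_sims=1000, alpha=0.05):
    wins_binary, wins_ordinal = 0, 0
    for _ in range(n_sims):
        ctrl = rng.choice(mrs_levels, n_per_arm, p=p_control)
        trt = rng.choice(mrs_levels, n_per_arm, p=p_treat)
        # Fixed dichotomy: "good outcome" = mRS 0-2
        table = [[(trt <= 2).sum(), (trt > 2).sum()],
                 [(ctrl <= 2).sum(), (ctrl > 2).sum()]]
        wins_binary += chi2_contingency(table)[1] < alpha
        # Ordinal analysis across the whole scale (Mann-Whitney U test)
        wins_ordinal += mannwhitneyu(trt, ctrl, alternative="less")[1] < alpha
    return wins_binary / n_sims, wins_ordinal / n_sims

print(estimated_power())   # how often each analysis detects the assumed shift
```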

    A New Method for Calculating Arrival Distribution of Ultra-High Energy Cosmic Rays above 10^19 eV with Modifications by the Galactic Magnetic Field

    We present a new method for calculating the arrival distribution of UHECRs including modifications by the galactic magnetic field. We perform numerical simulations of UHE anti-protons, which are injected isotropically at the Earth, in the Galaxy, and record the directions of their velocities at the Earth and outside the Galaxy for all of the trajectories. We then select some of them so that the resultant mapping of the velocity directions outside the Galaxy of the selected trajectories corresponds to a given source location scenario, applying Liouville's theorem. We also consider energy loss processes of UHE protons in intergalactic space. Applying this method to the source location scenario adopted in our recent study, which can explain the AGASA observations above 4 × 10^19 eV, we calculate the arrival distribution of UHECRs including lower-energy (E > 10^19 eV) ones. We find that our source model can reproduce the large-scale isotropy and the small-scale anisotropy in the UHECR arrival distribution above 10^19 eV observed by the AGASA. We also demonstrate the UHECR arrival distribution above 10^19 eV with the event numbers expected by future experiments in the next few years. An interesting feature of the resultant arrival distribution is the arrangement of the clustered events in the order of their energies, reflecting the directions of the galactic magnetic field; this was also pointed out by Alvarez-Muniz et al. (2002). This feature will allow us to obtain some kind of information about the composition of UHECRs and the magnetic field with an increasing amount of data. Comment: 10 pages, 8 figures, to appear in the Astrophysical Journal.
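
    The back-tracking idea can be sketched as follows: ultra-relativistic anti-protons are started at the Earth with isotropic directions and followed until they leave the Galaxy, recording the escape directions. The uniform field, the spherical Galaxy boundary and the step size below are purely illustrative stand-ins for the realistic Galactic magnetic field model used in the paper.

```python
# Minimal sketch of back-tracking: propagate ultra-relativistic anti-protons
# injected at Earth through a magnetic field and record their velocity
# directions when they leave the Galaxy. A uniform 1 microGauss field and a
# spherical "Galaxy" of radius 20 kpc are illustrative assumptions only.
import numpy as np

R_GALAXY = 20.0                              # assumed escape radius [kpc]
B_HAT = np.array([0.0, 1.0, 0.0])            # assumed uniform field direction

def gyroradius_kpc(energy_eev, b_microgauss=1.0):
    # r_L ~ 1.1 kpc * (E / 1 EeV) / (B / 1 microGauss) for a (anti-)proton
    return 1.1 * energy_eev / b_microgauss

def backtrack(direction, energy_eev, step=0.05,
              earth=np.array([-8.5, 0.0, 0.0])):
    """Follow one particle from Earth until it leaves the assumed Galaxy.

    Returns the unit velocity direction at escape."""
    r_l = gyroradius_kpc(energy_eev)
    pos = earth.copy()
    n = np.asarray(direction, float)
    n /= np.linalg.norm(n)
    for _ in range(200_000):                 # hard cap guarantees termination
        if np.linalg.norm(pos) >= R_GALAXY:
            break
        # Direction rotates about the field with gyroradius r_L:
        # d(n_hat)/ds = (1/r_L) n_hat x b_hat (sign convention illustrative).
        n = n + (step / r_l) * np.cross(n, B_HAT)
        n /= np.linalg.norm(n)               # keep the direction normalised
        pos = pos + step * n
    return n

# Example: isotropic injection at Earth, escape directions recorded at 10 EeV.
rng = np.random.default_rng(42)
dirs = rng.normal(size=(200, 3))
escape_dirs = np.array([backtrack(d, energy_eev=10.0) for d in dirs])
```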

    Steps toward the power spectrum of matter. II. The biasing correction with sigma_8 normalization

    A new method to determine the bias parameter of galaxies relative to matter is suggested. The method is based on the assumption that gravity is the dominating force determining the formation of structure in the Universe. Due to gravitational instability, galaxy formation is a threshold process: in low-density environments galaxies do not form and matter remains in primordial form. We investigate the influence of the presence of the void and clustered populations on the power spectra of matter and galaxies. The power spectrum of galaxies is similar to the power spectrum of matter; the fraction of total matter in the clustered population determines the difference between the amplitudes of fluctuations of matter and galaxies, i.e. the bias factor. To determine the fraction of matter in the void and clustered populations we perform numerical simulations. The fraction of matter in galaxies at the present epoch is found using a calibration through the sigma_8 parameter. Comment: LaTeX (sty files added), 31 pages, 4 PostScript figures embedded, Astrophysical Journal (accepted).
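
    The bookkeeping behind the bias factor can be sketched under a strong simplifying assumption: if galaxies trace only the clustered fraction of matter and the matter left in voids contributes negligibly to the fluctuations, then the matter amplitude is suppressed by that fraction and the bias is roughly its inverse. The numerical values below are placeholders, not results from the paper.

```python
# Illustrative bookkeeping for the threshold-biasing picture described above:
# if galaxies trace only the clustered fraction F_c of matter and void matter
# is assumed to be essentially unclustered, the fluctuation amplitudes obey
# delta_matter ~ F_c * delta_galaxies, i.e. b ~ 1 / F_c. Values are placeholders.

def bias_from_clustered_fraction(f_clustered: float) -> float:
    """Bias of galaxies relative to matter, b = sigma_gal / sigma_matter."""
    if not 0.0 < f_clustered <= 1.0:
        raise ValueError("clustered matter fraction must be in (0, 1]")
    return 1.0 / f_clustered

def sigma8_galaxies(sigma8_matter: float, f_clustered: float) -> float:
    """Galaxy fluctuation amplitude implied by a matter sigma_8 normalisation."""
    return bias_from_clustered_fraction(f_clustered) * sigma8_matter

# Example (placeholder numbers): ~70% of matter in the clustered population and
# a matter normalisation of sigma_8 = 0.9 would imply sigma_8(gal) ~ 1.29.
print(sigma8_galaxies(0.9, 0.7))
```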

    Increasing Distributed Generation Penetration using Soft Normally-Open Points

    This paper considers the effects of various voltage control solutions on the allowable level of distributed generation (DG) that can be installed before voltage violations occur. In particular, the voltage control solution focused on is the implementation of 'soft' normally-open points (SNOPs), a term which refers to power electronic devices installed in place of a normally-open point in a medium-voltage distribution network, allowing control of the real and reactive power flows between the end points of the installation site. While other benefits of SNOP installation are discussed, the intent of this paper is to determine whether SNOPs are a viable alternative to other voltage control strategies for this particular application. As such, the focus is on the SNOP's ability to affect the voltage profile along feeders within a distribution system, with other voltage control options used for comparative purposes. Results from studies on multiple network models with varying topologies are presented, and a case study which considers the economic benefits of increasing feasible DG penetration is also given.
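
    The voltage mechanism underlying this comparison can be sketched with the usual feeder approximation dV ≈ (R·P + X·Q)/V: DG export raises the voltage at the end of a feeder, and a SNOP-style reactive power exchange at the open point can pull it back within limits. The impedances, power values and voltage limit below are invented for illustration and are not taken from the studied networks.

```python
# Rough illustration of why a SNOP can raise DG hosting capacity: the voltage
# change at the end of a radial feeder is approximately dV ~ (R*P + X*Q) / V
# in per unit. DG export (reverse power flow) raises the voltage; absorbing
# reactive power at the normally-open point pulls it back toward 1.0 pu.
# Feeder impedance and power values are invented for illustration.

def feeder_voltage_pu(p_export_mw, q_mvar, r_pu=0.04, x_pu=0.06, s_base_mva=10.0):
    """Approximate receiving-end voltage on a single feeder (per unit)."""
    p_pu = p_export_mw / s_base_mva
    q_pu = q_mvar / s_base_mva
    # Export raises the local voltage, hence the positive sign here.
    return 1.0 + r_pu * p_pu + x_pu * q_pu

LIMIT = 1.05                                     # assumed upper voltage limit
print(feeder_voltage_pu(5.0, 0.0))               # DG export alone -> 1.020 pu
print(feeder_voltage_pu(12.0, 0.0))              # more DG -> 1.048 pu, near LIMIT
print(feeder_voltage_pu(12.0, -4.0))             # SNOP absorbs 4 Mvar -> ~1.024 pu
```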

    Nuclear Matter on a Lattice

    We investigate nuclear matter on a cubic lattice. An exact thermal formalism is applied to nucleons with a Hamiltonian that accommodates on-site and next-neighbor parts of the central, spin- and isospin-exchange interactions. We describe the nuclear matter Monte Carlo methods, which contain elements from shell model Monte Carlo methods and from numerical simulations of the Hubbard model. We show that the energy and basic saturation properties of nuclear matter can be reproduced. Evidence of a first-order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as the compressibility, heat capacity, entropy and grand potential. We compare the symmetry energy and first sound velocities with the literature and find reasonable agreement. Comment: 23 pages, 8 figures (some in color), to be submitted to Phys. Rev.
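
    As a heavily simplified illustration of lattice Monte Carlo sampling of this general kind, the sketch below runs Metropolis updates for a classical lattice gas with an attractive nearest-neighbour coupling on a cubic lattice. It is only an analogue of the sampling idea, not the paper's thermal shell-model-style formalism for nucleons, and all parameter values are arbitrary.

```python
# Heavily simplified classical lattice-gas analogue: occupation numbers
# n_i in {0, 1} on a cubic lattice, an attractive nearest-neighbour coupling V
# and a chemical potential MU, sampled with Metropolis updates. Illustration of
# the sampling idea only; parameter values are arbitrary.
import numpy as np

L = 6                   # lattice edge (6^3 sites)
V = 1.0                 # nearest-neighbour attraction (arbitrary units)
MU = -2.5               # chemical potential
T = 1.0                 # temperature
rng = np.random.default_rng(0)
occ = rng.integers(0, 2, size=(L, L, L))

def local_energy_change(occ, i, j, k):
    """Energy change from flipping the occupation of site (i, j, k)."""
    neighbours = (
        occ[(i + 1) % L, j, k] + occ[(i - 1) % L, j, k] +
        occ[i, (j + 1) % L, k] + occ[i, (j - 1) % L, k] +
        occ[i, j, (k + 1) % L] + occ[i, j, (k - 1) % L]
    )
    delta_n = (1 - occ[i, j, k]) - occ[i, j, k]
    # H = -V * sum_<ij> n_i n_j - MU * sum_i n_i
    return -V * delta_n * neighbours - MU * delta_n

for sweep in range(2000):
    for _ in range(L ** 3):
        i, j, k = rng.integers(0, L, size=3)
        dE = local_energy_change(occ, i, j, k)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            occ[i, j, k] = 1 - occ[i, j, k]
    if sweep % 500 == 0:
        print(sweep, occ.mean())     # average filling as sampling proceeds
```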