    Optimal design of cluster randomized trials allowing unequal allocation of clusters and unequal cluster size between arms

    Get PDF
    There are sometimes cost, scientific, or logistical reasons to allocate individuals unequally in an individually randomized trial. In cluster randomized trials we can allocate clusters unequally and/or allow different cluster sizes between trial arms. We consider parallel-group designs with a continuous outcome, and optimal designs that require the smallest number of individuals to be measured given the number of clusters. Previous authors have derived the optimal allocation ratio for clusters under different variances and/or intracluster correlations (ICCs) between arms, allowing different but prespecified cluster sizes by arm. We derive closed-form expressions to identify the optimal proportions of clusters and of individuals measured for each arm, thereby defining optimal cluster sizes, when cluster size can be chosen freely. When ICCs differ between arms but the variance is equal, the optimal design allocates more than half the clusters to the arm with the higher ICC, but (typically only slightly) less than half the individuals, and hence a smaller cluster size. We also describe optimal design under constraints on the number of clusters or cluster size in one or both arms. This methodology allows trialists to consider a range for the number of clusters in the trial and, for each, to identify the optimal design. Unless there is clear prior evidence for the ICC and variance by arm, a range of values will need to be considered. Researchers should choose a design with adequate power across the range, while also keeping enough clusters in each arm to permit the intended analysis method.
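    As a rough illustration of the design problem above (not the paper's closed-form solution), the sketch below does a brute-force search: for a fixed total number of clusters it varies the split of clusters and the cluster size in each arm, evaluates the standard variance formula for a difference in cluster-level means, and returns the design that measures the fewest individuals while meeting a target variance. All parameter values (variances, ICCs, cluster budget, target variance) are hypothetical.

```python
# Brute-force sketch of the optimal-allocation problem for a parallel-group
# cluster randomized trial with arm-specific variances and ICCs.
import itertools

def crt_variance(k1, m1, k2, m2, s1=1.0, s2=1.0, icc1=0.05, icc2=0.01):
    """Variance of the difference in arm means: sum of per-arm design effects."""
    return (s1 ** 2 * (1 + (m1 - 1) * icc1) / (k1 * m1)
            + s2 ** 2 * (1 + (m2 - 1) * icc2) / (k2 * m2))

def cheapest_design(total_clusters=30, target_var=0.02, max_m=60):
    """Fewest individuals measured that achieves the target variance."""
    best = None
    for k1 in range(1, total_clusters):          # clusters in arm 1
        k2 = total_clusters - k1                 # clusters in arm 2
        for m1, m2 in itertools.product(range(1, max_m + 1), repeat=2):
            if crt_variance(k1, m1, k2, m2) <= target_var:
                n_total = k1 * m1 + k2 * m2
                if best is None or n_total < best[0]:
                    best = (n_total, k1, m1, k2, m2)
    return best  # (individuals, clusters arm 1, size arm 1, clusters arm 2, size arm 2)

if __name__ == "__main__":
    n, k1, m1, k2, m2 = cheapest_design()
    print(f"{n} individuals: {k1} clusters of {m1} (arm 1) vs {k2} clusters of {m2} (arm 2)")
```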

    Optimal design of cluster randomised trials with continuous recruitment and prospective baseline period

    Get PDF
    BACKGROUND: Cluster randomised trials, like individually randomised trials, may benefit from a baseline period of data collection. We consider trials in which clusters prospectively recruit or identify participants as a continuous process over a given calendar period, and ask whether, and for how long, investigators should collect baseline data as part of the trial in order to maximise precision. METHODS: We show how to calculate and plot the variance of the treatment effect estimator for different lengths of baseline period in a range of scenarios, and offer general advice. RESULTS: In some circumstances it is optimal not to include a baseline, while in others there is an optimal duration for the baseline. All other things being equal, the circumstances in which it is preferable not to include a baseline period are those with a smaller recruitment rate, a smaller intracluster correlation, greater decay in the intracluster correlation over time, or a wider transition period between recruitment under control and intervention conditions. CONCLUSION: The variance of the treatment effect estimator can be calculated numerically and plotted against the duration of baseline to inform design. It would be of interest to extend these investigations to cluster randomised trial designs with more than two randomised sequences of control and intervention conditions, including stepped wedge designs.
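    The calculation described in METHODS can be sketched numerically. The code below is a simplified, discretised stand-in: it fixes the total calendar period, treats recruitment as yielding equal cluster-period sizes, assumes an exponentially decaying intracluster correlation between periods, and computes the GLS variance of the treatment effect estimator for each candidate baseline duration. The model and all numbers are illustrative assumptions, not the paper's specification (which allows continuous recruitment and a transition period).

```python
# GLS variance of the treatment effect for a parallel cluster design with a
# prospective baseline of n_baseline periods, under a decaying-ICC model.
import numpy as np

def treatment_effect_variance(n_baseline, total_periods=12, n_clusters=20,
                              m_per_period=10, sigma2=1.0, icc=0.05, decay=0.9):
    T = total_periods
    tau2 = icc * sigma2 / (1 - icc)                      # between-cluster variance
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    V_block = tau2 * decay ** lags + np.eye(T) * sigma2 / m_per_period
    V = np.kron(np.eye(n_clusters), V_block)             # covariance of cluster-period means
    rows = []
    for c in range(n_clusters):
        intervention_cluster = c < n_clusters // 2       # half the clusters randomised to intervention
        for t in range(T):
            period = np.zeros(T)
            period[t] = 1.0                              # period fixed effect
            treated = float(intervention_cluster and t >= n_baseline)
            rows.append(np.concatenate([period, [treated]]))
    X = np.array(rows)
    info = X.T @ np.linalg.solve(V, X)
    return np.linalg.inv(info)[-1, -1]                   # last entry = treatment effect variance

if __name__ == "__main__":
    for b in range(0, 6):                                # candidate baseline durations
        print(f"baseline periods: {b}, variance: {treatment_effect_variance(b):.4f}")
```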

    Statistical comparison of InSAR tropospheric correction techniques

    Get PDF
    Correcting for tropospheric delays is one of the largest challenges facing the interferometric synthetic aperture radar (InSAR) community. Spatial and temporal variations in temperature, pressure, and relative humidity create tropospheric signals in InSAR data, masking smaller surface displacements due to tectonic or volcanic deformation. Correction methods using weather model data, GNSS and/or spectrometer data have been applied in the past, but are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively, a correction can be estimated from the interferometric phase by assuming a linear or a power-law relationship between the phase and topography. Typically the challenge lies in separating deformation from tropospheric phase signals. In this study we performed a statistical comparison of state-of-the-art tropospheric corrections estimated from the MERIS and MODIS spectrometers, from a low- and a high-spatial-resolution weather model (ERA-I and WRF), and from both the conventional linear and the new power-law empirical methods. Our test regions include Southern Mexico, Italy, and El Hierro. We find that spectrometers give the largest reduction in tropospheric signal, but are limited to cloud-free and daylight acquisitions; we find a ~10–20% increase in RMSE with increasing cloud cover, consistent across methods. None of the other tropospheric correction methods consistently reduced tropospheric signals over different regions and times. We have released a new software package called TRAIN (Toolbox for Reducing Atmospheric InSAR Noise), which includes all of these state-of-the-art correction methods. We recommend that future developments aim towards combining the different correction methods in an optimal manner.
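    As a minimal sketch of the conventional linear empirical method mentioned above, the snippet below fits a straight line between unwrapped interferometric phase and elevation and removes the fitted trend. Real implementations such as the TRAIN toolbox are more careful (deformation masking, window-by-window estimation, the power-law variant); the arrays and the mask here are placeholders.

```python
# Linear phase-elevation correction: fit phase ~= K*h + c and subtract it.
import numpy as np

def linear_tropo_correction(phase, dem, mask=None):
    """Remove a linear phase-elevation trend from an unwrapped interferogram.

    phase, dem : 2-D arrays of unwrapped phase (rad) and elevation (m)
    mask       : boolean array selecting pixels assumed free of deformation
    """
    if mask is None:
        mask = np.isfinite(phase) & np.isfinite(dem)
    K, c = np.polyfit(dem[mask], phase[mask], 1)   # slope (rad/m) and intercept
    return phase - (K * dem + c), K

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dem = rng.uniform(0, 2000, size=(100, 100))             # synthetic elevation
    phase = 0.01 * dem + rng.normal(0, 0.3, dem.shape)      # synthetic stratified delay
    corrected, K = linear_tropo_correction(phase, dem)
    print(f"estimated K = {K:.4f} rad/m")
```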

    The Sentinel-1 constellation for InSAR applications: Experiences from the InSARAP project

    Get PDF
    The two-satellite Copernicus Sentinel-1 (S1) constellation became operational in September 2016, with the successful in-orbit commissioning of the S1B unit. During the commissioning phase and the early operational phase it has been confirmed that the interferometric performance of the constellation is excellent, with no observed phase anomalies. In this work we show an analysis of selected performance parameters for the S1 constellation, as well as initial results based on the data available from the first months of operations.

    Awareness without learning: A preliminary study exploring the effects of beachgoers' experiences on risk-taking behaviours

    Full text link
    Most drowning deaths on Australian beaches occur in locations not patrolled by lifeguards. At patrolled locations, where lifeguards supervise flagged areas between which beachgoers are encouraged to swim, the incidence of drowning is reduced. To date, risk prevention practices on coasts have focused on patrolled beaches, deploying warning signs at unpatrolled locations with the aim of raising public awareness of risk. What remains unexplored is the potential for learning and behaviour change to transfer from patrolled to unpatrolled beaches through beachgoers' experiences and interactions with lifeguards. The aim of this preliminary study is to explore the risk perceptions of beachgoers at a patrolled beach to establish if and how their experiences of beach risk and interactions with lifeguards affect their behaviours. Data were collected in Gerroa, Australia by engaging 49 beachgoers using a mixed survey-interview methodology. Results show that beachgoers are aware that they should ‘swim between the flags’, but many did not know the basis for the positioning of the safety flags. A key finding is that beachgoers express a clear desire for a skills-based model of community engagement that enables learning with lifeguards. This demonstrates a reflective public that desires skill development, which may transfer from patrolled to unpatrolled beaches to effect broader risk reduction on the Australian coast. Learning how to avoid site-specific rip hazards with lifeguards at the beach presents a promising, and previously unexplored, model for beach drowning risk prevention that has the potential to affect behaviour at unpatrolled beaches, providing an empirically supported alternative to prevailing deficit-based awareness-raising methods.

    Characterizing and correcting phase biases in short-term, multilooked interferograms

    Get PDF
    Interferometric Synthetic Aperture Radar (InSAR) is widely used to measure deformation of the Earth's surface over large areas and long time periods. A common strategy to overcome coherence loss in long-term interferograms is to use multiple multilooked shorter interferograms, which can cover the same time period while maintaining coherence. However, it has recently been shown that using this strategy can introduce a bias (also referred to as a “fading signal”) in the interferometric phase. We isolate the signature of the phase bias by constructing “daisy chain” sums of short-term interferograms of different lengths covering identical 1-year time intervals. This shows that shorter interferograms are more affected by the phenomenon and that the degree of the effect depends on ground cover type; cropland and forested pixels have significantly larger bias than urban pixels, and the bias for cropland mimics subsidence throughout the year, whereas for forest it mimics subsidence in the spring and heave in the autumn. We propose a method for correcting the phase bias, based on the assumption, borne out by our observations, that the bias in an interferogram is linearly related to the sum of the biases in shorter interferograms spanning the same time. We tested the algorithm over a study area in western Turkey by comparing average velocities against results from a phase-linking approach, which estimates the single primary phases from all the interferometric pairs and has been shown to be almost insensitive to the phase bias. Our corrected velocities agree well with those from the phase-linking approach. Our approach can be applied to global compilations of short-term interferograms and provides accurate long-term velocity estimation without requiring coherence in long-term interferograms.
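    A minimal sketch of the daisy-chain diagnostic described above: sum consecutive short interferograms spanning the same interval and difference the sums built from two different temporal baselines; since the true displacement cancels, the difference isolates the bias signature (plus noise and any unwrapping errors). The ifgs dictionary, its epoch indexing, and the step sizes are hypothetical conveniences for illustration, not the authors' processing chain.

```python
# Daisy-chain sums of multilooked, unwrapped interferograms and their difference.
import numpy as np

def daisy_chain(ifgs, start, stop, step):
    """Sum consecutive `step`-epoch interferograms covering [start, stop]."""
    total = None
    for s in range(start, stop, step):
        phase = ifgs[(s, s + step)]                      # phase array for epochs (s, s+step)
        total = phase.copy() if total is None else total + phase
    return total

def bias_signature(ifgs, start, stop, short_step=1, long_step=2):
    """Difference of two daisy chains spanning the same interval.

    (stop - start) must be a multiple of both steps; the displacement cancels,
    leaving the differential phase bias plus noise.
    """
    return (daisy_chain(ifgs, start, stop, short_step)
            - daisy_chain(ifgs, start, stop, long_step))
```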

    Efficient and flexible simulation-based sample size determination for clinical trials with multiple design parameters

    Get PDF
    Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of simulation has, however, restricted its application to only the simplest sample size determination problems, often minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated by simulation, and can be implemented using existing statistical software packages. We illustrate its application to a sample size determination problem involving complex clustering structures, two primary endpoints, and small-sample considerations.
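    A stripped-down, one-dimensional sketch of the surrogate-model idea: estimate power by simulation at a handful of candidate sample sizes, smooth the noisy estimates with a non-parametric regression (a Gaussian process here), and read off the smallest sample size whose predicted power reaches the target. The trial model (a two-arm t-test), effect size, kernel, and grid are placeholder assumptions; the paper's framework optimises over several design parameters and conflicting criteria.

```python
# Simulation-based power on a coarse grid, smoothed by a GP surrogate.
import numpy as np
from scipy import stats
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def simulated_power(n_per_arm, effect=0.4, n_sims=500, alpha=0.05, rng=None):
    """Monte Carlo power estimate for a two-arm t-test (placeholder trial model)."""
    rng = rng or np.random.default_rng(0)
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_arm)
        b = rng.normal(effect, 1.0, n_per_arm)
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return rejections / n_sims

# noisy power estimates on a coarse grid of candidate sample sizes
candidates = np.arange(20, 201, 20)
power_hat = np.array([simulated_power(int(n)) for n in candidates])

# fit a non-parametric surrogate and predict power on a fine grid
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True)
gp.fit(candidates.reshape(-1, 1), power_hat)
fine = np.arange(20, 201).reshape(-1, 1)
pred = gp.predict(fine)

target = 0.9
feasible = fine[pred >= target]
print("smallest n per arm with predicted power >= 0.9:",
      int(feasible.min()) if feasible.size else "none in range")
```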