
    Minimum variance stratification of a finite population

    This paper considers the combined problem of allocation and stratification in order to minimise the variance of the expansion estimator of a total, taking into account that the population is finite. The proof of necessary minimum variance conditions utilises the Kuhn-Tucker Theorem. Stratified simple random sampling with non-negligible sampling fractions is an important design in sample surveys. We go beyond limiting assumptions that have often been used in the past, such as that the stratification variable equals the study variable or that the sampling fractions are small. We discuss what difference the sampling fractions make for the stratification. In particular, in many surveys the sampling fraction equals one for some strata. The main theorem of this paper is applied to two populations with different characteristics, one of them a business population and the other a small population of 284 Swedish municipalities. We study empirically the sensitivity to deviations from the optimal solution.
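
    A minimal numerical sketch of the allocation side of this problem, assuming a Neyman-type allocation under stratified simple random sampling with the finite population correction; the stratum sizes and standard deviations below are hypothetical, and the sketch does not reproduce the paper's Kuhn-Tucker-based conditions or its joint treatment of stratification and allocation.

# Hedged sketch: Neyman-type allocation under stratified SRS and the variance
# of the expansion estimator of a total with finite population correction.
# Stratum sizes N_h and standard deviations S_h are assumed known inputs;
# this is an illustration, not the paper's optimality conditions.

def neyman_allocation(N_h, S_h, n):
    """Allocate a total sample size n proportionally to N_h * S_h,
    capping each stratum at its size (so some strata may be take-all)."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    return [min(N, max(1, round(n * w / total))) for N, w in zip(N_h, weights)]

def variance_expansion_total(N_h, S_h, n_h):
    """Variance of the expansion estimator under stratified SRS:
    sum_h N_h^2 * (1 - n_h / N_h) * S_h^2 / n_h."""
    return sum(N ** 2 * (1 - n / N) * S ** 2 / n
               for N, S, n in zip(N_h, S_h, n_h))

# Hypothetical three-stratum population (e.g. a skewed business population)
N_h = [150, 100, 34]       # stratum sizes
S_h = [12.0, 40.0, 250.0]  # stratum standard deviations of the study variable
n_h = neyman_allocation(N_h, S_h, n=60)
print(n_h, variance_expansion_total(N_h, S_h, n_h))

    With these hypothetical figures the most dispersed stratum is taken in full (n_h = N_h), illustrating the abstract's point that the sampling fraction can equal one in some strata.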

    Ethical decisions facing the parish pastor


    Feeding back Information on Ineligibility from Sample Surveys to the Frame

    It is usually discovered in the data collection phase of a survey that some units in the sample are ineligible, even if the frame information has indicated otherwise. For example, in many business surveys a non-negligible proportion of the sampled units will have ceased trading since the latest update of the frame. This information may be fed back to the frame and used in subsequent surveys, thereby making forthcoming samples more efficient by avoiding the sampling of ineligible units. We investigate what effect the process of feeding back information on ineligibility may have on survey estimation, and derive an expression for the bias that can occur as a result of feeding back. The focus is on estimation of the total using the common expansion estimator. We obtain an estimator that is nearly unbiased in the presence of feedback. This estimator relies on consistent estimates of the numbers of eligible and ineligible units in the population being available.
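
    As a point of reference for the estimator discussed above, a minimal sketch of the common expansion estimator of a total when ineligible sampled units contribute zero; the bias correction derived in the paper, which relies on estimated numbers of eligible and ineligible units, is only indicated in the comments and not reproduced here.

# Hedged sketch: the common expansion estimator of a total under simple random
# sampling, treating ineligible sampled units as contributing y = 0. The
# nearly unbiased feed-back correction described in the abstract (which uses
# consistent estimates of the numbers of eligible and ineligible units in the
# population) is not reproduced here.

def expansion_total(y_values, eligible_flags, N, n):
    """(N / n) * sum of sampled y-values, with ineligible units set to zero."""
    total_y = sum(y if ok else 0.0 for y, ok in zip(y_values, eligible_flags))
    return (N / n) * total_y

# Hypothetical sample of 5 units drawn from a frame of N = 1000 units
y_sample = [12.0, 0.0, 7.5, 3.0, 20.0]
eligible = [True, False, True, True, True]   # second unit found to be ineligible
print(expansion_total(y_sample, eligible, N=1000, n=5))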

    Estimating the Undercoverage of a Sampling Frame due to Reporting Delays

    One of the imperfections of a sampling frame is miscoverage caused by delays in recording real-life events that change the eligibility of population units. For example, new units generally appear on the frame some time after they came into existence, and units that have ceased to exist are not removed from the frame immediately. We provide methodology for predicting the undercoverage due to delays in reporting new units. The approach presented here is novel in a business survey context, and is equally applicable to overcoverage due to delays in reporting the closure of units. As a special case, we also predict the number of new-born units per month. The methodology is applied to the principal business register in the UK, maintained by the Office for National Statistics.
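
    A rough sketch of the general scale-up idea, assuming that the distribution of reporting delays observed for past births can be used to predict how many recent births are not yet on the frame; the delay data, month counts and functional form here are illustrative assumptions and not the paper's model.

# Hedged sketch: predict frame undercoverage from an empirical distribution of
# reporting delays. Assumes the delays (in months) observed for past new units
# are representative of current ones; this illustrates the general scale-up
# idea only, not the methodology applied to the ONS business register.

from collections import Counter

def delay_cdf(observed_delays, max_delay):
    """Empirical probability that a new unit is reported within d months."""
    counts = Counter(observed_delays)
    total = sum(counts.values())
    cdf, cum = [], 0
    for d in range(max_delay + 1):
        cum += counts.get(d, 0)
        cdf.append(cum / total)
    return cdf

def predict_undercoverage(reported_so_far, months_elapsed, cdf):
    """Scale up the births reported so far by the probability that a birth
    would already have been reported; the difference is the predicted number
    of units still missing from the frame."""
    p_reported = cdf[min(months_elapsed, len(cdf) - 1)]
    total_hat = reported_so_far / p_reported
    return total_hat, total_hat - reported_so_far

# Hypothetical delays for past births and a birth month observed 2 months later
cdf = delay_cdf([0, 1, 1, 2, 3, 3, 4, 6], max_delay=12)
print(predict_undercoverage(reported_so_far=40, months_elapsed=2, cdf=cdf))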

    ESTIMATING TEMPORAL ASSOCIATIONS IN ELECTROCORTICOGRAPHIC (ECoG) TIME SERIES WITH FIRST ORDER PRUNING

    Granger causality (GC) is a statistical technique used to estimate temporal associations in multivariate time series. Many applications and extensions of GC have been proposed since its formulation by Granger in 1969. Here we control for potentially mediating or confounding associations in the context of event-related electrocorticographic (ECoG) time series. We propose a pruning approach that removes spurious connections and simultaneously reduces the number of estimations required to fit the effective connectivity graph. Additionally, we consider the potential of adjusted GC applied to independent components as a method to explore temporal relationships between underlying source signals. Both approaches overcome limitations encountered when estimating many parameters in multivariate time-series data, an increasingly common predicament in today's brain mapping studies.
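
    A minimal sketch of the underlying computation, assuming a standard lag-1 Granger causality F-test between pairs of series plus a pruning pass that re-tests each surviving edge while conditioning on one other series at a time; the function names and the pruning rule are illustrative and do not reproduce the thesis's exact first-order pruning procedure.

# Hedged sketch: pairwise Granger causality via nested least-squares models
# and an F-test, followed by a simple pruning pass that drops edges explained
# away by conditioning on a single third series. Illustrative only; not the
# thesis's exact algorithm.

import numpy as np
from scipy import stats

def _lagged_design(series_list, p):
    """Design matrix with an intercept and p lags of each series."""
    T = len(series_list[0])
    cols = [np.ones(T - p)]
    for s in series_list:
        for lag in range(1, p + 1):
            cols.append(s[p - lag:T - lag])
    return np.column_stack(cols)

def granger_pvalue(y, x, p=1, conditioning=()):
    """F-test of whether lags of x improve the prediction of y beyond
    lags of y (and of any conditioning series)."""
    target = y[p:]
    restricted = _lagged_design([y, *conditioning], p)
    full = _lagged_design([y, *conditioning, x], p)
    rss_r = np.sum((target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - full @ np.linalg.lstsq(full, target, rcond=None)[0]) ** 2)
    df1, df2 = p, len(target) - full.shape[1]
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return stats.f.sf(F, df1, df2)

def prune_edges(series, p=1, alpha=0.05):
    """Keep edge i -> j only if it is significant pairwise and remains
    significant when conditioning on each single other series."""
    k, edges = len(series), set()
    for i in range(k):
        for j in range(k):
            if i != j and granger_pvalue(series[j], series[i], p) < alpha and all(
                    granger_pvalue(series[j], series[i], p, (series[m],)) < alpha
                    for m in range(k) if m not in (i, j)):
                edges.add((i, j))
    return edges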

    Using surface waves recorded by a large mesh of three-element arrays to detect and locate disparate seismic sources

    Author Posting. © The Authors, 2018. This article is posted here by permission of The Royal Astronomical Society for personal use, not for redistribution. The definitive version was published in Geophysical Journal International 215 (2018): 942–958, doi:10.1093/gji/ggy316.

    Surface waves recorded by global arrays have proven useful for locating tectonic earthquakes and in detecting slip events depleted in high frequency, such as glacial quakes. We develop a novel method using an aggregation of small- to continental-scale arrays to detect and locate seismic sources with Rayleigh waves at 20–50 s period. The proposed method is a hybrid approach: a large-aperture aggregate array is first divided into Delaunay triangular subarrays for beamforming, and the surface wave propagation directions and arrival times resolved from the subarrays are then used as data in an inverse problem for the locations of the seismic sources and their origin times. The approach harnesses surface wave coherence and maximizes resolution of detections by combining measurements from stations spanning the whole U.S. continent. We tested the method with earthquakes, glacial quakes and landslides. The results show that the method can effectively resolve earthquakes as small as ∼M3 and exotic slip events in Greenland. We find that the resolution of the locations is non-uniform with respect to azimuth, and decays with increasing distance between the source and the array when no calibration events are available. The approach has a few advantages: the method is insensitive to seismic event type, it does not require a velocity model to locate seismic sources, and it is computationally efficient. The method can be adapted to real-time applications and can help in identifying new classes of seismic sources.

    WF is currently supported by the Postdoctoral Scholar Program at the Woods Hole Oceanographic Institution, with funding provided by the Weston Howland Jr. Postdoctoral Scholarship. This work was supported by National Science Foundation grant EAR-1358520 at Scripps Institution of Oceanography, UC San Diego.
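
    A minimal sketch of the plane-wave fit a single three-element subarray can perform, assuming the relative delay times have already been measured (for example by waveform cross-correlation); the coordinate convention, function names and synthetic example are assumptions for illustration and are not the authors' implementation.

# Hedged sketch: estimate the horizontal slowness vector (hence back azimuth
# and apparent velocity) of a Rayleigh wave crossing a three-element subarray
# from relative arrival times, by fitting a plane wave t_i = t0 + s . r_i.
# Illustrative only; not the authors' beamforming or inversion code.

import numpy as np

def plane_wave_slowness(coords_km, delays_s):
    """coords_km: (3, 2) station offsets (east, north) in km;
    delays_s: arrival times relative to an arbitrary reference (s).
    Returns (slowness vector in s/km, back azimuth in degrees clockwise
    from north, apparent velocity in km/s)."""
    A = np.column_stack([np.ones(len(coords_km)), coords_km])
    coef, *_ = np.linalg.lstsq(A, delays_s, rcond=None)
    s = coef[1:]
    baz = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0
    return s, baz, 1.0 / np.linalg.norm(s)

# Synthetic check: a wave arriving from due north at 3.5 km/s
coords = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])
true_s = np.array([0.0, -1.0 / 3.5])          # propagating toward the south
print(plane_wave_slowness(coords, coords @ true_s))

    The slowness and back-azimuth measurements from many such subarrays would then feed the location inversion described in the abstract.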

    COVARIATE-ADJUSTED NONPARAMETRIC ANALYSIS OF MAGNETIC RESONANCE IMAGES USING MARKOV CHAIN MONTE CARLO

    Permutation tests are useful for drawing inferences from imaging data because of their flexibility and their ability to capture features of the brain that are difficult to capture parametrically. However, most implementations of permutation tests ignore important confounding covariates. To employ covariate control in a nonparametric setting, we have developed a Markov chain Monte Carlo (MCMC) algorithm for conditional permutation testing using propensity scores. We present the first use of this methodology for imaging data. Our MCMC algorithm is an extension of algorithms developed to approximate exact conditional probabilities in contingency tables, logit, and log-linear models. An application of our nonparametric method to remove potential bias due to the observed covariates is presented.
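
    A simplified sketch of the kind of covariate-adjusted permutation test described above, assuming that group labels are permuted within strata of an estimated propensity score; the stratified shuffling is a stand-in for the MCMC conditional permutation algorithm, and the voxel-level test statistic is a plain mean difference, both illustrative assumptions.

# Hedged sketch: a covariate-adjusted permutation test that shuffles group
# labels only within propensity-score strata. A simplified stand-in for the
# dissertation's MCMC conditional permutation algorithm, shown only to
# illustrate conditioning the permutation distribution on observed covariates.

import numpy as np
from sklearn.linear_model import LogisticRegression

def stratified_permutation_pvalue(y, group, covariates, n_strata=5,
                                  n_perm=2000, rng=None):
    """Two-sample mean-difference test for voxel values y (NumPy array),
    permuting the binary group labels within strata of the estimated
    propensity score built from the covariate matrix."""
    rng = np.random.default_rng(rng)
    ps = LogisticRegression(max_iter=1000).fit(covariates, group).predict_proba(covariates)[:, 1]
    cuts = np.quantile(ps, np.linspace(0, 1, n_strata + 1)[1:-1])
    strata = np.digitize(ps, cuts)
    observed = y[group == 1].mean() - y[group == 0].mean()
    count = 0
    for _ in range(n_perm):
        perm = group.copy()
        for s in np.unique(strata):
            idx = np.where(strata == s)[0]
            perm[idx] = rng.permutation(perm[idx])   # shuffle labels within stratum
        stat = y[perm == 1].mean() - y[perm == 0].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)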