214 research outputs found

    Sequential Implementation of Monte Carlo Tests with Uniformly Bounded Resampling Risk

    This paper introduces an open-ended sequential algorithm for computing the p-value of a test using Monte Carlo simulation. It guarantees that the resampling risk, the probability of a decision different from the one based on the theoretical p-value, is uniformly bounded by an arbitrarily small constant. Previously suggested sequential or non-sequential algorithms, using a bounded sample size, do not have this property. Although the algorithm is open-ended, the expected number of steps is finite, except when the p-value lies on the threshold between rejecting and not rejecting. The algorithm is suitable as a standard for implementing tests that require (re-)sampling. It can also be used in other situations: to check whether a test is conservative, iteratively to implement double bootstrap tests, and to determine the sample size required for a certain power. Comment: Major revision; 15 pages, 4 figures
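    The core idea, drawing resampled test statistics one at a time and stopping as soon as the Monte Carlo evidence settles which side of the significance threshold the p-value falls on, can be sketched as follows. This is a minimal illustration using a union-bound confidence-sequence stopping rule, not the paper's actual boundaries; the `exceed_sampler` callback, the error-spending choice eps/(n(n+1)) and the step cap are assumptions made for the example.

```python
import numpy as np
from scipy.stats import beta

def sequential_mc_pvalue(exceed_sampler, alpha=0.05, eps=1e-3, max_steps=100_000):
    """Sequentially estimate a Monte Carlo p-value and stop once the decision
    'p <= alpha' vs 'p > alpha' is settled, with the risk of a wrong decision
    roughly bounded by eps via a union bound over per-step intervals.

    exceed_sampler() should return 1 if a freshly simulated test statistic is
    at least as extreme as the observed one, else 0.
    """
    hits = 0
    for n in range(1, max_steps + 1):
        hits += exceed_sampler()
        # Spend the error budget over steps: sum_n eps / (n * (n + 1)) = eps.
        level = eps / (n * (n + 1))
        # Clopper-Pearson interval for the true p-value after n draws.
        lo = beta.ppf(level / 2, hits, n - hits + 1) if hits > 0 else 0.0
        hi = beta.ppf(1 - level / 2, hits + 1, n - hits) if hits < n else 1.0
        if hi < alpha:
            return {"decision": "reject", "p_hat": hits / n, "steps": n}
        if lo > alpha:
            return {"decision": "do not reject", "p_hat": hits / n, "steps": n}
    return {"decision": "undecided", "p_hat": hits / n, "steps": n}

# Example: p-value of observing x_obs = 1.8 under a standard normal null,
# estimated by repeatedly drawing from the null distribution.
rng = np.random.default_rng(0)
x_obs = 1.8
print(sequential_mc_pvalue(lambda: int(rng.standard_normal() >= x_obs)))
```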

    Patchy He II reionization and the physical state of the IGM

    We present a Monte-Carlo model of He II reionization by QSOs and its effect on the thermal state of the clumpy intergalactic medium (IGM). The model assumes that patchy reionization develops as a result of the discrete distribution of QSOs. It includes various recipes for the propagation of the ionizing photons, and treats photo-heating self-consistently. The model provides the fraction of He III, the mean temperature in the IGM, and the He II mean optical depth, all as a function of redshift. It also predicts the evolution of the local temperature versus density relation during reionization. Our findings are as follows: The fraction of He III increases gradually until it becomes close to unity at z ∼ 2.8–3.0. The He II mean optical depth decreases from τ ∼ 10 at z ≥ 3.5 to τ ≤ 0.5 at z ≤ 2.5. The mean temperature rises gradually between z ∼ 4 and z ∼ 3 and declines slowly at lower redshifts. The model predicts a flattening of the temperature-density relation with a significant increase in the scatter during reionization at z ∼ 3. Towards the end of reionization the scatter is reduced and a tight relation is re-established. This scatter should be incorporated in the analysis of the Lyα forest at z ≤ 3. Comparison with observational results for the optical depth and the mean temperature at moderate redshifts constrains several key physical parameters. Comment: 18 pages, 9 figures; changed content. Accepted for publication in MNRAS
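    As a flavour of what "patchy reionization from a discrete QSO population" means in practice, the toy Monte Carlo below places randomly positioned ionized bubbles on a grid and tracks the He III filling factor as redshift decreases. Every number in it (box size, QSO rate, bubble radii, redshift steps) is invented for illustration and has nothing to do with the calibrated recipes, photon propagation or photo-heating treatment used in the paper.

```python
import numpy as np

# Toy Monte Carlo of patchy He II -> He III reionization by discrete QSOs.
rng = np.random.default_rng(1)
N = 64                                        # grid cells per side of a comoving box
ionized = np.zeros((N, N, N), dtype=bool)     # He III mask
gx, gy, gz = np.meshgrid(*(np.arange(N),) * 3, indexing="ij")

for z in np.arange(4.0, 2.4, -0.1):           # step down in redshift
    n_new = rng.poisson(1 + 5 * (4.0 - z))    # more QSOs switch on at later times (made up)
    for _ in range(n_new):
        cx, cy, cz = rng.integers(0, N, 3)    # random QSO position
        r = rng.uniform(3, 8)                 # ionized-bubble radius in cells (made up)
        ionized |= (gx - cx) ** 2 + (gy - cy) ** 2 + (gz - cz) ** 2 <= r ** 2
    print(f"z = {z:.1f}  He III filling factor = {ionized.mean():.2f}")
```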

    Minimax Estimation of a Normal Mean Vector for Arbitrary Quadratic Loss and Unknown Covariance Matrix

    Let X be an observation from a p-variate normal distribution (p ≥ 3) with mean vector θ and unknown positive definite covariance matrix Σ. It is desired to estimate θ under the quadratic loss L(δ, θ, Σ) = (δ − θ)^t Q (δ − θ) / tr(QΣ), where Q is a known positive definite matrix. Estimators of the following form are considered: δ_c(X, W) = (I − c α Q^{-1} W^{-1} / (X^t W^{-1} X)) X, where W is a p × p random matrix with a Wishart(Σ, n) distribution (independent of X), α is the minimum characteristic root of QW/(n − p − 1), and c is a positive constant. For appropriate values of c, δ_c is shown to be minimax and better than the usual estimator δ_0(X) = X.
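    Written out in code, the estimator is a single shrinkage step applied to X. The sketch below is a direct numpy transcription of the formula stated in the abstract; the Wishart draw, the choice Q = I and the value c = 1 are illustrative assumptions, not the paper's recommended range for c.

```python
import numpy as np

def delta_c(X, W, Q, n, c=1.0):
    """Shrinkage estimator delta_c(X, W) = (I - c*alpha*Q^{-1}W^{-1}/(X' W^{-1} X)) X,
    with alpha the smallest characteristic root of QW/(n - p - 1)."""
    p = X.shape[0]
    W_inv = np.linalg.inv(W)
    Q_inv = np.linalg.inv(Q)
    # Eigenvalues of QW are real and positive because Q and W are positive definite.
    alpha = np.linalg.eigvals(Q @ W).real.min() / (n - p - 1)
    shrink = c * alpha / (X @ W_inv @ X)
    return X - shrink * (Q_inv @ W_inv @ X)

# Small synthetic comparison with the usual estimator delta_0(X) = X.
rng = np.random.default_rng(42)
p, n = 5, 20
theta = np.zeros(p)
Sigma = np.diag(rng.uniform(0.5, 2.0, p))
Q = np.eye(p)
X = rng.multivariate_normal(theta, Sigma)
# W ~ Wishart(Sigma, n): sum of n outer products of N(0, Sigma) draws.
W = sum(np.outer(v, v) for v in rng.multivariate_normal(np.zeros(p), Sigma, n))
print("delta_0:", X)
print("delta_c:", delta_c(X, W, Q, n, c=1.0))
```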

    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
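    The convergence claim in the last sentence is easy to see in the simplest setting. The sketch below is only an illustration of that property for a normal mean and the hypothesis θ ≤ 0, using the ordinary one-sided confidence distribution N(x̄, σ²/n); it is not the paper's general construction, and the function name and parameter values are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def confidence_level_theta_le_0(x, sigma=1.0):
    """Confidence level of 'theta <= 0' for an i.i.d. N(theta, sigma^2) sample,
    read off the one-sided confidence distribution N(xbar, sigma^2/n)."""
    n = len(x)
    return norm.cdf((0.0 - x.mean()) * np.sqrt(n) / sigma)

rng = np.random.default_rng(7)
for theta in (-0.3, 0.3):            # hypothesis true for -0.3, false for 0.3
    for n in (10, 100, 10_000):
        x = rng.normal(theta, 1.0, n)
        # The level tends to 1 when the hypothesis is true and to 0 when it is false.
        print(f"theta = {theta:+.1f}, n = {n:6d}: level = {confidence_level_theta_le_0(x):.3f}")
```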

    Inference for bounded parameters

    The estimation of a signal frequency count in the presence of background noise has had much discussion in the recent physics literature, and Mandelkern [1] brings the central issues to the statistical community, leading in turn to extensive discussion by statisticians. The primary focus in [1] and the accompanying discussion, however, is on the construction of a confidence interval. We argue that the likelihood function and the p-value function provide a comprehensive presentation of the information available from the model and the data. This is illustrated for Gaussian and Poisson models with lower bounds for the mean parameter.
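    For the Poisson case the two summaries are straightforward to tabulate. The sketch below evaluates the likelihood function and a simple lower-tail p-value function for an assumed observed count and known background; the particular numbers, the lower-tail convention and the 0.147 relative-likelihood cut-off are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import poisson

# Poisson signal-plus-background illustration: observed count n_obs with known
# background b and bounded signal mu >= 0, so the Poisson mean is b + mu.
n_obs, b = 7, 3.2
mu = np.linspace(0.0, 15.0, 301)               # grid over the bounded parameter

log_lik = poisson.logpmf(n_obs, b + mu)        # likelihood function of mu
p_val = poisson.cdf(n_obs, b + mu)             # lower-tail p-value (significance) function

rel_lik = np.exp(log_lik - log_lik.max())      # relative likelihood
supported = mu[rel_lik > 0.147]                # roughly a 95% likelihood-interval cut-off

print(f"maximum-likelihood mu on the grid: {mu[log_lik.argmax()]:.2f}")
print(f"mu with relative likelihood > 0.147: [{supported.min():.2f}, {supported.max():.2f}]")
print(f"p-value function at mu = 0 (background only): {p_val[0]:.3f}")
```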

    Effectiveness of physiotherapy exercise following hip arthroplasty for osteoarthritis: a systematic review of clinical trials

    Background: Physiotherapy has long been a routine component of patient rehabilitation following hip joint replacement. The purpose of this systematic review was to evaluate the effectiveness of physiotherapy exercise after discharge from hospital on function, walking, range of motion, quality of life and muscle strength, for osteoarthritic patients following elective primary total hip arthroplasty. Methods: Design: Systematic review, using the Cochrane Collaboration Handbook for Systematic Reviews of Interventions and the QUOROM statement. Database searches: AMED, CINAHL, EMBASE, KingsFund, MEDLINE, Cochrane Library (Cochrane reviews, Cochrane Central Register of Controlled Trials, DARE), PEDro, The Department of Health National Research Register. Handsearches: Physiotherapy, Physical Therapy, Journal of Bone and Joint Surgery (Britain) Conference Proceedings. No language restrictions were applied. Selection: Trials comparing physiotherapy exercise versus usual/standard care, or comparing two types of relevant exercise physiotherapy, following discharge from hospital after elective primary total hip replacement for osteoarthritis were reviewed. Outcomes: Functional activities of daily living, walking, quality of life, muscle strength and range of hip joint motion. Trial quality was extensively evaluated. Narrative synthesis plus meta-analytic summaries were performed to summarise the data. Results: Eight trials were identified. Trial quality was mixed. Generally poor trial quality, quantity and diversity prevented explanatory meta-analyses. The results were synthesised and meta-analytic summaries were used where possible to provide a formal summary of results. Results indicate that physiotherapy exercise after discharge following total hip replacement has the potential to benefit patients. Conclusion: Insufficient evidence exists to establish the effectiveness of physiotherapy exercise following primary hip replacement for osteoarthritis. Further well designed trials are required to determine the value of post-discharge exercise following this increasingly common surgical procedure.

    Cardiorespiratory and perceptual responses to self-regulated and imposed submaximal arm-leg ergometry

    Purpose: This study compared cardiorespiratory and perceptual responses to exercise using self-regulated and imposed power outputs distributed between the arms and legs. Methods: Ten males (age 21.7 ± 3.4 years) initially undertook incremental arm-crank ergometry (ACE) and cycle ergometry (CYC) tests to volitional exhaustion to determine peak power output (Wpeak). Two subsequent tests involved 20-min combined arm-leg ergometry (ALE) trials, using imposed and self-regulated protocols, both of which aimed to elicit an exercising heart rate of 160 beats·min⁻¹. During the imposed trial, arm and leg intensity were set at 40% of each ergometer-specific Wpeak. During the self-regulated trial, participants were asked to self-regulate cadence and resistance to achieve the target heart rate. Heart rate (HR), oxygen uptake (V̇O2), pulmonary ventilation (V̇E), and ratings of perceived exertion (RPE) were recorded continuously. Results: As expected, there were no differences between imposed and self-regulated trials for HR, V̇O2, and V̇E (all P ≥ 0.05). However, central RPE and local RPE for the arms were lower during self-regulated compared with imposed trials (P ≤ 0.05). The lower RPE during the self-regulated trial was related to preferential adjustments in how the arms (33 ± 5% Wpeak) and legs (46 ± 5% Wpeak) contributed to the exercise intensity. Conclusions: This study demonstrates that, despite similar metabolic and cardiovascular strain elicited by imposed and self-regulated ALE, the latter was perceived to be less strenuous, which is related to participants doing more work with the legs and less work with the arms to achieve the target intensity.

    Regression analysis with categorized regression calibrated exposure: some interesting findings

    BACKGROUND: Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for an exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, to include the categorized calibrated exposure in the main regression analysis. METHODS: We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). RESULTS: In cases where extra information is available through replicated measurements rather than validation data, regression calibration does not maintain important qualities of the true exposure distribution, so estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when applying the medians of each category in the analysis. CONCLUSION: Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a percentile scale. Relating back to the original scale of the exposure solves the problem. The conclusion applies to all regression models.
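    The phenomenon described in the results is easy to reproduce in a small simulation: when the calibrated exposure is a monotone transform of the replicate mean, categorizing it gives exactly the same quintile groups as categorizing the error-prone mean. The sketch below is a toy setup with invented distributions, two replicates and a linear outcome; it is not the paper's simulation design or the NOWAC analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(0.0, 1.0, n)                       # true exposure (unobserved)
w = x[:, None] + rng.normal(0.0, 1.0, (n, 2))     # two error-prone replicate measurements
w_bar = w.mean(axis=1)

# Classical regression calibration, using the replicates to estimate the error variance.
sigma_u2 = 0.5 * np.mean((w[:, 0] - w[:, 1]) ** 2)     # within-person error variance
lam = (w_bar.var() - sigma_u2 / 2.0) / w_bar.var()     # attenuation factor for a mean of 2 replicates
x_cal = w_bar.mean() + lam * (w_bar - w_bar.mean())    # E[X | w_bar] under joint normality

y = 0.5 * x + rng.normal(0.0, 1.0, n)                  # outcome generated from the true exposure

def quintile_means(exposure, outcome):
    """Mean outcome within quintiles of the supplied exposure variable."""
    cuts = np.quantile(exposure, [0.2, 0.4, 0.6, 0.8])
    groups = np.digitize(exposure, cuts)
    return np.array([outcome[groups == g].mean() for g in range(5)])

print("true exposure quintiles:       ", quintile_means(x, y).round(3))
print("naive (error-prone) quintiles: ", quintile_means(w_bar, y).round(3))
print("calibrated-exposure quintiles: ", quintile_means(x_cal, y).round(3))
```

    Because x_cal is a linear, increasing function of w_bar, the last two rows of output are identical, mirroring the abstract's point that the categorized corrected analysis can coincide with the naive one.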

    Subtraction of point sources from interferometric radio images through an algebraic forward modelling scheme

    We present a method for subtracting point sources from interferometric radio images via forward modelling of the instrument response, involving an algebraic non-linear minimization. The method is applied to simulated maps of the Murchison Wide-field Array but is generally useful in cases where only image data are available. After source subtraction, the residual maps show no statistical difference from the expected thermal noise distribution at any angular scale, indicating that the subtraction is highly effective. Simulations indicate that the errors in recovering the source parameters decrease with increasing signal-to-noise ratio, consistent with the theoretical measurement errors. In applying the technique to simulated snapshot observations with the Murchison Wide-field Array, we found that all 101 sources present in the simulation were recovered with an average position error of 10 arcsec and an average flux density error of 0.15 per cent. This led to a dynamic range increase of approximately 3 orders of magnitude. Since all the sources were deconvolved jointly, the subtraction was limited not by source sidelobes but by thermal noise. This technique is a promising deconvolution method for upcoming radio arrays with very large numbers of elements, and a candidate for the difficult task of subtracting foreground sources from observations of the 21-cm neutral hydrogen signal from the epoch of reionization.
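    A stripped-down, image-domain version of the idea, fitting all point-source positions and flux densities jointly against a forward model and checking the residual against the noise level, can be sketched as follows. The Gaussian "beam", the two sources, the noise level and the use of scipy's generic least_squares routine are assumptions made for this toy; the paper's scheme models the actual interferometric response.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy image-domain forward modelling: point sources convolved with a known PSF
# (a Gaussian stand-in for the synthesized beam), recovered jointly by
# non-linear least squares, followed by subtraction from the data.
N, sigma_psf, noise = 64, 2.0, 0.01
yy, xx = np.mgrid[0:N, 0:N]

def model(params):
    """Sum of PSF-convolved point sources; params = [x, y, flux] per source."""
    img = np.zeros((N, N))
    for x0, y0, flux in params.reshape(-1, 3):
        img += flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma_psf ** 2))
    return img

rng = np.random.default_rng(11)
true = np.array([20.3, 30.7, 1.0, 45.2, 12.9, 0.4])      # two sources: x, y, flux
data = model(true) + rng.normal(0, noise, (N, N))

# Fit all parameters jointly, starting from rough guesses (e.g. image peaks).
guess = np.array([21.0, 30.0, 0.8, 44.0, 14.0, 0.3])
fit = least_squares(lambda p: (model(p) - data).ravel(), guess)

residual = data - model(fit.x)
print("recovered parameters:", fit.x.round(2))
print("residual rms vs injected noise:", residual.std().round(4), noise)
```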