
    Coherent frequentism

    By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth, since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions. Comment: The confidence-measure theory of inference and decision is explicitly extended to vector parameters of interest. The derivation of upper and lower confidence levels from valid and nonconservative set estimators is formalized.
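    As a rough illustration of the convergence property described in this abstract (an illustration of ours, not code from the paper; the normal model, interval, and sample sizes are invented), the sketch below takes the confidence measure of a normal mean theta, here N(xbar, 1/n), and shows the confidence level it assigns to the interval hypothesis theta in [a, b] approaching 1 when the hypothesis is true and 0 when it is false:

```python
# Hedged sketch: confidence level of an interval hypothesis under a N(theta, 1) model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a, b = -0.5, 0.5                          # interval hypothesis about theta

def confidence_level(theta_true, n):
    x = rng.normal(theta_true, 1.0, size=n)
    xbar, se = x.mean(), 1.0 / np.sqrt(n)
    # mass that the confidence measure N(xbar, se^2) assigns to [a, b]
    return norm.cdf(b, xbar, se) - norm.cdf(a, xbar, se)

for n in (10, 100, 10_000):
    print(n,
          "true hypothesis: %.3f" % confidence_level(0.2, n),    # theta inside [a, b]
          "false hypothesis: %.3f" % confidence_level(1.0, n))   # theta outside [a, b]
```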

    Inference for bounded parameters

    The estimation of signal frequency count in the presence of background noise has had much discussion in the recent physics literature, and Mandelkern [1] brings the central issues to the statistical community, leading in turn to extensive discussion by statisticians. The primary focus in [1] and the accompanying discussion, however, is on the construction of a confidence interval. We argue that the likelihood function and p-value function provide a comprehensive presentation of the information available from the model and the data. This is illustrated for Gaussian and Poisson models with lower bounds for the mean parameter.
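    A minimal sketch of the kind of summary the authors advocate (our illustration; the observed mean, sigma, and sample size are invented): for a Gaussian model with known sigma and the mean bounded below by zero, report the likelihood function and the p-value function over the allowed range rather than a single interval:

```python
# Hedged sketch: likelihood and p-value functions for a bounded Gaussian mean (mu >= 0).
import numpy as np
from scipy.stats import norm

xbar, sigma, n = -0.3, 1.0, 25            # hypothetical observed mean, known sigma, sample size
se = sigma / np.sqrt(n)

mu = np.linspace(0.0, 1.0, 11)            # bounded parameter: only mu >= 0 is allowed
loglik = -0.5 * ((xbar - mu) / se) ** 2   # log-likelihood up to an additive constant
pvalue = norm.cdf((xbar - mu) / se)       # p-value (significance) function p(mu) = P(Xbar <= xbar_obs; mu)

for m, l, p in zip(mu, loglik, pvalue):
    print(f"mu={m:4.1f}  relative log-likelihood={l:8.2f}  p-value={p:5.3f}")
```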

    Measuring Program Outcome

    The Progress Evaluation Scales (PES) provide an efficient measuring device for evaluating current functioning, setting treatment goals, and assessing change over time in clinically relevant aspects of personal, social, and community adjustment. The PES can be completed by patients, significant others, and therapists, making it possible to obtain various points of view on the outcome of mental health services. This article describes the seven domains measured by the PES and the underlying dimensions they were designed to tap, and presents the generalizability, validity, and usefulness of the scales as applied to an adult mental health center population.

    Semi-numeric simulations of helium reionization and the fluctuating radiation background

    Recent He II Lyman-alpha forest observations at 2.0 < z < 2.7 point to a fluctuating He-ionizing background, which may be due to the end of helium reionization at this era. We present a fast, semi-numeric procedure that approximates detailed cosmological simulations. We compute the distribution of dark matter halos, the ionization state of helium, and the density field at z = 3 in broad agreement with recent simulations. Given our speed and flexibility, we investigate a range of ionizing-source and active-quasar prescriptions. Spanning a large area of parameter space, we find order-of-magnitude fluctuations in the He II ionization rate in the post-reionization regime. During reionization, the fluctuations are even stronger and develop a bimodal distribution, in contrast to semi-analytic models and the hydrogen equivalent. These distributions indicate a low-level ionizing background even at significant He II fractions.
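    The sketch below is a toy stand-in for the semi-numeric idea (our illustration, not the authors' code; the smoothed Gaussian density field, source placement at the densest cells, and optically thin 1/r^2 attenuation are all simplifying assumptions): it produces a spatially fluctuating He II ionization rate whose percentile spread illustrates the kind of fluctuations the abstract reports:

```python
# Toy semi-numeric sketch of a fluctuating He II ionizing background (assumptions as above).
import numpy as np

rng = np.random.default_rng(0)
N = 64                       # cells per side of the toy box
box_mpc = 200.0              # comoving box size in Mpc (illustrative)
dx = box_mpc / N

# 1. Toy density field: smoothed Gaussian noise as a stand-in for large-scale structure.
delta = rng.normal(size=(N, N, N))
k = np.fft.fftfreq(N, d=dx) * 2 * np.pi
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
R = 5.0                      # smoothing scale in Mpc
delta = np.fft.ifftn(np.fft.fftn(delta) * np.exp(-0.5 * k2 * R**2)).real

# 2. "Quasars": the densest cells host the ionizing sources, with log-normal luminosities.
n_qso = 50
flat = np.argsort(delta.ravel())[-n_qso:]
qpos = np.array(np.unravel_index(flat, delta.shape)).T * dx
lum = 10 ** rng.normal(0.0, 0.5, size=n_qso)     # relative luminosities

# 3. He II ionization rate in each cell: optically thin 1/r^2 sum over active sources.
cells = np.indices((N, N, N)).reshape(3, -1).T * dx
gamma = np.zeros(len(cells))
for p, L in zip(qpos, lum):
    r2 = np.sum((cells - p) ** 2, axis=1) + dx**2   # softened to avoid r = 0
    gamma += L / (4 * np.pi * r2)

# Scatter in the ionizing background across the box
print("Gamma 10th/50th/90th percentiles:", np.percentile(gamma, [10, 50, 90]))
```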

    A frequentist framework of inductive reasoning

    Reacting against the limitation of statistics to decision procedures, R. A. Fisher proposed for inductive reasoning the use of the fiducial distribution, a parameter-space distribution of epistemological probability transferred directly from limiting relative frequencies rather than computed according to the Bayes update rule. The proposal is developed as follows using the confidence measure of a scalar parameter of interest. (With the restriction to one-dimensional parameter space, a confidence measure is essentially a fiducial probability distribution free of complications involving ancillary statistics.) A betting game establishes a sense in which confidence measures are the only reliable inferential probability distributions. The equality between the probabilities encoded in a confidence measure and the coverage rates of the corresponding confidence intervals ensures that the measure's rule for assigning confidence levels to hypotheses is uniquely minimax in the game. Although a confidence measure can be computed without any prior distribution, previous knowledge can be incorporated into confidence-based reasoning. To adjust a p-value or confidence interval for prior information, the confidence measure from the observed data can be combined with one or more independent confidence measures representing previous agent opinion. (The former confidence measure may correspond to a posterior distribution with frequentist matching of coverage probabilities.) The representation of subjective knowledge in terms of confidence measures rather than prior probability distributions preserves approximate frequentist validity. Comment: major revision.
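    As a hedged illustration of combining a data-based confidence measure with one representing previous opinion (our sketch; the normal form of the confidence measures, the pooling rule, and the numbers are assumptions): for a normal mean with known variance each confidence measure is N(mean, se^2), and independent measures can be pooled by inverse-variance weighting, which again yields a normal confidence measure:

```python
# Hedged sketch: pooling independent normal confidence measures by precision weighting.
import numpy as np

def combine_normal_confidence(means, ses):
    """Combine independent N(mean, se^2) confidence measures into one normal measure."""
    w = 1.0 / np.square(ses)
    se_comb = 1.0 / np.sqrt(w.sum())
    mean_comb = np.sum(w * means) / w.sum()
    return mean_comb, se_comb

data_cm = (0.8, 0.20)    # confidence measure from the observed data (hypothetical numbers)
prior_cm = (0.2, 0.50)   # confidence measure representing earlier agent opinion
m, s = combine_normal_confidence(np.array([data_cm[0], prior_cm[0]]),
                                 np.array([data_cm[1], prior_cm[1]]))
print(f"combined confidence measure: N({m:.3f}, {s:.3f}^2)")
```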

    Reionisation scenarios and the temperature of the IGM

    We examine the temperature structure of the IGM due to the passage of individual ionisation fronts, using a radiative transfer (RT) code coupled to a particle-mesh (PM) N-body code. Multiple simulations were performed with different spectra of ionising radiation: a power law (varying as nu^{-0.5}), a miniquasar, a starburst, and a time-varying spectrum that evolves from a starburst spectrum to a power law. The RT is sufficiently resolved in time and space to correctly model both the ionisation state and the temperature across the ionisation front. We find the post-ionisation temperature of the reionised intergalactic medium (IGM) is sensitive to the spectrum of the source of ionising radiation, which may be used to place strong constraints on the nature of the sources of reionisation. Radiative transfer effects also produce large fluctuations in the HeII to HI number density ratio eta. The spread in values is smaller than measured, except for the time-varying spectrum. For this case, the spread evolves as the spectral nature of the ionising background changes. Large values of eta are found in partially ionised HeII regions as the power-law spectrum begins to dominate the starburst, suggesting that the large eta values measured may indicate the onset of the HeII reionisation epoch. Comment: Accepted to MNRAS. Version with high-resolution colour figures available at http://www.roe.ac.uk/~ert/Publications/Tittley_Meiksin_07.pd
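    A back-of-the-envelope sketch of why the post-ionisation temperature tracks the source spectrum (our illustration; the optically thin limit, the nu^{-3} cross-section shape, the HI threshold, and the soft starburst-like stand-in spectrum are assumptions): the temperature scales with the mean excess energy deposited per photoionization, which comes out appreciably larger for a hard power law than for a soft spectrum:

```python
# Hedged sketch: mean excess energy per photoionization for two illustrative spectra.
import numpy as np

h_nu0 = 13.6                               # HI ionization threshold in eV (illustrative choice)
E = np.linspace(h_nu0, 20 * h_nu0, 5000)   # photon energy grid in eV
sigma = (E / h_nu0) ** -3                  # approximate cross-section shape above threshold

def mean_excess_energy(J):
    """Mean energy deposited per photoionization for specific intensity J(E) on a uniform grid."""
    w = sigma * J / E                      # photoionization-rate weighting
    return np.sum((E - h_nu0) * w) / np.sum(w)

J_powerlaw = (E / h_nu0) ** -0.5           # hard power law, as in the paper
J_soft = (E / h_nu0) ** -3.0               # much softer, starburst-like stand-in

print("power law  <E_excess> ~ %.1f eV" % mean_excess_energy(J_powerlaw))
print("soft stand-in <E_excess> ~ %.1f eV" % mean_excess_energy(J_soft))
# The harder spectrum deposits more energy per ionization, leaving the reionised IGM hotter,
# which is the spectral sensitivity the abstract describes.
```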

    Effectiveness of physiotherapy exercise following hip arthroplasty for osteoarthritis: a systematic review of clinical trials

    Background: Physiotherapy has long been a routine component of patient rehabilitation following hip joint replacement. The purpose of this systematic review was to evaluate the effectiveness of physiotherapy exercise after discharge from hospital on function, walking, range of motion, quality of life and muscle strength, for osteoarthritic patients following elective primary total hip arthroplasty. Methods: Design: Systematic review, using the Cochrane Collaboration Handbook for Systematic Reviews of Interventions and the QUOROM Statement. Database searches: AMED, CINAHL, EMBASE, KingsFund, MEDLINE, Cochrane Library (Cochrane reviews, Cochrane Central Register of Controlled Trials, DARE), PEDro, The Department of Health National Research Register. Handsearches: Physiotherapy, Physical Therapy, Journal of Bone and Joint Surgery (Britain) Conference Proceedings. No language restrictions were applied. Selection: Trials comparing physiotherapy exercise versus usual/standard care, or comparing two types of relevant exercise physiotherapy, following discharge from hospital after elective primary total hip replacement for osteoarthritis were reviewed. Outcomes: Functional activities of daily living, walking, quality of life, muscle strength and range of hip joint motion. Trial quality was extensively evaluated. Narrative synthesis plus meta-analytic summaries were performed to summarise the data. Results: Eight trials were identified. Trial quality was mixed. Generally poor trial quality, quantity and diversity prevented explanatory meta-analyses. The results were synthesised, and meta-analytic summaries were used where possible to provide a formal summary of results. The results indicate that physiotherapy exercise after discharge following total hip replacement has the potential to benefit patients. Conclusion: Insufficient evidence exists to establish the effectiveness of physiotherapy exercise following primary hip replacement for osteoarthritis. Further well-designed trials are required to determine the value of post-discharge exercise following this increasingly common surgical procedure.

    Regression analysis with categorized regression calibrated exposure: some interesting findings

    BACKGROUND: Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposure analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution could then be to use the predicted continuous exposure obtained through the regression calibration method and treat it as an approximation to the true exposure, that is, include the categorized calibrated exposure in the main regression analysis. METHODS: We use semi-analytical calculations and simulations to evaluate the performance of the proposed approach compared to the naive approach of not correcting for measurement error, in situations where analyses are performed on the quintile scale and when incorporating the original scale into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). RESULTS: In cases where extra information is available through replicated measurements rather than validation data, regression calibration does not maintain important qualities of the true exposure distribution; thus, estimates of variance and percentiles can be severely biased. We show that the outlined approach maintains much, and in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased. In some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when applying the medians of each category in the analysis. CONCLUSION: Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a percentile scale. Relating back to the original scale of the exposure solves the problem. This conclusion applies to all regression models.
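    The simulation sketch below reproduces the qualitative findings described in this abstract (our illustration; the sample size, error variances, and effect size are invented): because regression calibration with replicates is a linear shrinkage of the mean observed exposure, its quintile categories coincide with the naive quintiles, while using the category medians of the calibrated exposure does correct most of the attenuation:

```python
# Hedged simulation sketch: categorized regression-calibrated exposure vs the naive approach.
import numpy as np

rng = np.random.default_rng(2)
n, k = 20_000, 2                                  # subjects, replicate measurements each
beta = 1.0                                        # true exposure effect
x = rng.normal(0.0, 1.0, n)                       # true exposure
w = x[:, None] + rng.normal(0.0, 1.0, (n, k))     # replicates with classical measurement error
y = beta * x + rng.normal(0.0, 1.0, n)            # outcome

wbar = w.mean(axis=1)
lam = x.var() / (x.var() + 1.0 / k)               # calibration slope (known here; estimated in practice)
x_cal = wbar.mean() + lam * (wbar - wbar.mean())  # regression-calibrated exposure

def quintile_median_score(z):
    """Assign quintile categories and replace each value by the median of its quintile."""
    q = np.quantile(z, [0.2, 0.4, 0.6, 0.8])
    cat = np.digitize(z, q)
    medians = np.array([np.median(z[cat == c]) for c in range(5)])
    return medians[cat], cat

score_naive, cat_naive = quintile_median_score(wbar)
score_cal, cat_cal = quintile_median_score(x_cal)
print("same quintile categories as the naive approach:", np.array_equal(cat_naive, cat_cal))

def slope(s):                                     # least-squares slope of y on the category score
    return np.polyfit(s, y, 1)[0]

print("slope on naive quintile medians:      %.2f" % slope(score_naive))
print("slope on calibrated quintile medians: %.2f" % slope(score_cal))
print("true effect:                          %.2f" % beta)
```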

    Subtraction of point sources from interferometric radio images through an algebraic forward modelling scheme

    We present a method for subtracting point sources from interferometric radio images via forward modelling of the instrument response, involving an algebraic non-linear minimization. The method is applied to simulated maps of the Murchison Widefield Array but is generally useful in cases where only image data are available. After source subtraction, the residual maps are statistically indistinguishable from the expected thermal noise distribution at all angular scales, indicating highly effective subtraction. Simulations indicate that the errors in recovering the source parameters decrease with increasing signal-to-noise ratio, consistent with the theoretical measurement errors. In applying the technique to simulated snapshot observations with the Murchison Widefield Array, we found that all 101 sources present in the simulation were recovered with an average position error of 10 arcsec and an average flux density error of 0.15 per cent. This led to a dynamic-range increase of approximately three orders of magnitude. Since all the sources were deconvolved jointly, the subtraction was not limited by source sidelobes but by thermal noise. This technique is a promising deconvolution method for upcoming radio arrays with a very large number of elements and a candidate for the difficult task of subtracting foreground sources from observations of the 21-cm neutral hydrogen signal from the epoch of reionization.
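    A toy sketch of the forward-modelling idea (our illustration, not the authors' code; the Gaussian stand-in for the dirty beam, the image size, and the source count are invented): model the image as a sum of point sources convolved with the instrument response, fit all positions and fluxes jointly by non-linear least squares, subtract, and check the residuals against the injected noise:

```python
# Hedged sketch: joint point-source fitting and subtraction on a simulated image.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
npix, fwhm, noise = 64, 3.0, 0.01
yy, xx = np.mgrid[0:npix, 0:npix].astype(float)
sig = fwhm / 2.355

def model(params, nsrc):
    """Sum of nsrc point sources convolved with a Gaussian stand-in for the dirty beam."""
    img = np.zeros((npix, npix))
    for x0, y0, flux in params.reshape(nsrc, 3):
        img += flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sig**2))
    return img

# Simulated sky: a few sources plus thermal-like Gaussian noise.
true = np.array([[20.3, 41.7, 1.0], [45.2, 15.6, 0.4], [33.8, 50.1, 0.1]])
dirty = model(true.ravel(), len(true)) + rng.normal(0.0, noise, (npix, npix))

# Joint fit of all sources, started from positions/fluxes perturbed around the truth.
start = true + rng.normal(0.0, [1.0, 1.0, 0.05], true.shape)
fit = least_squares(lambda p: (model(p, len(true)) - dirty).ravel(), start.ravel())

residual = dirty - model(fit.x, len(true))
print("recovered parameters:\n", fit.x.reshape(-1, 3).round(2))
print("residual rms %.4f vs injected noise %.4f" % (residual.std(), noise))
```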