
    Some problems related to the rejection of outlying observations

    The thesis consists of six chapters. The introductory first chapter considers some of the more general problems involved in the detection and rejection of outlying observations, and describes the general form of the tests discussed in detail in the later chapters. In Chapter 2, likelihood-based criteria are derived for testing for single and multiple outliers at both the upper and the lower ends of samples from gamma distributions. The null distributions of these criteria are obtained by use of a recursive algorithm, and the methods are extended to criteria appropriate for testing for multiple outliers occurring at both ends of the sample and to various 'Dixon' criteria. The results are applied to some practical examples. In Chapter 3, likelihood-based tests and criteria for single outliers in univariate normal samples are considered. The null distributions of the criteria are obtained by recursive algorithms. The cases of known and unknown mean and variance are considered separately, and the methods are extended to cases where independent estimates of the variance are available. These methods and results are extended in Chapter 4 to tests and criteria for multiple outliers in univariate normal samples. The extensions of the results of both of these chapters to single and multiple outliers in multivariate normal samples are considered in Chapter 6. In Chapter 5, problems of single and multiple outliers in data following a linear model are discussed. A likelihood-based criterion is derived and the extreme tail of the null distribution of this criterion is obtained. Some practical examples on data from a series of chemical experiments are given.
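As a small illustration of the kind of test described above: a standard likelihood-based criterion for a single outlier in a univariate normal sample with unknown mean and variance is the maximum studentized deviate. The thesis obtains null distributions by recursive algorithms; the sketch below instead approximates the null distribution by simulation. The data are hypothetical.

```python
import random
import statistics

def max_studentized_deviate(xs):
    """Criterion for a single outlier in a normal sample with unknown
    mean and variance: the largest absolute deviation from the sample
    mean, scaled by the sample standard deviation."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return max(abs(x - m) for x in xs) / s

def outlier_p_value(xs, n_sim=20_000, seed=1):
    """Approximate the criterion's null distribution by simulating
    standard normal samples of the same size (a stand-in for the
    recursive algorithms used in the thesis)."""
    rng = random.Random(seed)
    n = len(xs)
    t_obs = max_studentized_deviate(xs)
    exceed = sum(
        1
        for _ in range(n_sim)
        if max_studentized_deviate([rng.gauss(0.0, 1.0) for _ in range(n)]) >= t_obs
    )
    return exceed / n_sim

sample = [9.8, 10.1, 10.3, 9.9, 10.0, 14.2]  # hypothetical measurements
p = outlier_p_value(sample)                  # small p suggests 14.2 is discordant
```

Because the criterion is location- and scale-invariant, simulating from the standard normal suffices for the null case.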

    Tracking Vector Magnetograms with the Magnetic Induction Equation

    The differential affine velocity estimator (DAVE) developed in Schuck (2006) for estimating velocities from line-of-sight magnetograms is modified to directly incorporate horizontal magnetic fields, producing a differential affine velocity estimator for vector magnetograms (DAVE4VM). The DAVE4VM's performance is demonstrated on synthetic data from the anelastic pseudospectral ANMHD simulations that were used in the recent comparison of velocity inversion techniques by Welsch (2007). The DAVE4VM predicts roughly 95% of the helicity rate and 75% of the power transmitted through the simulation slice. Inter-comparison between DAVE4VM and DAVE, and further analysis of the DAVE method, demonstrates that line-of-sight tracking methods capture the shearing motion of magnetic footpoints but are insensitive to flux emergence -- the velocities determined from line-of-sight methods are more consistent with horizontal plasma velocities than with flux transport velocities. These results suggest that previous studies that rely on velocities determined from line-of-sight methods, such as the DAVE or local correlation tracking, may substantially misrepresent the total helicity rates and power through the photosphere. Comment: 30 pages, 13 figures
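For context, trackers of this kind fit velocities to the vertical component of the magnetic induction equation. The sketch below is not the DAVE4VM algorithm; it merely verifies numerically the reduced line-of-sight form dBz/dt + div_h(vh Bz) = 0 for a Gaussian Bz patch advected by a uniform horizontal flow, with Bh = vz = 0 (all fields invented for illustration).

```python
import math

# Analytic test field: a Gaussian Bz patch advected with uniform
# horizontal velocity (vx, vy). With Bh = vz = 0 the vertical induction
# equation reduces to dBz/dt = -div_h(vh * Bz).
h = 0.01          # finite-difference step (space and time)
vx, vy = 1.0, 0.5

def Bz(x, y, t):
    return math.exp(-((x - vx * t) ** 2 + (y - vy * t) ** 2))

def residual(x, y, t=0.0):
    """Central-difference check of dBz/dt + div_h(vh * Bz) = 0."""
    dBz_dt = (Bz(x, y, t + h) - Bz(x, y, t - h)) / (2 * h)
    dflux_dx = vx * (Bz(x + h, y, t) - Bz(x - h, y, t)) / (2 * h)
    dflux_dy = vy * (Bz(x, y + h, t) - Bz(x, y - h, t)) / (2 * h)
    return dBz_dt + dflux_dx + dflux_dy

# The residual should vanish up to O(h^2) discretization error.
worst = max(abs(residual(i * 0.2 - 1, j * 0.2 - 1))
            for i in range(11) for j in range(11))
```

Flux emergence enters through the vz*Bh term that this reduced form drops, which is precisely the contribution line-of-sight methods miss.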

    Alternative sampling for variational quantum Monte Carlo

    Expectation values of physical quantities may be obtained accurately by the evaluation of integrals within many-body quantum mechanics, and these multi-dimensional integrals may be estimated using Monte Carlo methods. In a previous publication it has been shown that for the simplest, most commonly applied strategy in continuum Quantum Monte Carlo, the random error in the resulting estimates is not well controlled. At best the Central Limit Theorem is valid in its weakest form, and at worst it is invalid and replaced by an alternative Generalised Central Limit Theorem and non-Normal random error. In both cases the random error is not controlled. Here we consider a new 'residual sampling strategy' that reintroduces the Central Limit Theorem in its strongest form, and provides full control of the random error in estimates. Estimates of the total energy and the variance of the local energy within Variational Monte Carlo are considered in detail, and the approach presented may be generalised to expectation values of other operators, and to other variants of the Quantum Monte Carlo method. Comment: 14 pages, 9 figures
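A minimal sketch of the standard estimator whose error control the paper examines: a variational Monte Carlo estimate of the ground-state energy of the 1D harmonic oscillator with a Gaussian trial function, with the usual CLT error bar. The trial function and local energy are textbook choices, not taken from the paper; for a Gaussian trial function |psi|^2 can be sampled directly rather than by Metropolis.

```python
import math
import random
import statistics

ALPHA = 0.8   # variational parameter of the Gaussian trial function
N = 50_000    # number of Monte Carlo samples

rng = random.Random(0)

def local_energy(x, alpha=ALPHA):
    """E_L(x) = (H psi)/psi for psi(x) = exp(-alpha x^2 / 2) and the
    1D harmonic oscillator (hbar = m = omega = 1)."""
    return alpha / 2 + x * x * (1 - alpha * alpha) / 2

# |psi|^2 is a Gaussian with variance 1/(2 alpha), so it can be sampled
# directly; a real continuum QMC code would use Metropolis sampling.
sigma = math.sqrt(1 / (2 * ALPHA))
samples = [local_energy(rng.gauss(0.0, sigma)) for _ in range(N)]

energy = statistics.mean(samples)                    # estimate of <E>
std_err = statistics.stdev(samples) / math.sqrt(N)   # CLT error bar
```

For this trial function the local energy has finite variance, so the CLT error bar is valid; the paper's concern is cases where the local-energy distribution is heavy-tailed and this estimate of the random error breaks down.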

    Analysis of uncertainty in health care cost-effectiveness studies: an introduction to statistical issues and methods

    Cost-effectiveness analysis is now an integral part of health technology assessment and addresses the question of whether a new treatment or other health care program offers good value for money. In this paper we introduce the basic framework for decision making with cost-effectiveness data and then review recent developments in statistical methods for analysis of uncertainty when cost-effectiveness estimates are based on observed data from a clinical trial. Although much research has focused on methods for calculating confidence intervals for cost-effectiveness ratios using bootstrapping or Fieller's method, these calculations can be problematic with a ratio-based statistic where numerator and/or denominator can be zero. We advocate plotting the joint density of cost and effect differences, together with cumulative density plots known as cost-effectiveness acceptability curves (CEACs), to summarize the overall value-for-money of interventions. We also outline the net-benefit formulation of the cost-effectiveness problem and show that it has particular advantages over the standard incremental cost-effectiveness ratio formulation.
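A minimal sketch of the net-benefit formulation and a bootstrap CEAC, on hypothetical trial data (all numbers invented for illustration). The net-benefit statistic avoids the zero-denominator problem of the ratio, since it is a linear combination of the cost and effect differences.

```python
import random
import statistics

rng = random.Random(42)

# Hypothetical per-patient (cost, effect) data for two trial arms.
n = 200
new = [(rng.gauss(10_000, 2_000), rng.gauss(1.10, 0.30)) for _ in range(n)]
old = [(rng.gauss(8_000, 2_000), rng.gauss(1.00, 0.30)) for _ in range(n)]

def net_benefit(delta_c, delta_e, lam):
    """Incremental net monetary benefit: INB = lambda * dE - dC,
    where lambda is the willingness-to-pay per unit of effect."""
    return lam * delta_e - delta_c

def ceac(lam, n_boot=2_000):
    """CEAC point: the proportion of bootstrap resamples in which the
    new treatment has positive incremental net benefit at lambda."""
    wins = 0
    for _ in range(n_boot):
        bn = [new[rng.randrange(n)] for _ in range(n)]
        bo = [old[rng.randrange(n)] for _ in range(n)]
        dc = statistics.mean(c for c, _ in bn) - statistics.mean(c for c, _ in bo)
        de = statistics.mean(e for _, e in bn) - statistics.mean(e for _, e in bo)
        if net_benefit(dc, de, lam) > 0:
            wins += 1
    return wins / n_boot

p_low = ceac(0)          # nobody pays for effects: new arm just costs more
p_high = ceac(100_000)   # high willingness-to-pay
```

Sweeping lambda over a grid and plotting ceac(lambda) traces out the full acceptability curve.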

    Stigma, epistemic injustice, and “looked after children”: the need for a new language

    This article examines the processes that contribute to the stigmatization of a group of people typically identified as “children in care” or “looked after children.” In particular, we will look at the ways that we (adults, professionals, and carers) interact with these children, based on their status as both children and members of a socially marginalized and disadvantaged group, and how these modes of interaction can inhibit dialogue—a dialogue that is needed if we are to base our conceptions regarding the needs of these children on a more accurate understanding of their experiences and perspective. The problem is particularly challenging because the very terminology we use in the care community to identify this group is a product of the damaging preconceptions that have affected our interactions with its members and, we argue, it serves to reinforce those preconceptions. Using Fricker's work on epistemic injustice, in conjunction with evidence regarding how accusations of abuse and neglect of these children have been addressed in numerous cases, we illustrate the problems we have in hearing the voices of members of this group and the harmful effects this has on their own ability to understand and articulate their experiences. These problems represent “barriers to disclosure” that need to be surmounted if we are to establish a more inclusive dialogue. Currently, dialogue between these children and those of us charged to “look after” them is too often characterized by a lack of trust: not only in terms of the children feeling that their word is not taken seriously, that their claims are not likely to be believed, but also in their feeling that they cannot trust those to whom they might disclose abuse or neglect. 
The goals of the paper are modest in that we aim simply to open up the debate on how to meet this epistemic challenge, noting that there are specific problems that extend beyond those already identified for hearing the voices of other victims of epistemic injustice. Explicitly recognizing the nature and extent of the problem still leaves us a long way from its solution, but it is a crucial start.

    Confidence interval estimation for the changepoint of treatment stratification in the presence of a qualitative covariate-treatment interaction

    The goal in stratified medicine is to administer the "best" treatment to a patient. Not all patients might benefit from the same treatment; the choice of best treatment can depend on certain patient characteristics. In this article, it is assumed that a time-to-event outcome is considered as a patient-relevant outcome and a qualitative interaction between a continuous covariate and treatment exists, i.e., that patients with different values of one specific covariate should be treated differently. We suggest and investigate different methods for confidence interval estimation for the covariate value where the treatment recommendation should be changed, based on data collected in a randomized clinical trial. An adaptation of Fieller's theorem, the delta method, and different bootstrap approaches (normal, percentile-based, wild bootstrap) are investigated and compared in a simulation study. Extensions to multivariable problems are presented and evaluated. We observed appropriate confidence interval coverage following Fieller's theorem irrespective of sample size, but at the cost of very wide or even infinite confidence intervals. The delta method and the wild bootstrap approach provided the smallest intervals but inadequate coverage for small to moderate event numbers, also depending on the location of the true changepoint. For the percentile-based bootstrap, wide intervals were observed, and it was slightly conservative regarding coverage, whereas the normal bootstrap did not provide acceptable results for many scenarios. The described methods were also applied to data from a randomized clinical trial comparing two treatments for patients with symptomatic, severe carotid artery stenosis, considering patient age as a predictive marker.
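As a toy stand-in for the survival setting described above, the sketch below estimates a changepoint as the covariate value where a linearly modeled treatment benefit crosses zero, with a percentile bootstrap confidence interval. The data and model are invented for illustration and far simpler than the time-to-event case in the article.

```python
import random
import statistics

rng = random.Random(7)

# Hypothetical data: per-patient treatment benefit d depends linearly
# on a covariate x (say, age); the recommendation changes where
# E[d | x] = 0. True changepoint below: x = 70.
n = 150
xs = [rng.uniform(40, 90) for _ in range(n)]
ds = [0.02 * (x - 70) + rng.gauss(0, 0.3) for x in xs]

def changepoint(x, d):
    """Least-squares fit d = a + b*x; the changepoint is -a/b
    (a ratio of coefficients, hence the link to Fieller's theorem)."""
    mx, md = statistics.mean(x), statistics.mean(d)
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (di - md) for xi, di in zip(x, d)) / sxx
    a = md - b * mx
    return -a / b

def percentile_ci(n_boot=2_000, level=0.95):
    """Percentile bootstrap CI for the changepoint."""
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(changepoint([xs[i] for i in idx], [ds[i] for i in idx]))
    reps.sort()
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

est = changepoint(xs, ds)
lo, hi = percentile_ci()
```

In the article's setting the bootstrap resamples a fitted survival model rather than a linear one, and the changepoint is again a ratio of interaction coefficients, which is why Fieller-type intervals can become very wide or infinite when the denominator is imprecisely estimated.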

    Simultaneous confidence sets for several effective doses.

    Construction of simultaneous confidence sets for several effective doses currently relies on inverting the Scheffé-type simultaneous confidence band, which is known to be conservative. We develop novel methodology to make the simultaneous coverage closer to its nominal level, for both two-sided and one-sided simultaneous confidence sets. Our approach is shown to be considerably less conservative than the current method, and is illustrated with an example on modeling the effect of smoking status and serum triglyceride level on the probability of the recurrence of a myocardial infarction.
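A sketch of the conservative current method the authors improve on: inverting a Scheffé-type simultaneous band for a two-parameter logistic dose-response model to obtain a simultaneous confidence set for the ED50. The dose-response data are hypothetical, and the Scheffé constant used is the square root of the chi-square(2) 95% quantile.

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(doses, successes, trials, iters=25):
    """Fit p(d) = expit(b0 + b1*d) by Newton-Raphson; returns the
    coefficients and the 2x2 inverse Fisher information (covariance)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for d, y, m in zip(doses, successes, trials):
            p = expit(b0 + b1 * d)
            w = m * p * (1 - p)          # Fisher weight
            g0 += y - m * p              # score wrt b0
            g1 += d * (y - m * p)        # score wrt b1
            h00 += w; h01 += w * d; h11 += w * d * d
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    det = h00 * h11 - h01 * h01
    cov = ((h11 / det, -h01 / det), (-h01 / det, h00 / det))
    return b0, b1, cov

# Hypothetical grouped dose-response data.
doses = [0, 1, 2, 3, 4]
trials = [30] * 5
successes = [2, 6, 15, 24, 28]

b0, b1, cov = fit_logistic(doses, successes, trials)
C = math.sqrt(5.991)  # Scheffé constant: sqrt of chi-square(2) 95% quantile

def in_ed50_set(d):
    """d lies in the simultaneous 95% set for the ED50 iff the Scheffé
    band for the linear predictor b0 + b1*d covers logit(0.5) = 0."""
    se = math.sqrt(cov[0][0] + 2 * d * cov[0][1] + d * d * cov[1][1])
    return abs(b0 + b1 * d) <= C * se

# Grid inversion of the band over the dose range [0, 4].
ed50_set = [i / 100 for i in range(0, 401) if in_ed50_set(i / 100)]
```

Because the same constant C covers the whole band, the resulting sets are simultaneously valid for every effective dose at once, which is exactly the source of the conservatism the paper targets.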

    Mathematical Analysis of Copy Number Variation in a DNA Sample Using Digital PCR on a Nanofluidic Device

    Copy Number Variations (CNVs) of regions of the human genome have been associated with multiple diseases. We present a mathematically sound and computationally efficient algorithm to accurately analyze CNV in a DNA sample utilizing a nanofluidic device known as the digital array. This numerical algorithm is used to compute copy number variation and the associated statistical confidence interval, and is based on results from probability theory and statistics. We also provide formulas which can be used as close approximations.
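The article's own formulas are not reproduced here, but the standard Poisson statistics behind digital-array CNV estimation can be sketched: under Poisson loading of chambers, the mean number of molecules per chamber is lambda = -ln(1 - p) for positive-chamber fraction p, and a delta-method interval on the log of the target/reference ratio gives an approximate confidence interval. The 765-chamber panel size is typical of the digital array; the counts below are hypothetical.

```python
import math

def lambda_hat(positive, chambers):
    """Mean molecules per chamber from the fraction of positive
    chambers, assuming Poisson loading: lambda = -ln(1 - p)."""
    p = positive / chambers
    return -math.log(1.0 - p)

def lambda_se(positive, chambers):
    """Delta-method standard error of lambda-hat: se(p) / (1 - p),
    with se(p) the binomial standard error of the positive fraction."""
    p = positive / chambers
    return math.sqrt(p * (1 - p) / chambers) / (1 - p)

def cnv_ratio_ci(pos_target, pos_ref, chambers=765, z=1.96):
    """Copy-number ratio target/reference with an approximate 95% CI
    obtained by the delta method on the log ratio."""
    lt = lambda_hat(pos_target, chambers)
    lr = lambda_hat(pos_ref, chambers)
    ratio = lt / lr
    se_log = math.sqrt((lambda_se(pos_target, chambers) / lt) ** 2 +
                       (lambda_se(pos_ref, chambers) / lr) ** 2)
    return ratio, ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Hypothetical panel: 600 of 765 chambers positive for the target,
# 400 of 765 positive for the single-copy reference.
ratio, lo, hi = cnv_ratio_ci(600, 400)
```

A ratio near 2 with a CI excluding 1 would suggest a duplication of the target region relative to the reference.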

    Local Difference Measures between Complex Networks for Dynamical System Model Evaluation

    Acknowledgments: We thank Reik V. Donner for inspiring suggestions that initialized the work presented herein. Jan H. Feldhoff is credited for providing us with the STARS simulation data and for his contributions to fruitful discussions. Comments by the anonymous reviewers are gratefully acknowledged, as they led to substantial improvements of the manuscript.