
    On the history and use of some standard statistical models

    This paper tells the story of the general linear model, which saw the light of day 200 years ago, and of the assumptions underlying it. We distinguish three principal stages (ignoring earlier, more isolated instances). The model was first proposed in the context of astronomical and geodesic observations, where the main source of variation was observational error; this remained its main use during the 19th century. In the 1920s it was developed in a new direction by R.A. Fisher, whose principal applications were in agriculture and biology. Finally, beginning in the 1930s and 1940s, it became an important tool for the social sciences. As new areas of application were added, the assumptions underlying the model tended to become more questionable, and the resulting statistical techniques more prone to misuse.
    Comment: Published at http://dx.doi.org/10.1214/193940307000000419 in the IMS Collections (http://www.imstat.org/publications/imscollections.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
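
    For reference, a minimal statement of the model the paper traces, under its classical assumptions (the notation here is standard textbook notation, not taken from the paper):

        y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n),

    where y is an n-vector of observations, X an n-by-p design matrix, and \beta the unknown coefficients; the errors are independent, homoscedastic, and Gaussian. These are precisely the assumptions that, per the abstract, became harder to defend as the model moved from astronomy to agriculture to the social sciences.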

    Generalizations of the Familywise Error Rate

    Consider the problem of simultaneously testing null hypotheses H_1, ..., H_s. The usual approach to the multiplicity problem is to restrict attention to procedures that control the familywise error rate (FWER), the probability of even one false rejection. In many applications, particularly if s is large, one might be willing to tolerate more than one false rejection provided the number of such cases is controlled, thereby increasing the ability of the procedure to detect false null hypotheses. This suggests replacing control of the FWER by control of the probability of k or more false rejections, which we call the k-FWER. We derive both single-step and stepdown procedures that control the k-FWER without making any assumptions about the dependence structure of the p-values of the individual tests. In particular, we derive a stepdown procedure that is quite simple to apply, and we prove that it cannot be improved without violating control of the k-FWER. We also consider the false discovery proportion (FDP), defined as the number of false rejections divided by the total number of rejections (taken to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] controls E(FDP). Here, we construct methods such that, for any \gamma and \alpha, P{FDP > \gamma} \le \alpha. Two stepdown methods are proposed. The first holds under mild conditions on the dependence structure of the p-values, while the second is more conservative but holds without any dependence assumptions.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000084 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
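
    A minimal sketch of a k-FWER stepdown rule of the kind the abstract describes, using the critical constants alpha_i = k*alpha/s for i <= k and k*alpha/(s + k - i) for i > k, as commonly stated for this procedure; the exact constants and the optimality proof are in the paper itself:

    # Sketch of a k-FWER stepdown procedure (illustrative, not the paper's exact text).
    def kfwer_stepdown(pvalues, k=1, alpha=0.05):
        """Return indices of rejected hypotheses under k-FWER stepdown control."""
        s = len(pvalues)
        order = sorted(range(s), key=lambda i: pvalues[i])  # indices by ascending p-value
        rejected = []
        for rank, idx in enumerate(order, start=1):
            crit = k * alpha / s if rank <= k else k * alpha / (s + k - rank)
            if pvalues[idx] <= crit:
                rejected.append(idx)   # step down: keep rejecting while p_(i) <= alpha_i
            else:
                break                  # first exceedance stops the procedure
        return rejected

    # Example; with k = 1 the constants reduce to Holm's classical stepdown procedure.
    print(kfwer_stepdown([0.001, 0.008, 0.02, 0.3], k=2, alpha=0.05))  # [0, 1, 2]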

    Does the h-index have predictive power?

    Bibliometric measures of individual scientific achievement are of particular interest if they can be used to predict future achievement. Here we report results of an empirical study of the predictive power of the h-index compared to other indicators. Our findings indicate that the h-index is better than the other indicators considered (total citation count, citations per paper, and total paper count) at predicting future scientific achievement. We discuss reasons for the superiority of the h-index.
    Comment: Sect. V added on combining h and N_c, with new Fig. 11. Other minor changes. To be published in PNAS.
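
    For readers unfamiliar with the metric, a minimal computation of the h-index from a list of per-paper citation counts; this helper is illustrative and not taken from the paper:

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i          # the i-th most-cited paper still has >= i citations
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3: three papers with >= 3 citations, but not four with >= 4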

    Comment: Citation Statistics

    We discuss the paper "Citation Statistics" by the Joint Committee on Quantitative Assessment of Research [arXiv:0910.3529]. In particular, we focus on a necessary feature of "good" measures for ranking scientific authors: good measures must be able to accurately distinguish between authors.
    Comment: Published at http://dx.doi.org/10.1214/09-STS285B in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Public Health and Epidemiology Informatics: Recent Research and Trends in the United States

    Objectives: To survey advances in public health and epidemiology informatics over the past three years.
    Methods: We reviewed English-language research conducted in the domain of public health informatics (PHI) and published in MEDLINE between January 2012 and December 2014, in which information and communication technology (ICT) was a primary subject or a main component of the study methodology. Selected articles were synthesized through a thematic analysis, using the Essential Services of Public Health as a typology.
    Results: Based on the themes that emerged, we organized the advances into a model in which applications that support the Essential Services are, in turn, supported by a socio-technical infrastructure that relies on government policies and ethical principles. That infrastructure depends upon education and training of the public health workforce, development that creates novel infrastructure or adapts existing infrastructure, and research that evaluates the success of the infrastructure. Finally, the persistence and growth of infrastructure depends on financial sustainability.
    Conclusions: Public health informatics is a field that is growing in breadth, depth, and complexity. Several Essential Services have benefited from informatics, notably "Monitor Health," "Diagnose & Investigate," and "Evaluate." Yet many Essential Services have not benefited from advances such as maturing electronic health record systems, interoperability among health information systems, analytics for population health management, use of social media among consumers, and educational certification in clinical informatics. Much work remains to further advance the science of PHI as well as its impact on public health practice.

    Edge Enhancement Investigations by Means of Experiments and Simulations

    Standard neutron imaging procedures are based on the "shadow" of the transmitted radiation, attenuated by the sample material. Under certain conditions, significant deviations from pure transmission can be found in the form of enhancement or depression at the edges of the samples. These effects can limit the quantification process in the affected regions; on the other hand, they can be exploited to enhance and improve visibility, e.g. in defect analysis. In systematic studies we investigated the dependence of these effects on the specific material (mainly common metals) and on parameters such as the sample-to-detector distance, the beam collimation, the material thickness, and the neutron energy. The beam lines ICON and BOA at PSI and ANTARES at TU München were used for these experiments because of their capability for neutron imaging at the highest possible spatial resolution (pixel sizes from 6.5 to 13.5 micrometers) and their cold beam spectra. In addition to the experimental data, we used a McStas tool to describe the refraction and reflection features at edges for comparison. Although there are minor contributions from coherent in-line propagation phase contrast, the major effect can be described by refraction of the neutrons at the sample-void interface. Ways to suppress and to magnify the edge effects can be derived from these findings.
    Affiliations: Lehmann, E. (Paul Scherrer Institut, Switzerland); Schulz, M. (Technische Universität München, Germany); Wang, Y. (China Institute of Atomic Energy, China); Tartaglione, Aureliano (Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina).
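
    As background for the refraction interpretation, the standard expression for the neutron refractive index of a homogeneous material (textbook neutron optics, not taken from the paper):

        n \approx 1 - \frac{\lambda^2 N b_c}{2\pi},

    where \lambda is the neutron wavelength, N the atomic number density, and b_c the coherent scattering length. Since n differs from unity only by roughly 10^{-6} to 10^{-5}, the deflection at a sample-void interface is tiny, which is consistent with the abstract's observation that edge effects depend strongly on sample-to-detector distance, collimation, and neutron energy (i.e. wavelength).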

    What does it take to be a star? The role of performance and the media for German soccer players

    We test existing superstar theories for the German soccer league. We use various measures of individual players' performance and media presence to analyze whether performance and popularity can explain salaries and the emergence of superstars in soccer. Moreover, we argue that quantile regression, rather than the OLS used hitherto, should be applied to analyze superstar phenomena.
    Keywords: superstars, soccer, quantile regressions, Rosen, Adler.
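
    A minimal sketch of the methodological point: estimating an upper-quantile salary equation rather than a conditional mean. The variable names and data here are invented for illustration; the paper's data and specification differ.

    # Quantile regression vs. OLS on synthetic salary data (illustrative only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    performance = rng.normal(size=n)
    media = rng.normal(size=n)
    # Heavy right tail: superstar effects live at the top of the salary distribution.
    salary = 1.0 * performance + 0.5 * media + rng.gumbel(scale=1.0, size=n)
    df = pd.DataFrame({"salary": salary, "performance": performance, "media": media})

    ols = smf.ols("salary ~ performance + media", data=df).fit()
    q90 = smf.quantreg("salary ~ performance + media", data=df).fit(q=0.9)
    print(ols.params)   # conditional-mean effects
    print(q90.params)   # effects at the 90th percentile, where the superstars are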

    Fluid thrust control system

    A pure fluid thrust control system is described for a pump-fed, regeneratively cooled liquid-propellant rocket engine. A proportional fluid amplifier and a bistable fluid amplifier control overshoot during engine start and bring the engine to a predetermined thrust. An ejector-type pump is provided in the line between the liquid hydrogen rocket nozzle heat exchanger and the turbine driving the fuel pump, to aid in returning the fluid at this point to the regular system when it is not bypassed. The thrust control system is intended to function in environments too severe for mechanical controls.
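
    To illustrate the control behavior being described (proportional action steering the engine toward a thrust setpoint), a toy first-order simulation; the gain, time constant, and plant model are invented for illustration and say nothing about the actual fluidic hardware:

    # Toy proportional control of a first-order thrust response (illustrative only).
    TARGET = 100.0   # predetermined thrust setpoint (arbitrary units)
    KP = 20.0        # proportional gain (invented)
    TAU = 0.5        # engine time constant in seconds (invented)
    DT = 0.01        # integration step

    thrust = 0.0
    for step in range(300):
        error = TARGET - thrust
        command = KP * error                      # proportional amplifier output
        thrust += DT * (command - thrust) / TAU   # first-order lag toward the command
        if step % 60 == 0:
            print(f"t={step * DT:.2f}s thrust={thrust:.1f}")
    # Note: pure proportional control settles slightly below the setpoint
    # (steady-state offset of KP*TARGET/(1+KP)); a higher gain shrinks the offset.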

    On Optimality of Stepdown and Stepup Multiple Test Procedures

    Consider the multiple testing problem of testing k null hypotheses, where the unknown family of distributions is assumed to satisfy a certain monotonicity assumption. Attention is restricted to procedures that control the familywise error rate in the strong sense and that satisfy a monotonicity condition. Under these assumptions, we prove certain maximin optimality results for some well-known stepdown and stepup procedures.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000066 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
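
    To make the stepdown/stepup distinction concrete, a sketch of Hochberg's stepup procedure, a well-known stepup rule in this literature (stated here in its standard textbook form, not quoted from the paper). Unlike a stepdown rule, which starts from the smallest p-value and stops at the first failure, a stepup rule scans from the least significant p-value and rejects everything at or below its first success:

    def hochberg_stepup(pvalues, alpha=0.05):
        """Hochberg step-up: reject the hypotheses with the i* smallest p-values,
        where i* is the largest i with p_(i) <= alpha / (s - i + 1)."""
        s = len(pvalues)
        order = sorted(range(s), key=lambda i: pvalues[i])  # indices by ascending p-value
        for rank in range(s, 0, -1):                        # step up from the largest p-value
            idx = order[rank - 1]
            if pvalues[idx] <= alpha / (s - rank + 1):
                return order[:rank]                         # reject all smaller p-values too
        return []

    print(hochberg_stepup([0.001, 0.012, 0.021, 0.2]))  # [0, 1, 2]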