
    Shrinkage Estimation in Multilevel Normal Models

    This review traces the evolution of theory that started when Charles Stein in 1955 [In Proc. 3rd Berkeley Sympos. Math. Statist. Probab. I (1956) 197--206, Univ. California Press] showed that using each separate sample mean from k ≥ 3 Normal populations to estimate its own population mean μ_i can be improved upon uniformly for every possible μ = (μ_1, ..., μ_k)′. The dominating estimators, referred to here as being "Model-I minimax," can be found by shrinking the sample means toward any constant vector. Admissible minimax shrinkage estimators were derived by Stein and others as posterior means based on a random effects model, "Model-II" here, wherein the μ_i values have their own distributions. Section 2 centers on Figure 2, which organizes a wide class of priors on the unknown Level-II hyperparameters that have been proved to yield admissible Model-I minimax shrinkage estimators in the "equal variance case." Putting a flat prior on the Level-II variance is unique in this class for its scale-invariance and for its conjugacy, and it induces Stein's harmonic prior (SHP) on μ_i.
    Comment: Published at http://dx.doi.org/10.1214/11-STS363 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
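The domination result described above can be illustrated numerically with the classical positive-part James–Stein estimator, which shrinks the sample means toward the zero vector in the equal, known-variance case. This is a minimal sketch (the data and shrinkage target are invented for illustration, not taken from the review):

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator: shrink k >= 3 sample means
    toward zero. Dominates the vector of raw means under total
    squared-error loss when the common variance sigma2 is known."""
    k = len(x)
    shrink = max(0.0, 1.0 - (k - 2) * sigma2 / np.sum(x**2))
    return shrink * x

rng = np.random.default_rng(0)
mu = np.zeros(10)            # true population means (all zero here)
x = rng.normal(mu, 1.0)      # one observation per population
js = james_stein(x)
# compare total squared error of the raw means vs. the shrunken means
print(np.sum((x - mu)**2), np.sum((js - mu)**2))
```

The domination is in expected loss over repeated sampling; for the special case μ = 0 chosen here, shrinking toward zero also reduces the realized squared error in every sample.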

    Balanced and Robust Randomized Treatment Assignments: The Finite Selection Model for the Health Insurance Experiment and Beyond

    The Finite Selection Model (FSM) was developed by Carl Morris in the 1970s for the design of the RAND Health Insurance Experiment (HIE) (Morris 1979, Newhouse et al. 1993), one of the largest and most comprehensive social science experiments conducted in the U.S. In the FSM, a treatment group at each of its turns selects the available unit that maximally improves the combined quality of its resulting group of units according to a common optimality criterion. In the HIE and beyond, we revisit, formalize, and extend the FSM as a general tool for experimental design. Leveraging the idea of D-optimality, we propose and analyze a new selection criterion in the FSM. The FSM using the D-optimal selection function has no tuning parameters, is affine invariant, and when appropriate retrieves several classical designs such as randomized block and matched-pair designs. For multi-arm experiments, we propose algorithms to generate a fair and random selection order of treatments. We demonstrate the FSM's performance in a case study based on the HIE, in a simulation study, and in ten randomized studies from the health and social sciences. We recommend the FSM be considered in experimental design for its conceptual simplicity, efficiency, and robustness.
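The turn-taking idea can be sketched in code. This is a schematic stand-in, not the authors' implementation: each group, at its turn, greedily takes the remaining unit that maximizes the determinant of its own (ridge-regularized) covariate cross-product matrix, a simple proxy for the D-optimal selection function described in the abstract:

```python
import numpy as np

def d_criterion(X, idx, eps=1e-6):
    """det of the cross-product matrix of the chosen rows; the small
    ridge term keeps it nonsingular before a group has enough units."""
    Z = X[idx]
    return np.linalg.det(Z.T @ Z + eps * np.eye(X.shape[1]))

def fsm_assign(X, n_groups=2, seed=0):
    """Toy turn-taking assignment in the spirit of the FSM.
    Schematic only: alternating turns in a random initial order,
    with a D-optimality stand-in as the selection criterion."""
    rng = np.random.default_rng(seed)
    remaining = list(range(X.shape[0]))
    groups = [[] for _ in range(n_groups)]
    order = list(rng.permutation(n_groups))  # fair, random first turn
    turn = 0
    while remaining:
        g = order[turn % n_groups]
        best = max(remaining, key=lambda i: d_criterion(X, groups[g] + [i]))
        groups[g].append(best)
        remaining.remove(best)
        turn += 1
    return groups
```

With two groups and an even number of units, strict alternation yields groups of equal size, and each group's covariate matrix is kept as "spread out" as the greedy criterion allows.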

    Unifying the named natural exponential families and their relatives

    Five of the six univariate natural exponential families (NEF) with quadratic variance functions (QVF), meaning their variances are at most quadratic functions of their means, are the Normal, Poisson, Gamma, Binomial, and Negative Binomial distributions. The sixth is the NEF-CHS, the NEF generated from convolved Hyperbolic Secant distributions. These six NEF-QVFs and their relatives are unified in this paper and in the main diagram.
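The quadratic-variance property, V(μ) = ν₀ + ν₁μ + ν₂μ², can be checked directly by simulation for four of the named families. A small sketch (the parameter values are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # Monte Carlo sample size (arbitrary)

# Each entry: (sampler, variance written as a function of the mean)
families = {
    "Normal(2, 1)":      (lambda: rng.normal(2.0, 1.0, N),   lambda m: 1.0),            # V = sigma^2 (constant)
    "Poisson(3)":        (lambda: rng.poisson(3.0, N),       lambda m: m),              # V = mu (linear)
    "Gamma(k=2, th=1)":  (lambda: rng.gamma(2.0, 1.0, N),    lambda m: m**2 / 2.0),     # V = mu^2 / k (quadratic)
    "Binomial(10, 0.3)": (lambda: rng.binomial(10, 0.3, N),  lambda m: m - m**2 / 10),  # V = mu - mu^2/n
}

for name, (draw, V) in families.items():
    x = draw()
    m = x.mean()
    print(f"{name:18s} empirical var {x.var():.3f}  quadratic V(mu) {V(m):.3f}")
```

In each case the empirical variance agrees with the quadratic function of the empirical mean, up to Monte Carlo error.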

    Polyolefin–polar block copolymers from versatile new macromonomers

    A new metallocene-based polymerization mechanism is elucidated in which a zirconium hydride center inserts α-methylstyrene at the start of a polymer chain. The hydride is then regenerated by hydrogenation to release a polyolefin containing a single terminal α-methylstyrenyl group. Through the use of the difunctional monomer 1,3-diisopropenylbenzene, this catalytic hydride insertion polymerization is applied to the production of linear polyethylene and ethylene–hexene copolymers containing an isopropenylbenzene end group. Conducting simple radical polymerizations in the presence of this new type of macromonomer leads to diblock copolymers containing a polyolefin attached to acrylate, methacrylate, vinyl ester, or styrenic segments. The new materials are readily available and exhibit interfacial phenomena, including the mediation of the mixing of immiscible polymer blends.

    A Symmetric Approach to Compilation and Decompilation

    Just as specializing a source interpreter can achieve compilation from a source language to a target language, we observe that specializing a target interpreter can achieve compilation from the target language to the source language. In both cases, the key issue is the choice of whether to perform an evaluation or to emit code that represents this evaluation. We substantiate this observation by specializing two source interpreters and two target interpreters. We first consider a source language of arithmetic expressions and a target language for a stack machine, and then the lambda-calculus and the SECD-machine language. In each case, we prove that the target-to-source compiler is a left inverse of the source-to-target compiler, i.e., it is a decompiler. In the context of partial evaluation, compilation by source-interpreter specialization is classically referred to as a Futamura projection. By symmetry, it seems logical to refer to decompilation by target-interpreter specialization as a Futamura embedding.
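The first example, arithmetic expressions versus stack-machine code, can be made concrete in a toy sketch (an illustration of the left-inverse property, not the paper's formal development via interpreter specialization). The decompiler runs the stack machine symbolically, pushing expressions instead of numbers:

```python
# Expressions: ("lit", n) | ("add", e1, e2).
# Stack code:  list of ("push", n) | ("add",).

def compile_expr(e):
    """Source-to-target: emit stack-machine code for an expression."""
    if e[0] == "lit":
        return [("push", e[1])]
    _, e1, e2 = e
    return compile_expr(e1) + compile_expr(e2) + [("add",)]

def decompile(code):
    """Target-to-source: execute the stack machine symbolically,
    rebuilding expressions instead of computing numbers."""
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(("lit", instr[1]))
        else:  # "add"
            e2, e1 = stack.pop(), stack.pop()
            stack.append(("add", e1, e2))
    (result,) = stack  # well-formed code leaves exactly one expression
    return result

e = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
assert decompile(compile_expr(e)) == e  # decompile is a left inverse
```

The key choice the abstract mentions is visible here: `compile_expr` emits code that represents an evaluation, while `decompile` performs the (symbolic) evaluation of that code.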

    Uncovering treatment burden as a key concept for stroke care: a systematic review of qualitative research

    Background: Patients with chronic disease may experience complicated management plans requiring significant personal investment. This has been termed 'treatment burden' and has been associated with unfavourable outcomes. The aim of this systematic review is to examine the qualitative literature on treatment burden in stroke from the patient perspective.
    Methods and findings: The search strategy centred on: stroke, treatment burden, patient experience, and qualitative methods. We searched: Scopus, CINAHL, Embase, Medline, and PsycINFO. We tracked references, footnotes, and citations. Restrictions included: English language, date of publication January 2000 until February 2013. Two reviewers independently carried out the following: paper screening, data extraction, and data analysis. Data were analysed using framework synthesis, as informed by Normalization Process Theory. Sixty-nine papers were included. Treatment burden includes: (1) making sense of stroke management and planning care, (2) interacting with others, (3) enacting management strategies, and (4) reflecting on management. Health care is fragmented, with poor communication between patient and health care providers. Patients report inadequate information provision. Inpatient care is unsatisfactory, with a perceived lack of empathy from professionals and a shortage of stimulating activities on the ward. Discharge services are poorly coordinated, and accessing health and social care in the community is difficult. The study has potential limitations because it was restricted to studies published in English only and data from low-income countries were scarce.
    Conclusions: Stroke management is extremely demanding for patients, and treatment burden is influenced by micro and macro organisation of health services. Knowledge deficits mean patients are ill equipped to organise their care and develop coping strategies, making adherence less likely. There is a need to transform the approach to care provision so that services are configured to prioritise patient needs rather than those of health care systems.

    Fifteen new risk loci for coronary artery disease highlight arterial-wall-specific mechanisms

    Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide. Although 58 genomic regions have been associated with CAD thus far, most of the heritability is unexplained, indicating that additional susceptibility loci await identification. An efficient discovery strategy may be larger-scale evaluation of promising associations suggested by genome-wide association studies (GWAS). Hence, we genotyped 56,309 participants using a targeted gene array derived from earlier GWAS results and performed meta-analysis of results with 194,427 participants previously genotyped, totaling 88,192 CAD cases and 162,544 controls. We identified 25 new SNP-CAD associations (P < 5 × 10⁻⁸ in fixed-effects meta-analysis) from 15 genomic regions, including SNPs in or near genes involved in cellular adhesion, leukocyte migration and atherosclerosis (PECAM1, rs1867624), coagulation and inflammation (PROCR, rs867186 (p.Ser219Gly)) and vascular smooth muscle cell differentiation (LMOD1, rs2820315). Correlation of these regions with cell-type-specific gene expression and plasma protein levels sheds light on potential disease mechanisms.
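The fixed-effects meta-analysis used here combines per-study effect estimates by inverse-variance weighting. A minimal sketch of that calculation (the effect sizes and standard errors below are hypothetical, not values from the study):

```python
import math

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis of
    per-study effect estimates (betas) and standard errors (ses).
    Returns the pooled effect, its standard error, and a two-sided
    p-value from the normal approximation."""
    w = [1.0 / se**2 for se in ses]
    beta = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return beta, se, p

# combine two hypothetical study results for one SNP and apply the
# genome-wide significance threshold used in the abstract
beta, se, p = fixed_effects_meta([0.08, 0.05], [0.02, 0.015])
print(beta, se, p, p < 5e-8)
```

The pooled estimate is pulled toward the more precise study, and the pooled standard error is always smaller than any single study's, which is what makes the two-stage (targeted array plus prior GWAS) design efficient for discovery.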