
    Subgroup identification in dose-finding trials via model-based recursive partitioning

    An important task in early phase drug development is to identify patients who respond better or worse to an experimental treatment. While a variety of subgroup identification methods have been developed for trials that compare an experimental treatment with a control, much less work has been done for the situation in which patients are randomized to different dose groups. In this article we propose new strategies for performing subgroup analyses in dose-finding trials and discuss the challenges that arise in this new setting. We consider model-based recursive partitioning, which has recently been applied to subgroup identification in two-arm trials, as a promising method for tackling these challenges, and assess its viability using a real trial example and simulations. Our results show that model-based recursive partitioning can be used to identify subgroups of patients with different dose-response curves and improves estimation of treatment effects and minimum effective doses when heterogeneity among patients is present. Comment: 23 pages, 6 figures
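
    As a loose illustration of the idea, the sketch below partitions simulated dose-finding data on a baseline covariate and fits a separate linear dose-response model in each child node. It is not the MOB algorithm discussed in the abstract (which splits on parameter-instability tests); a greedy residual-sum-of-squares criterion stands in, and the data and variable names are made up.

```python
# Minimal sketch: subgroup identification in a dose-finding trial by
# model-based partitioning.  The split criterion here is a greedy reduction
# in residual sum of squares, a simplification of the MOB instability tests.
import numpy as np

def fit_dose_model(dose, y):
    """Least-squares fit of y ~ intercept + slope * dose; returns (coef, rss)."""
    X = np.column_stack([np.ones_like(dose), dose])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss

def best_split(dose, y, x, min_node=20):
    """Split point on covariate x that most reduces the pooled model RSS."""
    _, rss_parent = fit_dose_model(dose, y)
    best = None
    for cut in np.unique(x)[:-1]:
        left, right = x <= cut, x > cut
        if left.sum() < min_node or right.sum() < min_node:
            continue
        _, rss_l = fit_dose_model(dose[left], y[left])
        _, rss_r = fit_dose_model(dose[right], y[right])
        gain = rss_parent - (rss_l + rss_r)
        if best is None or gain > best[1]:
            best = (cut, gain)
    return best

# Simulated trial: patients with x > 0 have a much steeper dose-response curve.
rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
dose = rng.choice([0.0, 0.5, 1.0, 2.0, 4.0], size=n)
y = 1.0 + np.where(x > 0, 1.5, 0.2) * dose + rng.normal(scale=1.0, size=n)

cut, gain = best_split(dose, y, x)
print(f"best split: x <= {cut:.2f} (RSS reduction {gain:.1f})")
for name, mask in [("x <= cut", x <= cut), ("x >  cut", x > cut)]:
    coef, _ = fit_dose_model(dose[mask], y[mask])
    print(f"subgroup {name}: intercept {coef[0]:.2f}, dose slope {coef[1]:.2f}")
```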

    On the consistency of Fréchet means in deformable models for curve and image analysis

    A new class of statistical deformable models is introduced to study high-dimensional curves or images. In addition to the standard measurement error term, these deformable models include an extra error term modeling the individual variations in intensity around a mean pattern. It is shown that an appropriate tool for statistical inference in such models is the notion of sample Fréchet means, which leads to estimators of the deformation parameters and the mean pattern. The main contribution of this paper is to study how the behavior of these estimators depends on the number n of design points and the number J of observed curves (or images). Numerical experiments are given to illustrate the finite-sample performance of the procedure.
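
    The following sketch gives a flavor of mean-pattern estimation in a simple deformation model: each observed curve is a circularly shifted copy of an unknown pattern plus noise, and the sample Fréchet mean under squared L2 loss is approximated by alternating alignment and averaging. The shift-only deformation and all parameter choices are illustrative simplifications, not the general model of the paper.

```python
# Template (mean-pattern) estimation for J noisy, randomly shifted copies of
# an unknown curve observed at n design points.  The Fréchet mean under
# squared L2 loss is approximated by iterating: align each curve to the
# current template, then average the aligned curves.
import numpy as np

rng = np.random.default_rng(1)
n, J = 128, 40                                   # design points, curves
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
pattern = np.sin(t) + 0.5 * np.sin(3 * t)        # unknown mean pattern

shifts = rng.integers(0, n, size=J)
curves = np.array([np.roll(pattern, s) + 0.2 * rng.normal(size=n)
                   for s in shifts])

template = curves.mean(axis=0)                   # crude initial template
for _ in range(10):
    aligned = []
    for c in curves:
        # best circular shift via cross-correlation in the Fourier domain
        corr = np.fft.ifft(np.fft.fft(c).conj() * np.fft.fft(template)).real
        aligned.append(np.roll(c, int(np.argmax(corr))))
    template = np.mean(aligned, axis=0)          # Fréchet mean estimate

# report the error up to the unavoidable global shift ambiguity
err = min(np.mean((np.roll(template, k) - pattern) ** 2) for k in range(n))
print(f"alignment error of the estimated mean pattern: {err:.4f}")
```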

    Clustering via Nonparametric Density Estimation: The R Package pdfCluster

    The R package pdfCluster performs cluster analysis based on a nonparametric estimate of the density of the observed variables. After summarizing the main aspects of the methodology, we describe the features and the usage of the package, and finally illustrate its use with the aid of two datasets.
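
    The package itself is in R; the Python sketch below only illustrates the underlying idea of density-based clustering and does not reproduce the pdfCluster API. It estimates the density with a kernel estimator, forms "cluster cores" as connected components of high-density points, and allocates the remaining points to the nearest core. The density level and neighbourhood radius are arbitrary illustrative choices.

```python
# Conceptual sketch of nonparametric density-based clustering:
# 1) kernel density estimate at the observed points,
# 2) connected components of the high-density points form cluster cores,
# 3) low-density points are allocated to the nearest core point's cluster.
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import KernelDensity, radius_neighbors_graph
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=0)

# kernel density estimate evaluated at the observed points
dens = np.exp(KernelDensity(bandwidth=0.5).fit(X).score_samples(X))

core = dens >= np.quantile(dens, 0.25)           # high-density "cores"
graph = radius_neighbors_graph(X[core], radius=1.0, mode="connectivity")
n_clusters, core_labels = connected_components(graph, directed=False)

labels = np.full(len(X), -1)
labels[core] = core_labels
for i in np.where(~core)[0]:
    d = np.linalg.norm(X[core] - X[i], axis=1)
    labels[i] = core_labels[np.argmin(d)]        # nearest-core allocation

print(n_clusters, "clusters found")
```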

    Good, great, or lucky? Screening for firms with sustained superior performance using heavy-tailed priors

    This paper examines historical patterns of ROA (return on assets) for a cohort of 53,038 publicly traded firms across 93 countries, measured over the past 45 years. Our goal is to screen for firms whose ROA trajectories suggest that they have systematically outperformed their peer groups over time. Such a project faces at least three statistical difficulties: adjustment for relevant covariates, massive multiplicity, and longitudinal dependence. We conclude that, once these difficulties are taken into account, demonstrably superior performance appears to be quite rare. We compare our findings with other recent management studies on the same subject, and with the popular literature on corporate success. Our methodological contribution is to propose a new class of priors for use in large-scale simultaneous testing. These priors are based on the hypergeometric inverted-beta family and have two main attractive features: heavy tails and computational tractability. The family is a four-parameter generalization of the normal/inverted-beta prior, and is the natural conjugate prior for shrinkage coefficients in a hierarchical normal model. Our results emphasize the usefulness of these heavy-tailed priors in large multiple-testing problems, as they have a mild rate of tail decay in the marginal likelihood m(y), a property long recognized to be important in testing. Comment: Published at http://dx.doi.org/10.1214/11-AOAS512 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
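
    The small numerical sketch below illustrates, in the simplest normal means setting, why prior tail behaviour matters for shrinkage: a heavy-tailed prior leaves large observations nearly unshrunk while still pulling small ones toward zero. A Cauchy prior stands in for a generic heavy-tailed prior; the paper's hypergeometric inverted-beta family is not implemented here, and the scale values are arbitrary.

```python
# Posterior means in the normal means model y ~ N(theta, 1) under a
# light-tailed (normal) versus a heavy-tailed (Cauchy) prior, by quadrature.
import numpy as np
from scipy import integrate, stats

def posterior_mean(y, prior_pdf):
    """E[theta | y] for y ~ N(theta, 1) under the prior density prior_pdf."""
    lik = lambda t: stats.norm.pdf(y - t)            # N(theta, 1) likelihood
    num = integrate.quad(lambda t: t * lik(t) * prior_pdf(t), y - 15, y + 15)[0]
    den = integrate.quad(lambda t: lik(t) * prior_pdf(t), y - 15, y + 15)[0]
    return num / den

for y in [0.5, 2.0, 5.0, 10.0]:
    m_light = posterior_mean(y, stats.norm(scale=2).pdf)    # light tails
    m_heavy = posterior_mean(y, stats.cauchy(scale=2).pdf)  # heavy tails
    print(f"y = {y:5.1f}:  normal prior -> {m_light:5.2f},"
          f"  Cauchy prior -> {m_heavy:5.2f}")
```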

    The Third Gravitational Lensing Accuracy Testing (GREAT3) Challenge Handbook

    The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with the goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions in the presence of far larger intrinsic galaxy shapes and of distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects are realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and the combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information. Comment: 30 pages, 13 figures, submitted for publication, with minor edits (v2) to address comments from the anonymous referee. Simulated data are available for download and participants can find more information at http://great3.projects.phys.ucl.ac.uk/leaderboard
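
    As a toy picture of the measurement problem described in the abstract, the sketch below applies a tiny shear to a Gaussian "galaxy", blurs it with a Gaussian PSF of comparable size, and reads off the ellipticity from unweighted second moments. The image model and the moment estimator are illustrative only; they are not the GREAT3 simulations or any challenge method.

```python
# Toy shear measurement: a 2% shear must be recovered from an image blurred
# by a PSF as large as the galaxy, which strongly dilutes the measured shape.
import numpy as np
from scipy.ndimage import gaussian_filter

n = 129
yy, xx = np.mgrid[:n, :n] - n // 2
g1_true = 0.02                         # tiny shear, the quantity of interest
sigma_gal, sigma_psf = 3.0, 3.0        # PSF as large as the galaxy

# sheared elliptical Gaussian "galaxy": stretched in x, squeezed in y
galaxy = np.exp(-0.5 * ((xx / (sigma_gal * (1 + g1_true))) ** 2
                        + (yy / (sigma_gal * (1 - g1_true))) ** 2))
image = gaussian_filter(galaxy, sigma_psf)    # blurring by the PSF

def e1(img):
    """Ellipticity component e1 from unweighted second moments."""
    norm = img.sum()
    qxx = (img * xx * xx).sum() / norm
    qyy = (img * yy * yy).sum() / norm
    return (qxx - qyy) / (qxx + qyy)

print(f"expected e1 (about 2*g1): {2 * g1_true:.4f}")
print(f"measured e1, galaxy only: {e1(galaxy):.4f}")
print(f"measured e1, after PSF  : {e1(image):.4f}   (diluted by the blur)")
```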