165 research outputs found

    A User-Friendly Introduction to Link-Probit-Normal Models

    Probit-normal models have attractive properties compared to logit-normal models. In particular, they allow for easy specification of marginal links of interest while permitting a conditional random-effects structure. Moreover, fitting algorithms for probit-normal models can be trivial to program, thanks to well-developed algorithms for approximating multivariate normal quantiles. In typical settings, the data cannot distinguish between probit and logit conditional link functions. Therefore, if marginal interpretations are desired, the default conditional link should be the most convenient one. We refer to models with a probit conditional link, an arbitrary marginal link, and a normal random-effect distribution as link-probit-normal models. In this manuscript we outline these models and discuss appropriate situations for using multivariate normal approximations. Unlike other manuscripts in this area that focus on very general situations and implement Markov chain Monte Carlo or MCEM algorithms, we focus on simpler random-intercept settings and give a collection of user-friendly examples and reproducible code. Marginally, the link-probit-normal model is obtained by a non-linear model on a discretized multivariate normal distribution, and thus can be thought of as a special case of discretizing a multivariate T distribution (as the degrees of freedom go to infinity). We also consider the larger class of multivariate T marginal models and illustrate how these models can be used to closely approximate a logit link.
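    To make the computational point concrete, below is a minimal Python sketch (not code from the manuscript; function names and numbers are illustrative) of how a probit conditional link with a normal random intercept reduces marginal and joint success probabilities to multivariate normal quantiles, the quantity the abstract notes is easy to approximate.

        import numpy as np
        from scipy.stats import norm, multivariate_normal

        def marginal_prob(eta, sigma2):
            # Marginal P(Y = 1) when Y | b ~ Bernoulli(Phi(eta + b)) and b ~ N(0, sigma2).
            return norm.cdf(eta / np.sqrt(1.0 + sigma2))

        def joint_prob_all_ones(etas, sigma2):
            # P(Y_1 = ... = Y_n = 1) under a shared normal random intercept:
            # an orthant probability of a multivariate normal with exchangeable
            # correlation sigma2 / (1 + sigma2).
            etas = np.asarray(etas, dtype=float)
            n = etas.size
            rho = sigma2 / (1.0 + sigma2)
            corr = np.full((n, n), rho)
            np.fill_diagonal(corr, 1.0)
            upper = etas / np.sqrt(1.0 + sigma2)
            return multivariate_normal(mean=np.zeros(n), cov=corr).cdf(upper)

        # Illustrative values only
        print(marginal_prob(0.5, sigma2=1.0))                    # Phi(0.5 / sqrt(2))
        print(joint_prob_all_ones([0.5, 0.2, -0.1], sigma2=1.0))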

    A NOVEL AND SIMPLE RULE OF THUMB FOR MULTIPLICITY CONTROL IN EQUIVALENCE TESTING USING TWO ONE-SIDED TESTS

    Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely because of its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) when using TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply dividing the nominal Type I error rate by (k - 1) is sufficient to maintain the family-wise error rate at or below the desired value. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
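    The rule of thumb is easy to state in code. The following Python sketch (illustrative only, not the paper's code; the group sizes, margin, and data are made up) runs Welch-based TOST on every pair of k independent groups at level alpha / (k - 1) rather than the more conservative Bonferroni level alpha / (k(k - 1)/2).

        import itertools
        import numpy as np
        from scipy import stats

        def tost_equivalent(x, y, delta, alpha):
            # Two one-sided Welch t-tests for |mean(x) - mean(y)| < delta;
            # returns True when both one-sided nulls are rejected at level alpha.
            vx, vy = np.var(x, ddof=1) / len(x), np.var(y, ddof=1) / len(y)
            se = np.sqrt(vx + vy)
            df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
            diff = np.mean(x) - np.mean(y)
            p_lower = 1 - stats.t.cdf((diff + delta) / se, df)   # H0: diff <= -delta
            p_upper = stats.t.cdf((diff - delta) / se, df)       # H0: diff >= +delta
            return max(p_lower, p_upper) < alpha

        rng = np.random.default_rng(0)
        groups = [rng.normal(10, 2, size=30) for _ in range(4)]  # k = 4 made-up groups
        k, alpha, delta = len(groups), 0.05, 1.0
        for i, j in itertools.combinations(range(k), 2):
            # Rule of thumb: test each pair at alpha / (k - 1)
            print(i, j, tost_equivalent(groups[i], groups[j], delta, alpha / (k - 1)))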

    Empowering Learning: Standalone, Browser-Only Courses for Seamless Education

    Massive Open Online Courses (MOOCs) have transformed the educational landscape, offering scalable and flexible learning opportunities, particularly in data-centric fields like data science and artificial intelligence. Incorporating AI and data science into MOOCs is a potential means of enhancing the learning experience through adaptive learning approaches. In this context, we introduce PyGlide, a proof-of-concept open-source MOOC delivery system that underscores autonomy, transparency, and collaboration in maintaining course content. We provide a user-friendly, step-by-step guide for PyGlide, emphasizing its distinct advantage of not requiring any local software installation for students. Highlighting its potential to enhance accessibility, inclusivity, and the manageability of course materials, we showcase PyGlide's practical application in a continuous integration pipeline on GitHub. We believe that PyGlide charts a promising course for the future of open-source MOOCs, effectively addressing crucial challenges in online education.

    Multilevel functional principal component analysis

    The Sleep Heart Health Study (SHHS) is a comprehensive landmark study of sleep and its impacts on health outcomes. A primary metric of the SHHS is the in-home polysomnogram, which includes two electroencephalographic (EEG) channels for each subject, at two visits. The volume and importance of these data present enormous challenges for analysis. To address these challenges, we introduce multilevel functional principal component analysis (MFPCA), a novel statistical methodology designed to extract core intra- and inter-subject geometric components of multilevel functional data. Though motivated by the SHHS, the proposed methodology is generally applicable, with potential relevance to many modern scientific studies of hierarchical or longitudinal functional outcomes. Notably, using MFPCA, we identify and quantify associations between EEG activity during sleep and adverse cardiovascular outcomes. Comment: Published at http://dx.doi.org/10.1214/08-AOAS206 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
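    As a rough illustration of the moment-based idea behind multilevel functional PCA (assumed here to follow the usual split into between-subject and within-subject covariance surfaces; the data layout and function name are mine, not the paper's), the Python sketch below estimates the two covariance surfaces on a common grid and eigendecomposes each to obtain level-specific principal components.

        import numpy as np

        def mfpca(Y):
            # Y has shape (n_subjects, n_visits, n_timepoints): curves on a common grid.
            n_subj, n_visit, n_time = Y.shape
            Yc = Y - Y.mean(axis=(0, 1))                      # remove the overall mean curve
            # Total covariance: same subject, same visit
            K_total = np.einsum('ijs,ijt->st', Yc, Yc) / (n_subj * n_visit)
            # Between-subject covariance: same subject, different visits
            K_between = np.zeros((n_time, n_time))
            pairs = 0
            for j in range(n_visit):
                for l in range(n_visit):
                    if j != l:
                        K_between += np.einsum('is,it->st', Yc[:, j], Yc[:, l])
                        pairs += n_subj
            K_between /= pairs
            K_within = K_total - K_between
            # Level 1 (subject) and level 2 (visit) eigenvalues / eigenfunctions
            evals1, efuns1 = np.linalg.eigh(K_between)
            evals2, efuns2 = np.linalg.eigh(K_within)
            return evals1[::-1], efuns1[:, ::-1], evals2[::-1], efuns2[:, ::-1]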

    JOINTLY MODELING CONTINUOUS AND BINARY OUTCOMES FOR BOOLEAN OUTCOMES: AN APPLICATION TO MODELING HYPERTENSION

    Binary outcomes defined by logical (Boolean) “and” or “or” operations on original continuous and discrete outcomes arise commonly in medical diagnoses and epidemiological research. In this manuscript, we consider applying the “or” operator to two continuous variables above a threshold and a binary variable, a setting that occurs frequently in the modeling of hypertension. Rather than modeling the resulting composite outcome defined by the logical operator, we present a method that models the original outcomes, thus utilizing all information in the data, yet continues to yield conclusions on the composite scale. A stratified propensity score adjustment is proposed to account for confounding variables. A Mantel-Haenszel-style combination of stratum-specific odds ratios is proposed to evaluate a risk factor. The benefits of the proposed approach include easy handling of missing data and the ability to estimate the correlations between the original outcomes. We emphasize that the model retains the ability to evaluate odds ratios on the simpler and more easily interpreted composite scale. The approach is evaluated by Monte Carlo simulations. An example of the analysis of the impact of sleep-disordered breathing on a standard composite hypertension measure, based on blood pressure measurements and medication usage, is included.
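    The Mantel-Haenszel combination mentioned above uses a standard formula; the Python sketch below (a generic illustration with made-up counts, not the manuscript's code) combines stratum-specific 2x2 tables, such as those formed within propensity score strata, into a single odds ratio.

        import numpy as np

        def mantel_haenszel_or(tables):
            # Combined odds ratio across strata; each table is a 2x2 array
            # [[exposed & outcome, exposed & no outcome],
            #  [unexposed & outcome, unexposed & no outcome]].
            num = den = 0.0
            for tab in tables:
                (a, b), (c, d) = np.asarray(tab, dtype=float)
                n = a + b + c + d
                num += a * d / n
                den += b * c / n
            return num / den

        # Made-up strata (e.g. propensity score strata)
        strata = [np.array([[20, 80], [10, 90]]),
                  np.array([[15, 55], [12, 70]])]
        print(mantel_haenszel_or(strata))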

    A SURVEY OF THE LIKELIHOOD APPROACH TO BIOEQUIVALENCE TRIALS

    Bioequivalence trials are abbreviated clinical trials in which a generic drug or new formulation is evaluated to determine if it is equivalent to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area - which we believe are indicative of systemic defects in the frequentist approach - that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence and examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods are a reasonable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. Our study also shows that the standard methods in the current practice of bioequivalence trials offer only weak evidence from the evidential point of view.
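    For intuition, a standardized profile likelihood for a mean log-ratio under a normal model is straightforward to compute; the Python sketch below (my illustration with simulated data, not the paper's code) profiles out the variance and reports the 1/8 likelihood interval alongside the conventional equivalence limits log(0.80) and log(1.25).

        import numpy as np

        def standardized_profile_likelihood(x, mu_grid):
            # Normal-model profile likelihood for the mean with the variance profiled
            # out: L_p(mu) is proportional to (sum((x - mu)^2))^(-n/2), rescaled so
            # the maximum is 1.
            x = np.asarray(x, dtype=float)
            n = x.size
            rss = np.array([np.sum((x - mu) ** 2) for mu in mu_grid])
            like = rss ** (-n / 2.0)
            return like / like.max()

        rng = np.random.default_rng(1)
        x = rng.normal(loc=0.05, scale=0.2, size=24)   # made-up log(AUC) ratios
        grid = np.linspace(-0.4, 0.4, 401)
        L = standardized_profile_likelihood(x, grid)
        supported = grid[L >= 1 / 8]                   # 1/8 likelihood interval
        print(supported.min(), supported.max(), (np.log(0.8), np.log(1.25)))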