
    Goodness-of-fit and generalized estimating equation methods for ordinal responses based on the stereotype model

    Background: Data with ordinal categories occur in many diverse areas, but methodologies for modeling ordinal data lag severely behind equivalent methodologies for continuous data. There are advantages to using a model specifically developed for ordinal data, such as making fewer assumptions and having greater power for inference. Methods: The ordered stereotype model (OSM) is an ordinal regression model that is more flexible than the popular proportional odds ordinal model. The primary benefit of the OSM is that it uses numeric encoding of the ordinal response categories without assuming the categories are equally spaced. Results: This article summarizes two recent advances in the OSM: (1) three novel tests to assess goodness-of-fit; (2) a new generalized estimating equations approach to estimate the model for longitudinal studies. These methods use the new spacing of the ordinal categories indicated by the estimated score parameters of the OSM. Conclusions: The recent advances presented can be applied in several fields. We illustrate their use with the well-known arthritis clinical trial dataset. These advances fill a gap in the methodologies available for ordinal responses and may be useful for practitioners in many applied fields. This research has been supported by Marsden grant E2987-3648 administered by the Royal Society of New Zealand, by grant 2017 SGR 622 (GRBIO) administered by the Departament d’Economia i Coneixement de la Generalitat de Catalunya (Spain), and by the Ministerio de Ciencia e Innovación (Spain) [PID2019-104830RB-I00 / DOI (AEI): 10.13039/501100011033]. Peer Reviewed. Postprint (published version).
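    The distinctive ingredient of the OSM is its score parameters, which estimate the spacing between ordinal categories rather than assuming it. A minimal pure-Python sketch of how the model turns a covariate into category probabilities; the parameter values below (mu, phi, beta) are hypothetical illustrations, not fitted estimates:

```python
import math

def osm_probs(x, mu, phi, beta):
    """Ordered stereotype model: P(Y = k | x) is proportional to
    exp(mu[k] + phi[k] * beta * x), where the score parameters phi
    are constrained so that 0 = phi[0] <= ... <= phi[K-1] = 1.
    The fitted values of phi reveal how far apart the ordinal
    categories really are."""
    linear = [m + p * beta * x for m, p in zip(mu, phi)]
    mx = max(linear)                         # stabilise the softmax
    expv = [math.exp(v - mx) for v in linear]
    total = sum(expv)
    return [e / total for e in expv]

# Hypothetical 4-category example: identifiability fixes mu[0] = 0,
# phi[0] = 0, and phi[-1] = 1; the interior phi values are estimated.
mu = [0.0, 0.5, 0.2, -0.4]
phi = [0.0, 0.15, 0.7, 1.0]   # unequal spacing between categories
beta = 1.2

p = osm_probs(x=0.8, mu=mu, phi=phi, beta=beta)
print([round(v, 3) for v in p])   # four probabilities summing to 1
```

The closeness of phi[1] to phi[0] here would indicate that the first two categories behave almost interchangeably, which is exactly the kind of spacing information the goodness-of-fit and GEE methods above exploit.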

    Proportional-odds models for repeated composite and long ordinal outcome scales

    In many medical studies, researchers widely use composite or long ordinal scores, that is, scores that have a large number of categories and a natural ordering, often resulting from the sum of a number of short ordinal scores, to assess function or quality of life. Typically, these are analysed under unjustified assumptions of normality for the outcome measure, which are unlikely to be even approximately true. Scores of this type are better analysed using methods reserved for more conventional (short) ordinal scores, such as the proportional-odds model. For long ordinal scores, we can avoid the need for a large number of cut-point parameters, which define the divisions between the score categories in the proportional-odds model, by including orthogonal polynomial contrasts. We introduce the repeated measures proportional-odds logistic regression model and describe modifications, for long ordinal outcomes, to the generalized estimating equation methodology used for parameter estimation. We introduce data from a trial assessing two surgical interventions, briefly describe and re-analyse these using the new model, and compare inferences from the new analysis with previously published results for the primary outcome measure (hip function at 12 months postoperatively). We use a simulation study to illustrate how this model also has more general application for conventional short ordinal scores, to select amongst competing models of varying complexity for the cut-point parameters.
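    The cut-point reduction described above can be sketched in a few lines: instead of estimating K−1 free cut-points for a K-category score, generate them from a low-order polynomial in the category index. The coefficients and scale length below are hypothetical, chosen only to show the mechanics:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def cutpoints_from_poly(K, coefs):
    """Model the K-1 cut-points as a polynomial in the centred
    category index. For a long ordinal scale this replaces dozens of
    free parameters with a handful of polynomial coefficients."""
    centre = (K - 2) / 2.0
    alphas = []
    for k in range(K - 1):
        t = k - centre                       # centred index
        alphas.append(sum(c * t**j for j, c in enumerate(coefs)))
    return alphas

def cumulative_probs(x, beta, alphas):
    """Proportional-odds model: P(Y <= k | x) = logistic(alpha_k - beta*x)."""
    return [logistic(a - beta * x) for a in alphas]

K = 11                                       # a "long" 11-category score
alphas = cutpoints_from_poly(K, coefs=[0.0, 0.6, 0.02])  # linear + quadratic
probs = cumulative_probs(x=1.0, beta=0.5, alphas=alphas)
# Cut-points, and hence cumulative probabilities, must be monotone.
assert all(a < b for a, b in zip(probs, probs[1:]))
```

In practice the polynomial order would be chosen by comparing competing models, which is the model-selection use of the simulation study mentioned above.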

    Methods for the analysis of ordinal response data in medical image quality assessment.

    The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimisation, radiation dose reduction or system comparison studies, and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques, such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluate the entire study from data collection to analysis, suggest sources for software and further reading, and provide a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilisation of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
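    The information lost by averaging Likert responses is easy to demonstrate. In this invented illustrative example, two rating distributions share the same mean score of 3.0, yet their cumulative distributions, which an ordinal approach such as the proportional odds model actually compares, are entirely different:

```python
# Two observers' ratings on a 5-point Likert scale, as category counts.
obs_a = {1: 10, 2: 0, 3: 0, 4: 0, 5: 10}   # polarised ratings
obs_b = {1: 0, 2: 0, 3: 20, 4: 0, 5: 0}    # everyone rates 3

def likert_mean(counts):
    """The common (but lossy) approach: treat categories as numbers."""
    n = sum(counts.values())
    return sum(k * c for k, c in counts.items()) / n

print(likert_mean(obs_a), likert_mean(obs_b))  # both 3.0: the means agree

def cumulative(counts):
    """Empirical P(Y <= k): the quantity an ordinal model works with."""
    n = sum(counts.values())
    total, out = 0, []
    for k in sorted(counts):
        total += counts[k]
        out.append(total / n)
    return out

print(cumulative(obs_a))  # [0.5, 0.5, 0.5, 0.5, 1.0]
print(cumulative(obs_b))  # [0.0, 0.0, 1.0, 1.0, 1.0]
```

A comparison based only on the means would declare these observers in perfect agreement, even though their rating behaviour is radically different.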

    Beyond linear regression: A reference for analyzing common data types in discipline based education research

    [This paper is part of the Focused Collection on Quantitative Methods in PER: A Critical Examination.] A common goal in discipline-based education research (DBER) is to determine how to improve student outcomes. Linear regression is a common technique used to test hypotheses about the effects of interventions on continuous outcomes (such as exam score), as well as to control for student nonequivalence in quasirandom experimental designs. (In quasirandom designs, subjects are not randomly assigned to treatments. For example, when treatment is assigned by classroom and observations are made on students, the design is quasirandom because treatment is assigned to the classroom, not to the subjects (students).) However, many types of outcome data cannot be appropriately analyzed with linear regression. In these instances, researchers must move beyond linear regression and implement alternative regression techniques. For example, student outcomes can be measured on binary scales (e.g., pass or fail), tightly bound scales (e.g., strongly agree to strongly disagree), or nominal scales (i.e., different discrete choices, for example, multiple tracks within a physics major), each necessitating alternative regression techniques. Here, we review extensions of linear modeling, generalized linear models (GLMs), and specifically compare five GLMs that are useful for analyzing DBER data: logistic, binomial, proportional odds (also called ordinal; including censored regression), multinomial, and Poisson (including negative binomial, hurdle, and zero-inflated) regression. We introduce a diagnostic tool to facilitate a researcher’s identification of the most appropriate GLM for their own data. For each model type, we explain when, why, and how to implement the regression approach. When: we provide examples of the types of research questions and outcome data that would motivate this regression approach, including citations to articles in the DBER literature. Why: we name which linear regression assumption is violated by the data type. How: we detail implementation and interpretation of this modeling approach in R, including R syntax and code, and how to discuss the regression output in research papers. Code accompanying each analysis can be found in the online GitHub repository that is associated with this paper (https://github.com/ejtheobald/BeyondLinearRegression). This paper is not an exhaustive review of regression techniques, nor does it review nonregression-based analyses. Rather, it aims to compile and summarize regression techniques useful for the most common types of DBER data and provide examples, citations, and heavily annotated R code so that researchers can easily implement the techniques in their work.
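    As a rough illustration of the kind of mapping such a diagnostic tool encodes (the function and category labels below are hypothetical, not the paper's actual tool), each type of outcome data points to a different GLM:

```python
def choose_glm(outcome_type, overdispersed=False):
    """Map a description of the outcome data to a suitable GLM family.
    Labels are illustrative; the paper's diagnostic also covers hurdle
    and zero-inflated variants for counts with excess zeros."""
    if outcome_type == "binary":        # e.g., pass or fail
        return "logistic regression"
    if outcome_type == "proportion":    # k successes out of n trials
        return "binomial regression"
    if outcome_type == "ordinal":       # e.g., Likert-style ordered scale
        return "proportional-odds (ordinal) regression"
    if outcome_type == "nominal":       # unordered discrete choices
        return "multinomial regression"
    if outcome_type == "count":         # non-negative integers
        return ("negative binomial regression" if overdispersed
                else "Poisson regression")
    raise ValueError(f"unrecognised outcome type: {outcome_type!r}")

print(choose_glm("ordinal"))
print(choose_glm("count", overdispersed=True))
```

The point of such a decision rule is the "why" step above: each branch corresponds to a specific linear-regression assumption (normality, unboundedness, constant variance) that the data type violates.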

    Design and Analysis of Randomized and Non-randomized Studies: Improving Validity and Reliability

    The aim of the thesis is to investigate how to optimize the design and analysis of randomized and non-randomized therapeutic studies, in order to increase the validity and reliability of causal treatment effect estimates, specifically in heterogeneous diseases. The following research questions will be addressed: __1)__ What are the benefits of more advanced statistical analyses to estimate treatment effects from RCTs in heterogeneous diseases? a. What is the heterogeneity in acute neurological diseases with regard to baseline severity and further course of the disease? b. What is the potential gain in efficiency of covariate adjustment and proportional odds analysis in RCTs in Guillain-Barré syndrome (GBS)? __2)__ What are the validity and reliability of the regression discontinuity (RD) design compared to an RCT to estimate causal treatment effects? a. What are threats to the validity of the RD design to estimate treatment effects compared to an RCT? b. How efficient is the RD design to estimate treatment effects compared to an RCT? c. What are the potential benefits of an alternative assignment approach in an RD design?