    JM: An R Package for the Joint Modelling of Longitudinal and Time-to-Event Data

    In longitudinal studies measurements are often collected on different types of outcomes for each subject. These may include several longitudinally measured responses (such as blood values relevant to the medical condition under study) and the time at which an event of particular interest occurs (e.g., death, development of a disease or dropout from the study). These outcomes are often separately analyzed; however, in many instances, a joint modeling approach is either required or may produce a better insight into the mechanisms that underlie the phenomenon under study. In this paper we present the R package JM that fits joint models for longitudinal and time-to-event data.
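
    The following is a minimal sketch of how a joint model can be fitted with JM, based on the aids / aids.id example data that ship with the package; the calls are an illustration of standard JM usage under those assumptions, not a quotation of the paper, and ?jointModel should be consulted for authoritative details.

        library(JM)        # joint models for longitudinal and time-to-event data
        library(nlme)      # lme() for the longitudinal submodel
        library(survival)  # coxph() for the survival submodel

        # Longitudinal submodel: CD4 trajectory over time, random intercept and slope per patient
        fitLME <- lme(CD4 ~ obstime, random = ~ obstime | patient, data = aids)

        # Survival submodel: fitted with x = TRUE so the design matrix is retained
        fitCOX <- coxph(Surv(Time, death) ~ drug, data = aids.id, x = TRUE)

        # Joint model linking the two submodels through the longitudinal trajectory
        fitJOINT <- jointModel(fitLME, fitCOX, timeVar = "obstime")
        summary(fitJOINT)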

    Quasi-likelihood Ratio Tests for Homoscedasticity of Variance in Linear Regression

    Two quasi-likelihood ratio tests are proposed for the homoscedasticity assumption in linear regression models. They require fewer assumptions than the existing tests. The properties of the tests are investigated through simulation studies. An example is provided to illustrate the usefulness of the newly proposed tests.
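
    The abstract does not give the form of the tests, but as a generic illustration (assumed notation, not the authors' exact construction), a likelihood-ratio-type test of homoscedasticity in the linear model y_i = x_i^\top \beta + \epsilon_i can be framed by letting the error variance depend on covariates,

        \mathrm{Var}(\epsilon_i) = \sigma^2 \exp(z_i^\top \gamma), \qquad H_0 : \gamma = 0,

    and comparing twice the gap between the maximized (quasi-)log-likelihoods of the two specifications,

        \mathrm{QLR} = 2\,\{\ell(\hat\beta, \hat\sigma^2, \hat\gamma) - \ell(\tilde\beta, \tilde\sigma^2, 0)\},

    to a chi-squared reference distribution with \dim(\gamma) degrees of freedom.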

    A homoscedasticity test for the accelerated failure time model

    The semiparametric accelerated failure time (AFT) model is a popular linear model in survival analysis. The AFT model and its associated inference methods assume homoscedasticity of the survival data. It is shown that violation of this assumption leads to inefficient parameter estimation and anti-conservative confidence interval estimation and, thus, to misleading conclusions in survival data analysis. However, no valid statistical test has been proposed for the homoscedasticity assumption. In this paper, we propose the first quasi-likelihood ratio test for the homoscedasticity assumption in the AFT model. Simulation studies show that the test performs well. A real dataset is used to demonstrate the usefulness of the developed test.
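
    For context (standard notation, not taken from the paper), the semiparametric AFT model specifies

        \log T_i = x_i^\top \beta + \epsilon_i,

    where the errors \epsilon_i are independent and identically distributed with a common, unspecified distribution; this common error distribution is the homoscedasticity assumption in question. A heteroscedastic departure can be written as

        \log T_i = x_i^\top \beta + \sigma(x_i)\,\epsilon_i,

    with a covariate-dependent scale \sigma(\cdot), which is the kind of violation such a test is designed to detect.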

    Flexible Partially Linear Single Index Regression Models for Multivariate Survival Data

    Survival regression models usually assume that covariate effects have a linear form. In many circumstances, however, the assumption of linearity may be violated. The present work addresses this limitation by adding nonlinear covariate effects to survival models. Nonlinear covariates are handled using a single-index structure, which allows high-dimensional nonlinear effects to be reduced to a scalar term. The nonlinear single-index approach is applied to the modeling of survival data with multivariate responses in three popular models: the proportional hazards (PH) model, the proportional odds (PO) model, and the generalized transformation model. Another extension of the PH and PO models concerns the handling of the baseline function. Instead of modeling it parametrically, which is fairly restrictive, or leaving it unspecified, which makes it impossible to calculate the survival and hazard functions, a weakly parametric approach is used here. As a result, the full likelihood can be used for inference. The new developments are realized by adding a number of weakly parametric elements to the standard parametric regression models. The marginal baseline hazard functions are modeled using piecewise constants. Marginal survival functions are combined using copula models, such as the Clayton model, to incorporate association among the multivariate responses. The nonlinear covariate effect enters the model through a smooth function whose input is the single-index term; the smooth function is modeled using a spline. The performance of the PH, PO, and transformation models with the proposed extensions is evaluated through extensive simulation studies. The PH and PO models are also applied to a real-world data set. The results suggest that the proposed methods capture nonlinear covariate effects well and that there is benefit to modeling the association between the correlated responses. Individual-level survival or hazard function estimates also provide information of interest to researchers. The proposed transformation model in particular is very promising; some discussion of how this model may be further developed is provided.
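
    In the notation sketched here (symbols assumed for illustration, not taken verbatim from the work), the extended PH model for the k-th response of subject i takes the form

        \lambda_{ik}(t \mid x_{ik}, z_{ik}) = \lambda_{0k}(t)\,\exp\{ x_{ik}^\top \beta + g(z_{ik}^\top \alpha) \},

    where the baseline hazard \lambda_{0k}(t) is piecewise constant, the single-index direction \alpha reduces the covariate vector z_{ik} to a scalar, and g(\cdot) is a smooth function represented by a spline. The marginal survival functions are then tied together with a copula such as the Clayton copula,

        C_\theta(u_1, \ldots, u_K) = \Big( \sum_{k=1}^{K} u_k^{-\theta} - K + 1 \Big)^{-1/\theta}, \qquad \theta > 0,

    whose parameter \theta captures the association among the correlated responses.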

    Semiparametric Regression During 2003–2007

    Semiparametric regression is a fusion between parametric regression and nonparametric regression, and the title of a book that we published on the topic in early 2003. We review developments in the field during the five-year period since the book was written. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement, and widespread application.
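
    As a one-line illustration of this fusion (generic notation, not specific to the review), a semiparametric regression model combines a parametric linear part with a nonparametric smooth term,

        y_i = x_i^\top \beta + f(z_i) + \varepsilon_i,

    where \beta is estimated as in ordinary regression and f is an unspecified smooth function, commonly represented by penalized splines.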

    A Cox proportional hazard model for mid-point imputed interval-censored data

    There has been increasing interest in survival analysis with interval-censored data, where the event of interest (such as infection with a disease) is not observed exactly but is only known to have happened between two examination times. However, because most research has focused on right-censored data, statistical tests and techniques for right censoring are abundant, whereas methods for interval censoring are comparatively scarce. In this study, right-censoring methods are used to fit a proportional hazards model to interval-censored data. The interval-censored observations were transformed using mid-point imputation, which assumes that an event occurs at the midpoint of its recorded interval. The results gave conservative regression estimates, but a comparison with the conventional methods showed that the estimates were not significantly different. However, the censoring mechanism and the interval lengths should be given serious consideration before deciding to use mid-point imputation on interval-censored data.
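
    A minimal sketch of the mid-point imputation idea is given below; the data frame dat, the interval endpoints left and right, and the covariates treatment and age are hypothetical placeholders, not the study's actual variables.

        library(survival)  # Surv() and coxph() for right-censoring methods

        # Impute each finite censoring interval (left, right] by its midpoint;
        # observations with no upper bound are treated as right-censored at 'left'.
        dat$time  <- ifelse(is.finite(dat$right), (dat$left + dat$right) / 2, dat$left)
        dat$event <- ifelse(is.finite(dat$right), 1, 0)

        # Fit an ordinary Cox proportional hazards model to the imputed times
        fit <- coxph(Surv(time, event) ~ treatment + age, data = dat)
        summary(fit)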

    Modelling Financial High Frequency Data Using Point Processes

    In this chapter, written for a forthcoming Handbook of Financial Time Series to be published by Springer-Verlag, we review the econometric literature on dynamic duration and intensity processes applied to high frequency financial data, which was boosted by the work of Engle and Russell (1997) on autoregressive duration models.
    Keywords: Duration, Intensity, Point process, High frequency data, ACD models
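
    As a brief reminder of the basic specification mentioned above (standard notation, not quoted from the chapter), the ACD(1,1) model of Engle and Russell describes the i-th duration x_i between market events as

        x_i = \psi_i \varepsilon_i, \qquad \psi_i = \omega + \alpha x_{i-1} + \beta \psi_{i-1},

    where \psi_i is the conditional expected duration given the past and the innovations \varepsilon_i are i.i.d. positive random variables with unit mean.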