
    Vehicle Lateral and Longitudinal Velocity Estimation Using Machine Learning Algorithms

    Prediction and estimation of states are of great importance for vehicle control and safety. Conventional observers are designed and widely used in the automotive industry to fulfill this objective. However, vehicle behavior can be nonlinear or unpredictable, making it difficult or impossible to estimate vehicle states with linear or low-degree nonlinear observers. These observers may fail to estimate the states correctly on highly slippery roads, in combined-slip situations, and during maneuvers with intense steering inputs. In this study, two kernel regression-based machine learning methods are used to estimate lateral and longitudinal velocities. The kernel regression methods need only a limited number of reference points in the vicinity of the estimation point, do not require training, and can be implemented for real-time applications, estimating lateral or longitudinal velocity at a rate of more than 50 Hz. The suggested estimation methods can be applied to different vehicles without modification: by normalizing the data, the approach can utilize data from any vehicle, so the resulting solution can be used for state estimation of any vehicle. Since the estimation methods rely on local reference points, a lack of rich reference points can be a challenge. Two healing algorithms are proposed to address this issue by enriching the reference points in the vicinity of the estimation space, and their performance for different maneuvers is also studied in this thesis. A series of simulation and experimental tests with various road conditions is used to validate the estimation performance. Results show that the proposed algorithms can estimate the lateral and longitudinal velocities of different vehicles, with an error of less than 0.1 m/s for lateral velocity and less than 2 km/h for longitudinal velocity when proper reference data are provided.
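
    As a rough illustration of the kind of estimator described above (the feature set, Gaussian kernel, bandwidth, and neighborhood size below are assumptions for the sketch, not details taken from the thesis), a Nadaraya-Watson style kernel regression that uses only reference points near the current measurement could look like this:

```python
import numpy as np

def kernel_velocity_estimate(query, ref_features, ref_velocity,
                             bandwidth=0.5, k_neighbors=50):
    """Nadaraya-Watson estimate of a velocity component from local reference points.

    query        : (d,) normalized measurement vector (e.g. wheel speeds, yaw rate,
                   steering angle, accelerations) -- the feature set is an assumption.
    ref_features : (n, d) normalized reference measurements.
    ref_velocity : (n,) corresponding reference lateral or longitudinal velocities.
    """
    # Use only the reference points closest to the query (local estimation).
    dists = np.linalg.norm(ref_features - query, axis=1)
    idx = np.argsort(dists)[:k_neighbors]

    # Gaussian kernel weights; no offline training phase is required.
    w = np.exp(-0.5 * (dists[idx] / bandwidth) ** 2)
    if w.sum() < 1e-12:          # query far from all reference points
        return float(ref_velocity[idx[0]])
    return float(np.dot(w, ref_velocity[idx]) / w.sum())

# Example with synthetic reference data.
rng = np.random.default_rng(0)
refs = rng.normal(size=(2000, 5))
vels = refs @ np.array([0.2, -0.1, 0.3, 0.05, 0.0]) + 0.01 * rng.normal(size=2000)
print(kernel_velocity_estimate(refs[0], refs, vels))
```

    Because each estimate touches only a small neighborhood of the reference set and no training is needed, update rates above 50 Hz are plausible; the healing step mentioned above would then be responsible for populating sparse regions of the reference set.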

    Nonlinear association structures in flexible Bayesian additive joint models

    Joint models of longitudinal and survival data have become an important tool for modeling associations between longitudinal biomarkers and event processes. In existing shared random effects models, the association between marker and log-hazard is assumed to be linear, and this assumption usually remains unchecked. We present an extended framework of flexible additive joint models that allows the estimation of nonlinear, covariate-specific associations by making use of Bayesian P-splines. Our joint models are estimated in a Bayesian framework using structured additive predictors for all model components, allowing great flexibility in the specification of smooth nonlinear, time-varying, and random effects terms for the longitudinal submodel, the survival submodel, and their association. The ability to capture truly linear and nonlinear associations is assessed in simulations and illustrated on the widely studied biomedical data on the rare fatal liver disease primary biliary cirrhosis. All methods are implemented in the R package bamlss to facilitate the application of this flexible joint model in practice. Comment: Changes to initial commit: minor language editing, additional information in Section 4, formatting in Supplementary Information.
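
    As a hedged sketch of this kind of specification (the notation below is illustrative and not taken from the paper), the log-hazard can include a smooth function g of the current value of the longitudinal predictor, with g represented by Bayesian P-splines; the conventional shared-random-effects model is recovered as the linear special case g(m) = αm:

```latex
% Illustrative additive joint model with a nonlinear association (assumed notation, amsmath).
\begin{align*}
  y_i(t)            &= \eta_{\mu i}(t) + \varepsilon_i(t), \qquad \varepsilon_i(t) \sim N(0, \sigma^2)
                       && \text{(longitudinal submodel)}\\
  \eta_{\mu i}(t)   &= f_0(t) + \mathbf{x}_i^\top \boldsymbol{\beta} + b_i(t)
                       && \text{(structured additive predictor)}\\
  \log \lambda_i(t) &= \eta_{\lambda i}(t) + g\bigl(\eta_{\mu i}(t)\bigr)
                       && \text{(survival submodel with association } g\text{)}\\
  g(m)              &= \sum_{k=1}^{K} \gamma_k B_k(m), \quad \boldsymbol{\gamma} \text{ with a random-walk (P-spline) prior}
                       && \text{(nonlinear, possibly covariate-specific)}
\end{align*}
```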

    Nonlinear quantile mixed models

    In regression applications, the presence of nonlinearity and correlation among observations offers computational challenges not only in traditional settings such as least squares regression, but also (and especially) when the objective function is non-smooth, as in the case of quantile regression. In this paper, we develop methods for the modeling and estimation of nonlinear conditional quantile functions when data are clustered within two-level nested designs. This work represents an extension of the linear quantile mixed models of Geraci and Bottai (2014, Statistics and Computing). We develop a novel algorithm that blends a smoothing algorithm for quantile regression with a second-order Laplacian approximation for nonlinear mixed models. To assess the proposed methods, we present a simulation study and two applications, one in pharmacokinetics and one related to growth curve modeling in agriculture. Comment: 26 pages, 8 figures, 8 tables.
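
    To illustrate the smoothing ingredient only (the particular softplus surrogate and the smoothing parameter omega below are generic choices, not necessarily the ones used in the paper), the non-differentiable quantile check function can be replaced by a smooth surrogate so that gradient-based optimization becomes possible:

```python
import numpy as np

def check_loss(r, tau):
    """Quantile (check) loss rho_tau(r) = r * (tau - 1{r < 0})."""
    return r * (tau - (r < 0))

def smoothed_check_loss(r, tau, omega=0.1):
    """Smooth surrogate for the check loss based on a softplus approximation.

    tau * r + omega * log(1 + exp(-r / omega)) tends to rho_tau(r) as omega -> 0;
    omega is a generic smoothing parameter, not a quantity from the paper.
    """
    # np.logaddexp keeps the computation numerically stable for small omega.
    return tau * r + omega * np.logaddexp(0.0, -r / omega)

residuals = np.linspace(-2.0, 2.0, 5)
print(check_loss(residuals, 0.75))
print(smoothed_check_loss(residuals, 0.75, omega=0.01))
```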

    Detection of risk factors for obesity in early childhood with quantile regression methods for longitudinal data

    This article compares and discusses three different statistical methods for investigating risk factors for overweight and obesity in early childhood by means of the LISA study, a recent German birth cohort study with 3097 children. Since the definition of overweight and obesity is typically based on upper quantiles (90% and 97%) of the age-specific body mass index (BMI) distribution, our aim was to model the influence of risk factors and age on these quantiles while taking the longitudinal data structure into account as far as possible. The following statistical regression models were chosen: additive mixed models, generalized additive models for location, scale and shape (GAMLSS), and distribution-free quantile regression models. The methods were compared empirically by cross-validation, and for the data at hand no model could be rated superior. Motivated by previous studies, we explored whether there is an age-specific skewness of the BMI distribution. The investigated data do not suggest such an effect, even after adjusting for risk factors. Concerning risk factors, our results mainly confirm results obtained in previous studies. From a methodological point of view, we conclude that GAMLSS and distribution-free quantile regression are promising approaches for longitudinal quantile regression, requiring, however, further extensions to fully account for longitudinal data structures.
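
    For the distribution-free quantile regression component, a minimal sketch (the covariates, variable names, and simulated data below are hypothetical; no LISA data are used, and within-child correlation is deliberately ignored, which is exactly the limitation noted above) with statsmodels estimates the 90% and 97% BMI quantiles as a function of age and a risk factor:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal BMI data: repeated measurements per child.
rng = np.random.default_rng(1)
n_children, n_visits = 300, 5
age = np.tile(np.linspace(0.5, 6.0, n_visits), n_children)
smoking = np.repeat(rng.integers(0, 2, n_children), n_visits)   # hypothetical risk factor
bmi = 15 + 0.4 * age + 0.6 * smoking + rng.gamma(2.0, 0.5, size=age.size)
df = pd.DataFrame({"bmi": bmi, "age": age, "smoking": smoking})

# Quantile regression at the quantiles defining overweight (90%) and obesity (97%).
for q in (0.90, 0.97):
    fit = smf.quantreg("bmi ~ age + smoking", df).fit(q=q)
    print(q, fit.params.to_dict())
```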

    An approach for jointly modeling multivariate longitudinal measurements and discrete time-to-event data

    In many medical studies, patients are followed longitudinally and interest lies in assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage modeling approach problematic. We propose a regression calibration approach which appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model which conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between the longitudinal data and the time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal processes subject to informative dropout. We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data. Comment: Published at http://dx.doi.org/10.1214/10-AOAS339 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
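
    As a much-simplified sketch of the two-stage idea only (a single longitudinal marker rather than pairwise bivariate models, naive plug-in calibration without the simulation step or dropout correction, and all variable names hypothetical), the marker can be summarized by mixed-model predictions and then entered into a discrete-time hazard (logistic) regression on person-period data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-period data: one row per subject and discrete time interval.
rng = np.random.default_rng(2)
n_subjects, t_max = 200, 8
rows = []
for i in range(n_subjects):
    b_i = rng.normal(0, 0.5)                           # subject-level random intercept
    for t in range(1, t_max + 1):
        marker = 1.0 + 0.2 * t + b_i + rng.normal(0, 0.3)
        hazard = 1.0 / (1.0 + np.exp(-(-3.0 + 0.8 * (1.0 + 0.2 * t + b_i))))
        event = int(rng.random() < hazard)
        rows.append((i, t, marker, event))
        if event:
            break                                       # subject leaves the risk set
df = pd.DataFrame(rows, columns=["id", "time", "marker", "event"])

# Stage 1: linear mixed model for the longitudinal marker (random intercept per subject).
lmm = smf.mixedlm("marker ~ time", df, groups=df["id"]).fit()
df["marker_hat"] = lmm.fittedvalues                     # smoothed marker values (naive calibration)

# Stage 2: discrete-time hazard model, i.e. logistic regression on person-period data,
# using the calibrated marker as a covariate.
dthaz = smf.logit("event ~ time + marker_hat", df).fit(disp=False)
print(dthaz.params)
```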

    A semiparametric regression model for paired longitudinal outcomes with application in childhood blood pressure development

    This research examines the simultaneous influences of height and weight on longitudinally measured systolic and diastolic blood pressure in children. Previous studies have shown that both height and weight are positively associated with blood pressure. In children, however, the concurrent increases of height and weight have made it all but impossible to discern the effect of height from that of weight. To better understand these influences, we propose to examine the joint effect of height and weight on blood pressure. Bivariate thin plate spline surfaces are used to accommodate the potentially nonlinear effects as well as the interaction between height and weight. Moreover, we consider a joint model for paired blood pressure measures, that is, systolic and diastolic blood pressure, to account for the underlying correlation between the two measures within the same individual. The bivariate spline surfaces are allowed to vary across different groups of interest. We have developed related model fitting and inference procedures. The proposed method is used to analyze data from a real clinical investigation. Comment: Published at http://dx.doi.org/10.1214/12-AOAS567 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
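
    As a toy illustration of fitting a smooth bivariate surface of blood pressure over height and weight (simulated data, a single outcome rather than the paired systolic/diastolic model, and an arbitrary smoothing value are all assumptions of the sketch), scipy's thin-plate-spline radial basis interpolator can be used:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical data: systolic blood pressure as a smooth, possibly interacting
# function of height (cm) and weight (kg).
rng = np.random.default_rng(3)
height = rng.uniform(100, 170, 400)
weight = rng.uniform(15, 70, 400)
sbp = (85 + 0.15 * height + 0.3 * weight + 0.002 * height * weight
       + rng.normal(0, 4, 400))

xy = np.column_stack([height, weight])
# smoothing > 0 gives a penalized thin plate spline surface rather than an
# exact interpolant; the value here is purely illustrative.
surface = RBFInterpolator(xy, sbp, kernel="thin_plate_spline", smoothing=50.0)

# Evaluate the fitted surface on a few (height, weight) pairs.
grid = np.array([[120.0, 25.0], [140.0, 35.0], [160.0, 55.0]])
print(surface(grid))
```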

    Marginal analysis of longitudinal count data in long sequences: Methods and applications to a driving study

    Most of the available methods for longitudinal data analysis are designed and validated for the situation where the number of subjects is large and the number of observations per subject is relatively small. Motivated by the Naturalistic Teenage Driving Study (NTDS), which represents the exact opposite situation, we examine standard methodology and propose new methodology for marginal analysis of longitudinal count data in a small number of very long sequences. We consider standard methods based on generalized estimating equations, under working independence or an appropriate correlation structure, and find them unsatisfactory for dealing with time-dependent covariates when the counts are low. For this situation, we explore a within-cluster resampling (WCR) approach that involves repeated analyses of random subsamples with a final analysis that synthesizes results across subsamples. This leads to a novel WCR method which operates on separated blocks within subjects and which performs better than all of the previously considered methods. The methods are applied to the NTDS data and evaluated in simulation experiments mimicking the NTDS. Comment: Published at http://dx.doi.org/10.1214/11-AOAS507 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
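
    To make the within-cluster resampling idea concrete (the block length, number of resamples, Poisson GEE specification, and simulated data below are generic assumptions for the sketch, not the NTDS analysis or the paper's separated-blocks variant), one can repeatedly fit a GEE to one randomly chosen block per subject and average the estimates across resamples:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def wcr_gee(df, formula, group_col, time_col, block_len=50, n_resamples=20, seed=0):
    """Within-cluster resampling sketch: fit a Poisson GEE to one random
    contiguous block of observations per subject and average the coefficients."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resamples):
        blocks = []
        for _, g in df.groupby(group_col):
            g = g.sort_values(time_col)
            if len(g) <= block_len:
                blocks.append(g)
            else:
                start = rng.integers(0, len(g) - block_len)
                blocks.append(g.iloc[start:start + block_len])
        sub = pd.concat(blocks)
        fit = smf.gee(formula, groups=group_col, data=sub,
                      family=sm.families.Poisson(),
                      cov_struct=sm.cov_struct.Independence()).fit()
        estimates.append(fit.params)
    return pd.concat(estimates, axis=1).mean(axis=1)

# Hypothetical data: few drivers, very long sequences of trip-level event counts.
rng = np.random.default_rng(4)
rows = []
for driver in range(8):
    risky = rng.random()                        # driver-level propensity
    for trip in range(2000):
        x = risky + 0.1 * np.sin(trip / 100.0)  # time-dependent covariate
        y = rng.poisson(np.exp(-2.0 + 0.8 * x)) # low counts per trip
        rows.append((driver, trip, x, y))
df = pd.DataFrame(rows, columns=["driver", "trip", "x", "events"])

print(wcr_gee(df, "events ~ x", group_col="driver", time_col="trip"))
```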