    Merging Data Sources to Predict Remaining Useful Life – An Automated Method to Identify Prognostic Parameters

    The ultimate goal of most prognostic systems is accurate prediction of the remaining useful life (RUL) of individual systems or components based on their use and performance. This class of prognostic algorithms is termed Degradation-Based, or Type III Prognostics. As equipment degrades, measured parameters of the system tend to change; these sensed measurements, or appropriate transformations thereof, may be used to characterize degradation. Traditionally, individual-based prognostic methods use a measure of degradation to make RUL estimates. Degradation measures may include sensed measurements, such as temperature or vibration level, or inferred measurements, such as model residuals or physics-based model predictions. Often, it is beneficial to combine several measures of degradation into a single parameter. Selection of an appropriate parameter is key for making useful individual-based RUL estimates, but methods to aid in this selection are absent in the literature. This dissertation introduces a set of metrics that characterize the suitability of a prognostic parameter. Parameter features such as trendability, monotonicity, and prognosability can be used to compare candidate prognostic parameters to determine which is most useful for individual-based prognosis. Trendability indicates the degree to which the parameters of a population of systems have the same underlying shape. Monotonicity characterizes the underlying positive or negative trend of the parameter. Finally, prognosability gives a measure of the variance in the critical failure value of a population of systems. By quantifying these features for a given parameter, the metrics can be used with any traditional optimization technique, such as Genetic Algorithms, to identify the optimal parameter for a given system. An appropriate parameter may be used with a General Path Model (GPM) approach to make RUL estimates for specific systems or components. A dynamic Bayesian updating methodology is introduced to incorporate prior information in the GPM methodology. The proposed methods are illustrated with two applications: first, to the simulated turbofan engine data provided in the 2008 Prognostics and Health Management Conference Prognostics Challenge and, second, to data collected in a laboratory milling equipment wear experiment. The automated system was shown to identify appropriate parameters in both situations and facilitate Type III prognostic model development.
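    The suitability metrics lend themselves to compact implementations. Below is a minimal Python sketch of two of the three metrics, using common formulations from the prognostics literature; the exact definitions in the dissertation may differ, and the function names and toy degradation paths are illustrative only.

```python
import numpy as np

def monotonicity(paths):
    """Average |fraction of positive steps - fraction of negative steps|
    across a population of run-to-failure degradation paths (a common
    formulation; the dissertation's exact definition may differ)."""
    scores = []
    for y in paths:
        d = np.diff(y)
        scores.append(abs((d > 0).sum() - (d < 0).sum()) / (len(y) - 1))
    return float(np.mean(scores))

def prognosability(paths):
    """Spread of the critical failure value across units, scaled by the
    mean range each path traverses; values near 1 indicate a consistent
    failure threshold (again an assumed, common formulation)."""
    finals = np.array([y[-1] for y in paths])
    ranges = np.abs(np.array([y[-1] - y[0] for y in paths]))
    return float(np.exp(-finals.std() / ranges.mean()))

# Toy population: ten noisy, upward-trending degradation paths.
rng = np.random.default_rng(0)
paths = [np.cumsum(rng.normal(0.1, 0.5, 200)) for _ in range(10)]
print(monotonicity(paths), prognosability(paths))
```

    A candidate parameter scoring well on such metrics could then be fed into a fitness function for, e.g., a genetic algorithm search over combinations of degradation measures, as the abstract describes.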

    Evolving forecasting classifications and applications in health forecasting

    Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing health forecasts and for measuring their accuracy and validity are commonly not explicitly defined, although they are usually adapted forms of standard statistical procedures. This review identifies previous typologies used to classify the methods commonly applied in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on discrepancies in modes of validation.
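    As a concrete illustration of the accuracy measures such reviews typically catalogue, here is a minimal Python sketch of two common point-forecast error metrics (MAE and MAPE); the toy admission counts are invented for the example.

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error: average magnitude of forecast errors."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean absolute percentage error: scale-free, but undefined
    whenever an actual value is zero."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# Toy weekly hospital admission counts vs. a forecast.
actual = [120, 135, 128, 160, 152]
forecast = [118, 130, 133, 150, 158]
print(f"MAE = {mae(actual, forecast):.1f}, MAPE = {mape(actual, forecast):.1f}%")
```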

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical for studying brain function. Because anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. Imposing a prior distribution, such as the matrix Fisher-von Mises distribution, on the orthogonal transformation parameter embeds anatomical information in the parameter estimation, i.e., it penalizes the combination of spatially distant voxels. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
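    For intuition, the sketch below shows the limiting, no-prior case: without the matrix Fisher-von Mises prior, least-squares functional alignment of one subject to a reference reduces to the classical orthogonal Procrustes problem, solved by an SVD. The data shapes and noise level here are invented for the example; the paper's anatomically informed Bayesian estimation is not reproduced.

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal R minimizing ||R X - Y||_F (voxels x time).
    This is the no-prior special case of probabilistic functional
    alignment; the paper's anatomical prior is omitted."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    R = U @ Vt                      # optimal orthogonal transformation
    return R, R @ X

rng = np.random.default_rng(1)
Y = rng.normal(size=(50, 200))      # reference: 50 voxels, 200 time points
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
X = Q.T @ Y + 0.01 * rng.normal(size=Y.shape)   # rotated, noisy subject
R, X_aligned = procrustes_align(X, Y)
print(np.linalg.norm(X_aligned - Y) / np.linalg.norm(Y))  # small residual
```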

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbour-pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
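    To make the interpretability point concrete, the sketch below builds a proper CAR precision matrix for a toy map and shows that the implied neighbour correlations are not equal to ρ, unlike in the DAGAR parameterisation; the adjacency matrix is invented for the example.

```python
import numpy as np

def car_precision(W, rho, tau=1.0):
    """Precision matrix of a proper CAR model, Q = tau * (D - rho * W),
    with W a symmetric 0/1 adjacency matrix and D its row-sum diagonal."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy map: four sites in a row (1-2-3-4).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Sigma = np.linalg.inv(car_precision(W, rho=0.9))
corr = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
print(corr.round(2))  # neighbour-pair correlations differ from rho = 0.9
```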

    MULTIVARIATE FINITE MIXTURE GROUP-BASED TRAJECTORY MODELING WITH APPLICATION TO MENTAL HEALTH STUDIES

    Traditionally, two kinds of methods are applied in trajectory analysis: 1) hierarchical modeling based on a multilevel structure, or 2) latent growth curve modeling (LGCM) based on a covariance structure (Raudenbush & Bryk, 2002; Bollen & Curran, 2006). This thesis used a third trajectory analysis method: group-based trajectory modeling (GBTM). GBTM is an extension of the finite mixture modeling (FMM) method that has been widely used in various fields of trajectory analysis over the last 25 years (Nagin & Odgers, 2010). GBTM can detect unobserved subgroups based on the multinomial logit function (Nagin, 1999). As an extended form of FMM, GBTM parameters can be estimated using maximum likelihood estimation (MLE) procedures. Since FMMs have no closed-form maximum likelihood solution, the Expectation-Maximization (EM) algorithm is often applied to maximize the likelihood (Schlattmann, 2009); GBTM, however, uses a different optimization method, the quasi-Newton procedure, to perform the maximization. This thesis studied both GBTM with a single outcome and trajectory modeling with multiple outcomes. Nagin constructed two extended trajectory models that can involve multiple outcomes: group-based dual trajectory modeling (GBDTM) deals with two outcomes linked by comorbidity or heterotypic continuity, while group-based multi-trajectory modeling (GBMTM) can include more than two outcomes in one model with the same subgroup weights across outcomes (Nagin, 2005; Nagin, Jones, Passos, & Tremblay, 2018; Nagin & Tremblay, 2001). The methodology was applied to the Korea Health Panel Survey (KHPS) data, which included 3983 individuals who were 65 years old or older at baseline. GBTM, GBDTM, and GBMTM were performed with two binary longitudinal outcomes, depression and anxiety. GBDTM was selected as the best model for this data set because it is more flexible than GBMTM when handling group membership and, unlike GBTM, it addresses the interrelationship between the outcomes based on conditional probability. Four depression trajectories were identified across eight years of follow-up: “low-flat” (n = 3641; 87.0%), “low-to-middle” (n = 205; 8.8%), “low-to-high” (n = 33; 1.3%) and “high-curve” (n = 104; 2.8%). Four anxiety trajectories were also identified: “low-flat” (n = 3785; 92.5%), “low-to-middle” (n = 96; 4.7%), “high-to-low” (n = 89; 2.2%) and “high-curve” (n = 13; 0.6%). Female sex, the presence of more than three chronic diseases, and income-generating activity were significant risk factors for the depression trajectory groups; the anxiety trajectory groups had the same risk factors except for the presence of more than three chronic diseases. To further study the GBTM, GBDTM and GBMTM approaches, a simulation study was performed based on two correlated, repeatedly measured binary outcomes compared at different correlation levels (ρ = 0.1, 0.2, 0.4, 0.6). GBDTM was always a better model than GBTM when the association between the two outcomes was of interest, and GBMTM could be used instead of GBDTM when the correlation between the two longitudinal outcomes was high.
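    For readers unfamiliar with the estimation machinery, here is a minimal Python sketch of an EM fit for a two-group latent-class model of repeated binary outcomes, a deliberately simplified stand-in for GBTM (which, as noted above, parameterises trajectories with polynomial logit functions and maximises the likelihood with a quasi-Newton procedure). All data here are simulated.

```python
import numpy as np

def em_binary_trajectories(Y, K=2, n_iter=200, seed=0):
    """EM for a latent-class model of repeated binary outcomes: group k
    has response probability p[k, t] at time t. A simplified stand-in
    for GBTM's polynomial logit trajectories."""
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    pi = np.full(K, 1.0 / K)                  # group weights
    p = rng.uniform(0.2, 0.8, size=(K, T))    # group-by-time probabilities
    for _ in range(n_iter):
        # E-step: posterior group memberships, computed on the log scale.
        logw = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T + np.log(pi)
        logw -= logw.max(axis=1, keepdims=True)
        w = np.exp(logw)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted proportions.
        pi = w.mean(axis=0)
        p = ((w.T @ Y) / w.sum(axis=0)[:, None]).clip(1e-6, 1 - 1e-6)
    return pi, p, w

# Simulate two groups with flat low/high trajectories over 8 waves.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, 300)
truth = np.array([[0.1] * 8, [0.7] * 8])
Y = rng.binomial(1, truth[z])
pi, p, w = em_binary_trajectories(Y)
print(pi.round(2), p.mean(axis=1).round(2))  # recovered weights and levels
```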

    Application of the Distributed Lag Models for Examining Associations Between the Built Environment and Obesity Risk in Children (QUALITY cohort)

    Features of the neighbourhood environment are associated with physical activity and nutrition habits in children and may be a key determinant of obesity risk. Studies commonly use a fixed, pre-specified buffer size as the spatial scale for constructing environment measures and apply traditional linear regression to calculate risk estimates. However, incorrect spatial scales can introduce bias, and whether the appropriate spatial scale changes with a person's age and sex is largely unknown. Distributed lag models (DLMs) were recently proposed as an alternative to fixed, pre-specified buffers: the DLM coefficients follow a smooth association over distance, so no buffer size needs to be pre-specified. DLMs may therefore provide a more accurate estimate of the strength of an association, as well as of the point at which the association disappears or is no longer clinically meaningful. Using a subsample of the QUALITY cohort (an ongoing longitudinal investigation of the natural history of obesity in Quebec youth, N = 281, mean age 9.6 at baseline), we applied the DLM to determine whether the association between the residential neighbourhood built environment (BE) and obesity risk in children differed by age and sex. A second objective was to compare the DLM with a linear regression model using pre-specified circular buffer sizes. Different distances of association between the retail food environment and BMI z-score were obtained for the first and second follow-ups, and these also varied by sex. No significant association between the recreational facilities environment and moderate-to-vigorous physical activity (MVPA) was detected.
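    The sketch below illustrates the core DLM idea under simplifying assumptions: exposures counted in concentric distance rings receive coefficients constrained to a smooth function of distance, so no single buffer radius has to be chosen in advance. The ring widths, polynomial basis, and simulated data are all assumptions of this example, not the paper's specification.

```python
import numpy as np

def fit_distance_dlm(X_rings, y, ring_mids, degree=3):
    """Distributed-lag-style regression over distance: ring exposure
    counts X_rings (n x R) get coefficients beta(d) constrained to a
    smooth polynomial in distance, beta = B @ theta."""
    d = (ring_mids - ring_mids.mean()) / ring_mids.std()
    B = np.vander(d, degree + 1, increasing=True)    # R x (degree+1) basis
    Z = np.column_stack([np.ones(len(y)), X_rings @ B])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[0], B @ coef[1:]                     # intercept, beta(d)

# Simulate an effect that fades to zero beyond roughly 800 m.
rng = np.random.default_rng(2)
ring_mids = np.arange(50, 1050, 100.0)               # ring midpoints (m)
X = rng.poisson(3, size=(281, len(ring_mids))).astype(float)
true_beta = np.maximum(0, 1 - ring_mids / 800) * 0.05
y = X @ true_beta + rng.normal(0, 0.5, 281)
intercept, beta_hat = fit_distance_dlm(X, y, ring_mids)
print(beta_hat.round(3))  # association shrinks toward zero with distance
```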

    Statistical modeling of physical activity based on accelerometer data

    This thesis focuses on the objective measurement of physical activity (PA) recorded by accelerometers. Chapter 2 describes the objective measurement of PA using accelerometers, in contrast to subjective measurements such as PA questionnaires. Chapter 3 presents the basic assumption on PA: contrary to the cutpoint method, it is more realistic to assume that human activity behavior consists of a sequence of non-overlapping, distinguishable activities, each representable by a mean intensity level around which the recorded accelerometer counts scatter. In Chapter 4, two novel approaches to better capture PA are developed and implemented. Hidden Markov models (HMMs) are stochastic models that allow fitting a Markov chain with a predefined number of activities to the data. Expectile regression utilizing the Whittaker smoother with an L0-penalty is introduced as a second innovative approach, and is compared to HMMs and the cutpoint method in a simulation study. Chapter 5 presents the results of four studies on PA. Chapter 6 summarizes and discusses the findings of the previous chapters and ends with an outlook on future research.
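    As an illustration of the HMM approach, the sketch below fits a three-state Gaussian HMM to simulated accelerometer counts using the third-party hmmlearn package (assumed available); the mean intensity levels and transition behavior are invented for the example.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed installed

rng = np.random.default_rng(3)
# Simulate a persistent sequence of three activities (e.g., sit, walk, run).
means, n = np.array([50.0, 400.0, 1200.0]), 500
states = [0]
for _ in range(n - 1):
    states.append(states[-1] if rng.random() < 0.95 else int(rng.integers(0, 3)))
counts = rng.normal(means[np.array(states)], 60).clip(min=0).reshape(-1, 1)

# Fit an HMM with a predefined number of activities, then decode the states.
model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200,
                    random_state=0)
model.fit(counts)
decoded = model.predict(counts)
print(np.sort(model.means_.ravel()).round())  # recovered mean intensity levels
```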

    Novel Methods for Estimation and Inference in Varying Coefficient Models

    Function-valued parameters relax many model assumptions because of the flexibility and size of the parameter space. However, the curse of dimensionality has been the biggest challenge in nonparametric regression. An advantageous approach to dimension reduction is to use basis expansion to approximate the infinite-dimensional parameter space. An even more challenging problem is estimating functions with unique structures, such as functions with zero-effect regions. The main part of this dissertation concerns varying coefficient models with zero-effect regions. We propose a novel model that can detect zero-effect regions and estimate the non-zero effects simultaneously, and we provide theoretical support for inference on the proposed estimators. Simulation studies and real data analyses demonstrate the advantages of our models. This dissertation also introduces a new model that considers additive effects from a novel angle: estimating dynamic effect changes. Simulations and real data applications provide comparisons between our model and existing models.
    PhD, Biostatistics. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163251/1/yuanyang_1.pd
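    The following sketch conveys the basis-expansion idea behind such models: the varying coefficient β(u) is expanded in a polynomial basis, estimated by least squares, and small fitted effects are then zeroed out. The thresholding rule here is an ad hoc placeholder for the dissertation's penalized zero-region detection, and all data are simulated.

```python
import numpy as np

def varying_coef_fit(u, x, y, degree=5, threshold=0.05):
    """Estimate beta(u) in y = beta(u) * x + noise via polynomial basis
    expansion, then zero out regions where the fitted effect is small
    (a crude stand-in for penalized zero-region detection)."""
    B = np.vander((u - u.mean()) / u.std(), degree + 1, increasing=True)
    Z = B * x[:, None]                      # basis interacted with x
    theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta = B @ theta
    beta[np.abs(beta) < threshold] = 0.0    # declare a zero-effect region
    return beta

rng = np.random.default_rng(4)
u = np.sort(rng.uniform(0, 1, 500))
x = rng.normal(size=500)
true_beta = np.where(u < 0.4, 0.0, np.sin(np.pi * (u - 0.4)))  # zero region, then signal
y = true_beta * x + rng.normal(0, 0.2, 500)
beta_hat = varying_coef_fit(u, x, y)
print((beta_hat[u < 0.35] == 0).mean().round(2))  # share of left region set to zero
```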

    The efficacy of virtual reality in professional soccer

    Professional soccer clubs have taken an interest in virtual reality; however, little evidence exists to support its use in the soccer training ground environment. Further, several soccer virtual reality companies have begun providing solutions to teams, claiming to test specific characteristics of players, yet supportive evidence for certain measurement properties remains absent from the literature. The aims of this thesis were to explore the efficacy of virtual reality in the professional football training ground environment. To do so, this thesis explored the fundamental measurement properties of soccer-specific virtual reality tests, along with the perceptions of the professional coaches, backroom staff, and players who could use virtual reality. The first research study (Chapter 3) aimed to quantify the learning effect during familiarisation trials of a soccer-specific virtual reality task. Thirty-four professional soccer players (age, stature, and body mass: mean (SD) 20 (3.4) years; 180 (7) cm; 79 (8) kg) participated in six trials of a virtual reality soccer passing task. The task required participants to receive and pass 30 virtual soccer balls into highlighted mini-goals that surrounded the participant. The number of successful passes was recorded in each trial. The one-sided Bayesian paired-samples t-test indicated very strong evidence in favour of the alternative hypothesis (H1) (BF10 = 46.5, d = 0.56 [95% CI = 0.2 to 0.92]) for an improvement in total goals scored between trial 1: 13.6 (3.3) and trial 2: 16 (3.3). Further, the Bayesian paired-samples equivalence t-tests indicated strong evidence in favour of H1 (BF10 = 10.2, d = 0.24 [95% CI = -0.09 to 0.57]) for equivalence between trial 4: 16.7 (3.7) and trial 5: 18.2 (4.7); extreme evidence in favour of H1 (BF10 = 132, d = -0.02 [95% CI = -0.34 to 0.30]) for equivalence between trials 5 and 6: 18.1 (3.5); and moderate evidence in favour of H1 (BF10 = 8.4, d = 0.26 [95% CI = -0.08 to 0.59]) for equivalence between trials 4 and 6. The evidence indicated that a learning effect took place between the first two trials and that up to five trials might be necessary for performance to plateau in this virtual reality soccer passing task. The second research study (Chapter 4) aimed to assess the validity of a soccer passing task by comparing passing ability between virtual reality and real-world conditions. A previously validated soccer passing test was replicated in a virtual reality environment. Twenty-nine soccer players participated in the study, which required them to complete as many passes as possible between two rebound boards within 45 s. Counterbalancing determined the condition order; for each condition, participants completed four familiarisation trials and two recorded trials, with the best score used for analysis. Sense of presence and fidelity were also assessed via questionnaires to understand how representative the virtual environment was of the real world. Results showed a difference between conditions (EMM = -3.9, 95% HDI = -5.1 to -2.7), with the number of passes being greater in the real world (EMM = 19.7, 95% HDI = 18.6 to 20.7) than in virtual reality (EMM = 15.7, 95% HDI = 14.7 to 16.8). Further, several subjective differences in fidelity between the two conditions were reported, notably that controlling the ball was suggested to be more difficult in virtual reality than in the real world.
    The last research study (Chapter 5) aimed to compare and quantify the perceptions of virtual reality use in soccer and to model behavioural intentions to use this technology. This study surveyed the perceptions of coaches, support staff, and players in relation to their knowledge, expectations, influences, and barriers regarding virtual reality via an internet-based questionnaire. To model behavioural intention, modified questions and constructs from the Unified Theory of Acceptance and Use of Technology were used, and the model was analysed through partial least squares structural equation modelling. Respondents comprised coaches and support staff (n = 134) and players (n = 64). Respondents generally agreed that virtual reality should be used to improve tactical awareness and cognition, primarily in performance analysis and rehabilitation settings. Coaches and support staff generally agreed that monetary cost, coach buy-in, and a limited evidence base were barriers to its use. In a sub-sample of coaches and support staff without access to virtual reality (n = 123), performance expectancy was the strongest construct in explaining behavioural intention to use virtual reality, followed by the facilitating conditions (i.e., barriers) construct, which had a negative association with behavioural intention. Overall, this thesis explored the measurement properties of soccer-specific virtual reality tests and the perceptions of the staff and players who might use the technology. The key findings from exploring the measurement properties were (1) evidence of a learning curve, suggesting the need for multiple familiarisation trials before collecting data, and (2) a lack of evidence to support the validity of a virtual reality soccer passing test, as evidenced by a lack of agreement with its real-world equivalent. This finding raises questions about the suitability of virtual reality for measuring passing-related skill performance. The key findings from investigating users' perceptions included using the technology to improve cognition and tactical awareness, and using it in rehabilitation and performance analysis settings. Future intention to use was generally positive and driven by performance-related factors, yet several barriers exist that may prevent widespread use. In Chapter 7, a reflective account is presented for the reader, detailing some of the interactions with coaches, support staff, and players, in relation to the personal, moral, and ethical challenges faced as a practitioner-researcher working and studying in a professional soccer club.
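    For readers wanting to reproduce the style of analysis used in Chapter 3, the sketch below computes a default two-sided JZS Bayes factor for a paired comparison following Rouder et al. (2009), with a Cauchy prior scale of 0.707. The thesis's tests were one-sided and its raw data are not reproduced here, so the scores below are simulated purely for illustration.

```python
import numpy as np
from scipy import integrate

def jzs_bf10_paired(x, y, r=0.707):
    """Two-sided JZS Bayes factor (Rouder et al., 2009) for a paired
    t-test with a Cauchy(0, r) prior on effect size. A sketch of the
    kind of BF10 reported in the thesis, not its exact one-sided test."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n, v = len(d), len(d) - 1
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    def integrand(g):
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    numerator, _ = integrate.quad(integrand, 0, np.inf)
    denominator = (1 + t**2 / v) ** (-(v + 1) / 2)
    return t, numerator / denominator

# Simulated paired passing scores for 34 players across two trials.
rng = np.random.default_rng(5)
trial1 = rng.normal(13.6, 3.3, 34)
trial2 = trial1 + rng.normal(2.4, 3.0, 34)
t, bf10 = jzs_bf10_paired(trial2, trial1)
print(f"t = {t:.2f}, BF10 = {bf10:.1f}")
```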