30 research outputs found
A time-varying inertia pendulum: Analytical modelling and experimental identification
In this paper two of the main sources of non-stationary dynamics, namely time-variability and the presence of nonlinearity, are analysed through the analytical and experimental study of a time-varying inertia pendulum. The pendulum undergoes large swinging amplitudes, so that its equation of motion is markedly nonlinear; the system is therefore a nonlinear time-varying one. The analysis is carried out through two subspace-based techniques for the identification of both the linear time-varying system and the nonlinear system. The flexural and the nonlinear swinging motions of the pendulum are uncoupled and are considered separately: for each of them an analytical model is built for comparison, and the identification procedures are developed. The results demonstrate that good agreement between the predicted and the identified frequencies can be achieved for both motions. In particular, the estimates of the swinging frequency are very accurate over the entire domain of possible configurations, in terms of swinging amplitude and mass position.
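Under standard rigid-body assumptions, the swinging motion described above can be sketched as follows (an illustrative model in my own notation, not the paper's):

```latex
% Swinging equation of motion with time-varying inertia (illustrative).
% I(t): moment of inertia about the pivot; d(t): pivot-to-centre-of-mass
% distance, both time-dependent because of the moving mass; m: total mass.
\frac{\mathrm{d}}{\mathrm{d}t}\!\left( I(t)\,\dot\theta \right)
  + m\,g\,d(t)\,\sin\theta = 0
\quad\Longleftrightarrow\quad
I(t)\,\ddot\theta + \dot I(t)\,\dot\theta + m\,g\,d(t)\,\sin\theta = 0 .
```

The large-amplitude $\sin\theta$ term supplies the nonlinearity, while the moving mass makes $I(t)$ and $d(t)$ time-dependent; for small amplitudes this reduces to a linear time-varying oscillator with instantaneous frequency $\omega(t)=\sqrt{m\,g\,d(t)/I(t)}$, consistent with the dependence of the swinging frequency on amplitude and mass position noted above.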
An efficient computational approach for prior sensitivity analysis and cross-validation
Prior sensitivity analysis and cross-validation are important tools in Bayesian statistics. However, due to the computational expense of implementing existing methods, these techniques are rarely used. In this paper, the authors show how it is possible to use sequential Monte Carlo methods to create an efficient and automated algorithm to perform these tasks. They apply the algorithm to the computation of regularization path plots and to assess the sensitivity of the tuning parameter in g-prior model selection. They then demonstrate the algorithm in a cross-validation context and use it to select the shrinkage parameter in Bayesian regression. © 2010 Statistical Society of Canada
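A basic ingredient of such algorithms is importance reweighting between priors: posterior samples obtained under one prior are reweighted by the prior ratio to approximate the posterior under an alternative prior, which an SMC sampler does incrementally along a sequence of bridging distributions. A minimal self-normalized importance-sampling sketch (the conjugate-normal model and all numbers are my own illustration, not from the paper):

```python
import math
import random

# Conjugate normal example: one observation y = 1, likelihood N(theta, 1).
# Under prior0 = N(0, 1) the posterior is N(0.5, 0.5) exactly, so we can
# draw "posterior samples under prior0" directly.
random.seed(0)
samples = [random.gauss(0.5, math.sqrt(0.5)) for _ in range(20000)]

prior0 = lambda t: math.exp(-0.5 * t * t)       # N(0, 1), unnormalized
prior1 = lambda t: math.exp(-t * t / 8.0)       # N(0, 4), unnormalized

# Reweight by the prior ratio prior1/prior0 and form the self-normalized
# estimate of the posterior mean under prior1.
w = [prior1(t) / prior0(t) for t in samples]
est = sum(t * wi for t, wi in zip(samples, w)) / sum(w)
# Exact posterior mean under prior1 is 1 / (1 + 1/4) = 0.8; est approximates it.
```

In a full SMC implementation the prior would be changed in many small steps, with resampling and move kernels keeping the particle approximation healthy; the single reweighting step above is only the conceptual core.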
Training Load and Its Role in Injury Prevention, Part 2: Conceptual and Methodologic Pitfalls.
In part 2 of this clinical commentary, we highlight the conceptual and methodologic pitfalls evident in current training-load-injury research. These limitations make these studies unsuitable for determining how to use new metrics such as acute workload, chronic workload, and their ratio to reduce injury risk. The main overarching concerns are the lack of a conceptual framework and of reference models, which prevents appropriate interpretation of the results in terms of a causal structure. The lack of a conceptual framework also gives investigators too many degrees of freedom, which can dramatically increase the risk of false discoveries and of confirmation bias by steering the interpretation of results toward common beliefs and accepted training principles. Specifically, we underline methodologic concerns relating to (1) measures of exposure, (2) pitfalls of using ratios, (3) training-load measures, (4) time windows, (5) discretization and reference categories, (6) injury definitions, (7) unclear analyses, (8) sample size and generalizability, (9) missing data, and (10) standards and quality of reporting. Given the pitfalls of previous studies, we need to return to the practices in place before this research influx began, when practitioners relied on traditional training principles (eg, overload progression) and adjusted training loads based on athletes' responses. Training-load measures cannot tell us whether variations are increasing or decreasing injury risk; we recommend that practitioners continue to rely on their expert knowledge and experience.
Training Load and Injury Part 2: Questionable Research Practices Hijack the Truth and Mislead Well-Intentioned Clinicians.
Background: In this clinical commentary, we highlight issues related to the conceptual foundations and methods used in training load and injury research. We focus on sources of degrees of freedom that can favor questionable research practices such as P hacking and hypothesizing after the results are known, which can undermine the trustworthiness of research findings.
Clinical question: Is the methodological rigor of studies in the training load and injury field sufficient to inform training-related decisions in clinical practice?
Key results: The absence of a clear conceptual framework, causal structure, and reliable methods can promote questionable research practices, selective reporting, and confirmation bias. The fact that well-accepted training principles (eg, overload progression) are in line with some study findings may simply be a consequence of confirmation bias, resulting from cherry picking and emphasizing results that align with popular beliefs. Identifying evidence-based practical applications, grounded in high-quality research, is not currently possible. The strongest recommendation we can make for the clinician is grounded in common sense: "Do not train too much, too soon" - not because it has been confirmed by studies, but because it reflects accepted generic training principles.
Clinical application: The training load and injury research field has fundamental conceptual and methodological weaknesses. Therefore, making decisions about planning and modifying training programs for injury reduction in clinical practice, based on available studies, is premature. Clinicians should continue to rely on best practice, experience, and well-known training principles, and consider the potential influence of contextual factors when planning and monitoring training loads. J Orthop Sports Phys Ther 2020;50(10):577-584. Epub 1 Aug 2020. doi:10.2519/jospt.2020.9211
Herded Gibbs Sampling
The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an O(1/T) convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence of herded Gibbs for sparsely connected probabilistic graphical models remains an open problem.
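The deterministic herding update for a single binary variable can be sketched as follows (a minimal Python illustration for the independent-variables case from the abstract; function and variable names are mine, not the paper's): each variable carries a weight that accumulates its target probability on every sweep, and the variable fires (is set to 1) exactly when the weight crosses 1, at which point the weight is decremented.

```python
def herded_gibbs_independent(probs, T):
    """Deterministic herding sweeps over independent binary variables.

    Each variable i keeps a weight w[i]; on every sweep w[i] += probs[i],
    and x[i] = 1 exactly when w[i] >= 1, in which case the weight is
    decremented. The empirical frequency of 1s matches probs[i] to within
    1/T, illustrating the O(1/T) rate for the independent case.
    """
    n = len(probs)
    w = [0.0] * n
    counts = [0] * n
    for _ in range(T):
        for i in range(n):
            w[i] += probs[i]
            if w[i] >= 1.0:
                counts[i] += 1
                w[i] -= 1.0
    return [c / T for c in counts]

freqs = herded_gibbs_independent([0.3, 0.7], T=1000)
```

In the full algorithm the same recursion is applied to each conditional distribution of the Gibbs sweep (one weight per conditioning configuration in the version analysed for fully connected models), replacing the random draw with this deterministic thresholding.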