24 research outputs found

    PAWL-Forced Simulated Tempering

    No full text

    An efficient computational approach for prior sensitivity analysis and cross-validation

    No full text
    Prior sensitivity analysis and cross-validation are important tools in Bayesian statistics. However, due to the computational expense of implementing existing methods, these techniques are rarely used. In this paper, the authors show how it is possible to use sequential Monte Carlo methods to create an efficient and automated algorithm to perform these tasks. They apply the algorithm to the computation of regularization path plots and to assess the sensitivity of the tuning parameter in g-prior model selection. They then demonstrate the algorithm in a cross-validation context and use it to select the shrinkage parameter in Bayesian regression. © 2010 Statistical Society of Canada
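The abstract's core idea is that posterior samples obtained under one prior (or tuning-parameter value) can be reused for another. A minimal sketch of the reweighting step that such sequential Monte Carlo schemes build on, in a toy conjugate Gaussian model where the exact answer is available for comparison; the model, parameter values, and function names here are illustrative assumptions, not the authors' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y_i ~ N(theta, 1), baseline prior theta ~ N(0, tau0^2).
tau0, n = 1.0, 20
y = rng.normal(0.5, 1.0, n)

# Posterior under the baseline prior is Gaussian; draw exact samples from it.
post_var = 1.0 / (n + 1.0 / tau0**2)
post_mean = post_var * y.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), 10_000)

def reweight_to_prior(theta, tau_new, tau_old=tau0):
    """Importance weights that move posterior samples drawn under prior
    N(0, tau_old^2) to the posterior under prior N(0, tau_new^2).
    The likelihood terms cancel in the ratio, leaving only the prior ratio."""
    logw = (theta**2 / 2) * (1 / tau_old**2 - 1 / tau_new**2) \
           - np.log(tau_new / tau_old)
    w = np.exp(logw - logw.max())   # stabilise before normalising
    return w / w.sum()

# Posterior mean under a tighter prior (tau = 0.3), via reweighting alone.
w = reweight_to_prior(theta, 0.3)
approx = (w * theta).sum()
# Exact posterior mean under the new prior, from conjugacy.
exact = y.sum() / (n + 1 / 0.3**2)
print(approx, exact)
```

When the old and new priors differ substantially, the weights degenerate; the sequential Monte Carlo approach in the paper addresses this by moving particles through a sequence of intermediate distributions rather than reweighting in one jump.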

    Training Load and Injury Part 2: Questionable Research Practices Hijack the Truth and Mislead Well-Intentioned Clinicians.

    Full text link
    Background: In this clinical commentary, we highlight issues related to the conceptual foundations and methods used in training load and injury research. We focus on sources of degrees of freedom that can favor questionable research practices, such as P-hacking and hypothesizing after the results are known, which can undermine the trustworthiness of research findings.
    Clinical question: Is the methodological rigor of studies in the training load and injury field sufficient to inform training-related decisions in clinical practice?
    Key results: The absence of a clear conceptual framework, causal structure, and reliable methods can promote questionable research practices, selective reporting, and confirmation bias. The fact that well-accepted training principles (eg, overload progression) are in line with some study findings may simply be a consequence of confirmation bias, resulting from cherry-picking and emphasizing results that align with popular beliefs. Identifying evidence-based practical applications, grounded in high-quality research, is not currently possible. The strongest recommendation we can make for the clinician is grounded in common sense: "Do not train too much, too soon" - not because it has been confirmed by studies, but because it reflects accepted generic training principles.
    Clinical application: The training load and injury research field has fundamental conceptual and methodological weaknesses. Therefore, making decisions about planning and modifying training programs for injury reduction in clinical practice, based on the available studies, is premature. Clinicians should continue to rely on best practice, experience, and well-known training principles, and consider the potential influence of contextual factors when planning and monitoring training loads. J Orthop Sports Phys Ther 2020;50(10):577-584. Epub 1 Aug 2020. doi:10.2519/jospt.2020.9211

    Herded Gibbs Sampling

    No full text
    The Gibbs sampler is one of the most popular algorithms for inference in statistical models. In this paper, we introduce a herding variant of this algorithm, called herded Gibbs, that is entirely deterministic. We prove that herded Gibbs has an O(1/T) convergence rate for models with independent variables and for fully connected probabilistic graphical models. Herded Gibbs is shown to outperform Gibbs in the tasks of image denoising with MRFs and named entity recognition with CRFs. However, the convergence of herded Gibbs for sparsely connected probabilistic graphical models remains an open problem.
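To illustrate the herding idea behind this abstract, here is a minimal sketch for the simplest case the paper's O(1/T) result covers: independent Bernoulli variables. Each variable keeps a running weight and emits a sample deterministically when enough probability mass has accumulated. This is the herding mechanism only, not the full herded Gibbs algorithm (which applies the same update to conditional distributions); the function name and parameters are illustrative assumptions:

```python
import numpy as np

def herded_bernoulli(p, T):
    """Deterministic 'herded' draws from independent Bernoulli(p_i).

    Each variable keeps a weight that accumulates probability mass p_i
    per step and subtracts the emitted sample; empirical means converge
    to p at rate O(1/T), versus O(1/sqrt(T)) for i.i.d. sampling.
    """
    p = np.asarray(p, dtype=float)
    w = p.copy()                        # herding weights, initialised to p
    samples = np.empty((T, p.size))
    for t in range(T):
        x = (w >= 0.5).astype(float)    # deterministic threshold, no RNG
        samples[t] = x
        w += p - x                      # add mass, subtract what was emitted
    return samples

p = np.array([0.2, 0.7, 0.5])
s = herded_bernoulli(p, 1000)
print(np.abs(s.mean(axis=0) - p).max())  # empirical means track p closely
```

For a single variable the update produces a fixed deterministic cycle whose empirical frequency matches p exactly over each period; the open problem the abstract mentions concerns extending such guarantees to sparsely connected graphical models, where the conditionals interact.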