
    Personality Assessment, Forced-Choice

    Instead of responding to questionnaire items one at a time, respondents may be forced to choose between two or more items measuring the same or different traits. The forced-choice format eliminates uniform response biases, although research on its effectiveness in reducing the effects of impression management is inconclusive. Until recently, forced-choice questionnaires were scaled in relation to person means (ipsative data), providing information for intra-individual assessment only. Item response modeling has since enabled proper scaling of forced-choice data, so that inter-individual comparisons can be made. New forced-choice applications in personality assessment and directions for future research are discussed.
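
    A minimal sketch of why this matters for scoring: under classical ipsative scoring, each respondent's trait scores are centred on that person's own mean, so they sum to a constant per person and carry no between-person information. The trait scores below are hypothetical and the code is only an illustration, not the item response model the abstract refers to.

        # Hypothetical forced-choice trait scores for three respondents.
        import numpy as np

        raw = np.array([            # points allocated across four traits
            [9.0, 7.0, 4.0, 4.0],   # respondent A
            [6.0, 5.0, 3.0, 2.0],   # respondent B
            [8.0, 8.0, 7.0, 1.0],   # respondent C
        ])

        # Ipsative scoring: centre each person's scores on their own mean.
        ipsative = raw - raw.mean(axis=1, keepdims=True)

        print(ipsative)
        # Every row sums to (approximately) zero, so totals cannot be compared
        # across respondents; that is the limitation that item response
        # modeling of the choices themselves removes.
        print(ipsative.sum(axis=1))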

    Targeting Bayes factors with direct-path non-equilibrium thermodynamic integration

    Thermodynamic integration (TI) for computing marginal likelihoods is based on an annealing path, parameterised by an inverse temperature, that runs from the prior to the posterior distribution. In many cases the resulting estimator suffers from high variability, which stems particularly from the prior regime. When comparing complex models that differ in a comparatively small number of parameters, intrinsic errors from sampling fluctuations may outweigh the differences in the log marginal likelihood estimates. In the present article, we propose a thermodynamic integration scheme that directly targets the log Bayes factor. The method is based on a modified annealing path between the posterior distributions of the two models compared, which systematically avoids the high-variance prior regime. We combine this scheme with the concept of non-equilibrium TI to minimise discretisation errors from numerical integration. Results obtained on Bayesian regression models applied to standard benchmark data, and on a complex hierarchical model applied to biopathway inference, demonstrate a significant reduction in estimator variance over state-of-the-art TI methods.
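
    As a rough illustration of the idea (not the paper's exact scheme), the sketch below uses a simplified setting in which the two models share a single parameter, so a tempered path can be drawn between their posteriors; integrating the expected log-likelihood ratio along that path gives the log Bayes factor directly, without ever visiting the prior. The data and both models are hypothetical, and expectations are computed on a grid rather than by MCMC so the example stays self-contained.

        import numpy as np
        from scipy.stats import norm

        y = np.array([0.8, 1.3, 0.4, 1.1, 0.9])        # hypothetical observations
        theta = np.linspace(-5, 5, 4001)               # shared parameter grid
        dtheta = theta[1] - theta[0]

        log_prior = norm.logpdf(theta, 0.0, 2.0)
        log_L1 = norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)  # model 1: noise sd 1.0
        log_L2 = norm.logpdf(y[:, None], theta, 0.5).sum(axis=0)  # model 2: noise sd 0.5

        # Path p_beta(theta) proportional to prior * L1^(1 - beta) * L2^beta:
        # beta = 0 is the posterior of model 1, beta = 1 that of model 2, and
        # d/dbeta log Z(beta) = E_beta[log L2 - log L1].
        betas = np.linspace(0.0, 1.0, 21)
        grads = []
        for b in betas:
            log_w = log_prior + (1 - b) * log_L1 + b * log_L2
            w = np.exp(log_w - log_w.max())
            w /= w.sum()                               # path density on the grid
            grads.append(np.sum(w * (log_L2 - log_L1)))
        grads = np.array(grads)

        # Trapezoidal rule over beta gives the log Bayes factor of model 2 vs model 1.
        log_bf_ti = np.sum(np.diff(betas) * (grads[:-1] + grads[1:]) / 2)

        # Exact grid answer for comparison.
        log_z1 = np.log(np.sum(np.exp(log_prior + log_L1)) * dtheta)
        log_z2 = np.log(np.sum(np.exp(log_prior + log_L2)) * dtheta)
        print(log_bf_ti, log_z2 - log_z1)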

    Uncertainty quantification of coal seam gas production prediction using Polynomial Chaos

    A surrogate model approximates a computationally expensive solver. Polynomial Chaos is a method for constructing surrogate models by summing combinations of carefully chosen polynomials. The polynomials are chosen to respect the probability distributions of the uncertain input variables (parameters); this allows for both uncertainty quantification and global sensitivity analysis. In this paper we apply these techniques to a commercial solver for the estimation of peak gas rate and cumulative gas extraction from a coal seam gas well. The polynomial expansion is shown to honour the underlying geophysics with low error when compared to the much more complex and computationally slower commercial solver. We make use of advanced numerical integration techniques to achieve this accuracy using relatively small amounts of training data.
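
    The sketch below shows the basic mechanics on a hypothetical one-dimensional toy problem rather than the coal seam gas solver: Legendre polynomials, which are orthogonal under a uniform input distribution, are fitted by quadrature-based projection to a handful of evaluations of a stand-in "solver", and the expansion coefficients then give the output mean and variance directly.

        import numpy as np
        from numpy.polynomial import legendre

        def expensive_solver(x):
            # Stand-in for the expensive simulator; input scaled to [-1, 1].
            return np.exp(0.7 * x) + 0.3 * np.sin(3 * x)

        degree = 6
        nodes, weights = legendre.leggauss(12)           # Gauss-Legendre quadrature rule
        P = legendre.legvander(nodes, degree)            # P_k evaluated at the nodes
        y_nodes = expensive_solver(nodes)                # the only solver calls needed

        # Projection coefficients: c_k = (2k + 1)/2 * integral of f * P_k over [-1, 1],
        # with the integral approximated by the quadrature rule.
        k = np.arange(degree + 1)
        coeffs = (2 * k + 1) / 2 * (P * (weights * y_nodes)[:, None]).sum(axis=0)

        # Moments straight from the coefficients, since for X ~ Uniform(-1, 1)
        # E[P_k(X)] = 0 for k >= 1 and E[P_k(X)^2] = 1/(2k + 1).
        pce_mean = coeffs[0]
        pce_var = np.sum(coeffs[1:] ** 2 / (2 * k[1:] + 1))

        # Sanity check against brute-force Monte Carlo on the true function.
        x_mc = np.random.default_rng(0).uniform(-1, 1, 200_000)
        print(pce_mean, expensive_solver(x_mc).mean())
        print(pce_var, expensive_solver(x_mc).var())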

    Improved methods for the mathematically controlled comparison of biochemical systems

    The method of mathematically controlled comparison provides a structured approach for comparing alternative biochemical pathways with respect to selected measures of functional effectiveness. Under this approach, alternative implementations of a biochemical pathway are modeled mathematically, forced to be equivalent through the application of selected constraints, and then compared with respect to the chosen effectiveness measures. While the method has been applied successfully in a variety of studies, we offer recommendations for improvements that (1) relax the requirement that the constraints be sufficient to remove all degrees of freedom in forming the equivalent alternative, (2) facilitate generalization of the results, thus avoiding the need to condition those findings on the selected constraints, and (3) provide additional insights into the effect of the selected constraints on the functional effectiveness measures. We present the improvements to the method and related statistical models, apply the method to a previously conducted comparison of network regulation in the immune system, and compare our results to those previously reported.
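
    To make the idea concrete, the sketch below runs a mathematically controlled comparison on a deliberately simple, hypothetical pair of designs (not the immune-system case study): simple gene expression versus negatively autoregulated expression. The two designs are forced to be externally equivalent by constraining them to the same steady state, and are then compared on one functional effectiveness measure, the time taken to reach half of that steady state.

        import numpy as np
        from scipy.integrate import solve_ivp

        gamma, x_star, K = 1.0, 1.0, 0.2            # decay rate, shared steady state, repression constant

        # Equivalence constraints: choose each design's production rate so that
        # both settle at the same steady state x_star.
        beta_A = gamma * x_star                     # design A: constant production
        beta_B = gamma * x_star * (1 + x_star / K)  # design B: negatively autoregulated

        def design_A(t, x):
            return [beta_A - gamma * x[0]]

        def design_B(t, x):
            return [beta_B / (1 + x[0] / K) - gamma * x[0]]

        def half_rise_time(rhs):
            # Functional effectiveness measure: time to reach half the steady state
            # starting from zero expression.
            sol = solve_ivp(rhs, (0, 10), [0.0], dense_output=True, max_step=0.01)
            t = np.linspace(0, 10, 5000)
            x = sol.sol(t)[0]
            return t[np.argmax(x >= 0.5 * x_star)]

        print("design A:", half_rise_time(design_A))  # ~0.69 (= ln 2 / gamma)
        print("design B:", half_rise_time(design_B))  # smaller: autoregulation speeds the response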

    From statistical mechanics to machine learning: effective models for neural activity

    In the retina, the activity of ganglion cells, which feed information through the optic nerve to the rest of the brain, is all that our brain will ever know about the visual world. The interactions between many neurons are essential to processing visual information, and a growing body of evidence suggests that the activity of populations of retinal ganglion cells cannot be understood from knowledge of the individual cells alone. Modelling the probability of which cells in a population will fire or remain silent at any moment in time is a difficult problem because of the exponentially many possible states that can arise, many of which we will never observe in finite recordings of retinal activity. To model this activity, maximum entropy models have been proposed, which provide probabilistic descriptions over all possible states but can be fitted using relatively few well-sampled statistics. Maximum entropy models have the appealing property of being the least biased explanation of the available information, in the sense that they maximise the information-theoretic entropy. We investigate this use of maximum entropy models and examine the population sizes and constraints that they require in order to learn nontrivial insights from finite data. Going beyond maximum entropy models, we investigate autoencoders, which provide a computationally efficient means of simplifying the activity of retinal ganglion cells.
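
    A small, self-contained sketch of the kind of model being discussed: a pairwise maximum entropy (Ising-like) distribution fitted to synthetic binary spike words from five cells by matching firing rates and pairwise correlations. With so few cells all 2^N states can be enumerated exactly, so the fit uses plain gradient ascent on the exact log-likelihood; real retinal populations need the sampling and scaling machinery the thesis investigates.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(1)
        N, T = 5, 5000
        # Hypothetical binary spike words (0/1), correlated via a shared input.
        common = rng.random(T) < 0.2
        data = ((rng.random((T, N)) < 0.1) | common[:, None]).astype(float)

        states = np.array(list(product([0, 1], repeat=N)), dtype=float)  # all 2^N words
        emp_mean = data.mean(axis=0)          # target firing rates
        emp_corr = data.T @ data / T          # target pairwise moments

        # Model: P(s) proportional to exp(h.s + 0.5 * s^T J s), fitted by matching moments.
        h = np.zeros(N)
        J = np.zeros((N, N))
        for _ in range(2000):
            energies = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(energies - energies.max())
            p /= p.sum()
            model_mean = p @ states
            model_corr = states.T @ (states * p[:, None])
            h += 0.1 * (emp_mean - model_mean)        # gradient ascent on the exact likelihood
            J += 0.1 * (emp_corr - model_corr)
            np.fill_diagonal(J, 0.0)                  # no self-couplings

        # Residual moment mismatches should be small after fitting.
        print(np.abs(model_mean - emp_mean).max())
        print(np.abs(model_corr - emp_corr).max())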