Bayesian comparison of latent variable models: Conditional vs marginal likelihoods
Typical Bayesian methods for models with latent variables (or random effects)
involve directly sampling the latent variables along with the model parameters.
In high-level software code for model definitions (using, e.g., BUGS, JAGS,
Stan), the likelihood is therefore specified as conditional on the latent
variables. This can lead researchers to perform model comparisons via
conditional likelihoods, where the latent variables are considered model
parameters. In other settings, however, typical model comparisons involve
marginal likelihoods where the latent variables are integrated out. This
distinction is often overlooked despite the fact that it can have a large
impact on the comparisons of interest. In this paper, we clarify and illustrate
these issues, focusing on the comparison of conditional and marginal Deviance
Information Criteria (DICs) and Watanabe-Akaike Information Criteria (WAICs) in
psychometric modeling. The conditional/marginal distinction corresponds to
whether the model should be predictive for the clusters that are in the data or
for new clusters (where "clusters" typically correspond to higher-level units
like people or schools). Correspondingly, we show that marginal WAIC
corresponds to leave-one-cluster-out (LOcO) cross-validation, whereas
conditional WAIC corresponds to leave-one-unit-out (LOuO) cross-validation.
These results lead
to recommendations on the general application of the criteria to models with
latent variables. (Manuscript in press at Psychometrika; 31 pages, 8 figures.)
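To make the conditional/marginal distinction concrete, here is a minimal Python sketch (an illustration, not the authors' code) that computes both WAIC variants for a Gaussian random-intercept model. The "posterior" draws are faked by jittering the true parameter values purely for illustration; a real analysis would use MCMC output from, e.g., Stan or JAGS.

```python
# Conditional vs. marginal WAIC for a Gaussian random-intercept model:
#   y_ij = b_j + e_ij,  b_j ~ N(0, tau^2),  e_ij ~ N(0, sigma^2).
# The "posterior" draws below are stand-ins, jittered around the truth.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)
J, n, tau, sigma = 30, 5, 1.0, 0.5          # clusters, units per cluster, sds
b = rng.normal(0.0, tau, J)                 # latent cluster effects
y = b[:, None] + rng.normal(0.0, sigma, (J, n))

S = 400                                     # stand-in "posterior" draws
b_s = b + rng.normal(0.0, 0.1, (S, J))      # draws of the latent variables
sig_s = np.abs(sigma + rng.normal(0.0, 0.02, S))
tau_s = np.abs(tau + rng.normal(0.0, 0.05, S))

def waic(logp):
    """logp: S x (number of pointwise terms); lower WAIC is better."""
    lppd = np.log(np.exp(logp).mean(axis=0)).sum()
    p_waic = logp.var(axis=0, ddof=1).sum()
    return -2.0 * (lppd - p_waic)

# Conditional: pointwise terms are units, likelihood given the b_j draws.
cond = np.stack([norm.logpdf(y, b_s[s][:, None], sig_s[s]).ravel()
                 for s in range(S)])

# Marginal: pointwise terms are clusters, with b_j integrated out; in the
# Gaussian case the integral is closed-form: y_j ~ N(0, sigma^2 I + tau^2 11').
marg = np.stack([
    [multivariate_normal.logpdf(
        y[j], np.zeros(n),
        sig_s[s]**2 * np.eye(n) + tau_s[s]**2 * np.ones((n, n)))
     for j in range(J)]
    for s in range(S)])

print("conditional WAIC:", waic(cond))  # predicts new units in observed clusters
print("marginal WAIC:   ", waic(marg))  # predicts entirely new clusters
```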
Rapid method for determining nitrogen in tantalum and niobium alloys
Adaptation of a commercial instrument that measures nitrogen and oxygen in steel gave results in less than four minutes. The sample is heated in a helium atmosphere in a single-use graphite crucible. A platinum flux facilitates melting of the sample. Released gases are separated chromatographically and measured in a thermal-conductivity cell.
Comparison of inert-gas-fusion and modified Kjeldahl techniques for determination of nitrogen in niobium alloys
This report compares results obtained for the determination of nitrogen in a selected group of niobium-base alloys by the inert-gas-fusion and the Kjeldahl procedures. In the inert-gas-fusion procedure the sample is heated to approximately 2700 °C in a helium atmosphere in a single-use graphite crucible. A platinum flux is used to facilitate melting of the sample. The Kjeldahl method consists of rapid decomposition with a mixture of hydrofluoric acid, phosphoric acid, and potassium chromate; distillation in the presence of sodium hydroxide; and highly sensitive spectrophotometry with nitroprusside-catalyzed indophenol. In the 30- to 80-ppm range, the relative standard deviation was 5 to 7 percent for the inert-gas-fusion procedure and 2 to 8 percent for the Kjeldahl procedure. The agreement of the nitrogen results obtained by the two techniques is considered satisfactory.
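As a reminder of the statistic being compared here, the relative standard deviation is simply the sample standard deviation expressed as a percentage of the mean. A tiny Python illustration follows; the replicate values are invented, chosen only to fall within the 30- to 80-ppm range quoted above.

```python
# Relative standard deviation (RSD) from replicate determinations.
# The replicate values are hypothetical, for illustration only.
import statistics

replicates_ppm = [52.0, 49.5, 55.1, 50.8, 53.6]   # assumed nitrogen contents
mean = statistics.mean(replicates_ppm)
rsd = 100.0 * statistics.stdev(replicates_ppm) / mean
print(f"mean = {mean:.1f} ppm, RSD = {rsd:.1f} %")
```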
Numerical computation of transonic flows by finite-element and finite-difference methods
Studies on applications of the finite-element approach to transonic flow calculations are reported. Different discretization techniques for the differential equations and boundary conditions are compared. Finite-element analogs of Murman's mixed-type finite-difference operators for small-disturbance formulations were constructed, and the time-dependent approach (using finite differences in time and finite elements in space) was examined.
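For context, Murman's mixed-type differencing, which the finite-element operators discussed above emulate, switches the discretization of the streamwise term according to the local flow type. The Python sketch below is a hedged one-dimensional illustration of that switch for the transonic small-disturbance term; the freestream Mach number, the toy potential, and the omission of the special sonic- and shock-point operators are simplifications, not the paper's construction.

```python
# Murman-type mixed differencing of A * phi_xx, with
# A = 1 - M^2 - (g + 1) * M^2 * phi_x: central where the flow is locally
# subsonic (A > 0), backward/retarded where supersonic (A < 0).
import numpy as np

def mixed_phi_xx(phi, dx, M=0.85, g=1.4):
    """Type-dependent second difference of phi along x (interior points)."""
    n = len(phi)
    out = np.zeros(n)
    for i in range(2, n - 1):
        phi_x = (phi[i + 1] - phi[i - 1]) / (2 * dx)   # centered slope
        A = 1 - M**2 - (g + 1) * M**2 * phi_x          # local type indicator
        if A > 0:                                      # elliptic (subsonic)
            out[i] = A * (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx**2
        else:                                          # hyperbolic (supersonic)
            out[i] = A * (phi[i] - 2 * phi[i - 1] + phi[i - 2]) / dx**2
    return out

x = np.linspace(0.0, np.pi, 101)
phi = 0.3 * np.sin(x)                     # toy perturbation, supersonic upstream
A = 1 - 0.85**2 - 2.4 * 0.85**2 * (0.3 * np.cos(x))
print("supersonic points:", int((A < 0).sum()))  # upwind operator used there
res = mixed_phi_xx(phi, x[1] - x[0])
```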
Community practice and religion at an Early Islamic cemetery in highland Central Asia
Archaeological studies of Early Islamic communities in Central Asia have focused on lowland urban communities. Here, the authors report on recent geophysical survey and excavation of an Early Islamic cemetery at Tashbulak in south-eastern Uzbekistan. AMS dating places the establishment of the cemetery in the mid-eighth century AD, making it one of the earliest Islamic burial grounds documented in Central Asia. Burials at Tashbulak conform to Islamic prescriptions for grave form and body deposition. The consistency in ritual suggests the existence of a funerary community of practice, challenging narratives of Islamic conversion in peripheral areas as a process of slow diffusion and emphasising the importance of archaeological approaches for documenting the diversity of Early Islamic communities.
Convergence acceleration of implicit schemes in the presence of high aspect ratio grid cells
The performance of Navier-Stokes codes is influenced by several phenomena. For example, the robustness of the code may be compromised by a lack of grid resolution, by a need for more precise initial conditions, or because all or part of the flowfield lies outside the flow regime in which the algorithm converges efficiently. A primary example of the latter effect is the presence of extended low-Mach-number and/or low-Reynolds-number regions, which cause convergence deterioration of time-marching algorithms. Recent research into this problem by several workers, including the present authors, has largely overcome this difficulty through the introduction of time-derivative preconditioning. In the present paper, we employ the preconditioned algorithm to address convergence difficulties arising from sensitivity to grid stretching and high-aspect-ratio grid cells. Strong grid stretching is particularly characteristic of turbulent flow calculations, where the grid must be refined very tightly in the direction normal to the wall without a similar refinement in the tangential direction. High-aspect-ratio grid cells also arise in problems that involve high-aspect-ratio domains, such as combustor coolant channels. In both situations, the high-aspect-ratio cells can lead to extreme deterioration in convergence. It is the purpose of the present paper to explain the reasons for this adverse response to grid stretching and to suggest methods for enhancing convergence under such circumstances. Numerical algorithms typically possess a maximum allowable or optimum time step size, expressed in non-dimensional terms as a CFL number or von Neumann number (VNN). In the presence of high-aspect-ratio cells, the smallest dimension of the grid cell controls the time step size, causing it to be extremely small, which in turn degrades convergence. For explicit schemes, this time step limitation cannot be exceeded without violating the stability restrictions of the scheme. On the other hand, for implicit schemes, which are typically unconditionally stable, there is room for improvement through careful tailoring of the time step definition based on the results of linear stability analyses. In the present paper, we focus on the central-differenced alternating-direction implicit (ADI) scheme; the understanding garnered from this analysis can then be applied to other implicit schemes. To study systematically the effects of aspect ratio and the methods for mitigating the associated problems, we use a two-pronged approach: stability analysis as a tool for predicting numerical convergence behavior, and numerical experiments on simple model problems to verify the predicted trends. Based on these analyses, we determine that efficient convergence may be obtained at all aspect ratios by combining several measures, primary among them the proper definition of the time step size, the proper selection of the viscous preconditioner, and the precise treatment of boundary conditions. These algorithmic improvements are then applied to a variety of test cases to demonstrate uniform convergence at all aspect ratios.
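To make the time-step argument concrete, here is a minimal Python sketch using generic, assumed definitions (not the paper's formulation) of a local time step set by combined convective (CFL) and diffusive (VNN) limits. The velocities, sound speed, and viscosity are arbitrary; the point is that the smallest cell dimension dominates as aspect ratio grows.

```python
# Local time step from convective (CFL) and viscous (VNN) limits; the
# specific limit formulas are one common convention, assumed for illustration.
def local_dt(dx, dy, u, v, c, nu, cfl=1.0, vnn=0.25):
    """Return the more restrictive of the 2-D convective and viscous limits."""
    dt_conv = cfl / ((abs(u) + c) / dx + (abs(v) + c) / dy)  # inviscid (CFL)
    dt_visc = vnn / (nu * (1.0 / dx**2 + 1.0 / dy**2))       # viscous (VNN)
    return min(dt_conv, dt_visc)

# Fix the streamwise spacing and shrink the wall-normal spacing: the time
# step collapses with aspect ratio, which is what stalls convergence.
for ar in (1, 10, 1000):
    dx, dy = 1e-2, 1e-2 / ar
    dt = local_dt(dx, dy, u=10.0, v=0.1, c=340.0, nu=1.5e-5)
    print(f"aspect ratio {ar:5d}: dt = {dt:.3e}")
```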
Item Response Models of Probability Judgments: Application to a Geopolitical Forecasting Tournament
In this article, we develop and study methods for evaluating forecasters and forecasting questions in dynamic environments. These methods, based on item response models, are useful in situations where items vary in difficulty, and we wish to evaluate forecasters based on the difficulty of the items that they forecasted correctly. In addition, the methods are useful in situations where we need to compare forecasters who make predictions at different points in time or for different items. We first extend traditional models to handle subjective probabilities, and we then apply a specific model to geopolitical forecasts. We evaluate the model’s ability to accommodate the data, compare the model’s estimates of forecaster ability to estimates of forecaster ability based on scoring rules, and externally validate the model’s item estimates. We also highlight some shortcomings of the traditional models and discuss some further extensions. The analyses illustrate the models’ potential for widespread use in forecasting and subjective probability evaluation.
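As a hedged illustration of the modeling idea (not the authors' exact specification), the following Python sketch simulates a Rasch-style model on logit-transformed probability forecasts, with forecaster ability and item difficulty, and recovers both scales by double-centering. It ignores outcome information, which an actual forecast evaluation would condition on.

```python
# Rasch-style sketch for probability forecasts: logit of forecaster i's
# reported probability on item j = ability theta_i - difficulty b_j + noise.
# An illustration only; the real model and estimation would differ.
import numpy as np

rng = np.random.default_rng(7)
nf, ni = 50, 40                            # forecasters, items
theta = rng.normal(0.0, 1.0, nf)           # forecaster abilities
b = rng.normal(0.0, 1.0, ni)               # item difficulties
logit_p = theta[:, None] - b[None, :] + rng.normal(0.0, 0.5, (nf, ni))
p = 1.0 / (1.0 + np.exp(-logit_p))         # reported probabilities in (0, 1)

# Crude moment estimates: double-centering the logits recovers both scales
# up to an additive constant (a real fit would use marginal ML or MCMC).
L = np.log(p / (1.0 - p))
theta_hat = L.mean(axis=1) - L.mean()
b_hat = -(L.mean(axis=0) - L.mean())
print("ability recovery corr:   ", round(float(np.corrcoef(theta, theta_hat)[0, 1]), 2))
print("difficulty recovery corr:", round(float(np.corrcoef(b, b_hat)[0, 1]), 2))
```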