A Hybrid Bayesian Laplacian Approach for Generalized Linear Mixed Models
The analytical intractability of generalized linear mixed models (GLMMs) has generated a lot of research in the past two decades. Applied statisticians routinely face the frustrating prospect of widely disparate results produced by the methods that are currently implemented in commercially available software. This article is motivated by this frustration and develops guidance as well as new methods that are computationally efficient and statistically reliable. Two main classes of approximations have been developed: likelihood-based methods and Bayesian methods. Likelihood-based methods such as the penalized quasi-likelihood approach of Breslow and Clayton (1993) have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods such as the adaptive Gaussian quadrature approach perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated into standard statistical packages. Similarly, Bayesian methods, though they have good frequentist properties when the model is correct, are known to be computationally intensive and also require specialized code, limiting their use in practice. In this article we build on our previous method (Capanu and Begg 2010) and propose a hybrid approach that provides a bridge between the likelihood-based and Bayesian approaches by employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients, with the goal of obtaining good statistical properties and relatively good computing speed while using widely available software. The hybrid approach is shown to perform well against the other competitors considered. Another important finding of this research is the surprisingly good performance of the Laplacian approximation in the difficult case of binary clustered data with small cluster sizes. We apply the methods to a real study of head and neck squamous cell carcinoma and illustrate their properties using simulations based on a widely analyzed salamander mating dataset and on another important dataset involving the Guatemalan Child Health survey.
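The two-stage idea is straightforward to prototype. The sketch below is a minimal illustration of the general strategy on a simulated random-intercept logistic GLMM, not the authors' implementation: stage 1 uses a crude random-walk Metropolis sampler (with an assumed flat prior on the coefficients and a weak normal prior on the log standard deviation) to estimate the variance component, and stage 2 maximizes a Laplace-approximated likelihood over the regression coefficients with that variance held fixed. Priors, step sizes, and sample sizes are illustrative assumptions.

```python
# Hedged sketch of the hybrid idea: Bayesian estimation of the variance component,
# then Laplacian estimation of the fixed effects. Not the authors' implementation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# simulate 30 clusters of size 5 with one covariate
n_clusters, n_per = 30, 5
beta_true, sigma_true = np.array([-0.5, 1.0]), 1.0
x = rng.normal(size=(n_clusters, n_per))
b = rng.normal(scale=sigma_true, size=n_clusters)
y = rng.binomial(1, expit(beta_true[0] + beta_true[1] * x + b[:, None]))

def laplace_loglik(beta, log_sigma):
    """Laplace-approximate marginal log-likelihood of a random-intercept logit model."""
    sigma2 = np.exp(2.0 * log_sigma)
    total = 0.0
    for i in range(n_clusters):
        eta0 = beta[0] + beta[1] * x[i]
        bi = 0.0
        for _ in range(10):                      # Newton steps for the mode of the integrand
            p = expit(eta0 + bi)
            grad = np.sum(y[i] - p) - bi / sigma2
            curv = np.sum(p * (1.0 - p)) + 1.0 / sigma2
            bi += grad / curv
        eta = eta0 + bi
        h = np.sum(y[i] * eta - np.logaddexp(0.0, eta)) - bi ** 2 / (2.0 * sigma2)
        p = expit(eta)
        curv = np.sum(p * (1.0 - p)) + 1.0 / sigma2
        total += h - 0.5 * np.log(sigma2 * curv)  # Laplace correction term
    return total

# stage 1: Metropolis on (beta0, beta1, log sigma); keep the variance-component draws
def log_post(theta):
    return laplace_loglik(theta[:2], theta[2]) - 0.5 * theta[2] ** 2  # weak N(0,1) prior on log sigma

theta, lp, draws = np.zeros(3), log_post(np.zeros(3)), []
for it in range(1000):
    prop = theta + rng.normal(scale=0.15, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 300:
        draws.append(theta[2])
log_sigma_hat = float(np.mean(draws))

# stage 2: Laplacian estimation of the regression coefficients with the variance fixed
fit = minimize(lambda beta: -laplace_loglik(beta, log_sigma_hat), x0=np.zeros(2))
print("estimated sigma:", np.exp(log_sigma_hat))
print("estimated beta:", fit.x)
```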
Optimized Variable Selection Via Repeated Data Splitting
We introduce a new variable selection procedure that repeatedly splits the data into two sets, one for estimation and one for validation, to obtain an empirically optimized threshold that is then used to screen for variables to include in the final model. Simulation results show that the proposed variable selection technique enjoys superior performance compared to candidate methods: it is among those with the lowest inclusion of noisy predictors while having the highest power to detect the correct model, and it is unaffected by correlations among the predictors. We illustrate the methods by applying them to a cohort of patients undergoing hepatectomy at our institution.
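The following sketch illustrates the general repeated-splitting idea in schematic form; it is an assumption-laden caricature rather than the authors' exact algorithm. Each split fits an ordinary least-squares model on the estimation half, then picks the |t|-statistic threshold that minimizes validation error; variables are kept if they survive the split-averaged optimized threshold. The data-generating rule, the |t|-based screening rule, and the threshold grid are illustrative choices.

```python
# Schematic sketch of variable selection via repeated data splitting (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)   # only the first two predictors matter

def abs_tvalues(Xe, ye):
    """Absolute t-statistics of the slopes from an OLS fit (intercept dropped)."""
    return np.abs(sm.OLS(ye, sm.add_constant(Xe)).fit().tvalues[1:])

grid = np.linspace(0.5, 4.0, 15)           # candidate |t| screening thresholds
best_thresholds = []
for _ in range(100):                        # repeated estimation/validation splits
    idx = rng.permutation(n)
    est, val = idx[: n // 2], idx[n // 2:]
    t = abs_tvalues(X[est], y[est])
    errs = []
    for c in grid:
        keep = np.where(t >= c)[0]
        if keep.size == 0:                  # intercept-only model
            errs.append(np.mean((y[val] - y[est].mean()) ** 2))
            continue
        fit = sm.OLS(y[est], sm.add_constant(X[est][:, keep])).fit()
        pred = fit.predict(sm.add_constant(X[val][:, keep]))
        errs.append(np.mean((y[val] - pred) ** 2))
    best_thresholds.append(grid[int(np.argmin(errs))])

c_star = np.mean(best_thresholds)           # empirically optimized threshold
t_full = abs_tvalues(X, y)
print("optimized threshold:", round(c_star, 2))
print("selected predictors:", np.where(t_full >= c_star)[0])
```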
Static and impact response of a single-span stone masonry arch
Unreinforced masonry structures are susceptible to man-made hazards such as impact and blast loading. However, the literature on this subject mainly focuses on masonry wall behavior, and there is a knowledge gap regarding the behavior of masonry arches under high strain-rate loading. In this context, this research aims to investigate both the quasi-static and impact response of a dry-joint stone masonry arch using the discrete element method. Rigid blocks with non-cohesive joint models are adopted to simulate dry-joint assemblages. First, the employed modeling strategy is validated against the available experimental findings, and then sensitivity analyses are performed for both static and impact loading, considering the effect of joint friction angle, contact stiffness, and damping parameters. The outcomes of this research strengthen the existing knowledge on the computational modeling of masonry structures subjected to usual and extreme loading conditions. The results highlight that the applied discontinuum-based numerical models are more sensitive to stiffness parameters in high strain-rate loading than in static analysis.
Comparing ROC Curves Derived From Regression Models
In constructing predictive models, investigators frequently assess the incremental value of a predictive marker by comparing the ROC curve generated from the predictive model including the new marker with the ROC curve from the model excluding the new marker. Many commentators have noticed empirically that a test of the two ROC areas often produces a non-significant result when the corresponding Wald test from the underlying regression model is significant. A recent article showed, using simulations, that the widely used ROC area test [1] produces exceptionally conservative test size and extremely low power [2]. In this article we show why the ROC area test is invalid in this context. We demonstrate how a valid test of the ROC areas can be constructed that has statistical properties comparable to the Wald test. We conclude that using the Wald test to assess the incremental contribution of a marker remains the best strategy. We also examine the use of derived markers from non-nested models and the use of validation samples. We show that comparing ROC areas is invalid in these contexts as well.
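For readers who want to see the nested-model setup concretely, the sketch below (assuming statsmodels and scikit-learn; simulated data, not the paper's) fits a logistic model with and without a candidate marker, reports the Wald p-value for the marker, and computes the two in-sample ROC areas. It deliberately stops short of the paired ROC-area test that the article shows to be invalid here.

```python
# Minimal sketch of the nested-model comparison: Wald test for an added marker versus
# the ROC areas of the two fitted models (illustrative simulated data).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)                      # baseline predictor
m = rng.normal(size=n)                      # candidate new marker
eta = -0.5 + 0.8 * x + 0.3 * m
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

base = sm.Logit(y, sm.add_constant(np.column_stack([x]))).fit(disp=0)      # without marker
full = sm.Logit(y, sm.add_constant(np.column_stack([x, m]))).fit(disp=0)   # with marker

auc_base = roc_auc_score(y, base.predict())
auc_full = roc_auc_score(y, full.predict())
print("Wald p-value for the marker:", full.pvalues[-1])
print("ROC areas (without, with marker):", round(auc_base, 3), round(auc_full, 3))
```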
Static Properties of Quark Solitons
It has been conjectured that at distances smaller than the confinement scale but large enough to allow for nonperturbative effects, QCD is described by an effective chiral Lagrangian. The soliton solutions of such a Lagrangian are extended objects with spin 1/2. They are triplets of color and flavor and have baryon number 1/3, to be identified as constituent quarks. We investigate in detail the static properties of such constituent-quark solitons for the simplest case. The mass of these objects comes from the energy of the static soliton and from quantum effects, described semiclassically by rotation of collective coordinates around the classical solution. The quantum corrections tend to be large, but can be controlled by exploring the Lagrangian's parameter space so as to maximize the inertia tensor. We comment on the acceptable parameter space and discuss the model's further predictive power.
Comment: 8 pages + 1 PostScript figure; plain LaTeX
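As a rough illustration of the semiclassical quantization described in this abstract (the generic rigid-rotor collective-coordinate formula, not the paper's specific effective Lagrangian), the quantum correction to the soliton mass scales inversely with the moment of inertia, which is why maximizing the inertia tensor keeps the correction under control:

```latex
% Generic collective-coordinate (rigid-rotor) quantization; notation is illustrative.
M \;\simeq\; M_{\mathrm{cl}} \;+\; \frac{J(J+1)}{2\Lambda},
\qquad \Lambda \;=\; \text{moment of inertia of the rotating classical soliton.}
```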
Tensile fracture mechanism of masonry wallettes parallel to bed joints: A stochastic discontinuum analysis
Nonhomogeneous material characteristics of masonry lead to complex fracture mechanisms, which require substantial analysis regarding the influence of masonry constituents. In this context, this study presents a discontinuum modeling strategy, based on the discrete element method, developed to investigate the tensile fracture mechanism of masonry wallettes parallel to the bed joints, considering the inherent variation in the material properties. The applied numerical approach utilizes polyhedral blocks to represent masonry and integrates the equations of motion explicitly to compute nodal velocities for each block in the system. The mechanical interaction between adjacent blocks is computed at the active contact points, where the contact stresses are calculated and updated based on the implemented contact constitutive models. In this research, different fracture mechanisms of masonry wallettes under tension are explored, developing at the unit–mortar interface and/or within the units. The contact properties are determined based on certain statistical variations. Emphasis is given to the influence of the material properties on the fracture mechanism and capacity of the masonry assemblages. The results of the analysis reveal and quantify the importance of the contact properties for unit and unit–mortar interfaces (e.g., tensile strength, cohesion, and friction coefficient) in terms of capacity and the corresponding fracture mechanism for masonry wallettes. This research received no external funding.
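To make the contact constitutive update concrete, the sketch below shows a typical joint law of the kind described (linear normal/shear stiffness with a tension cut-off and Coulomb slip), written as a minimal assumption-based illustration; the parameter names and values are not the study's calibration. In a stochastic run of the kind described, the strength parameters would additionally be drawn per contact from fitted distributions before each simulation.

```python
# Minimal sketch of a typical DEM joint contact constitutive update:
# linear normal/shear stiffness with a tension cut-off and Coulomb friction.
# Names and values are illustrative assumptions, not the study's calibration.
import numpy as np

def contact_update(sigma_n, tau, d_un, d_us, kn, ks, ft, c, mu):
    """Incrementally update the normal and shear stress at one contact point.

    d_un, d_us : relative normal / shear displacement increments (compression positive)
    kn, ks     : normal / shear contact stiffness
    ft, c, mu  : tensile strength, cohesion, friction coefficient
    """
    sigma_n += kn * d_un
    tau += ks * d_us
    if sigma_n < -ft:                          # tension cut-off: the joint opens
        return 0.0, 0.0
    tau_max = c + mu * max(sigma_n, 0.0)       # Coulomb slip surface
    if abs(tau) > tau_max:
        tau = np.sign(tau) * tau_max           # sliding: cap the shear stress
    return sigma_n, tau

# example: a compressive step followed by a large shear step that triggers sliding
s, t = contact_update(0.0, 0.0, d_un=1e-4, d_us=0.0, kn=1e9, ks=4e8, ft=0.2e6, c=0.3e6, mu=0.7)
s, t = contact_update(s, t, d_un=0.0, d_us=5e-3, kn=1e9, ks=4e8, ft=0.2e6, c=0.3e6, mu=0.7)
print(s, t)
```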
Analysis and prediction of masonry wallette strength under combined compression-bending via stochastic computational modeling
The out-of-plane flexural bending capacity of masonry is a fundamental property for understanding the behavior of masonry structures. This study investigates the behavior of unreinforced masonry wallettes subjected to combined compression-flexural loading using the discrete element method (DEM), and provides a novel framework to estimate the masonry strength. A simplified micro-modeling strategy is utilized to analyze a masonry wallette, including the variation of the mechanical properties in masonry units and joints. Stochastic DEM analyses are performed to simulate brickwork assemblages, assuming a strong unit-weak joint material model typical of most masonry buildings, including historical ones. Once the proposed computational approach is validated against the experimental findings, the effect of spatial and non-spatial variation of mechanical properties is explored. Two failure types are identified: joint failure and brick failure. For each failure mechanism, the variability of the response and the effects of the modeling parameters on the load-carrying capacity are quantified. Afterward, Lasso regression is employed to determine predictive equations in terms of the material properties and the vertical pressure on the wallette. The results show that the most important parameters changing the response are the joint tensile strength and the amount of vertical stress for joint failure, whereas the unit tensile strength dominates the response for brick failure. Overall, this research proposes a novel framework adopting validated advanced computational models that feed on simple test results to generate data that are further utilized for training response prediction models for complex structures.
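The final regression step is easy to illustrate in isolation. The sketch below fits a cross-validated Lasso to placeholder features of the kind named in the abstract (joint tensile strength, vertical pressure, unit tensile strength) to obtain a sparse predictive equation for capacity; the synthetic data-generating rule and feature names are assumptions standing in for the study's DEM outputs.

```python
# Sketch of the Lasso step only: a sparse predictive equation for wallette capacity
# fit to synthetic stand-in data (not the study's simulation results).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 300
ft_joint = rng.uniform(0.05, 0.4, n)      # joint tensile strength (MPa)
sigma_v = rng.uniform(0.0, 0.6, n)        # vertical pressure (MPa)
ft_unit = rng.uniform(1.0, 4.0, n)        # unit tensile strength (MPa)
X = np.column_stack([ft_joint, sigma_v, ft_unit])
# placeholder capacity dominated by joint strength and vertical stress (joint-failure regime)
capacity = 2.0 * ft_joint + 1.2 * sigma_v + 0.05 * ft_unit + rng.normal(scale=0.05, size=n)

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, capacity)
coefs = model.named_steps["lassocv"].coef_
print(dict(zip(["ft_joint", "sigma_v", "ft_unit"], np.round(coefs, 3))))
```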
Sublinear-Time Algorithms for Monomer-Dimer Systems on Bounded Degree Graphs
For a graph $G$, let $Z(\lambda)$ be the partition function of the monomer-dimer system defined by $\sum_k m_k(G)\,\lambda^k$, where $m_k(G)$ is the number of matchings of size $k$ in $G$. We consider graphs of bounded degree and develop a sublinear-time algorithm for estimating $\log Z(\lambda)$ at an arbitrary value $\lambda > 0$ within additive error $\epsilon n$ with high probability. The query complexity of our algorithm does not depend on the size of $G$ and is polynomial in $1/\epsilon$, and we also provide a lower bound quadratic in $1/\epsilon$ for this problem. This is the first analysis of a sublinear-time approximation algorithm for a #P-complete problem. Our approach is based on the correlation decay of the Gibbs distribution associated with $Z(\lambda)$. We show that our algorithm approximates the probability for a vertex to be covered by a matching, sampled according to this Gibbs distribution, in near-optimal sublinear time. We extend our results to approximate the average size and the entropy of such a matching within an additive error with high probability, where again the query complexity is polynomial in $1/\epsilon$ and the lower bound is quadratic in $1/\epsilon$. Our algorithms are simple to implement and of practical use when dealing with massive datasets. Our results extend to other systems where the correlation decay is known to hold, as for the independent set problem up to the critical activity.
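The local computation at the heart of the correlation-decay approach is simple to sketch. The recursion below uses the standard monomer-dimer identity $P_G(v) = 1/(1 + \lambda \sum_{u \sim v} P_{G-v}(u))$ for the Gibbs probability that a vertex is unmatched, truncated at a fixed depth so that only a constant-size neighborhood is explored; the graph, activity, and truncation depth are arbitrary demo choices, and this is an illustration of the core step rather than the paper's full algorithm.

```python
# Illustrative correlation-decay recursion for the monomer-dimer model: estimate the
# Gibbs probability that a vertex is unmatched by exploring a bounded-depth neighborhood.

def unmatched_prob(adj, v, lam, depth, removed=frozenset()):
    """Approximate probability that vertex v is not covered by a Gibbs-sampled matching."""
    if depth == 0:
        return 1.0                      # truncation: pretend v has no neighbours left
    s = 0.0
    for u in adj[v]:
        if u in removed:
            continue
        s += unmatched_prob(adj, u, lam, depth - 1, removed | {v})
    return 1.0 / (1.0 + lam * s)

# demo: 4-cycle 0-1-2-3-0 with activity lam = 1.0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for d in (2, 4, 8):
    print(d, unmatched_prob(adj, 0, lam=1.0, depth=d))
# exact check for the 4-cycle at lam = 1: the 7 matchings all have weight 1 (Z = 7) and
# vertex 0 is unmatched in 3 of them, so the probability converges to 3/7 ~ 0.4286
```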