Why You Should Always Include a Random Slope for the Lower-Level Variable Involved in a Cross-Level Interaction
Mixed-effects multilevel models are often used to investigate cross-level interactions, a specific type of context effect that may be understood as an upper-level variable moderating the association between a lower-level predictor and the outcome. We argue that multilevel models involving cross-level interactions should always include random slopes on the lower-level components of those interactions. Failure to do so will usually result in severely anti-conservative statistical inference. We illustrate the problem with extensive Monte Carlo simulations and examine its practical relevance by studying 30 prototypical cross-level interactions with European Social Survey data for 28 countries. In these empirical applications, introducing a random slope term reduces the absolute t-ratio of the cross-level interaction term by 31 per cent or more in three quarters of cases, with an average reduction of 42 per cent. Many practitioners seem to be unaware of these issues. Roughly half of the cross-level interaction estimates published in the European Sociological Review between 2011 and 2016 are based on models that omit the crucial random slope term. Detailed analysis of the associated test statistics suggests that many of the estimates would not reach conventional thresholds for statistical significance in correctly specified models that include the random slope. This raises the question of how much robust evidence of cross-level interactions sociology has actually produced over the past decades.
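The modeling point can be sketched in a few lines. A minimal simulation, assuming statsmodels' MixedLM (this is an illustration, not the authors' software or the ESS data): generate groups whose slopes on x truly vary, with no true cross-level interaction, then fit the interaction model with and without a random slope on x and compare the interaction t-ratios.

```python
# Hedged sketch: why omitting the random slope inflates cross-level
# interaction t-ratios. All data here are hypothetical and simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
J, n = 30, 100                                  # 30 groups, 100 units each
z = rng.normal(size=J)                          # upper-level moderator
slope = 0.5 + rng.normal(scale=0.4, size=J)     # group-varying slope; no true x*z effect
rows = []
for j in range(J):
    x = rng.normal(size=n)
    y = slope[j] * x + rng.normal(size=n)
    rows.append(pd.DataFrame({"y": y, "x": x, "z": z[j], "g": j}))
df = pd.concat(rows, ignore_index=True)

# Model 1: random intercept only (omits the crucial random slope)
m_ri = smf.mixedlm("y ~ x * z", df, groups="g").fit()
# Model 2: random intercept plus a random slope on x
m_rs = smf.mixedlm("y ~ x * z", df, groups="g", re_formula="~x").fit()

t_ri = m_ri.tvalues["x:z"]
t_rs = m_rs.tvalues["x:z"]
print(f"|t| without random slope: {abs(t_ri):.2f}")
print(f"|t| with random slope:    {abs(t_rs):.2f}")
```

Because the random-slope model absorbs the between-group slope heterogeneity into a variance component, its standard error for the interaction is larger and the t-ratio correspondingly smaller, mirroring the reductions the abstract reports.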
Classification Tree Models for Predicting Distributions of Michigan Stream Fish from Landscape Variables
Traditionally, fish habitat requirements have been described from local‐scale environmental variables. However, recent studies have shown that studying landscape‐scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape‐scale habitat variables. We used classification trees and landscape‐scale habitat variables to create and validate presence‐absence models and relative abundance models for Michigan stream fishes. We developed 93 presence‐absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes.
Peer Reviewed: https://deepblue.lib.umich.edu/bitstream/2027.42/141481/1/tafs0976.pd
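The workflow in this abstract can be sketched with scikit-learn. This is a hedged illustration on synthetic data; the predictor names (catchment area, July air temperature, channel slope) are hypothetical stand-ins for the study's landscape variables, not the Michigan dataset.

```python
# Hedged sketch: a presence-absence classification tree from landscape-scale
# predictors, validated on a held-out set. All data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.uniform(0, 100, n),   # catchment area (km^2), hypothetical
    rng.uniform(5, 25, n),    # mean July air temperature (deg C), hypothetical
    rng.uniform(0, 5, n),     # channel slope (%), hypothetical
])
# Hypothetical rule: species present in larger, cooler, low-gradient streams
presence = ((X[:, 0] > 30) & (X[:, 1] < 18) & (X[:, 2] < 2)).astype(int)
flip = rng.random(n) < 0.10          # flip 10% of labels to mimic survey noise
presence[flip] = 1 - presence[flip]

X_tr, X_te, y_tr, y_te = train_test_split(X, presence, test_size=0.3,
                                          random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
acc = tree.score(X_te, y_te)          # analogue of the study's ~72% hold-out rate
print(f"hold-out accuracy: {acc:.2f}")
```

The fitted tree's split thresholds double as interpretable habitat constraints, which is one reason classification trees suit this kind of distribution modeling.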
Multilevel Analysis with Few Clusters: Improving Likelihood-based Methods to Provide Unbiased Estimates and Accurate Inference
Quantitative comparative social scientists have long worried about the performance of multilevel models when the number of upper-level units is small. Adding to these concerns, an influential Monte Carlo study by Stegmueller (2013) suggests that standard maximum-likelihood (ML) methods yield biased point estimates and severely anti-conservative inference with few upper-level units. In this article, the authors seek to rectify this negative assessment. First, they show that ML estimators of coefficients are unbiased in linear multilevel models. The apparent bias in coefficient estimates found by Stegmueller can be attributed to Monte Carlo error and a flaw in the design of his simulation study. Second, they demonstrate how inferential problems can be overcome by using restricted ML estimators for variance parameters and a t-distribution with appropriate degrees of freedom for statistical inference. Thus, accurate multilevel analysis is possible within the framework that most practitioners are familiar with, even if there are only a few upper-level units.
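The inferential fix is easy to illustrate numerically. A minimal sketch with SciPy, assuming a hypothetical t-ratio of 2.1 for an upper-level coefficient and the simple J - p degrees-of-freedom rule (Satterthwaite or Kenward-Roger corrections are the refinements practitioners would actually use):

```python
# Hedged sketch: with few clusters, referring a t-ratio to the standard
# normal is anti-conservative; a t reference distribution with
# cluster-based degrees of freedom gives a more honest p-value.
from scipy import stats

t_ratio = 2.1        # hypothetical t-ratio for an upper-level coefficient
J, p = 15, 2         # 15 upper-level units, 2 upper-level parameters
df = J - p           # simple degrees-of-freedom rule (illustrative)

p_normal = 2 * stats.norm.sf(abs(t_ratio))   # normal reference
p_t = 2 * stats.t.sf(abs(t_ratio), df)       # t reference with few df
print(f"normal-based p: {p_normal:.3f}")
print(f"t({df})-based p: {p_t:.3f}")
```

With these numbers the normal reference declares significance at the 5 per cent level while the t reference does not, which is exactly the anti-conservatism the authors warn about.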
Comparison between estimation of breeding values and fixed effects using Bayesian and empirical BLUP estimation under selection on parents and missing pedigree information
Bayesian (via Gibbs sampling) and empirical BLUP (EBLUP) estimation of fixed effects and breeding values were compared by simulation. Combinations of two simulation models (with or without an effect of contemporary group (CG)), three selection schemes (random, phenotypic and BLUP selection), two levels of heritability (0.20 and 0.50) and two levels of pedigree information (0% and 15% randomly missing) were considered. Populations consisted of 450 animals spread over six discrete generations. An infinitesimal additive genetic animal model was assumed while simulating data. EBLUP and Bayesian estimates of CG effects and breeding values were, in all situations, essentially the same with respect to Spearman's rank correlation between true and estimated values. Bias and mean square error (MSE) of EBLUP and Bayesian estimates of CG effects and breeding values showed the same pattern over the range of simulated scenarios. Neither method was biased by phenotypic or BLUP selection when pedigree information was complete, although the MSE of estimated breeding values increased in situations where CG effects were present. Estimation of breeding values by the Bayesian and EBLUP methods was similarly affected by the joint effect of phenotypic or BLUP selection and randomly missing pedigree information. For both methods, bias and MSE of estimated breeding values and CG effects substantially increased across generations.
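The BLUP side of the comparison reduces to solving Henderson's mixed model equations for fixed effects (here contemporary groups) and breeding values jointly. A toy NumPy sketch, assuming an identity relationship matrix and one record per animal for brevity; the paper's animal model would use the full pedigree-based A-inverse:

```python
# Hedged sketch: Henderson's mixed model equations (MME) for a tiny toy
# dataset. Phenotypes, design matrices, and heritability are hypothetical.
import numpy as np

y = np.array([3.2, 2.8, 4.1, 3.9, 3.0, 4.4])   # toy phenotypes
X = np.array([[1, 0], [1, 0], [0, 1],           # incidence of 2 CG effects
              [0, 1], [1, 0], [0, 1]], float)
Z = np.eye(6)                                   # one record per animal
h2 = 0.5
lam = (1 - h2) / h2                             # sigma_e^2 / sigma_a^2
Ainv = np.eye(6)                                # identity "pedigree" for the toy

# MME: [[X'X, X'Z], [Z'X, Z'Z + Ainv*lam]] [b; u] = [X'y; Z'y]
lhs = np.vstack([np.hstack([X.T @ X, X.T @ Z]),
                 np.hstack([Z.T @ X, Z.T @ Z + Ainv * lam])])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
b_hat, u_hat = sol[:2], sol[2:]                 # CG estimates, breeding values
print("CG estimates:", b_hat)
print("EBVs:", u_hat)
```

With an identity A-inverse and balanced records, the CG estimates equal the group means and the EBVs are shrunken within-group deviations that sum to zero; the Bayesian route obtains the same quantities as posterior means via Gibbs sampling.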
Long-term care policy: What the United States can learn from Denmark, Sweden, and the Netherlands
Paying for long-term care consumes a substantial, and growing, part of healthcare spending in the U.S. We examine the components and payment systems of long-term care in Denmark, Sweden and the Netherlands to determine what U.S. policy makers can learn from these countries about improving long-term care provision and financing.
Modeling mutant phenotypes and oscillatory dynamics in the Saccharomyces cerevisiae cAMP-PKA pathway
Background The cyclic AMP-Protein Kinase A (cAMP-PKA) pathway is an evolutionarily conserved signal transduction mechanism that regulates cellular growth and differentiation in animals and fungi. We present a mathematical model that recapitulates the short-term and long-term dynamics of this pathway in the budding yeast, Saccharomyces cerevisiae. Our model is aimed at recapitulating the dynamics of cAMP signaling for wild-type cells as well as single (pde1Δ and pde2Δ) and double (pde1Δpde2Δ) phosphodiesterase mutants. Results Our model focuses on PKA-mediated negative feedback on the activity of phosphodiesterases and the Ras branch of the cAMP-PKA pathway. We show that both of these types of negative feedback are required to reproduce the wild-type signaling behavior that occurs on both short and long time scales, as well as the observed responses of phosphodiesterase mutants. A novel feature of our model is that, for a wide range of parameters, it predicts that intracellular cAMP concentrations should exhibit decaying oscillatory dynamics in their approach to steady state following glucose stimulation. Experimental measurements of cAMP levels in two genetic backgrounds of S. cerevisiae confirmed the presence of decaying cAMP oscillations as predicted by the model. Conclusions Our model of the cAMP-PKA pathway provides new insights into how yeast respond to alterations in their nutrient environment. Because the model has both predictive and explanatory power, it will serve as a foundation for future mathematical and experimental studies of this important signaling network.
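The qualitative prediction, decaying oscillations from negative feedback, can be reproduced with a two-variable caricature. This is a deliberately minimal linear sketch (cAMP activates PKA; PKA feeds back negatively on cAMP), with made-up rate constants, not the authors' full pathway model:

```python
# Hedged sketch: a linear negative-feedback loop whose Jacobian has complex
# eigenvalues (-0.1 +/- i), so trajectories spiral into steady state, i.e.
# decaying oscillations like those predicted for cAMP after glucose.
import numpy as np
from scipy.integrate import solve_ivp

def feedback(t, s):
    c, p = s                          # c ~ cAMP, p ~ PKA activity (arbitrary units)
    dc = 1.0 - 0.1 * c - 1.0 * p      # production - weak decay - PKA feedback
    dp = 1.0 * c - 0.1 * p            # activation by cAMP - decay
    return [dc, dp]

sol = solve_ivp(feedback, (0, 100), [0.0, 0.0], max_step=0.1)
c = sol.y[0]
c_ss = c[-1]                          # settled value near the fixed point
print(f"cAMP overshoots to {c.max():.2f}, then settles near {c_ss:.3f}")
```

Counting how often the trajectory crosses its steady-state level confirms the oscillatory (rather than monotone) approach; making the feedback strength weaker would turn the spiral into a plain exponential relaxation.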
Resilience support in software-defined networking:a survey
Software-defined networking (SDN) is an architecture for computer networking that provides a clear separation between network control functions and forwarding operations. The abstractions supported by this architecture are intended to simplify the implementation of several tasks that are critical to network operation, such as routing and network management. Computer networks have an increasingly important societal role, requiring them to be resilient to a range of challenges. Previously, research into network resilience has focused on the mitigation of several types of challenges, such as natural disasters and attacks. Capitalizing on the benefits of SDN, including increased programmability and a clearer separation of concerns, researchers have recently devoted significant attention to the development of resilience mechanisms that use software-defined networking approaches. In this article, we present a survey that provides a structured overview of the resilience support that currently exists in this important area. We categorize the most recent research on this topic with respect to a number of resilience disciplines. Additionally, we discuss the lessons learned from this investigation, highlight the main challenges faced by SDNs moving forward, and outline the research trends in terms of solutions to mitigate these challenges.
On the Delta set and catenary degree of Krull monoids with infinite cyclic divisor class group
Let M be a Krull monoid with divisor class group Z, and let S ⊆ Z denote the set of divisor classes of M which contain prime divisors. We find conditions on S equivalent to the finiteness of both Δ(M), the Delta set of M, and c(M), the catenary degree of M. In the finite case, we obtain explicit upper bounds on max Δ(M) and c(M). Our methods generalize and complement a previous result concerning the elasticity of M.
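For readers outside factorization theory, the two invariants in this abstract have standard definitions, which can be stated as follows (these are the usual textbook definitions, not taken from this paper):

```latex
% For a non-unit $a$ in an atomic monoid $M$, the set of lengths is
%   \mathsf{L}(a) = \{\, k \in \mathbb{N} : a = u_1 \cdots u_k
%                      \text{ with each } u_i \text{ an atom of } M \,\}.
% Writing $\mathsf{L}(a) = \{ k_1 < k_2 < \cdots \}$, the Delta set is
%   \Delta(a) = \{\, k_{i+1} - k_i \,\}, \qquad
%   \Delta(M) = \bigcup_{a \in M} \Delta(a).
% The catenary degree $\mathsf{c}(M)$ is the smallest
% $N \in \mathbb{N}_0 \cup \{\infty\}$ such that any two factorizations of
% an element of $M$ can be joined by a chain of factorizations in which
% consecutive factorizations differ by distance at most $N$.
```

Finiteness of Δ(M) thus bounds how irregular the lengths of factorizations can be, while c(M) bounds how far apart two factorizations of the same element can sit.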
Analytical sensitivity of COVID-19 rapid antigen tests: A case for a robust reference standard
Aggressive diagnostic testing remains an indispensable strategy for health and aged care facilities to prevent the transmission of SARS-CoV-2 in vulnerable populations. The preferred diagnostic platform has shifted towards COVID-19 rapid antigen tests (RATs) to identify the most infectious individuals. As such, RATs are being manufactured faster than at any other time in our history, yet they lack the relevant quantitative analytics required to inform on absolute analytical sensitivity, which would enable manufacturers to maintain high batch-to-batch reproducibility and end-users to accurately compare brands for decision-making. Here, we describe a novel reference standard to measure and compare the analytical sensitivity of RATs using a recombinant GFP-tagged nucleocapsid protein (NP-GFP). Importantly, we show that the GFP tag does not interfere with NP detection and provides several advantages, affording streamlined protein expression and purification in high yields as well as faster, cheaper and more sensitive quality control measures for post-production assessment of protein solubility and stability. Ten commercial COVID-19 RATs were evaluated and ranked using NP-GFP as a reference standard. Analytical sensitivity data of the selected devices as determined with NP-GFP did not correlate with those reported by the manufacturers using the median tissue culture infectious dose (TCID50) assay. Of note, TCID50 discordance has been previously reported. Taken together, our results highlight an urgent need for a reliable reference standard for evaluation and benchmarking of the analytical sensitivity of RAT devices. NP-GFP is a promising candidate as a reference standard that will ensure that RAT performance is accurately communicated to healthcare providers and the public.
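The ranking step the abstract describes amounts to finding each device's limit of detection (LoD) against a common dilution series of the reference standard. A hedged sketch of that bookkeeping; the device names, concentrations, and detection results below are hypothetical illustrations of the workflow, not the paper's data:

```python
# Hedged sketch: rank RAT devices by analytical sensitivity against a
# shared 2-fold dilution series of a reference antigen. LoD = lowest
# concentration still detected. All values are hypothetical.
dilution_ng_ml = [100.0, 50.0, 25.0, 12.5, 6.25, 3.125]

results = {                        # True = positive test line (hypothetical)
    "RAT-A": [True, True, True, True, True, False],
    "RAT-B": [True, True, True, False, False, False],
    "RAT-C": [True, True, True, True, False, False],
}

def lod(detected):
    """Lowest concentration detected; infinity if nothing was detected."""
    hits = [c for c, d in zip(dilution_ng_ml, detected) if d]
    return min(hits) if hits else float("inf")

ranking = sorted(results, key=lambda name: lod(results[name]))
for name in ranking:
    print(f"{name}: LoD = {lod(results[name])} ng/mL")
```

Because every device is scored against the same quantified standard, the resulting LoDs are directly comparable across brands, which is precisely what the TCID50-based figures reported by manufacturers fail to guarantee.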