Urea-induced denaturation of PreQ1-riboswitch
Urea, a polar molecule with a large dipole moment, not only destabilizes the
folded RNA structures, but can also enhance the folding rates of large
ribozymes. Unlike the mechanism of urea-induced unfolding of proteins, which is
well understood, the action of urea on RNA has barely been explored. We
performed extensive all-atom molecular dynamics (MD) simulations to determine
the molecular underpinnings of urea-induced RNA denaturation. Urea displays its
denaturing power in both secondary and tertiary motifs of the riboswitch (RS)
structure. Our simulations reveal that the denaturation of RNA structures is
mainly driven by the hydrogen bonds and stacking interactions of urea with the
bases. Through detailed analysis of the simulation trajectories, we found that
geminate pairs between urea and bases, formed through hydrogen bonding and
stacking, persist for only ~0.1-1 ns, suggesting that the urea-base
interaction is highly dynamic.
Most importantly, the early stage of base pair disruption is triggered by
penetration of water molecules into the hydrophobic domain between the RNA
bases. The infiltration of water into the narrow space between base pairs is
critical in increasing the accessibility of urea to transiently disrupted
bases, thus allowing urea to displace inter-base hydrogen bonds. This
mechanism, water-induced disruption of base-pairs resulting in the formation of
a "wet" destabilized RNA followed by solvation by urea, is the exact opposite
of the two-stage denaturation of proteins by urea. In the latter case, initial
urea penetration creates a dry-globule, which is subsequently solvated by water
penetration leading to global protein unfolding. Our work shows that the
ability to interact with water as well as with both the polar and non-polar
components of nucleotides makes urea a powerful chemical denaturant for
nucleic acids. Comment: 41 pages, 18 figures
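Contact-lifetime estimates of the kind quoted above (geminate urea-base pairs persisting ~0.1-1 ns) are typically extracted from a per-frame contact record of the trajectory. The sketch below is a generic illustration of that bookkeeping, not the authors' analysis code; the function name, the toy series, and the frame spacing are assumptions.

```python
import numpy as np

def contact_lifetimes(contact, dt_ns):
    """Durations of contiguous True runs in a boolean contact time series.

    contact : 1-D boolean array, True when a urea-base hydrogen bond
              (or stacking contact) is present at that frame
    dt_ns   : time between trajectory frames, in ns
    """
    # Pad with False so every contact run has a detectable start and end
    padded = np.concatenate(([False], contact, [False])).astype(int)
    edges = np.diff(padded)
    starts = np.flatnonzero(edges == 1)   # False -> True transitions
    ends = np.flatnonzero(edges == -1)    # True -> False transitions
    return (ends - starts) * dt_ns

# Toy series: two contacts lasting 3 and 5 frames at 0.01 ns per frame
series = np.array([0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)
print(contact_lifetimes(series, 0.01))  # -> [0.03 0.05]
```

A histogram or survival curve of these durations over all urea-base pairs gives the ~0.1-1 ns residence times reported in the abstract.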
A remarkably simple and accurate method for computing the Bayes Factor from a Markov chain Monte Carlo Simulation of the Posterior Distribution in high dimension
Weinberg (2012) described a constructive algorithm for computing the marginal
likelihood, Z, from a Markov chain simulation of the posterior distribution.
Its key idea is the choice of an integration subdomain that excludes
subvolumes that are poorly sampled owing to low tail values of the posterior
probability. Conversely, the same idea may be used to choose the subdomain
that optimizes the accuracy of Z. Here, we explore using the simulated
distribution to define a small region of high posterior probability, followed
by a numerical integration of the sample in the selected region using the
volume tessellation algorithm described in Weinberg (2012). Even more promising
is the resampling of this small region followed by a naive Monte Carlo
integration. The new enhanced algorithm is computationally trivial and leads to
a dramatic improvement in accuracy. For example, this application of the new
algorithm to a four-component mixture with random locations in 16 dimensions
yields accurate evaluation of Z with 5% errors. This enables Bayes-factor model
selection for real-world problems that have been infeasible with previous
methods. Comment: 14 pages, 3 figures, submitted to Bayesian Analysis
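The enhanced scheme (restrict to a small high-posterior region, then integrate naively within it) rests on the identity Z = (∫_R p dθ) / Pr(θ ∈ R): the numerator is a naive Monte Carlo integral of the unnormalized posterior over a region R, and the denominator is the fraction of chain samples falling in R. The sketch below is a minimal illustration under assumed names, using a 2-D Gaussian whose evidence is known analytically (Z = 2π); it is not the volume-tessellation code of Weinberg (2012).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Unnormalized log-posterior: standard 2-D Gaussian, true Z = 2*pi
    return -0.5 * np.sum(x**2, axis=-1)

# Posterior samples; a real application would use MCMC output instead
samples = rng.standard_normal((200_000, 2))

# Keep the top 50% of samples by posterior density; bound them with a box R
lp = log_post(samples)
keep = samples[lp >= np.median(lp)]
lo, hi = keep.min(axis=0), keep.max(axis=0)
volume = np.prod(hi - lo)

# Denominator: fraction of posterior mass inside R, from the chain itself
inside = np.all((samples >= lo) & (samples <= hi), axis=1)
frac = inside.mean()

# Numerator: naive Monte Carlo integral of the unnormalized posterior over R
u = rng.uniform(lo, hi, size=(200_000, 2))
integral = volume * np.exp(log_post(u)).mean()

Z_hat = integral / frac
print(Z_hat, 2 * np.pi)  # estimate vs. exact evidence
```

Restricting to the high-density box keeps the integrand well sampled everywhere in R, which is what makes the naive estimator accurate despite its simplicity.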
Theoretical correction to the neutral meson asymmetry
Certain types of asymmetries in neutral meson physics have not been treated
properly: differences in the normalization factors are ignored under the
assumption that the total decay widths are equal. Since the corrected
asymmetries differ from the known asymmetries by a shift at first order in the
CP- and CPT-violation parameters, experimental data should be analyzed with
this effect taken into account, as elsewhere in meson physics. Comment: 7 pages
The metallicity dependence of envelope inflation in massive stars
Recently it has been found that models of massive stars reach the Eddington
limit in their interior, which leads to dilute extended envelopes. We perform a
comparative study of the envelope properties of massive stars at different
metallicities, with the aim of establishing the impact of stellar metallicity
on envelope inflation. We analyse published grids of
core-hydrogen burning massive star models computed with metallicities
appropriate for massive stars in the Milky Way, the LMC and the SMC, the very
metal poor dwarf galaxy I Zwicky 18, and for metal-free chemical composition.
Stellar models of all the investigated metallicities reach and exceed the
Eddington limit in their interior, aided by the opacity peaks of iron, helium
and hydrogen, and consequently develop inflated envelopes. Envelope inflation
leads to a redward bending of the zero-age main sequence and a broadening of
the main sequence band in the upper part of the Hertzsprung-Russell diagram. We
derive the limiting L/M values above which inflation occurs as a function of
the stellar surface temperature, and find them to be larger at lower metallicity.
While Galactic models show inflation above ~29 Msun, the corresponding mass
limit for Population III stars is ~150 Msun. While the masses of the inflated
envelopes are generally small, we find that they can reach 1-100 Msun in models
with effective temperatures below ~8000 K, with higher masses reached by models
of lower metallicity. Envelope inflation is expected to occur in sufficiently
massive stars at all metallicities, and is expected to lead to rapidly growing
pulsations, high macroturbulent velocities, and might well be related to the
unexplained variability observed in Luminous Blue Variables like S Doradus and
Eta Carinae. Comment: 16 pages (with Appendix), accepted in A&A
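For orientation on the limiting L/M values discussed above, the classical Eddington luminosity-to-mass ratio, L_Edd/M = 4πGc/κ, can be evaluated directly. The sketch below uses standard textbook constants and an illustrative electron-scattering opacity; it is not the tailored opacity tables used in the published model grids.

```python
import math

# Physical constants in cgs units
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm s^-1]
L_sun = 3.828e33    # solar luminosity [erg s^-1]
M_sun = 1.989e33    # solar mass [g]

def eddington_L_over_M(kappa):
    """L_Edd / M = 4*pi*G*c / kappa, returned in solar units
    (L/L_sun)/(M/M_sun), for opacity kappa in cm^2 g^-1."""
    return 4 * math.pi * G * c / kappa * (M_sun / L_sun)

# Electron-scattering opacity for a hydrogen-rich mixture
print(eddington_L_over_M(0.34))  # roughly 3.8e4 in solar units

# At an iron opacity bump, kappa can be several times larger, so the local
# Eddington limit drops accordingly -- allowing sub-surface layers to reach
# it well below the classical electron-scattering value
print(eddington_L_over_M(1.0))
```

This is why the opacity peaks of iron, helium, and hydrogen matter: they raise κ locally, lowering the local Eddington limit so that layers of the envelope can exceed it and inflate.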
A new approach to analysing human-related accidents by combined use of HFACS and activity theory-based method
This study proposes a new method for modelling and analysing human-related accidents. It integrates HFACS (Human Factors Analysis and Classification System), which addresses most socio-technical system levels and offers a comprehensive failure taxonomy for analysing human errors, with an AT (Activity Theory)-based approach, which provides an effective way of considering various contextual factors systematically in accident investigation. By combining them, the proposed method makes it more efficient to use the concepts and principles of AT. Additionally, it can help analysts use the HFACS taxonomy more coherently to identify meaningful causal factors on a sound theoretical basis of human activities. The proposed method can therefore be used to mitigate the limitations of traditional approaches to accident analysis, such as over-relying on a single causality model or fixating on one root cause, by making analysts examine an accident from a range of perspectives. To demonstrate its usefulness, we conducted a case study in nuclear power plants. The case study confirmed that the method is useful for modelling and analysing human-related accidents, enabling analysts to identify a plausible set of causal factors efficiently through methodical consideration of the contextual backgrounds surrounding human activities.