Polaron effective mass from Monte Carlo simulations
A new Monte Carlo algorithm for calculating the polaron effective mass is proposed. It is based on the path-integral representation of a partial partition function with fixed total quasi-momentum. Phonon degrees of freedom are integrated out analytically, resulting in a single-electron system with retarded self-interaction and open boundary conditions in imaginary time. The effective mass is inversely proportional to the covariance of the total energy of an electron trajectory and the squared distance between the ends of the trajectory. The method has no limitations on the values of the model parameters or on the size and dimensionality of the system, although large statistics are required for stable numerical results. The method is tested on the one-dimensional Holstein model, for which simulation results are presented.

Comment: 4 pages + 1 figure, RevTeX. Accepted for publication as a Rapid Communication in Phys.Rev.
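The final averaging step described in this abstract, estimating the inverse effective mass from the covariance of trajectory energy and squared end-to-end distance, can be sketched as follows. This uses synthetic data rather than an actual path-integral simulation, and the proportionality prefactor (which depends on the inverse temperature and conventions not given in the abstract) is set to 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples from a path-integral Monte Carlo run: the total
# energy E_i of each sampled trajectory and the squared end-to-end
# distance d2_i of that trajectory (open boundaries in imaginary time).
# Here they are drawn from a toy correlated distribution for illustration.
n_samples = 100_000
energy = rng.normal(loc=-1.5, scale=0.3, size=n_samples)
d2 = 0.4 * energy + rng.normal(loc=2.0, scale=0.5, size=n_samples)

# Estimator sketched in the abstract: the inverse effective mass is
# proportional to Cov(E, d^2); the prefactor is omitted (set to 1).
cov = np.cov(energy, d2)[0, 1]
inverse_mass = cov  # up to the unspecified prefactor
print(f"Cov(E, d^2) ≈ {cov:.4f}")
```

Because the estimator is a covariance of two noisy quantities, the "large statistics" caveat in the abstract is visible directly: the statistical error of `cov` shrinks only as the square root of the number of sampled trajectories.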
Holistic facial composite systems: are they compatible with witness recall?
Facial composite systems offer a particular challenge to human-computer interaction, as they must facilitate several cognitively complex tasks and also aid communication between the operator and the witness. This paper presents the findings from a survey conducted with UK police composite operators that explored some of the issues involved in composite construction. A particular emphasis was placed on the information that witnesses report and its compatibility with both the composite system interface and the underlying construction method used by the system.
On the Localization of One-Photon States
Single photon states with arbitrarily fast asymptotic power-law fall-off of
energy density and photodetection rate are explicitly constructed. This goes
beyond the recently discovered tenth power-law of the Hellwarth-Nouchi photon
which itself superseded the long-standing seventh power-law of the Amrein
photon.

Comment: 7 pages, TeX, no figures
Pif1 Helicase Lengthens Some Okazaki Fragment Flaps Necessitating Dna2 Nuclease/Helicase Action in the Two-nuclease Processing Pathway
We have developed a system to reconstitute all of the proposed steps of Okazaki fragment processing using purified yeast proteins and model substrates. DNA polymerase δ was shown to extend an upstream fragment to displace a downstream fragment into a flap. In most cases, the flap was removed by flap endonuclease 1 (FEN1), in a reaction required to remove initiator RNA in vivo. The nick left after flap removal could be sealed by DNA ligase I to complete fragment joining. An alternative pathway involving FEN1 and the nuclease/helicase Dna2 has been proposed for flaps that become long enough to bind replication protein A (RPA). RPA binding can inhibit FEN1, but Dna2 can shorten RPA-bound flaps so that RPA dissociates. Recent reconstitution results indicated that Pif1 helicase, a known component of fragment processing, accelerated flap displacement, allowing the inhibitory action of RPA. In the results presented here, Pif1 stimulated DNA polymerase δ to displace strands long enough to bind RPA, but also long enough to serve as Dna2 substrates. Significantly, RPA binding to long flaps inhibited the formation of the final ligation products in the reconstituted system without Dna2. However, Dna2 reversed that inhibition to restore efficient ligation. These results suggest that the two-nuclease pathway is employed in cells to process long flap intermediates promoted by Pif1.
Approaches to multiplicity in publicly funded pragmatic randomised controlled trials: a survey of clinical trials units and a rapid review of published trials
BACKGROUND: Opinions and practices vary around the issue of performing multiple statistical tests in randomised controlled trials (RCTs). We carried out a study to collate information about opinions and practices using a methodological rapid review and a survey, specifically of publicly funded pragmatic RCTs that are not seeking marketing authorisation. The aim was to identify the circumstances under which researchers would make a statistical adjustment for multiplicity. METHODS: A review was performed extracting information from articles reporting primary analyses of pragmatic RCTs in one of seven high-quality medical journals, published between January and June 2018 (inclusive). A survey (Survey Monkey) eliciting opinions and practices around multiplicity was distributed to the 47 registered clinical trials units (CTUs) in the UK. RESULTS: One hundred and thirty-eight RCTs were included in the review, and survey responses were received from 27/47 (57%) CTUs. Both the review and survey indicated that adjusting for multiplicity was considered most important for multiple treatment comparisons; adjustment was performed for 11/23 (48%) published trials, and 24/27 (89%) CTU statisticians reported they would consider adjustment. Opinions and practices varied around adjustment for multiplicity arising from multiple primary outcomes and interim analyses. Adjustment was considered less important for multiplicity due to multiple secondary outcomes (adjustment performed for 17/136 [13%] published trials and 3/27 [11%] CTU statisticians would consider adjustment) and subgroup analyses (8/85 [9%] published trials adjusted and 6/27 [22%] CTU statisticians would consider adjustment). CONCLUSIONS: There is variation in opinions about adjustment for multiplicity among both statisticians reporting RCTs and applied statisticians working in CTUs.
Further guidance is needed on the circumstances in which adjustment should be considered in relation to primary trial hypotheses, and if there are any situations in which adjustment would be recommended in the context of secondary analyses. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-022-01525-9
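The most common scenario in which the surveyed statisticians would adjust, multiple treatment comparisons, is typically handled with a family-wise error rate (FWER) correction. As an illustrative sketch (the abstract does not say which method any given trial used), a Holm–Bonferroni step-down procedure for a family of comparisons can be written in a few lines:

```python
# Holm-Bonferroni step-down adjustment for a family of hypothesis tests.
# Controls the family-wise error rate at `alpha`; uniformly at least as
# powerful as plain Bonferroni. Illustrative sketch, not tied to any
# specific trial in the review.
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of reject/retain decisions, one per input p-value."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the rank-th smallest p-value against alpha / (m - rank).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return reject

# Three pairwise treatment comparisons against control (made-up p-values):
print(holm_bonferroni([0.012, 0.030, 0.041]))  # [True, False, False]
```

Only the smallest p-value survives here: 0.012 ≤ 0.05/3, but 0.030 > 0.05/2, so the step-down procedure stops.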
Photometric analysis of a space shuttle water venting
Presented here is a preliminary interpretation of a recent experiment conducted on Space Shuttle Discovery (Mission STS 29) in which a stream of liquid supply water was vented into space at twilight. The data consist of video images of the sunlight-scattering water/ice particle cloud that formed, taken by visible light-sensitive intensified cameras both onboard the spacecraft and at the AMOS ground station near the trajectory's nadir. This experiment was undertaken to study the phenomenology of water columns injected into the low-Earth orbital environment, and to provide information about the lifetime of ice particles that may later recontact Space Shuttle orbits. The findings about the composition of the cloud have relevance to ionospheric plasma depletion experiments and to the dynamics of the interaction of orbiting spacecraft with the environment.
Elasticity of Stiff Polymer Networks
We study the elasticity of a two-dimensional random network of rigid rods (the "Mikado model"). The essential features incorporated into the model are the anisotropic elasticity of the rods and the random geometry of the network. We show that there are three distinct scaling regimes, characterized by two distinct length scales on the elastic backbone. In addition to a critical rigidity percolation region and a homogeneously elastic regime, we find a novel intermediate scaling regime where elasticity is dominated by bending deformations.

Comment: 4 pages, 4 figures
A compositional monitoring framework for hard real-time systems
Runtime monitoring of hard real-time embedded systems is a promising technique for ensuring that a running system respects its timing constraints, even in the presence of faults originating in software and/or hardware. This is particularly important for real-time embedded systems composed of several components that must combine different levels of criticality and different levels of correctness requirements. This paper introduces a compositional monitoring framework coupled with guarantees that include time isolation and the response time of a monitor for a predicted violation. The monitors that we propose are generated automatically by synthesizing formulas of a timed temporal logic, and their correctness is ensured by construction.

This work was partially supported by National Funds through FCT (Portuguese Foundation for Science and Technology) and by ERDF (European Regional Development Fund) through COMPETE (Operational Programme ’Thematic Factors of Competitiveness’), within projects Ref. FCOMP-01-0124-FEDER-022701 (CISTER), FCOMP-01-0124-FEDER-015006 (VIPCORE) and FCOMP-01-0124-FEDER-020486 (AVIACC)
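To make the idea of a monitor for a timed temporal property concrete, here is a minimal hand-written sketch of a monitor for the bounded-response property "every request is followed by a response within `deadline` time units". This illustrates the kind of observer such a framework would synthesize; it is not the paper's synthesis procedure, and the event names and API are hypothetical:

```python
# Minimal sketch of a runtime monitor for a bounded-response property:
# "every request is answered within `deadline` time units".
# Illustrative only; the paper synthesizes such monitors automatically
# from timed temporal logic formulas rather than writing them by hand.
class ResponseTimeMonitor:
    def __init__(self, deadline: float):
        self.deadline = deadline
        self.pending = None  # timestamp of the unanswered request, if any

    def observe(self, event: str, time: float) -> str:
        # A violation is flagged as soon as any observation shows that
        # the pending request's deadline has elapsed.
        if self.pending is not None and time - self.pending > self.deadline:
            return "violation"
        if event == "request":
            self.pending = time
        elif event == "response":
            self.pending = None
        return "ok"

mon = ResponseTimeMonitor(deadline=5.0)
print(mon.observe("request", 0.0))   # ok
print(mon.observe("response", 3.0))  # ok  (answered within 5 units)
print(mon.observe("request", 4.0))   # ok
print(mon.observe("tick", 10.0))     # violation (no response within 5)
```

Running the monitor as a separate, time-isolated component, so that its own execution cannot perturb the deadlines it checks, is exactly the kind of guarantee the compositional framework above is concerned with.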
Theory of continuum percolation II. Mean field theory
I use a previously introduced mapping between the continuum percolation model
and the Potts fluid to derive a mean field theory of continuum percolation
systems. This is done by introducing a new variational principle, the basis of
which has to be taken, for now, as heuristic. The critical exponents obtained
are $\beta = 1$, $\gamma = 1$ and $\delta = 2$, which are identical to the mean field exponents of lattice percolation. The critical density in this approximation is $\rho_c = 1/V_e$, where $V_e = \int d\mathbf{x}\, p(\mathbf{x}) \{\exp[-v(\mathbf{x})/kT] - 1\}$. Here $p(\mathbf{x})$ is the binding probability of two particles separated by $\mathbf{x}$, and $v(\mathbf{x})$ is their interaction potential.

Comment: 25 pages, LaTeX
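The critical-density formula in this abstract reduces to a one-dimensional radial integral for isotropic interactions. As a hedged illustration (the model parameters and potential below are hypothetical, not from the paper), consider particles that bind with probability 1 and feel an attractive square well $v(r) = -\epsilon$ whenever their separation $r < R$ in three dimensions; the integral then has a closed form to check against:

```python
import numpy as np

# Hypothetical 3D attractive square-well model: particles bind (p = 1)
# and have v(r) = -eps whenever their separation r < R, else p = 0.
kT, eps, R = 1.0, 0.5, 1.0

# V_e = integral over space of p(x) * {exp[-v(x)/kT] - 1}, done as a
# radial integral with the 4*pi*r^2 volume element (trapezoidal rule).
r = np.linspace(0.0, R, 10_001)
integrand = (np.exp(eps / kT) - 1.0) * 4.0 * np.pi * r**2
dr = r[1] - r[0]
v_e = float(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dr)
rho_c = 1.0 / v_e

# Closed form for this potential: V_e = (4/3) pi R^3 (e^{eps/kT} - 1)
analytic = (4.0 / 3.0) * np.pi * R**3 * (np.exp(eps / kT) - 1.0)
print(f"V_e ≈ {v_e:.4f} (analytic {analytic:.4f}), rho_c ≈ {rho_c:.4f}")
```

Stronger attraction (larger $\epsilon/kT$) or longer binding range (larger $R$) increases $V_e$ and therefore lowers the mean-field percolation threshold $\rho_c$, as one would expect physically.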