
    Matching factors for Delta S=1 four-quark operators in RI/SMOM schemes

    The non-perturbative renormalization of four-quark operators plays a significant role in lattice studies of flavor physics. For this purpose, we define regularization-independent symmetric momentum-subtraction (RI/SMOM) schemes for Delta S=1 flavor-changing four-quark operators and provide one-loop matching factors to the MS-bar scheme in naive dimensional regularization. The mixing of two-quark operators is discussed in terms of two different classes of schemes. We provide a compact expression for the finite one-loop amplitudes which allows for a straightforward definition of further RI/SMOM schemes. Comment: 22 pages, 5 figures.

    A simple principle concerning the robustness of protein complex activity to changes in gene expression

    Background: The functions of a eukaryotic cell are largely performed by multi-subunit protein complexes that act as molecular machines or information processing modules in cellular networks. An important problem in systems biology is to understand how, in general, these molecular machines respond to perturbations. Results: In yeast, genes that inhibit growth when their expression is reduced are strongly enriched amongst the subunits of multi-subunit protein complexes. This applies to both the core and peripheral subunits of protein complexes, and the subunits of each complex normally have the same loss-of-function phenotypes. In contrast, genes that inhibit growth when their expression is increased are not enriched amongst the core or peripheral subunits of protein complexes, and the behaviour of one subunit of a complex is not predictive for the other subunits with respect to over-expression phenotypes. Conclusion: We propose the principle that the overall activity of a protein complex is in general robust to an increase, but not to a decrease in the expression of its subunits. This means that whereas phenotypes resulting from a decrease in gene expression can be predicted because they cluster on networks of protein complexes, over-expression phenotypes cannot be predicted in this way. We discuss the implications of these findings for understanding how cells are regulated, how they evolve, and how genetic perturbations connect to disease in humans.

    Head-on collisions of boson stars

    We study head-on collisions of boson stars in three dimensions. We consider evolutions of two boson stars which may differ in their phase or have opposite frequencies but are otherwise identical. Our studies show that these phase differences result in different late-time behavior and gravitational wave output.

    The Influence of Stellar Wind Variability on Measurements of Interstellar O VI Along Sightlines to Early-Type Stars

    A primary goal of the FUSE mission is to understand the origin of the O VI ion in the interstellar medium of the Galaxy and the Magellanic Clouds. Along sightlines to OB-type stars, these interstellar components are usually blended with O VI stellar wind profiles, which frequently vary in shape. In order to assess the effects of this time-dependent blending on measurements of the interstellar O VI lines, we have undertaken a mini-survey of repeated observations toward OB-type stars in the Galaxy and the Large Magellanic Cloud. These sparse time series, which consist of 2-3 observations separated by intervals ranging from a few days to several months, show that wind variability occurs commonly in O VI (about 60% of a sample of 50 stars), as indeed it does in other resonance lines. However, in the interstellar O VI λ1032 region, the O VI λ1038 wind varies in only ~30% of the cases. By examining cases exhibiting large amplitude variations, we conclude that stellar-wind variability generally introduces negligible uncertainty for single interstellar O VI components along Galactic lines of sight, but can result in substantial errors in measurements of broader components or blends of components like those typically observed toward stars in the Large Magellanic Cloud. Due to possible contamination by discrete absorption components in the stellar O VI line, stars with terminal velocities greater than or equal to the doublet separation (1654 km/s) should be treated with care. Comment: Accepted for publication in the Astrophysical Journal Letters.
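
    As a side note on the 1654 km/s criterion, the short sketch below reconstructs the O VI doublet separation in velocity units from the standard rest wavelengths (values assumed here, not quoted in the abstract) and flags stars whose wind terminal velocity reaches it. It is an illustrative check, not code from the survey.

```python
# Minimal sketch: where the ~1654 km/s threshold comes from.
# The O VI doublet rest wavelengths below are standard literature values
# assumed for illustration; they are not quoted in the abstract.

C_KMS = 299792.458        # speed of light, km/s
LAMBDA_1032 = 1031.926    # O VI rest wavelength, Angstrom (assumed value)
LAMBDA_1038 = 1037.617    # O VI rest wavelength, Angstrom (assumed value)

def doublet_separation_kms():
    """Velocity separation of the O VI doublet, evaluated at the 1032 line."""
    return C_KMS * (LAMBDA_1038 - LAMBDA_1032) / LAMBDA_1032

def wind_may_contaminate(v_terminal_kms):
    """Flag stars whose wind terminal velocity reaches the doublet separation,
    so discrete absorption components in the stellar O VI 1038 line can shift
    into the interstellar O VI 1032 region."""
    return v_terminal_kms >= doublet_separation_kms()

if __name__ == "__main__":
    # ~1653 km/s with these wavelengths, matching the ~1654 km/s quoted above
    print(f"Doublet separation: {doublet_separation_kms():.0f} km/s")
    for v in (1200.0, 1700.0, 2200.0):
        print(v, wind_may_contaminate(v))
```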

    AMR, stability and higher accuracy

    Efforts to achieve better accuracy in numerical relativity have so far focused either on implementing second order accurate adaptive mesh refinement or on defining higher order accurate differences and update schemes. Here, we argue for the combination, that is, a higher order accurate adaptive scheme. This combines the power that adaptive gridding techniques provide to resolve fine scales (in addition to a more efficient use of resources) with the higher accuracy furnished by higher order schemes when the solution is adequately resolved. To define a convenient higher order adaptive mesh refinement scheme, we discuss a few different modifications of the standard, second order accurate approach of Berger and Oliger. Applying each of these methods to a simple model problem, we find these options have unstable modes. However, a novel approach to dealing with the grid boundaries introduced by the adaptivity appears stable and quite promising for the use of high order operators within an adaptive framework.
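
    To make the Berger-Oliger subcycling structure mentioned above concrete, here is a minimal 1D sketch under stated assumptions: advection equation, one static refined patch, refinement factor 2, a first-order upwind operator, and linear time interpolation feeding the patch boundary. It is not the paper's higher order adaptive scheme; the paper's subject is precisely how such refinement boundaries should be treated when higher order operators replace the simple choices made here.

```python
# Berger-Oliger style time subcycling for u_t + u_x = 0 (illustrative sketch).
import numpy as np

c = 1.0                          # advection speed (to the right)
nx_c = 101
x_c = np.linspace(0.0, 1.0, nx_c)
dx_c = x_c[1] - x_c[0]
dt_c = 0.5 * dx_c / c            # CFL-limited coarse time step

r = 2                            # refinement factor in space and time
i_lo, i_hi = 40, 60              # coarse indices covered by the fine patch
x_f = np.linspace(x_c[i_lo], x_c[i_hi], (i_hi - i_lo) * r + 1)
dx_f, dt_f = dx_c / r, dt_c / r

def upwind_step(u, dx, dt, u_inflow):
    """One first-order upwind step; u_inflow sets the left (inflow) boundary."""
    unew = np.empty_like(u)
    unew[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
    unew[0] = u_inflow
    return unew

u_c = np.exp(-200.0 * (x_c - 0.3) ** 2)   # Gaussian pulse entering the patch
u_f = np.interp(x_f, x_c, u_c)            # prolongation onto the fine patch

for n in range(80):
    bdry_old = u_c[i_lo]                  # patch boundary value at time t
    u_c = upwind_step(u_c, dx_c, dt_c, u_inflow=0.0)
    bdry_new = u_c[i_lo]                  # patch boundary value at t + dt_c
    for s in range(r):                    # subcycle the fine patch
        theta = (s + 1) / r               # fraction of the coarse step completed
        u_f = upwind_step(u_f, dx_f, dt_f,
                          u_inflow=(1 - theta) * bdry_old + theta * bdry_new)
    u_c[i_lo:i_hi + 1] = u_f[::r]         # restriction: inject fine values back

print(float(u_c.max()))                   # pulse amplitude after crossing the patch
```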

    Hamiltonian Relaxation

    Due to the complexity of the required numerical codes, many of the new formulations for the evolution of the gravitational fields in numerical relativity are not tested on binary evolutions. We introduce in this paper a new testing ground for numerical methods based on the simulation of binary neutron stars. This numerical setup is used to develop a new technique, Hamiltonian relaxation (HR), that is benchmarked against the currently most stable simulations based on the BSSN method. We show that, while the HR run is somewhat shorter than the equivalent BSSN simulation, the HR technique improves the overall quality of the simulation, not only regarding the satisfaction of the Hamiltonian constraint, but also the behavior of the total angular momentum of the binary. The latter quantity agrees well with post-Newtonian estimates for point-mass binaries in circular orbits. Comment: More detailed description of the numerical implementation added and some typos corrected. Version accepted for publication in Class. and Quantum Gravity.

    The discrete energy method in numerical relativity: Towards long-term stability

    The energy method can be used to identify well-posed initial boundary value problems for quasi-linear, symmetric hyperbolic partial differential equations with maximally dissipative boundary conditions. A similar analysis of the discrete system can be used to construct stable finite difference equations for these problems at the linear level. In this paper we apply these techniques to some test problems commonly used in numerical relativity and observe that while we obtain convergent schemes, fast growing modes, or "artificial instabilities," contaminate the solution. We find that these growing modes can partially arise from the lack of a Leibniz rule for discrete derivatives and discuss ways to limit this spurious growth. Comment: 18 pages, 22 figures.
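
    As a rough illustration of the discrete energy method referred to above, the sketch below builds the standard second-order summation-by-parts (SBP) first-derivative operator and numerically checks the discrete energy identity it implies. The operator and norm are common textbook choices assumed here for illustration; they are not the paper's specific schemes.

```python
# Second-order SBP operator D = H^{-1} Q and its discrete energy identity.
import numpy as np

def sbp_2nd_order(n, dx):
    """Return (D, H): H is the diagonal SBP norm and Q + Q^T = B,
    where B = diag(-1, 0, ..., 0, 1)."""
    H = dx * np.eye(n)
    H[0, 0] = H[-1, -1] = 0.5 * dx
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    return np.linalg.solve(H, Q), H

n, dx = 21, 0.05
D, H = sbp_2nd_order(n, dx)
B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0

# SBP property: H D + (H D)^T == B.  For u_t = u_x the semi-discrete energy
# E = u^T H u then obeys dE/dt = u_N^2 - u_0^2, i.e. growth can enter only
# through the boundaries -- the discrete analogue of the continuum estimate.
print(np.allclose(H @ D + (H @ D).T, B))          # True

u = np.exp(np.linspace(0.0, 1.0, n))              # arbitrary smooth grid function
dEdt = 2.0 * u @ H @ (D @ u)                      # d/dt (u^T H u) for u_t = u_x
print(np.isclose(dEdt, u[-1] ** 2 - u[0] ** 2))   # True
```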

    Evaluating skills and issues of quantile-based bias adjustment for climate change scenarios

    Daily meteorological data such as temperature or precipitation from climate models are needed for many climate impact studies, e.g., in hydrology or agriculture, but direct model output can contain large systematic errors. A large variety of methods exist to adjust the bias of climate model outputs. Here we review existing statistical bias-adjustment methods and their shortcomings, and compare quantile mapping (QM), scaled distribution mapping (SDM), quantile delta mapping (QDM) and an empirical version of PresRAT (PresRATe). We then test these methods using real and artificially created daily temperature and precipitation data for Austria. We compare the performance in terms of the following demands: (1) the model data should match the climatological means of the observational data in the historical period; (2) the long-term climatological trends of means (climate change signal), whether defined as a difference or as a ratio, should not be altered during bias adjustment; and (3) even models with too few wet days (precipitation above 0.1 mm) should be corrected accurately, so that the wet-day frequency is conserved. QDM and PresRATe combined fulfill all three demands. For demand (2) applied to precipitation, PresRATe already includes an additional correction that ensures the climate change signal is conserved.
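
    As a rough illustration of two of the compared methods, the sketch below implements empirical quantile mapping (QM) and an additive quantile delta mapping (QDM) for a daily temperature series. The function names, the number of quantiles, and the synthetic data are illustrative assumptions, not the paper's implementation, which also handles precipitation, ratio-based change signals and wet-day frequency.

```python
# Empirical QM and additive QDM for daily temperature (illustrative sketch).
import numpy as np

def quantile_mapping(obs_hist, mod_hist, mod_fut, n_q=100):
    """Classic QM: map future model values through the transfer function built
    from historical model and observed quantiles.  Values outside the
    calibration range are clamped by np.interp, and QM can distort the
    model's own climate change signal (demand 2 above)."""
    q = np.linspace(0.0, 1.0, n_q)
    return np.interp(mod_fut, np.quantile(mod_hist, q), np.quantile(obs_hist, q))

def quantile_delta_mapping(obs_hist, mod_hist, mod_fut, n_q=100):
    """Additive QDM (suited to temperature): bias-adjust the future series
    while preserving the model's change in each quantile."""
    q = np.linspace(0.0, 1.0, n_q)
    # non-exceedance probability of each future value within the future period
    tau = np.interp(mod_fut, np.quantile(mod_fut, q), q)
    # additive change signal per quantile relative to the historical model
    delta = mod_fut - np.interp(tau, q, np.quantile(mod_hist, q))
    return np.interp(tau, q, np.quantile(obs_hist, q)) + delta

# Synthetic example: model is 2 K too warm; the future period adds a +3 K signal.
rng = np.random.default_rng(0)
obs_hist = 10.0 + 5.0 * rng.standard_normal(3650)
mod_hist = 12.0 + 5.0 * rng.standard_normal(3650)
mod_fut = 15.0 + 5.0 * rng.standard_normal(3650)

adj = quantile_delta_mapping(obs_hist, mod_hist, mod_fut)
print(round(adj.mean() - obs_hist.mean(), 2))   # close to the +3 K model signal
```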