
    Constrained invariant mass distributions in cascade decays. The shape of the "$m_{qll}$-threshold" and similar distributions

    Considering the cascade decay $D \to c C \to c b B \to c b a A$, in which $D, C, B, A$ are massive particles and $c, b, a$ are massless particles, we determine for the first time the shape of the distribution of the invariant mass $m_{abc}$ of the three massless particles for the sub-set of decays in which the invariant mass $m_{ab}$ of the last two particles in the chain is (optionally) constrained to lie inside an arbitrary interval, $m_{ab} \in [m_{ab}^\text{cut min}, m_{ab}^\text{cut max}]$. An example of an experimentally important distribution of this kind is the "$m_{qll}$ threshold" -- the distribution of the combined invariant mass of the visible standard model particles radiated from the hypothesised decay of a squark to the lightest neutralino via successive two-body decays, $\tilde{q} \to q \tilde{\chi}_2^0 \to q l \tilde{l} \to q l l \tilde{\chi}_1^0$, in which the experimenter additionally requires that $m_{ll}$ be greater than $m_{ll}^{\max}/\sqrt{2}$. The location of the "foot" of this distribution is often used to constrain sparticle mass scales. The new results presented here permit the location of this foot to be better understood, as the shape of the distribution is derived. The effects of varying the position of the $m_{ll}$ cut(s) may now be seen more easily. Comment: 12 pages, 3 figures
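As an illustration of the construction studied in the abstract, the toy Monte Carlo below generates the chain $D \to c C \to c b B \to c b a A$ with massless $a, b, c$ and compares the endpoint of $m_{abc}$ over all events with its threshold after an $m_{ab} > m_{ab}^{\max}/\sqrt{2}$ style cut. It is a minimal sketch: the mass spectrum, sample size, and cut value are hypothetical choices for illustration, and none of the paper's analytic shapes are reproduced here.

```python
# Toy Monte Carlo for D -> c C -> c b B -> c b a A (a, b, c massless).
# Masses below are hypothetical values chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
mD, mC, mB, mA = 600.0, 300.0, 150.0, 100.0   # GeV, illustrative

def boost(q, P, m_parent):
    """Boost 4-vector q from the rest frame of a particle with lab
    4-momentum P and mass m_parent into the lab frame."""
    b = P[1:] / P[0]
    b2 = b @ b
    if b2 == 0.0:
        return q
    gamma = P[0] / m_parent
    bq = b @ q[1:]
    E = gamma * (q[0] + bq)
    p3 = q[1:] + ((gamma - 1.0) * bq / b2 + gamma * q[0]) * b
    return np.array([E, *p3])

def two_body_decay(P, m_parent, m1, m2):
    """Isotropic two-body decay in the parent rest frame; return the
    daughters' lab-frame 4-momenta."""
    p = np.sqrt((m_parent**2 - (m1 + m2)**2) *
                (m_parent**2 - (m1 - m2)**2)) / (2.0 * m_parent)
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    p3 = p * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    d1 = np.array([np.sqrt(m1**2 + p**2), *p3])
    d2 = np.array([np.sqrt(m2**2 + p**2), *(-p3)])
    return boost(d1, P, m_parent), boost(d2, P, m_parent)

def inv_mass(*vs):
    s = sum(vs)
    return np.sqrt(max(s[0]**2 - s[1:] @ s[1:], 0.0))

m_abc, m_ab = [], []
D = np.array([mD, 0.0, 0.0, 0.0])             # parent decayed at rest
for _ in range(20000):
    c, C = two_body_decay(D, mD, 0.0, mC)
    b_, B = two_body_decay(C, mC, 0.0, mB)
    a, A = two_body_decay(B, mB, 0.0, mA)
    m_abc.append(inv_mass(a, b_, c))
    m_ab.append(inv_mass(a, b_))

m_abc, m_ab = np.array(m_abc), np.array(m_ab)
cut = m_ab > m_ab.max() / np.sqrt(2.0)        # the m_ab cut discussed above
print("m_abc endpoint (all events):  %.1f GeV" % m_abc.max())
print("m_abc threshold (cut events): %.1f GeV" % m_abc[cut].min())
```

The location of the "foot" printed in the last line is exactly the quantity whose shape the paper derives analytically; moving the cut value in the sketch shows how the threshold shifts.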

    Improving estimates of the number of fake leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE. (Previously "The Matrix Method Reloaded")

    We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example, jets misidentified as leptons, b-jets, or photons. Many ATLAS and CMS analyses have used a heuristic matrix method for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake rate estimates and limits with better statistical properties, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach. Comment: v1: 11 pages, 5 figures. v2: title change requested by referee, and other corrections/clarifications found during review. v3: final tweaks suggested during review + move from revtex to JHEP style
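For reference, a minimal single-lepton version of the heuristic matrix method the abstract critiques can be sketched as a 2x2 linear solve. The efficiencies and event counts below are hypothetical numbers chosen only to show the mechanics; consistent with the shortcomings the abstract describes, nothing prevents the solved yields from fluctuating to unphysical (negative) values.

```python
# Single-lepton matrix method sketch with made-up inputs (not from the paper).
import numpy as np

r = 0.85   # probability a real lepton passing "loose" also passes "tight" (hypothetical)
f = 0.15   # probability a fake object passing "loose" also passes "tight" (hypothetical)

N_tight = 900.0             # observed events with a tight lepton (hypothetical)
N_loose_not_tight = 400.0   # observed events with a loose-but-not-tight lepton (hypothetical)

# Observed counts are modelled as a linear mix of the unknown numbers of
# real and fake leptons in the loose sample:
#   N_tight           = r * N_real + f * N_fake
#   N_loose_not_tight = (1 - r) * N_real + (1 - f) * N_fake
M = np.array([[r,       f],
              [1.0 - r, 1.0 - f]])
N_real, N_fake = np.linalg.solve(M, [N_tight, N_loose_not_tight])

# Estimated fake contamination of the tight (signal) selection:
print("estimated fakes in the tight sample: %.1f" % (f * N_fake))
```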

    Dual Mutation Events in the Haemagglutinin-Esterase and Fusion Protein from an Infectious Salmon Anaemia Virus HPR0 Genotype Promote Viral Fusion and Activation by an Ubiquitous Host Protease

    Funding: The Scottish Government funded this work, as part of their global budget on aquaculture research. The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Peer reviewed. Publisher PDF

    An adaptive multi-level simulation algorithm for stochastic biological systems

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more computationally efficient, the system statistics these algorithms generate suffer from significant bias unless $\tau$ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method (Anderson and Higham, Multiscale Model. Simul. 2012) tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths, where one path of each pair is generated at a higher accuracy than the other (and is therefore more expensive). By sharing random variables between these paired paths the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of $\tau$. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach in which $\tau$ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples. Comment: 23 pages
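To make the exact-versus-approximate trade-off concrete, here is a minimal sketch contrasting the Gillespie algorithm with fixed-step tau-leaping on a toy birth-death process; the reaction system, rate constants, and step size are hypothetical, and the paper's adaptive multi-level scheme is not reproduced.

```python
# Toy birth-death process:  0 -> X at rate k_b,  X -> 0 at rate k_d * X.
# All parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
k_b, k_d, x0, T = 10.0, 0.1, 0, 50.0

def gillespie(x0, T):
    """Exact SSA: simulate every individual reaction event."""
    t, x = 0.0, x0
    while True:
        a = np.array([k_b, k_d * x])       # propensities of the two reactions
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)     # waiting time to the next reaction
        if t > T:
            return x
        if rng.uniform() * a0 < a[0]:
            x += 1                          # birth
        else:
            x -= 1                          # death

def tau_leap(x0, T, tau):
    """Approximate: fire Poisson numbers of reactions per fixed step tau."""
    t, x = 0.0, x0
    while t < T:
        births = rng.poisson(k_b * tau)
        deaths = rng.poisson(k_d * x * tau)
        x = max(x + births - deaths, 0)     # crude guard against negative copy numbers
        t += tau
    return x

exact = [gillespie(x0, T) for _ in range(500)]
leap = [tau_leap(x0, T, tau=0.5) for _ in range(500)]
print("exact mean: %.1f   tau-leap mean: %.1f   (stationary mean k_b/k_d = %.0f)"
      % (np.mean(exact), np.mean(leap), k_b / k_d))
```

The tau-leap path is cheaper per unit time but biased for coarse $\tau$; the multi-level idea described above uses many such cheap paths plus a small number of paired finer/coarser paths to correct that bias.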

    The Economic Value of Rebuilding Fisheries

    The global demand for protein from seafood -- whether wild-caught or cultured, whether for direct consumption or as feed for livestock -- is high and projected to continue growing. However, the ocean's ability to meet this demand is uncertain due to either mismanagement or, in some cases, lack of management of marine fish stocks. Efforts to rebuild and recover the world's fisheries will benefit from an improved understanding of the long-term economic benefits of recovering collapsed stocks, the trajectory and duration of different rebuilding approaches, variation in the value and timing of recovery for fisheries with different economic, biological, and regulatory characteristics (including identifying which fisheries are likely to benefit most from recovery), and the benefits of avoiding collapse in the first place. These questions are addressed in this paper using a dynamic bioeconomic optimisation model that explicitly accounts for the economics, management, and ecology of size-structured exploited fish populations. Within this model framework, different management options (effort controls on small-, medium-, and large-sized fish), including management that optimises economic returns over a specified planning horizon, are simulated and the consequences compared. The results show considerable economic gains from rebuilding fisheries, with magnitudes varying across fisheries.
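A heavily simplified sketch of the rebuilding trade-off (a single-stock surplus-production model rather than the paper's size-structured bioeconomic model, with entirely hypothetical parameter values) compares the net present value of continued heavy fishing with a moratorium followed by moderate effort.

```python
# Toy logistic bioeconomic model; every number here is a made-up illustration.
r, K = 0.3, 1000.0        # intrinsic growth rate, carrying capacity
price, cost = 5.0, 1.0    # price per unit catch, cost per unit effort
q = 0.01                  # catchability coefficient
delta = 0.05              # annual discount rate
B0, T = 150.0, 50         # depleted initial biomass, planning horizon (years)

def npv(effort_schedule):
    """Discounted profit of a given effort path under logistic stock dynamics."""
    B, total = B0, 0.0
    for t, E in enumerate(effort_schedule):
        catch = min(q * E * B, B)
        profit = price * catch - cost * E
        total += profit / (1 + delta) ** t
        B = max(B + r * B * (1 - B / K) - catch, 0.0)
    return total

status_quo = [50.0] * T                    # keep fishing hard on the depleted stock
rebuild = [0.0] * 10 + [20.0] * (T - 10)   # 10-year moratorium, then moderate effort
print("NPV status quo: %.0f" % npv(status_quo))
print("NPV rebuild:    %.0f" % npv(rebuild))
```

In this toy setting the rebuild policy forgoes a decade of revenue but earns a far larger discounted total, the qualitative pattern the abstract reports; the paper's size-structured optimisation explores when and by how much this holds across real fisheries.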

    A Storm in a "T" Cup

    We revisit the processes of transversification and agglomeration of particle momenta that are often performed in analyses at hadron colliders, and show that many of the existing mass-measurement variables proposed for hadron colliders are far more closely related to each other than is widely appreciated, and indeed can all be viewed as a common mass bound specialized for a variety of purposes. Comment: 3 pages, 2 figures, presented by K.C. Kong at the 19th Particles and Nuclei International Conference, PANIC 2011, MIT, Cambridge, MA (July 24-29, 2011)
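As a small concrete example of the "transversification" step the abstract refers to, the sketch below projects a visible momentum and the missing momentum onto the transverse plane and forms the transverse mass $m_T$, one member of the family of variables being related; the event values are made up for illustration.

```python
# Transverse mass of a visible object plus missing momentum (illustrative event).
import numpy as np

def transverse_mass(p_vis, met, m_vis=0.0, m_inv=0.0):
    """m_T^2 = m_vis^2 + m_inv^2 + 2 (e_T^vis e_T^inv - p_T^vis . p_T^inv),
    with e_T = sqrt(m^2 + |p_T|^2); only the (px, py) components are used."""
    pt_vis = np.array(p_vis[:2])
    pt_inv = np.array(met[:2])
    et_vis = np.sqrt(m_vis**2 + pt_vis @ pt_vis)
    et_inv = np.sqrt(m_inv**2 + pt_inv @ pt_inv)
    mt2 = m_vis**2 + m_inv**2 + 2.0 * (et_vis * et_inv - pt_vis @ pt_inv)
    return np.sqrt(max(mt2, 0.0))

# e.g. a lepton and missing transverse momentum from a W -> l nu candidate
p_lepton = (40.0, 10.0, 5.0)   # (px, py, pz) in GeV, hypothetical
missing = (-35.0, -15.0)       # (px, py) of the missing momentum, hypothetical
print("m_T = %.1f GeV" % transverse_mass(p_lepton, missing))
```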