Constrained invariant mass distributions in cascade decays: the shape of the "m_qll-threshold" and similar distributions
Considering the cascade decay D → cC → cbB → cbaA, in which D, C, B and A are massive particles and c, b and a are massless particles, we determine for the first time the shape of the distribution of the invariant mass m_abc of the three massless particles for the sub-set of decays in which the invariant mass m_ab of the last two particles in the chain is (optionally) constrained to lie inside an arbitrary interval, m_ab ∈ [m_ab^cut-min, m_ab^cut-max]. An example of an experimentally important distribution of this kind is the "m_qll threshold" -- which is the distribution of the combined invariant mass of the visible standard-model particles radiated from the hypothesised decay of a squark to the lightest neutralino via the successive two-body decays q̃ → q χ̃₂⁰ → q l l̃ → q l l χ̃₁⁰, in which the experimenter additionally requires that m_ll be greater than m_ll^max/√2. The location of the "foot" of this distribution is often used to constrain sparticle mass scales. The new results presented here permit the location of this foot to be better understood, since the shape of the distribution is derived, and the effects of varying the position of the m_ll cut(s) may now be seen more easily.
Comment: 12 pages, 3 figures
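Though the paper's result is analytic, the distribution is easy to explore numerically. Below is a minimal Monte Carlo sketch (not the paper's derivation; the mass spectrum and cut value are illustrative placeholders) that generates the cascade D → cC → cbB → cbaA through isotropic two-body decays and compares the m_abc distribution with and without the m_ab cut.

```python
import numpy as np

rng = np.random.default_rng(0)

def iso_dirs(n):
    """n isotropic unit vectors."""
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

def boost(p4, beta):
    """Boost four-vectors p4 (n,4) by velocity vectors beta (n,3)."""
    b2 = np.sum(beta**2, axis=1)
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.sum(beta * p4[:, 1:], axis=1)
    coef = (gamma - 1.0) * bp / np.where(b2 > 0, b2, 1.0) + gamma * p4[:, 0]
    return np.column_stack([gamma * (p4[:, 0] + bp), p4[:, 1:] + beta * coef[:, None]])

def two_body(parent, M, m_child):
    """Isotropic two-body decay: parent (mass M) -> massless particle + child (mass m_child)."""
    n = len(parent)
    pstar = (M**2 - m_child**2) / (2.0 * M)   # momentum in the parent rest frame
    nhat = iso_dirs(n)
    massless = np.column_stack([np.full(n, pstar), pstar * nhat])
    child = np.column_stack([np.full(n, np.sqrt(pstar**2 + m_child**2)), -pstar * nhat])
    beta = parent[:, 1:] / parent[:, 0:1]
    return boost(massless, beta), boost(child, beta)

def minv(*p4s):
    s = sum(p4s)
    return np.sqrt(np.maximum(s[:, 0]**2 - np.sum(s[:, 1:]**2, axis=1), 0.0))

# entirely illustrative (hypothetical) mass spectrum for D, C, B, A
mD, mC, mB, mA = 540.0, 180.0, 145.0, 97.0
N = 200_000
D = np.column_stack([np.full(N, mD), np.zeros((N, 3))])   # D produced at rest
c, C = two_body(D, mD, mC)
b, B = two_body(C, mC, mB)
a, A = two_body(B, mB, mA)

m_ab, m_abc = minv(a, b), minv(a, b, c)
m_ab_edge = np.sqrt((mC**2 - mB**2) * (mB**2 - mA**2)) / mB   # analytic m_ab endpoint
cut = m_ab > m_ab_edge / np.sqrt(2)                           # the m_ll > m_ll^max/sqrt(2) requirement
for label, sel in [("no m_ab cut", slice(None)), ("with m_ab cut", cut)]:
    print(label, "-> foot of m_abc at about", round(float(m_abc[sel].min()), 1))
```

With the cut applied, the minimum of m_abc (the "foot") moves to a nonzero value whose location depends on the mass spectrum, which is what the paper's derived shapes make precise.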
Improving estimates of the number of fake leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE. (Previously "The Matrix Method Reloaded")
We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects: for example, jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic matrix method for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake-rate estimates and limits of better quality, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
Comment: v1: 11 pages, 5 figures. v2: title change requested by referee, and other corrections/clarifications found during review. v3: final tweaks suggested during review + move from revtex to JHEP style
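For concreteness, here is a minimal sketch of the heuristic single-lepton matrix method that the abstract critiques (the counts and efficiencies are invented; the paper's rigorous and hybrid alternatives are not reproduced here). A loose sample containing N_real real and N_fake fake leptons yields N_tight = r·N_real + f·N_fake tight events, and inverting this 2x2 system gives the fake contribution to the tight selection.

```python
def matrix_method(n_loose, n_tight, r, f):
    """Heuristic single-lepton matrix method.
    n_loose: events passing the loose selection (a superset of tight)
    n_tight: events passing the tight selection
    r, f: probabilities that a real / fake loose lepton also passes tight
    Returns the estimated fake-lepton contribution to the tight sample."""
    n_fake_loose = (r * n_loose - n_tight) / (r - f)   # invert the 2x2 system
    return f * n_fake_loose

# illustrative numbers: 1000 loose events, 620 tight, r = 0.9, f = 0.2
print(matrix_method(1000, 620, 0.9, 0.2))  # ~80 fake events in the tight sample
```

Note that nothing in this estimator prevents negative yields when the counts fluctuate, one symptom of the statistical shortcomings discussed above.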
Ring identification and pattern recognition in ring imaging Cerenkov (RICH) detectors.
An algorithm for identifying rings in Ring Imaging Cherenkov (RICH) detectors is
described. The algorithm is necessarily Bayesian and makes use of a Metropolis-
Hastings Markov chain Monte Carlo sampler to locate the rings. In particular, the
sampler employs a novel proposal function whose form is responsible for significant
speed improvements over similar methods. The method is optimised for finding
multiple overlapping rings in detectors which can be modelled well by the LHCb RICH toy model described herein.
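As a hedged illustration of the general approach only (a plain Gaussian random-walk Metropolis-Hastings sampler for a single ring, not the paper's multi-ring algorithm or its novel proposal function), the sketch below recovers a ring's centre and radius from radially smeared hits under an implicit flat prior.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic hits from one ring: centre (1, 2), radius 3, radial smearing sigma
true_c, true_r, sigma = np.array([1.0, 2.0]), 3.0, 0.1
phi = rng.uniform(0.0, 2.0 * np.pi, 40)
hits = true_c + (true_r + rng.normal(0.0, sigma, 40))[:, None] \
       * np.column_stack([np.cos(phi), np.sin(phi)])

def log_like(theta):
    """Gaussian likelihood of the radial residuals for ring (cx, cy, r)."""
    cx, cy, r = theta
    d = np.hypot(hits[:, 0] - cx, hits[:, 1] - cy)
    return -0.5 * np.sum(((d - r) / sigma)**2)

theta, ll, samples = np.array([0.0, 0.0, 1.0]), None, []
ll = log_like(theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 3)      # symmetric random-walk proposal
    llp = log_like(prop)
    if np.log(rng.uniform()) < llp - ll:         # Metropolis-Hastings accept step
        theta, ll = prop, llp
    samples.append(theta.copy())
posterior = np.array(samples[5000:])             # discard burn-in
print(posterior.mean(axis=0))                    # roughly [1, 2, 3]
```

The paper's speed gains come precisely from replacing this kind of generic proposal with one shaped to the ring-finding problem.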
Dual Mutation Events in the Haemagglutinin-Esterase and Fusion Protein from an Infectious Salmon Anaemia Virus HPR0 Genotype Promote Viral Fusion and Activation by a Ubiquitous Host Protease
Funding: The Scottish Government funded this work as part of its global budget on aquaculture research. The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Peer reviewed. Publisher PDF.
An adaptive multi-level simulation algorithm for stochastic biological systems
Discrete-state, continuous-time Markov models are widely used in the modeling
of biochemical reaction networks. Their complexity often precludes analytic
solution, and we rely on stochastic simulation algorithms to estimate system
statistics. The Gillespie algorithm is exact, but computationally costly as it
simulates every single reaction. As such, approximate stochastic simulation
algorithms, such as the tau-leap algorithm, are often used. Although potentially more computationally efficient, these algorithms generate system statistics that suffer from significant bias unless the time step τ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The
multi-level method (Anderson and Higham, Multiscale Model. Simul. 2012) tackles
this problem. A base estimator is computed using many (cheap) sample paths at
low accuracy. The bias inherent in this estimator is then reduced using a
number of corrections. Each correction term is estimated using a collection of
paired sample paths where one path of each pair is generated at a higher
accuracy compared to the other (and so more expensive). By sharing random
variables between these paired paths the variance of each correction estimator
can be reduced. This renders the multi-level method very efficient as only a
relatively small number of paired paths are required to calculate each
correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach, in which τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
Comment: 23 pages
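The variance-reduction mechanism can be sketched for a single correction term. The toy below simulates coupled (fine, coarse) tau-leap paths for the pure decay X → ∅ with propensity a(x) = c·x, splitting each propensity into a shared Poisson channel of rate min(a_fine, a_coarse) plus level-specific remainders, in the style of Anderson and Higham; the fixed τ, the reaction and all parameters are illustrative, and the paper's adaptive time-stepping is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def coupled_pair(x0, c, T, tau_f, M=2):
    """One (fine, coarse) tau-leap pair for X -> 0 with propensity a(x) = c*x.
    The coarse path uses step M*tau_f; both paths draw from a shared Poisson
    channel of rate min(a_fine, a_coarse), which keeps their difference small."""
    xf, xc, t = x0, x0, 0.0
    while t < T:
        ac = c * xc                 # coarse propensity, frozen over the coarse step
        fired_c = 0
        for _ in range(M):          # fine substeps within one coarse step
            af = c * xf
            m = min(af, ac)
            shared = rng.poisson(m * tau_f)                 # common randomness
            xf = max(xf - shared - rng.poisson((af - m) * tau_f), 0)
            fired_c += shared + rng.poisson((ac - m) * tau_f)
        xc = max(xc - fired_c, 0)
        t += M * tau_f
    return xf, xc

# the correction estimator E[X_fine - X_coarse] has small variance because the
# paths share randomness, so only a few coupled pairs are needed
pairs = np.array([coupled_pair(1000, 1.0, 1.0, 0.02) for _ in range(500)])
diff = pairs[:, 0] - pairs[:, 1]
print("mean correction:", diff.mean(), " std:", diff.std())
print("independent-path std would be roughly", np.sqrt(2) * pairs[:, 0].std())
```

The printed standard deviations show why the coupling matters: the difference estimator is far less noisy than subtracting two independently simulated paths.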
The Economic Value of Rebuilding Fisheries
The global demand for protein from seafood -- whether wild-caught or cultured, whether for direct consumption or as feed for livestock -- is high and projected to continue growing. However, the ocean's ability to meet this demand is uncertain, due to either mismanagement or, in some cases, a lack of management of marine fish stocks. Efforts to rebuild and recover the world's fisheries will benefit from an improved understanding of: the long-term economic benefits of recovering collapsed stocks; the trajectory and duration of different rebuilding approaches; the variation in the value and timing of recovery for fisheries with different economic, biological, and regulatory characteristics, including which fisheries are likely to benefit most from recovery; and the benefits of avoiding collapse in the first place. These questions are addressed in this paper using a dynamic bioeconomic optimisation model that explicitly accounts for the economics, management, and ecology of size-structured exploited fish populations. Within this model framework, different management options (effort controls on small-, medium-, and large-sized fish), including management that optimises economic returns over a specified planning horizon, are simulated and the consequences compared. The results show considerable economic gains from rebuilding fisheries, with magnitudes varying across fisheries.
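As a purely schematic illustration of this kind of framework (not the paper's model; every parameter, size class, and functional form below is invented), one can compare the discounted returns of effort directed at different size classes in a toy size-structured population.

```python
import numpy as np

def npv(effort, years=50, discount=0.05):
    """Schematic three-class (small/medium/large) bioeconomic model.
    effort: fishing effort applied to each size class."""
    effort = np.asarray(effort, dtype=float)
    n = np.array([500.0, 300.0, 200.0])      # initial numbers per size class
    price = np.array([1.0, 2.0, 4.0])        # value per fish rises with size
    q, cost, surv, grow = 0.01, 5.0, 0.8, 0.3
    total = 0.0
    for t in range(years):
        catch = np.minimum(q * effort * n, n)          # harvest per class
        n = n - catch
        total += (price @ catch - cost * effort.sum()) / (1 + discount)**t
        rec = 200.0 * n[2] / (100.0 + n[2])            # Beverton-Holt-style recruitment
        n = np.array([surv * (1 - grow) * n[0] + rec,  # survival plus growth up a class
                      surv * (1 - grow) * n[1] + surv * grow * n[0],
                      surv * n[2] + surv * grow * n[1]])
    return total

# equal total effort targeted at small vs large fish
print("harvest small:", npv([30, 0, 0]))
print("harvest large:", npv([0, 0, 30]))
```

Even this toy reproduces the qualitative point that where effort falls in the size structure changes both the stock trajectory and the discounted economic return.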
A Storm in a "T" Cup
We revisit the processes of transversification and agglomeration of particle momenta that are often performed in analyses at hadron colliders, and show that many of the existing mass-measurement variables proposed for hadron colliders are far more closely related to each other than is widely appreciated; indeed, all of them can be viewed as a common mass bound specialized for a variety of purposes.
Comment: 3 pages, 2 figures, presented by K.C. Kong at the 19th Particles and Nuclei International Conference, PANIC 2011, MIT, Cambridge, MA (July 24-29, 2011)
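As one concrete member of the family of variables alluded to (an illustrative sketch, not the paper's unified mass-bound formalism), the transverse mass pairs a visible system with an invisible system of assumed mass and is bounded above by the parent's mass in decays of that parent.

```python
import numpy as np

def transverse_mass(m_vis, pt_vis, m_inv, pt_inv):
    """Transverse mass of a visible system (mass m_vis, transverse-momentum
    2-vector pt_vis) paired with an invisible system of assumed mass m_inv
    carrying the missing transverse momentum pt_inv."""
    et_vis = np.sqrt(m_vis**2 + np.dot(pt_vis, pt_vis))   # transverse energies
    et_inv = np.sqrt(m_inv**2 + np.dot(pt_inv, pt_inv))
    mt2 = m_vis**2 + m_inv**2 + 2.0 * (et_vis * et_inv - np.dot(pt_vis, pt_inv))
    return np.sqrt(max(mt2, 0.0))

# e.g. a leptonic-W-like configuration: massless lepton and neutrino
print(transverse_mass(0.0, np.array([40.0, 0.0]), 0.0, np.array([-35.0, 10.0])))
```

Variables such as this one arise, in the paper's language, by transversifying momenta and agglomerating them into composite systems before applying a common mass bound.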
