
    Constrained invariant mass distributions in cascade decays. The shape of the "$m_{qll}$-threshold" and similar distributions

    Considering the cascade decay $D \to c C \to c b B \to c b a A$, in which $D, C, B, A$ are massive particles and $c, b, a$ are massless particles, we determine for the first time the shape of the distribution of the invariant mass $m_{abc}$ of the three massless particles for the subset of decays in which the invariant mass $m_{ab}$ of the last two particles in the chain is (optionally) constrained to lie inside an arbitrary interval, $m_{ab} \in [m_{ab}^{\text{cut min}}, m_{ab}^{\text{cut max}}]$. An example of an experimentally important distribution of this kind is the "$m_{qll}$ threshold", which is the distribution of the combined invariant mass of the visible Standard Model particles radiated from the hypothesised decay of a squark to the lightest neutralino via successive two-body decays, $\tilde{q} \to q \tilde{\chi}^0_2 \to q l \tilde{l} \to q l l \tilde{\chi}^0_1$, in which the experimenter additionally requires that $m_{ll}$ be greater than $m_{ll}^{\text{max}}/\sqrt{2}$. The location of the "foot" of this distribution is often used to constrain sparticle mass scales. The new results presented here permit the location of this foot to be better understood, since the shape of the distribution is derived. The effects of varying the position of the $m_{ll}$ cut(s) may now be seen more easily. (Comment: 12 pages, 3 figures.)
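    As a rough illustration of the kinematics described above, the sketch below generates the chain $D \to c C \to c b B \to c b a A$ with isotropic two-body decays (pure phase space; spin correlations and matrix-element effects ignored) and histograms $m_{abc}$ for the subset of events passing an $m_{ab} > m_{ab}^{\text{max}}/\sqrt{2}$ cut. This is not the paper's analytic derivation; the Python/NumPy implementation, the mass spectrum and the event count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def boost(p, beta):
    """Boost the four-vector p = (E, px, py, pz) by the velocity vector beta."""
    b2 = beta @ beta
    if b2 < 1e-12:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    return np.concatenate(([gamma * (p[0] + bp)],
                           p[1:] + ((gamma - 1.0) * bp / b2 + gamma * p[0]) * beta))

def two_body_decay(p_parent, m_parent, m_child):
    """Isotropic decay of a parent into one massless particle and one massive
    child of mass m_child; returns their lab-frame four-momenta."""
    e_star = (m_parent**2 - m_child**2) / (2.0 * m_parent)   # massless energy = |p*|
    cos_t, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    p_massless = np.concatenate(([e_star], e_star * n))
    p_child = np.concatenate(([(m_parent**2 + m_child**2) / (2.0 * m_parent)], -e_star * n))
    beta = p_parent[1:] / p_parent[0]                        # parent velocity in the lab
    return boost(p_massless, beta), boost(p_child, beta)

def inv_mass(*momenta):
    p = np.sum(momenta, axis=0)
    return np.sqrt(max(p[0]**2 - p[1:] @ p[1:], 0.0))

# Hypothetical mass spectrum (GeV), D > C > B > A
mD, mC, mB, mA = 600.0, 300.0, 150.0, 100.0
m_ab_endpoint = np.sqrt((mC**2 - mB**2) * (mB**2 - mA**2)) / mB

m_ab, m_abc = [], []
pD = np.array([mD, 0.0, 0.0, 0.0])                           # D produced at rest
for _ in range(200_000):
    pc, pC = two_body_decay(pD, mD, mC)
    pb, pB = two_body_decay(pC, mC, mB)
    pa, pA = two_body_decay(pB, mB, mA)
    m_ab.append(inv_mass(pa, pb))
    m_abc.append(inv_mass(pa, pb, pc))
m_ab, m_abc = np.array(m_ab), np.array(m_abc)

passed = m_abc[m_ab > m_ab_endpoint / np.sqrt(2.0)]          # the "m_qll threshold" selection
hist, edges = np.histogram(passed, bins=60)
print(f"m_ab endpoint: {m_ab_endpoint:.1f} GeV; foot of constrained m_abc near {passed.min():.1f} GeV")
```

    Comparing the histogram with and without the $m_{ab}$ cut shows how moving the cut boundaries reshapes the foot of the constrained distribution.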

    Improving estimates of the number of fake leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE. (Previously "The Matrix Method Reloaded")

    We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects: for example, jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic matrix method for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake-rate estimates and limits with better statistical properties, but is found to be too computationally costly for routine use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach. (Comment: v1: 11 pages, 5 figures. v2: title change requested by referee, and other corrections/clarifications found during review. v3: final tweaks suggested during review and move from RevTeX to JHEP style.)
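    For orientation, the snippet below is a minimal single-lepton version of the heuristic matrix method that the abstract critiques (it is not the paper's proposed alternative); the efficiencies and event counts are made-up illustrative values.

```python
def matrix_method_fakes(n_loose, n_tight, eff_real, eff_fake):
    """Heuristic single-lepton matrix method (a sketch, not the paper's proposal).

    n_loose  : events containing a loose lepton (the tight sample is a subset)
    n_tight  : events in which that lepton also passes the tight selection
    eff_real : probability that a real loose lepton passes the tight selection
    eff_fake : probability that a fake loose lepton passes the tight selection

    Solves   n_tight = eff_real * N_real + eff_fake * N_fake
             n_loose =            N_real +            N_fake
    for N_fake and returns the estimated fake contribution to the tight sample.
    """
    n_fake_loose = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    return eff_fake * n_fake_loose

# Illustrative numbers only
print(matrix_method_fakes(n_loose=1000, n_tight=640, eff_real=0.80, eff_fake=0.20))  # ~53.3
```

    Note that this plug-in estimate goes negative whenever n_tight exceeds eff_real * n_loose, which is one way such an estimator can behave unphysically in low-statistics regions; concerns of this kind motivate the more rigorous treatments the abstract describes.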

    Efficient simulation techniques for biochemical reaction networks

    Discrete-state, continuous-time Markov models are becoming commonplace in the modelling of biochemical processes. The mathematical formulations that such models lead to are opaque, and, due to their complexity, are often considered analytically intractable. As such, a variety of Monte Carlo simulation algorithms have been developed to explore model dynamics empirically. Whilst well-known methods, such as the Gillespie Algorithm, can be implemented to investigate a given model, the computational demands of traditional simulation techniques remain a significant barrier to modern research. In order to further develop and explore biologically relevant stochastic models, new and efficient computational methods are required. In this thesis, high-performance simulation algorithms are developed to estimate summary statistics that characterise a chosen reaction network. The algorithms make use of variance reduction techniques, which exploit statistical properties of the model dynamics, so that the statistics can be computed efficiently. The multi-level method is an example of a variance reduction technique. The method estimates summary statistics of well-mixed, spatially homogeneous models by using estimates from multiple ensembles of sample paths of different accuracies. In this thesis, the multi-level method is developed in three directions: firstly, a nuanced implementation framework is described; secondly, a reformulated method is applied to stiff reaction systems; and, finally, different approaches to variance reduction are implemented and compared. The variance reduction methods that underpin the multi-level method are then re-purposed to understand how the dynamics of a spatially-extended Markov model are affected by changes in its input parameters. By exploiting the inherent dynamics of spatially-extended models, an efficient finite difference scheme is used to estimate parametric sensitivities robustly. The new simulation methods are tested for functionality and efficiency with a range of illustrative examples. The thesis concludes with a discussion of our findings, and a number of future research directions are proposed.
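    The Gillespie algorithm mentioned above is compact enough to sketch. The Python implementation below (the standard direct method, applied to a made-up birth-death model with placeholder rate constants) makes clear why its cost grows with reaction activity: every single reaction event is simulated.

```python
import numpy as np

def gillespie(x0, stoich, propensities, t_final, rng=np.random.default_rng(0)):
    """Gillespie stochastic simulation algorithm (direct method).

    x0           : initial copy numbers, shape (n_species,)
    stoich       : stoichiometry matrix, shape (n_reactions, n_species)
    propensities : function x -> array of reaction propensities, shape (n_reactions,)
    Returns the times and states of one exact sample path up to t_final.
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_final:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)      # time to the next reaction
        j = rng.choice(len(a), p=a / a0)    # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return times, states

# Hypothetical birth-death model:  0 -> S (rate k1),  S -> 0 (rate k2 * S)
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
props = lambda x: np.array([k1, k2 * x[0]])
times, states = gillespie([0], stoich, props, t_final=100.0)
print(f"{len(times) - 1} reactions simulated; final copy number {int(states[-1][0])}")
```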

    A Search for the Higgs Boson Produced in Association With Top Quarks in Multilepton Final States at ATLAS

    This thesis presents preliminary results of a search for Higgs boson production in association with top quarks in multilepton final states. The search was conducted in the 2012 dataset of proton-proton collisions delivered by the CERN Large Hadron Collider at a center-of-mass energy of 8 TeV and collected by the ATLAS experiment. The dataset corresponds to an integrated luminosity of 20.3 inverse femtobarns. The analysis is conducted by measuring event counts in signal regions distinguished by the number of leptons (2 same-sign, 3, and 4), jets and b-tagged jets present in the reconstructed events. The observed events in the signal regions constitute an excess over the expected number of background events. The results are evaluated using a frequentist statistical model. The observed exclusion upper limit at the 95% confidence level is 5.50 times the predicted Standard Model production cross section for Higgs production in association with top quarks. The fitted value of the ratio of the observed production rate to the expected Standard Model production rate is $2.83^{+1.58}_{-1.35}$.
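    The limits quoted above come from a full profile-likelihood frequentist fit across the signal regions. As a far simpler illustration of what a 95% confidence-level upper limit on a signal-strength multiplier means, the sketch below computes a toy CLs+b-style limit for a single Poisson counting experiment with no systematic uncertainties; the observed count, background and signal yields are invented for the example and are not the analysis's numbers.

```python
from scipy import stats
from scipy.optimize import brentq

def mu_upper_limit(n_obs, bkg, sig_sm, cl=0.95):
    """Toy one-sided upper limit on the signal strength mu, defined by
    P(N <= n_obs | bkg + mu * sig_sm) = 1 - cl for a single Poisson counting
    experiment.  Much simpler than the profile-likelihood CLs limits used
    in the thesis."""
    excess = lambda mu: stats.poisson.cdf(n_obs, bkg + mu * sig_sm) - (1.0 - cl)
    return brentq(excess, 0.0, 1000.0)

# Invented yields, for illustration only
print(f"mu_95 = {mu_upper_limit(n_obs=12, bkg=8.0, sig_sm=1.5):.2f}")
```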

    An adaptive multi-level simulation algorithm for stochastic biological systems

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more computationally efficient, such methods generate system statistics that suffer from significant bias unless $\tau$ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method (Anderson and Higham, Multiscale Model. Simul. 2012) tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths, where one path of each pair is generated at a higher accuracy than the other (and is therefore more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of $\tau$. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach in which $\tau$ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples. (Comment: 23 pages.)
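    The key ingredient described above, beyond plain tau-leaping, is the pair of coupled coarse/fine paths that share random variables. The sketch below implements a fixed-step version of that coupling in the split-propensity style of Anderson and Higham; the adaptive $\tau$ selection that is the paper's contribution is deliberately omitted, and the reaction network, rates and step sizes are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_tau_leap_pair(x0, stoich, props, t_final, tau_c, M=4):
    """One coarse/fine pair of tau-leap paths sharing Poisson random variables.
    The coarse path uses step tau_c, the fine path uses tau_c / M; over each
    fine substep both paths draw from a common Poisson stream plus independent
    remainders, which keeps the pair highly correlated.
    Returns the final states (x_coarse, x_fine)."""
    xc = np.array(x0, dtype=float)
    xf = np.array(x0, dtype=float)
    tau_f = tau_c / M
    t = 0.0
    while t < t_final:
        ac = props(xc)                       # coarse propensities, frozen over the coarse step
        for _ in range(M):
            af = props(xf)                   # fine propensities, refreshed every substep
            common = np.minimum(ac, af)
            k_common = rng.poisson(common * tau_f)
            k_c_only = rng.poisson((ac - common) * tau_f)
            k_f_only = rng.poisson((af - common) * tau_f)
            xc += (k_common + k_c_only) @ stoich
            xf += (k_common + k_f_only) @ stoich
            xc = np.maximum(xc, 0.0)         # crude guard against negative copy numbers
            xf = np.maximum(xf, 0.0)
        t += tau_c
    return xc, xf

# Hypothetical model:  0 -> S (rate 50),  S + S -> 0 (rate 0.01 * S * (S - 1))
stoich = np.array([[+1], [-2]])
props = lambda x: np.array([50.0, 0.01 * x[0] * (x[0] - 1.0)])
pairs = [coupled_tau_leap_pair([0], stoich, props, t_final=10.0, tau_c=0.1) for _ in range(200)]
diffs = [xf[0] - xc[0] for xc, xf in pairs]
print(f"mean correction estimate: {np.mean(diffs):+.2f} +/- {np.std(diffs) / np.sqrt(len(diffs)):.2f}")
```

    Because the paired paths remain highly correlated, the sample variance of the printed correction estimate is much smaller than that of either path alone, which is what makes each multi-level correction term cheap to estimate.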

    The Economic Value of Rebuilding Fisheries

    The global demand for protein from seafood, whether wild-caught or cultured, whether for direct consumption or as feed for livestock, is high and projected to continue growing. However, the ocean's ability to meet this demand is uncertain due to either mismanagement or, in some cases, lack of management of marine fish stocks. Efforts to rebuild and recover the world's fisheries will benefit from an improved understanding of the long-term economic benefits of recovering collapsed stocks, the trajectory and duration of different rebuilding approaches, the variation in the value and timing of recovery for fisheries with different economic, biological, and regulatory characteristics (including identifying which fisheries are likely to benefit most from recovery), and the benefits of avoiding collapse in the first place. These questions are addressed in this paper using a dynamic bioeconomic optimisation model that explicitly accounts for the economics, management, and ecology of size-structured exploited fish populations. Within this model framework, different management options (effort controls on small-, medium-, and large-sized fish), including management that optimises economic returns over a specified planning horizon, are simulated and the consequences compared. The results show considerable economic gains from rebuilding fisheries, with magnitudes varying across fisheries.
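    The model in the paper is a size-structured bioeconomic optimisation. As a much-reduced illustration of the general mechanism (choosing fishing effort to maximise discounted profit subject to stock dynamics), the sketch below optimises a constant-effort policy in a toy single-stock surplus-production model; the logistic growth form and every parameter value (r, K, price, cost, q, delta, T) are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy surplus-production bioeconomic model (not the paper's size-structured model):
# logistic stock growth, catch proportional to effort * stock, profit = revenue - cost.
r, K = 0.4, 1.0          # intrinsic growth rate and carrying capacity (normalised)
price, cost = 10.0, 3.0  # price per unit catch, cost per unit effort
q = 0.5                  # catchability
delta = 0.05             # annual discount rate
T = 50                   # planning horizon in years

def npv(effort, x0):
    """Net present value of a constant-effort policy starting from stock level x0."""
    x, value = x0, 0.0
    for t in range(T):
        catch = q * effort * x
        value += (price * catch - cost * effort) / (1.0 + delta) ** t
        x = max(x + r * x * (1.0 - x / K) - catch, 0.0)
    return value

for label, x0 in [("collapsed stock", 0.1 * K), ("healthy stock", 0.5 * K)]:
    res = minimize_scalar(lambda e: -npv(e, x0), bounds=(0.0, 2.0), method="bounded")
    print(f"{label}: optimal effort {res.x:.2f}, NPV {-res.fun:.2f}")
```

    Even in this toy setting the optimal effort and the achievable net present value differ markedly between a collapsed and a healthy starting stock, which is the kind of comparison the paper carries out with a far richer model.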
