
    LCG MCDB -- a Knowledgebase of Monte Carlo Simulated Events

    In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase dedicated mainly to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make these sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
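
    As a purely hypothetical illustration (not the actual MCDB schema or interface), the sketch below shows the kind of metadata such a knowledgebase entry might record for one documented event sample; all names and values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class EventSampleRecord:
    """Hypothetical metadata for one documented Monte Carlo event sample."""
    title: str
    generator: str              # generator name/version used by the expert
    process: str                # physics process that was simulated
    cross_section_pb: float     # sample cross section, in picobarns
    n_events: int
    author: str
    files: list = field(default_factory=list)   # event file locations
    notes: str = ""             # documentation: cuts, parameters, caveats

# Illustrative placeholder entry, not a real MCDB record.
sample = EventSampleRecord(
    title="Example multi-jet background sample",
    generator="SomeGenerator x.y",
    process="pp -> X + jets",
    cross_section_pb=12.3,
    n_events=1_000_000,
    author="expert@example.org",
    files=["/some/storage/path/sample_001.lhe"],
)
print(sample.title, sample.n_events, "events")
```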

    An analysis of internal/external event ordering strategies for COTS distributed simulation

    Distributed simulation is a technique used to link together several models so that they can work together (or interoperate) as a single model. The High Level Architecture (HLA) (IEEE 1516-2000) is the de facto standard that defines the technology for this interoperation. The creation of a distributed simulation from models developed in COTS Simulation Packages (CSPs) is of interest; the motivation is to reduce the lead times of simulation projects by reusing models that have already been developed. This paper discusses one of the issues involved in distributed simulation with CSPs: synchronising data sent between models with the simulation of a model by a CSP, the so-called external/internal event ordering problem. This matters because the particular ordering algorithm employed can impose a significant overhead on performance.
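
    As a rough illustration of the external/internal event ordering problem (a toy Python sketch, not the paper's algorithm and not the HLA API), the code below merges a CSP's internally scheduled events with externally received events so that everything is executed in global timestamp order: an internal event is only executed while no earlier external event is pending.

```python
def run_conservative(internal_events, external_events):
    """Merge internally scheduled and externally received events by timestamp.

    Both arguments are lists of (timestamp, label) tuples. This is a
    self-contained toy: a real CSP federate would receive external events
    incrementally and rely on a lookahead guarantee from the RTI.
    """
    internal = sorted(internal_events)
    external = sorted(external_events)
    i = j = 0
    trace = []
    while i < len(internal) or j < len(external):
        # Earliest still-pending external timestamp (infinity if none left).
        next_ext = external[j][0] if j < len(external) else float("inf")
        if i < len(internal) and internal[i][0] <= next_ext:
            trace.append(("internal",) + internal[i])   # safe to execute internally
            i += 1
        else:
            trace.append(("external",) + external[j])   # deliver the external event
            j += 1
    return trace


if __name__ == "__main__":
    internal = [(1.0, "machine breakdown"), (4.0, "shift change")]
    external = [(2.5, "part arrival from the upstream model")]
    for kind, t, label in run_conservative(internal, external):
        print(f"t={t:>4}: {kind:8s} {label}")
```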

    Probing the CP-Violation effects in the hττ coupling at the LHC

    A new method to reconstruct the neutrinos event by event for all major hadronic tau decay modes at the LHC is presented. It is made possible by the improved detector descriptions now available. With the neutrinos fully reconstructed, the matrix element for each event can be calculated, and the mass of the Higgs particle can also be determined event by event with high precision. Based on these, the prospect of measuring the Higgs CP mixing angle with h→ττ decays at the LHC is analysed. Based on a detailed detector simulation, it is predicted that with 3 ab⁻¹ of data at √s = 13 TeV the CP mixing angle can be measured at the LHC to a precision of 5.2°, a significant improvement which outperforms the sensitivity of lepton EDM searches to date for the hττ coupling. Comment: 8 figures, 1 table; v2: more refs and discussion added, matches the published version
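
    For context, a commonly used parametrization of a CP-mixed hττ coupling (the convention assumed here may differ in detail from the paper's) introduces a mixing angle φ, with φ = 0 the pure CP-even, SM-like case and φ = 90° the pure CP-odd case:

```latex
% Assumed parametrization of the CP-mixed Yukawa coupling:
% \phi = 0 is CP-even (SM-like), \phi = \pi/2 is pure CP-odd.
\mathcal{L}_{h\tau\tau} = -\frac{m_\tau}{v}\,\kappa_\tau\,
  \bar{\tau}\left(\cos\phi + i\gamma_5\,\sin\phi\right)\tau\,h
```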

    Probabilistic Reachability Analysis for Large Scale Stochastic Hybrid Systems

    This paper studies probabilistic reachability analysis for large-scale stochastic hybrid systems (SHS) as a problem of rare event estimation. In the literature, advanced rare event estimation theory has recently been embedded within a stochastic analysis framework, which has led to significant novel results in rare event estimation for a diffusion process using sequential MC simulation. This paper presents this rare event estimation theory directly in terms of probabilistic reachability analysis of an SHS, and develops novel theory which extends these results to a large-scale SHS in which a very large number of rare discrete modes may contribute significantly to the reach probability. Essentially, the approach taken is to introduce an aggregation of the discrete modes and to develop importance sampling relative to the rare switching between the aggregation modes. The practical working of this approach is demonstrated for the safety verification of an advanced air traffic control example.
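
    As a generic illustration of the sequential MC (splitting) idea that such rare-event reach-probability estimation builds on (a toy sketch, not the paper's mode-aggregation and importance-sampling scheme), the code below estimates the probability that a random walk reaches a high level within a time horizon by resampling the trajectories that survive each intermediate level:

```python
import random


def splitting_reach_probability(levels, horizon=200, n_particles=1000,
                                step_std=0.1, seed=1):
    """Estimate P(a driftless random walk exceeds levels[-1] within `horizon`
    steps) by multilevel splitting: trajectories reaching each intermediate
    level are resampled and continued, and the reach probability is the
    product of the per-level survival fractions.
    """
    rng = random.Random(seed)
    particles = [(0.0, 0)] * n_particles        # (position, elapsed steps)
    estimate = 1.0
    for level in levels:
        survivors = []
        for x, t in particles:
            while t < horizon and x < level:
                x += rng.gauss(0.0, step_std)   # one step of the toy dynamics
                t += 1
            if x >= level:
                survivors.append((x, t))
        if not survivors:                       # no particle reached this level
            return 0.0
        estimate *= len(survivors) / n_particles
        # Resample survivors with replacement to restore the particle count.
        particles = [survivors[rng.randrange(len(survivors))]
                     for _ in range(n_particles)]
    return estimate


if __name__ == "__main__":
    print("estimated reach probability:",
          splitting_reach_probability(levels=[1.0, 2.0, 3.0]))
```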

    Early appraisal of the fixation probability in directed networks

    In evolutionary dynamics, the probability that a mutation spreads through the whole population, having arisen in a single individual, is known as the fixation probability. In general, it is not possible to find the fixation probability analytically given the mutant's fitness and the topological constraints that govern the spread of the mutation, so one resorts to simulations instead. Depending on the topology in use, a great number of evolutionary steps may be needed in each of the simulation events, particularly in those that end with the population containing mutants only. We introduce two techniques to accelerate the determination of the fixation probability. The first one skips all evolutionary steps in which the number of mutants does not change and thereby reduces the number of steps per simulation event considerably. This technique is computationally advantageous for some of the so-called layered networks. The second technique, which is not restricted to layered networks, consists of aborting any simulation event in which the number of mutants has grown beyond a certain threshold value, and counting that event as having led to a total spread of the mutation. For large populations, and regardless of the network's topology, we demonstrate, both analytically and by means of simulations, that using a threshold of about 100 mutants leads to an estimate of the fixation probability that deviates in no significant way from that obtained from the full-fledged simulations. We have observed speedups of two orders of magnitude for layered networks with 10,000 nodes.
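
    A minimal sketch of both acceleration techniques, shown here on a well-mixed Moran process rather than a general directed network (an assumption made for brevity): steps that leave the mutant count unchanged are skipped by sampling only the count-changing steps, and any run reaching the 100-mutant threshold is counted as fixation.

```python
import random


def fixation_probability(N=1000, fitness=1.1, threshold=100, runs=2000, seed=0):
    """Estimate the fixation probability of a single mutant of relative
    `fitness` in a well-mixed Moran process with N individuals.

    Runs that reach `threshold` mutants are aborted and counted as fixation;
    runs that lose all mutants count as extinction.
    """
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        mutants = 1
        while 0 < mutants < threshold:
            # Probability that the reproducing individual is a mutant.
            p_mut_repro = fitness * mutants / (fitness * mutants + (N - mutants))
            p_up = p_mut_repro * (N - mutants) / N        # mutant replaces resident
            p_down = (1 - p_mut_repro) * mutants / N      # resident replaces mutant
            # Technique 1: skip unchanged-count steps by drawing directly
            # from the conditional distribution of count-changing steps.
            if rng.random() < p_up / (p_up + p_down):
                mutants += 1
            else:
                mutants -= 1
        # Technique 2: reaching the threshold is counted as full fixation.
        fixed += mutants >= threshold
    return fixed / runs


if __name__ == "__main__":
    est = fixation_probability()
    exact = (1 - 1 / 1.1) / (1 - 1.1 ** -1000)   # Moran formula, for comparison
    print(f"estimated ≈ {est:.3f}, exact ≈ {exact:.3f}")
```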

    PRISM: a tool for automatic verification of probabilistic systems

    Probabilistic model checking is an automatic formal verification technique for analysing quantitative properties of systems which exhibit stochastic behaviour. PRISM is a probabilistic model checking tool which has already been successfully deployed in a wide range of application domains, from real-time communication protocols to biological signalling pathways. The tool has recently undergone a significant amount of development. Major additions include facilities to manually explore models, Monte-Carlo discrete-event simulation techniques for approximate model analysis (including support for distributed simulation) and the ability to compute cost- and reward-based measures, e.g. "the expected energy consumption of the system before the first failure occurs". This paper presents an overview of all the main features of PRISM. More information can be found on the website: www.cs.bham.ac.uk/~dxp/prism
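
    To illustrate the kind of cost/reward-based measure quoted above, using plain linear algebra rather than PRISM itself and a made-up three-state chain, the expected reward accumulated before first reaching a failure state of a discrete-time Markov chain can be obtained by solving a linear system over the non-failure states:

```python
import numpy as np

# Hypothetical 3-state DTMC: 0 = idle, 1 = busy, 2 = failed (absorbing target).
P = np.array([
    [0.89, 0.10, 0.01],   # from idle
    [0.30, 0.65, 0.05],   # from busy
    [0.00, 0.00, 1.00],   # failed
])
reward = np.array([1.0, 5.0, 0.0])   # e.g. energy drawn per step in each state

# Expected reward accumulated before first reaching state 2: restrict to the
# non-failure states S = {0, 1} and solve (I - P_SS) x = r_S.
S = [0, 1]
P_SS = P[np.ix_(S, S)]
x = np.linalg.solve(np.eye(len(S)) - P_SS, reward[S])
print(f"expected energy before first failure, starting idle: {x[0]:.1f}")
print(f"expected energy before first failure, starting busy: {x[1]:.1f}")
```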