    Event Data Definition in LHCb

    We present the approach used to define the event object model for the LHCb experiment. The approach is based on a high-level modelling language that is independent of the programming language used in the current implementation of the event data processing software. The different possibilities for object modelling languages are evaluated, and the advantages of a dedicated model based on XML over other candidates are shown. After a description of the language itself, we explain the benefits obtained by applying this approach to the description of the event model of an experiment such as LHCb. Examples of these benefits are a uniform and coherent mapping of the object model to the implementation language across the experiment's software development teams, easy maintenance of the event model, and conformance to the experiment's coding rules. The description of the object model is parsed by a so-called front-end, which feeds several back-ends. We give an introduction to the model itself and to the currently implemented back-ends, which produce programming-language-specific implementations of the event objects as well as meta information about these objects. The meta information can be used for introspection of objects at run time, which is essential for functionality such as object persistency and interactive analysis. This object introspection package for C++ has been adopted by the LCG project as the starting point for the LCG object dictionary that is to be developed in common for the LHC experiments. The current status of the event object modelling and its usage in LHCb are presented, and the prospects for further developments are discussed.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 7 pages, LaTeX, 2 eps figures. PSN MOJT00
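    As a rough illustration of the front-end/back-end split described above (a minimal sketch only: the XML tags, class name, and function names below are invented for the example and are not the actual LHCb description language), a single parsed class description can feed two back-ends, one emitting a C++ implementation and one emitting introspection metadata:

        # Sketch of an XML-described event class fed to two code-generating
        # back-ends. The <class>/<attribute> schema is hypothetical.
        import xml.etree.ElementTree as ET

        MODEL = """
        <class name="MCParticle">
          <attribute name="momentum" type="double"/>
          <attribute name="charge"   type="int"/>
        </class>
        """

        def cpp_backend(cls):
            """Emit a C++ struct: the language-specific implementation."""
            body = [f"struct {cls.get('name')} {{"]
            body += [f"  {a.get('type')} {a.get('name')};"
                     for a in cls.findall("attribute")]
            return "\n".join(body + ["};"])

        def dict_backend(cls):
            """Emit meta information usable for run-time introspection."""
            return {cls.get("name"): {a.get("name"): a.get("type")
                                      for a in cls.findall("attribute")}}

        root = ET.fromstring(MODEL)   # the shared "front-end" parse
        print(cpp_backend(root))      # back-end 1: C++ event object
        print(dict_backend(root))     # back-end 2: object dictionary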

    ReDecay: A novel approach to speed up the simulation at LHCb

    With the steady increase in the precision of flavour physics measurements collected during LHC Run 2, the LHCb experiment requires simulated data samples of larger and larger sizes to study the detector response in detail. The simulation of the detector response is the main contribution to the time needed to simulate full events, and this time scales linearly with the particle multiplicity. Of the dozens of particles present in the simulation, only the few participating in the signal decay under study are of interest, while the remaining particles mainly affect the resolutions and efficiencies of the detector. This paper presents a novel development for the LHCb simulation software which reuses the rest of the event from previously simulated events. The approach achieves an order-of-magnitude increase in speed at the same quality as the nominal simulation.
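    The core idea lends itself to a short schematic (a sketch only; the function names and toy payloads are placeholders, not the LHCb simulation API): the expensive rest-of-event is generated and simulated once, and only the cheap signal decay is redone for each copy.

        # Schematic of ReDecay: pay for the underlying event and its
        # detector response once, regenerate only the signal decay per copy.
        import random

        def simulate_rest_of_event():
            # Dominant cost: detector response scales with multiplicity.
            return {"n_particles": random.randint(50, 200)}

        def simulate_signal_decay():
            # Cheap: only the few signal particles are regenerated.
            return {"signal_mass": random.gauss(5279.0, 15.0)}  # toy value

        def redecay_events(n_copies):
            rest = simulate_rest_of_event()                # paid once
            for _ in range(n_copies):
                yield {**rest, **simulate_signal_decay()}  # full event

        events = list(redecay_events(100))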

    Polish grid infrastructure for science and research

    The structure, functionality, parameters, and organisation of the computing Grid in Poland are described, mainly from the perspective of the high-energy particle physics community, currently its largest consumer and developer. It constitutes a distributed Tier-2 within the worldwide Grid infrastructure and also provides services and resources for data-intensive applications in other sciences.
    Comment: Proceedings of IEEE Eurocon 2007, Warsaw, Poland, 9-12 Sep. 2007, p. 44

    LHCb trigger streams optimization

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams in order to increase the efficiency of the user analysis jobs that read these data. The efficiency of the scheme depends heavily on the stream composition: by putting similar lines together and balancing the stream sizes, it is possible to reduce the overhead. We present a method for finding an optimal stream composition, as sketched below. The method is applied to a part of the LHCb data (the Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and it will be applied to the data to be recorded in 2017.
    Comment: Submitted to the CHEP-2016 proceedings
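    To make the optimization target concrete (a toy sketch under invented line names and a simple cost model, not the method of the paper): with sequential files, a job interested in one line reads its entire stream, so the total read cost is each stream's size counted once per member line, and merging lines with overlapping event sets lowers it. A greedy pairwise merge illustrates the effect:

        # Toy model of stream composition: line -> set of selected event IDs.
        from itertools import combinations

        lines = {
            "D0ToKPi":    {1, 2, 3, 4},
            "D0ToKK":     {2, 3, 4, 5},
            "Bs2JpsiPhi": {10, 11, 12},
        }

        def read_cost(streams):
            # A stream's file is read in full by every member line's jobs.
            return sum(len(set().union(*s.values())) * len(s)
                       for s in streams)

        def greedy_merge(lines, n_streams):
            streams = [{name: evs} for name, evs in lines.items()]
            while len(streams) > n_streams:
                # Merge the pair of streams that keeps total cost lowest.
                i, j = min(
                    combinations(range(len(streams)), 2),
                    key=lambda p: read_cost(
                        [s for k, s in enumerate(streams) if k not in p]
                        + [{**streams[p[0]], **streams[p[1]]}]))
                streams[i] = {**streams[i], **streams[j]}
                del streams[j]
            return streams

        print(greedy_merge(lines, 2))  # the overlapping charm lines pair up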

    The LHCb prompt charm triggers

    The LHCb experiment has fully reconstructed close to 10^9 charm hadron decays, by far the world's largest sample. During the 2011-2012 running periods, the effective proton-proton beam crossing rate was 11-15 MHz, while the rate at which events were written to permanent storage was 3-5 kHz. Prompt charm candidates (produced at the primary interaction vertex) were selected using a combination of exclusive and inclusive high-level (software) triggers in conjunction with low-level hardware triggers. The efficiencies, background rates, and possible biases of the triggers as implemented are discussed, along with plans for running at 13 TeV in 2015 and subsequently in the upgrade era.
    Comment: To appear in the proceedings of the 6th International Workshop on Charm Physics (CHARM 2013)
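    The quoted rates imply an overall retention of roughly one event in three to five thousand. A back-of-envelope split across the two trigger levels (the per-stage retentions below are illustrative guesses; only the input and output rates come from the text) looks like:

        # Rate bookkeeping for a two-level trigger cascade.
        input_rate_hz = 13e6     # effective crossing rate, ~11-15 MHz
        l0_retention  = 1 / 13   # low-level hardware trigger (illustrative)
        hlt_retention = 1 / 250  # high-level software triggers (illustrative)

        output_rate_hz = input_rate_hz * l0_retention * hlt_retention
        print(f"{output_rate_hz / 1e3:.1f} kHz to storage")  # ~4.0 kHz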

    ScotGrid: Providing an Effective Distributed Tier-2 in the LHC Era

    ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion in hardware in anticipation of the LHC and now provides more than 4 MSI2K and 500 TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2, and in this paper we show how we have adopted new methods of organising the centres, from fabric management and monitoring to remote site management and operational procedures, to meet them. We describe how we have coped with different operational models at the sites: the Glasgow and Durham sites are managed "in house", while the resources at Edinburgh are managed as a central university resource. This required the adoption of a different fabric management model at Edinburgh and a special engagement with the cluster managers. Challenges also arose from the differing job models of local and grid submission, which required special attention to resolve. We show how ScotGrid has successfully provided an infrastructure for ATLAS and LHCb Monte Carlo production. Special attention has been paid to ensuring that user analysis functions efficiently, which has required optimisation of local storage and networking to cope with its demands. Finally, although these Tier-2 resources are pledged to the whole VO, we have established close links with our local physics user communities as the best way to ensure that the Tier-2 functions effectively as part of the LHC grid computing framework.
    Comment: Preprint for the 17th International Conference on Computing in High Energy and Nuclear Physics; 7 pages, 1 figure

    Event Index - an LHCb Event Search System

    During LHC Run 1, the LHCb experiment recorded around 10^11 collision events. This paper describes Event Index, an event search system. Its primary function is to quickly select subsets of events from a combination of conditions, such as the estimated decay channel or the number of hits in a subdetector. Event Index is essentially Apache Lucene optimized for read-only indexes distributed over independent shards on independent nodes.
    Comment: Report for the proceedings of the CHEP-2015 conference
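    The underlying data structure is an inverted index: each discrete event property becomes a term whose posting list holds the matching event IDs, and a conjunction of conditions is an intersection of posting lists. The sketch below uses invented property names and plain Python sets, whereas the real system keeps the postings in sharded, read-only Lucene indexes:

        # Minimal inverted index over event properties.
        from collections import defaultdict

        index = defaultdict(set)  # term -> posting list of event IDs

        def add_event(event_id, properties):
            for key, value in properties.items():
                index[f"{key}={value}"].add(event_id)

        def query(**conditions):
            postings = [index[f"{k}={v}"] for k, v in conditions.items()]
            return set.intersection(*postings) if postings else set()

        add_event(1, {"channel": "D0ToKPi", "velo_hits": 40})
        add_event(2, {"channel": "D0ToKPi", "velo_hits": 55})
        add_event(3, {"channel": "Bs2JpsiPhi", "velo_hits": 40})

        print(query(channel="D0ToKPi", velo_hits=40))  # -> {1}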