11 research outputs found

    A Measurement of B Meson Production and Lifetime Using D l- Events in Z0 Decays

    Get PDF
    A study of B meson decays into D l- X final states is presented. In these events, neutral and charged D mesons originate predominantly from B- and B0BAR decays, respectively. The dilution of this correlation due to D** production has been taken into account. From 263700 hadronic Z0 decays collected in 1991 with the DELPHI detector at the LEP collider, 92 D0 --> K- pi+ decays, 35 D+ --> K- pi+ pi+ decays and 61 D*+ --> D0 pi+ decays, followed by D0 --> K- pi+ or D0 --> K- pi+ pi+ pi-, are found with an associated lepton of the same charge as the kaon. From the D0 l- and D*+ l- events, the probability f(d) that a b quark hadronizes into a B- (or B0BAR) meson is found to be 0.44 +/- 0.08 +/- 0.09, corresponding to a total (B(s) + LAMBDA(b)) hadronization fraction of 0.12 +0.24/-0.12. By reconstructing the energy of each B meson, the b quark fragmentation is directly measured for the first time. The mean value of the B meson energy fraction is <X(E)(B)> = 0.695 +/- 0.015 (stat.) +/- 0.029 (syst.). Reconstructing D-lepton vertices, the following B lifetimes are measured: tau(B) = 1.27 +0.22/-0.18 (stat.) +/- 0.15 (syst.) ps for BBAR --> D0 l- X, tau(B) = 1.18 +0.39/-0.27 (stat.) +/- 0.15 (syst.) ps for BBAR --> D+ l- X, and tau(B) = 1.19 +0.25/-0.19 (stat.) +/- 0.15 (syst.) ps for BBAR --> D*+ l- X; the average is tau(B) = 1.23 +0.14/-0.13 (stat.) +/- 0.15 (syst.) ps. Allowing for decays into D** l- nuBAR, the B+ and B0 lifetimes are tau(B+) = 1.30 +0.33/-0.29 (stat.) +/- 0.15 (syst. exp.) +/- 0.05 (syst. D**) ps and tau(B0) = 1.17 +0.29/-0.23 (stat.) +/- 0.15 (syst. exp.) +/- 0.05 (syst. D**) ps, with tau(B+)/tau(B0) = 1.11 +0.51/-0.39 (stat.) +/- 0.05 (syst. exp.) +/- 0.10 (syst. D**).
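The quoted average lifetime can be roughly checked with an inverse-variance weighted mean of the three per-mode measurements. This is only a minimal sketch: the asymmetric statistical errors are naively symmetrized and the common systematic uncertainty and any correlations are ignored, so it approximates, rather than reproduces, the combination performed in the paper.

```python
# Hedged sketch: inverse-variance weighted mean of the three tau(B)
# measurements quoted above. Asymmetric statistical errors are crudely
# symmetrized (average of up/down); systematics and correlations ignored.
measurements = [
    (1.27, 0.22, 0.18),  # BBAR --> D0 l- X mode: (value, +stat, -stat)
    (1.18, 0.39, 0.27),  # BBAR --> D+ l- X mode
    (1.19, 0.25, 0.19),  # BBAR --> D*+ l- X mode
]

weights = []
values = []
for value, up, down in measurements:
    sigma = 0.5 * (up + down)       # naive symmetrization of the errors
    weights.append(1.0 / sigma**2)  # inverse-variance weight
    values.append(value)

mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
sigma_mean = (1.0 / sum(weights)) ** 0.5
print(f"tau(B) ~ {mean:.2f} +/- {sigma_mean:.2f} ps (stat. only)")
```

Running this gives a mean close to the quoted 1.23 ps with a statistical uncertainty near 0.14 ps, consistent with the published average.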

    Measurement of the triple-gluon vertex from 4-jet events at LEP

    Get PDF
    From the combined 1990 and 1991 data of the DELPHI experiment at LEP, 13057 4-jet events are obtained and used for determining the contribution of the triple-gluon vertex. The relevant variables are the generalized Nachtmann-Reiter angle theta(NR)* and the opening angle of the two least energetic jets. A fit to their two-dimensional distribution yields C(A)/C(F) = 2.12 +/- 0.35 and N(C)/N(A) = 0.46 +/- 0.19, where C(A)/C(F) is the ratio of the coupling strength of the triple-gluon vertex to that of gluon bremsstrahlung from quarks, and N(C)/N(A) is the ratio of the number of quark colours to the number of gluons. This constitutes a convincing model-independent proof of the existence of the triple-gluon vertex, since its contribution is directly proportional to C(A)/C(F). The results are in agreement with the values expected from QCD: C(A)/C(F) = 2.25 and N(C)/N(A) = 3/8.

    The ATLAS EventIndex: an event catalogue for experiments collecting large amounts of data

    No full text
    Modern scientific experiments collect vast amounts of data that must be catalogued to meet multiple use cases and search criteria. In particular, high-energy physics experiments currently in operation produce several billion events per year. A database with references to the files containing each event at every stage of processing is necessary in order to retrieve the selected events from data storage systems. The ATLAS EventIndex project is developing a way to store the necessary information using modern data storage technologies (Hadoop, HBase, etc.) that allow key-value pairs to be kept in memory, and is selecting the best tools to support this application in terms of performance, robustness and ease of use. This paper describes the initial design and performance tests and the project evolution towards deployment and operation during 2014.
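The core idea described above, a catalogue mapping each event to the files that contain it at every processing stage, can be sketched with an in-memory key-value store. Plain Python dicts stand in for HBase tables here; the key layout, stage names and GUIDs are illustrative assumptions, not the actual EventIndex schema.

```python
# Minimal sketch of an event catalogue as a key-value store.
# Key: (run_number, event_number); value: file reference per stage.
# Plain dicts stand in for HBase; all names/GUIDs are invented examples.
from collections import defaultdict

catalogue = defaultdict(dict)

def index_event(run, event, stage, file_guid):
    """Record that event (run, event) appears in file_guid at this stage."""
    catalogue[(run, event)][stage] = file_guid

def pick_event(run, event, stage):
    """Event picking: find which file holds the event at this stage."""
    return catalogue.get((run, event), {}).get(stage)

# Index one event at two processing stages, then look it up.
index_event(358031, 1724, "RAW", "guid-raw-0001")
index_event(358031, 1724, "AOD", "guid-aod-0042")
print(pick_event(358031, 1724, "AOD"))  # guid-aod-0042
```

The real system must of course scale this lookup to billions of keys, which is what motivates the Hadoop/HBase back end mentioned in the abstract.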

    Deployment and Operation of the ATLAS EventIndex for LHC Run 3

    No full text
    The ATLAS EventIndex is the global catalogue of all ATLAS real and simulated events. During the LHC long shutdown between Run 2 (2015-2018) and Run 3 (2022-2025), all its components were substantially revised and a new system was deployed for the start of Run 3 in spring 2022. The new core storage system, based on HBase tables with a SQL interface provided by Phoenix, allows much faster data ingestion rates and scales much better than the old one to the data rates expected for the end of Run 3 and beyond. All user interfaces were also revised, and a new command-line interface and new web services were deployed. The new system was initially populated with all existing data for Run 1 and Run 2 datasets, and then put online to receive Run 3 data in real time. After extensive testing, the old system, which ran in parallel with the new one for a few months, was finally switched off in October 2022. This paper describes the new system, the migration of all existing data from the old to the new storage schemas, and the operational experience gathered so far.

    The ATLAS EventIndex for LHC Run 3

    Get PDF
    The ATLAS EventIndex was designed in 2012-2013 to provide a global event catalogue and limited event-level metadata for ATLAS analysis groups and users during the LHC Run 2 (2015-2018). It provides a good and reliable service for the initial use cases (mainly event picking) and several additional ones, such as production consistency checks, duplicate event detection and measurements of the overlaps of trigger chains and derivation datasets. The LHC Run 3, starting in 2021, will see increased data-taking and simulation production rates, with which the current infrastructure would still cope but may be stretched to its limits by the end of Run 3. This proceeding describes the implementation of a new core storage service that will provide at least the same functionality as the current one at increased data ingestion and search rates and with increasing volumes of stored data. It is based on a set of HBase tables, with schemas derived from the current Oracle implementation, coupled to Apache Phoenix for data access; in this way the advantages of a BigData-based storage system are combined with the possibility of both SQL and NoSQL data access, allowing most of the existing code for metadata integration to be re-used.
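The SQL-over-NoSQL access pattern described above (Apache Phoenix providing SQL on HBase tables) can be illustrated with Python's built-in sqlite3 standing in for Phoenix. The table layout, column names and GUIDs below are invented for the sketch and do not reflect the real EventIndex schema.

```python
# Hedged sketch: SQL-style event picking, with sqlite3 standing in for
# Apache Phoenix over HBase. Schema and all values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE event_index (
        run_number   INTEGER,
        event_number INTEGER,
        dataset      TEXT,
        file_guid    TEXT,
        PRIMARY KEY (run_number, event_number, dataset)
    )
""")
conn.executemany(
    "INSERT INTO event_index VALUES (?, ?, ?, ?)",
    [
        (431493, 101, "AOD", "guid-aod-101"),
        (431493, 101, "DAOD_PHYS", "guid-daod-101"),
        (431493, 102, "AOD", "guid-aod-102"),
    ],
)

# Event picking: locate the file holding a given event in a given dataset.
row = conn.execute(
    "SELECT file_guid FROM event_index "
    "WHERE run_number = ? AND event_number = ? AND dataset = ?",
    (431493, 101, "DAOD_PHYS"),
).fetchone()
print(row[0])  # guid-daod-101
```

The composite primary key mirrors the idea of an HBase row key built from run and event numbers, which is what makes point lookups (the dominant event-picking query) fast in both systems.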

    Determination of alpha(s) using the next-to-leading-log approximation of QCD

    No full text
    A new measurement of alpha(s) is obtained from the distributions in thrust, heavy jet mass, energy-energy correlation and two recently introduced jet broadening variables, following a method proposed by Catani, Trentadue, Turnock and Webber. This method includes the full calculation of the O(alpha(s)^2) terms, with leading and next-to-leading logarithms resummed to all orders of alpha(s). The analysis is based on data taken with the DELPHI detector at LEP during 1991. It is found that the inclusion of the resummed leading and next-to-leading logarithms reduces the scale dependence of alpha(s) and allows an extension of the fit range towards the infrared limit of the kinematical range. The combined value for alpha(s) obtained at the scale mu^2 = M(Z)^2 is alpha(s)(M(Z)^2) = 0.123 +/- 0.006.
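For orientation, the structure of the resummation used here can be summarized schematically (a sketch of the standard Catani-Trentadue-Turnock-Webber form, not a formula quoted from this abstract). For an event-shape variable y with cumulative cross section R(y), and L = ln(1/y), the large logarithms exponentiate as

```latex
\ln R(y) \;\simeq\; L\, g_1(\alpha_s L) \;+\; g_2(\alpha_s L),
\qquad L = \ln\frac{1}{y},
```

where g_1 resums the leading logarithms (terms alpha_s^n L^(n+1)) and g_2 the next-to-leading ones (terms alpha_s^n L^n); the resummed expression is then matched to the exact O(alpha_s^2) calculation before fitting, which is what extends the usable fit range towards small y.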