
    Data-driven estimations of Standard Model backgrounds to SUSY searches in ATLAS

    At the Large Hadron Collider (LHC), the strategy for the observation of supersymmetry in the early days is mainly based on inclusive searches. Major backgrounds are constituted by mismeasured multi-jet events and W, Z and t quark production in association with jets. We describe recent work performed in the ATLAS Collaboration to derive these backgrounds from the first ATLAS data.

    New Developments in Data-driven Background Determinations for SUSY Searches in ATLAS

    Any discovery of new physics relies on a detailed understanding of the Standard Model background. At the LHC, we expect to extract the backgrounds from the data themselves, with minimal reliance on Monte Carlo simulations. We describe new developments in ATLAS on such data-driven techniques, and prospects for their application to first data.

    Data-driven estimations of Standard Model backgrounds to SUSY searches

    Mismeasured multi-jet events and W, Z and top quark production in association with jets constitute a major background to searches for supersymmetry at the LHC. We describe recent work performed in the ATLAS Collaboration to estimate these backgrounds for a basic SUSY selection, and we discuss methods to derive them from the first ATLAS data.

    The ATLAS distributed analysis system

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale, the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user-support techniques and the direct feedback of users have been effective in improving the success rate and user experience in the distributed computing environment. In this contribution we describe the main components, activities and achievements of ATLAS distributed analysis, as well as several future improvements being undertaken.
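The dynamic distribution of popular data mentioned above can be illustrated with a toy replication planner. This is a hedged sketch, not ATLAS code: the function, its inputs, and the popularity threshold are all invented for illustration; the real system works at vastly larger scale with far richer policies.

```python
# Toy sketch (not ATLAS software): replicate "popular" datasets to idle sites,
# illustrating the idea of moving frequently accessed data closer to free CPU.
from collections import Counter

def plan_replication(access_log, replicas, site_load, threshold=3):
    """Return extra (dataset, site) replica assignments for hot datasets.

    access_log: list of dataset names, one entry per access.
    replicas:   dict dataset -> set of sites already holding a copy.
    site_load:  dict site -> number of queued jobs (lower = more idle).
    """
    popularity = Counter(access_log)
    plan = []
    for dataset, hits in popularity.items():
        if hits < threshold:
            continue  # not accessed often enough to justify another copy
        # pick the least-loaded site that does not already hold the dataset
        candidates = [s for s in site_load if s not in replicas.get(dataset, set())]
        if candidates:
            plan.append((dataset, min(candidates, key=site_load.get)))
    return plan

log = ["dsA"] * 5 + ["dsB"] * 2 + ["dsC"] * 4
replicas = {"dsA": {"site1"}, "dsB": {"site1"}, "dsC": {"site2"}}
load = {"site1": 10, "site2": 4, "site3": 1}
print(plan_replication(log, replicas, load))  # dsA and dsC go to the idle site3
```

The same greedy pattern (rank by popularity, place on least-loaded eligible site) underlies many data-placement heuristics; real brokers add quotas, transfer costs, and replica lifetimes.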

    Performance of the ATLAS Muon Drift-Tube Chambers at High Background Rates and in Magnetic Fields

    The ATLAS muon spectrometer uses drift-tube chambers for precision tracking. The performance of these chambers in the presence of magnetic fields and high radiation fluxes is studied in this article using test-beam data recorded at the Gamma Irradiation Facility at CERN. The measurements are compared to detailed predictions provided by the Garfield drift-chamber simulation programme.

    Development of Muon Drift-Tube Detectors for High-Luminosity Upgrades of the Large Hadron Collider

    The muon detectors of the experiments at the Large Hadron Collider (LHC) have to cope with unprecedentedly high neutron and gamma ray background rates. In the forward regions of the muon spectrometer of the ATLAS detector, for instance, counting rates of 1.7 kHz/cm² are reached at the LHC design luminosity. For high-luminosity upgrades of the LHC, up to 10 times higher background rates are expected, which require replacement of the muon chambers in the critical detector regions. Tests at the CERN Gamma Irradiation Facility showed that drift-tube detectors with 15 mm diameter aluminum tubes operated with Ar:CO2 (93:7) gas at 3 bar and a maximum drift time of about 200 ns provide efficient and high-resolution muon tracking up to the highest expected rates. For 15 mm tube diameter, space-charge effects deteriorating the spatial resolution at high rates are strongly suppressed. The sense wires have to be positioned in the chamber with an accuracy of better than 50 μm in order to achieve the desired chamber spatial resolution of 50 μm up to the highest rates. We report on the design, construction and test of prototype detectors which fulfill these requirements.
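Why the short maximum drift time matters can be seen with a back-of-envelope occupancy estimate: a tube is effectively busy for one maximum drift time per hit, so occupancy is roughly hit rate times drift time. The tube length below is an assumed illustrative value, not a number from the abstract, and the flux-to-rate conversion is deliberately crude.

```python
# Back-of-envelope drift-tube occupancy (illustrative only; the 1 m tube length
# and the flat flux-to-rate conversion are simplifying assumptions, not ATLAS numbers).

def tube_occupancy(flux_khz_cm2, diameter_cm, length_cm, max_drift_ns):
    """Fraction of time a tube is busy: hit rate times maximum drift time."""
    area_cm2 = diameter_cm * length_cm        # projected tube area exposed to the flux
    rate_hz = flux_khz_cm2 * 1e3 * area_cm2   # hits per second in the tube
    return rate_hz * max_drift_ns * 1e-9      # dimensionless occupancy

# From the abstract: 1.7 kHz/cm^2 at design luminosity, up to 10x for HL-LHC,
# 15 mm tube diameter, ~200 ns maximum drift time.
for label, flux in [("design", 1.7), ("HL (10x)", 17.0)]:
    occ = tube_occupancy(flux, diameter_cm=1.5, length_cm=100.0, max_drift_ns=200.0)
    print(f"{label:8s} occupancy ~ {occ:.1%}")  # a few percent -> tens of percent
```

Under these assumptions the occupancy grows from the few-percent level at design luminosity to order 50% at ten times the rate, which is why keeping the drift time short (small tube diameter, fast gas) is the key handle for high-rate operation.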

    The CMS monitoring infrastructure and applications

    The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users' and centrally managed production requests. The performance and status of all subsystems must be constantly monitored to guarantee the efficient operation of the whole infrastructure. Moreover, key metrics need to be tracked to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources and is based on scalable and open-source solutions tailored to satisfy the experiment's monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure. Comment: 14 pages, 5 figures, submitted to Computing and Software for Big Science, see https://www.springer.com/journal/4178
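The historical-monitoring side of such a pipeline boils down to aggregating raw metric samples into fixed time buckets. The sketch below is a generic illustration of that step, not the CMS software stack; the function name, sample layout, and bucket size are invented.

```python
# Minimal sketch of time-bucketed metric aggregation, the core operation behind
# "historical" monitoring views (generic illustration, not CMS code).
from collections import defaultdict

def bucket_metrics(samples, bucket_s=60):
    """Average raw (timestamp, subsystem, value) samples into fixed time buckets,
    keyed by (subsystem, bucket index)."""
    buckets = defaultdict(list)
    for ts, subsystem, value in samples:
        buckets[(subsystem, ts // bucket_s)].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

samples = [(5, "transfers", 120.0), (30, "transfers", 80.0), (70, "transfers", 200.0)]
print(bucket_metrics(samples))  # {('transfers', 0): 100.0, ('transfers', 1): 200.0}
```

Real monitoring stacks do the same reduction continuously (often in a time-series database), then serve the per-bucket series to dashboards while the latest raw samples feed the real-time view.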

    Contribution to the development of the LHCb acquisition electronics and study of polarized radiative Λb decays

    LHCb is one of the four main experiments that will take place at the future Large Hadron Collider at CERN. The data taking is foreseen to start in 2007. The LHCb detector is a forward single-arm spectrometer dedicated to precision measurements of CP violation and rare decays in the b-quark sector. The goal is to over-constrain the Standard Model (SM) and – hopefully – to exhibit inconsistencies which would signal new physics beyond it. Building an experiment as large as LHCb is a big challenge, and many contributions are needed. The Lausanne institute is responsible for the development of a common "off-detector" readout board (TELL1), which provides the interface to the copper and optical links used for the detector readout and, after intensive processing, outputs the data to the data acquisition system. It performs event synchronization, pedestal calculation and subtraction, common-mode subtraction and monitoring, and zero suppression. The TELL1 board will be used by the majority of the LHCb subdetectors. We present here a contribution to the R&D necessary for the realization of the final board. In particular, the feasibility of a mixed architecture using DSP and FPGA technologies has been studied. We show that the performance of this architecture satisfies the LHCb electronics requirements at the time of the study (2002). Within the rich LHCb physics program, b → sγ transitions represent an interesting sector in which to look for evidence of physics beyond the SM. Even if the measured decay rate is so far in good agreement with the SM prediction, new physics may still be hidden in more subtle observables. One of the most promising is the polarization of the emitted photon, which is predicted to be mainly left-handed in the SM, whereas right-handed components are present in a variety of new-physics models. The photon polarization can be tested at LHCb by exploiting decays of polarized b baryons.
If the initial baryon is polarized, asymmetries appear in the final-state angular distributions, which can be used to probe the chirality of the effective Hamiltonian and possibly to unveil new sources of CP violation. We present a phenomenological approach to the study of radiative decays of the type Λb → Λ(X)γ, where Λ(X) can be any Λ baryon of mass X. Calculations of the angular distributions are carried out employing the helicity formalism, for decays which involve Λ baryons of spin 1/2 and 3/2. Finally, detailed simulation studies of these channels in the LHCb environment allow us to assess the LHCb sensitivity to the photon polarization in b → s transitions.
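The pedestal-subtraction, common-mode-subtraction, and zero-suppression chain attributed to the TELL1 board in this abstract can be sketched in plain Python. This is an illustration of the generic algorithm, not the board's FPGA/DSP firmware; channel counts, the median common-mode estimator, and the threshold are all assumptions.

```python
# Illustrative sketch of an off-detector processing chain (pedestal subtraction,
# common-mode subtraction, zero suppression) in plain Python, not TELL1 firmware.

def process_event(raw, pedestals, zs_threshold=10):
    """Reduce one event of per-channel ADC counts to a sparse hit list."""
    # 1. pedestal subtraction: remove each channel's known baseline
    ped_sub = [adc - ped for adc, ped in zip(raw, pedestals)]
    # 2. common-mode subtraction: remove the event-wide coherent shift,
    #    estimated here as the median of the pedestal-subtracted values
    common_mode = sorted(ped_sub)[len(ped_sub) // 2]
    corrected = [v - common_mode for v in ped_sub]
    # 3. zero suppression: keep only channels above threshold
    return [(ch, v) for ch, v in enumerate(corrected) if v > zs_threshold]

raw = [105, 98, 160, 101, 99]          # one hit on channel 2, small noise elsewhere
pedestals = [100, 100, 100, 100, 100]  # per-channel baselines from calibration
print(process_event(raw, pedestals))   # only the genuine hit survives: [(2, 59)]
```

Zero suppression is what makes the output bandwidth manageable: the data volume shipped downstream scales with the number of hits, not the number of channels.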

    Photon polarization from helicity suppression in radiative decays of polarized Lambda_b to spin-3/2 baryons

    We give a general parameterization of the Lambda_b --> Lambda(1520) gamma decay amplitude, applicable to any strange isosinglet spin-3/2 baryon, and calculate the branching fraction and helicity amplitudes. Large-energy form-factor relations are worked out, and it is shown that the helicity-3/2 amplitudes vanish at lowest order in soft-collinear effective theory (SCET). The suppression can be tested experimentally at the LHC and elsewhere, thus providing a benchmark for SCET. We apply the results to assess the experimental reach for a possible wrong-helicity b --> s gamma dipole coupling in Lambda_b --> Lambda(1520) gamma --> p K gamma decays. Furthermore, we revisit Lambda_b polarization at hadron colliders and update the prediction from heavy-quark effective theory. Opportunities associated with b --> d gamma afforded by high-statistics Lambda_b samples are briefly discussed in the general context of CP and flavour violation. Comment: elsart, 15 pages, 1 figure; final version as published in Phys. Lett.
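The polarization-induced asymmetries discussed in the two abstracts above can be made concrete with a toy Monte Carlo. Assuming a generic one-dimensional distribution W(cos θ) ∝ 1 + a·cos θ, where a stands for the product of an asymmetry parameter and the initial-baryon polarization (a simplification of the full helicity-formalism result, not a formula from either paper), the forward-backward asymmetry of the events equals a/2:

```python
# Toy Monte Carlo: extract a polarization-induced forward-backward asymmetry
# from events drawn from W(cos theta) proportional to 1 + a*cos(theta).
# The distribution and the value a = 0.6 are illustrative assumptions.
import random

def forward_backward_asymmetry(cos_thetas):
    """(N_forward - N_backward) / N_total; equals a/2 for W = 1 + a*cos(theta)."""
    fwd = sum(1 for c in cos_thetas if c > 0)
    bwd = len(cos_thetas) - fwd
    return (fwd - bwd) / (fwd + bwd)

def sample_cos_theta(a, rng):
    """Draw cos(theta) from W(c) = 1 + a*c on [-1, 1] by accept-reject."""
    while True:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 1.0 + abs(a)) < 1.0 + a * c:
            return c

rng = random.Random(42)          # fixed seed for reproducibility
a = 0.6                          # assumed asymmetry-times-polarization
data = [sample_cos_theta(a, rng) for _ in range(20000)]
print(round(forward_backward_asymmetry(data), 2))  # close to a/2 = 0.3
```

The analytic check is one line of calculus: integrating 1 + a·c over c > 0 and c < 0 gives (1 + a/2) and (1 − a/2) respectively, so the normalized difference is a/2. Sensitivity studies like those cited in the abstracts amount to asking how precisely such an asymmetry can be measured with a given event sample.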