    Boosting Performance of Data-intensive Analysis Workflows with Distributed Coordinated Caching

    Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This leads to enormous challenges for storage and network infrastructure, especially when facing the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems into the traditional HEP computing facilities makes this situation more complex. Bringing data close to the computing units is a promising approach to solve throughput limitations and improve the overall performance. We focus on coordinated distributed caching by steering workflows to the hosts most suitable in terms of cached files. This allows optimizing the overall processing efficiency of data-intensive workflows and using the limited cache volume efficiently by reducing the replication of data across distributed caches. We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD cache proxy server infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis-System (TOpAS), which is based on the above-mentioned concept.
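    The abstract does not expose NaviX's actual interface, but the core matching idea can be sketched: rank execution hosts by how much of a workflow's input data they already cache, and steer the workflow there. A minimal Python sketch, with all names hypothetical:

        # Cache-aware workflow-to-host matching (illustrative only; these names
        # are hypothetical and not NaviX's real interface).
        def cache_score(cached: set, inputs: set) -> float:
            """Fraction of the workflow's input files already cached on a host."""
            return len(cached & inputs) / len(inputs) if inputs else 0.0

        def best_host(host_caches: dict, inputs: set) -> str:
            """Steer the workflow to the host with the largest cache overlap,
            minimizing replication of data across the distributed caches."""
            return max(host_caches, key=lambda h: cache_score(host_caches[h], inputs))

        caches = {"wn1": {"/store/a.root"},
                  "wn2": {"/store/a.root", "/store/b.root"}}
        print(best_host(caches, {"/store/a.root", "/store/b.root", "/store/c.root"}))  # -> wn2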

    Federation of compute resources available to the German CMS community

    The German CMS community (DCMS) as a whole can benefit from the various compute resources available to its different institutes. While Grid-enabled and National Analysis Facility resources are usually shared within the community, local and recently enabled opportunistic resources like HPC centers and cloud resources are not. Furthermore, there is no shared submission infrastructure available. Via HTCondor's [1] mechanisms to connect resource pools, several remote pools can be connected transparently for the users and therefore used more efficiently by a multitude of user groups. In addition to statically provisioned resources, dynamically allocated resources from external cloud providers as well as HPC centers can be integrated. However, the usage of such dynamically allocated resources gives rise to additional complexity: constraints on the access policies of the resources, as well as workflow necessities, have to be taken care of. To maintain a well-defined and reliable runtime environment on each resource, virtualization and containerization technologies such as virtual machines, Docker, and Singularity are used.
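    From a user's perspective, such a federation behaves like a single HTCondor pool. A minimal sketch of that single point of entry using the htcondor Python bindings (recent versions assumed; pool address, schedd name, image path, and job details are all hypothetical, and the federation itself is configured on the infrastructure side):

        import htcondor

        # Locate a scheduler advertised in the federated pool's collector.
        collector = htcondor.Collector("cm.dcms-pool.example.org")  # hypothetical host
        schedd_ad = collector.locate(htcondor.DaemonTypes.Schedd,
                                     "schedd.dcms-pool.example.org")  # hypothetical name
        schedd = htcondor.Schedd(schedd_ad)

        # A container image pins the runtime environment on heterogeneous resources.
        job = htcondor.Submit({
            "executable": "/usr/bin/python3",
            "arguments": "analysis.py",
            "MY.SingularityImage": '"/cvmfs/unpacked.cern.ch/some/image"',  # hypothetical
            "output": "job.out",
            "error": "job.err",
        })
        print("submitted cluster", schedd.submit(job).cluster())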

    HEP Analyses on Dynamically Allocated Opportunistic Computing Resources

    The current experiments in high energy physics (HEP) produce data at a huge rate. Processing the measured data requires an enormous amount of computing resources, which will further increase with upgraded and newer experiments. To fulfill the ever-growing demand, the allocation of additional, potentially only temporarily available, non-HEP-dedicated resources is important. These so-called opportunistic resources can not only be used for analyses in general but are also well suited to cover the typical unpredictable peak demands for computing resources. For both use cases, the temporary availability of the opportunistic resources requires dynamic allocation, integration, and management, while their heterogeneity requires optimization to maintain high resource utilization by allocating the best-matching resources. Finding the best-matching resources to allocate is challenging due to the unpredictable submission behavior as well as an ever-changing mixture of workflows with different requirements. Instead of predicting the best-matching resource, we base our decisions on the utilization of resources. For this reason, we are developing the resource manager TARDIS (Transparent Adaptive Resource Dynamic Integration System), which manages and dynamically requests or releases resources. The decision of how many resources TARDIS has to request is implemented in COBalD (the Opportunistic Balancing Daemon) to ensure the further allocation of well-used resources while reducing the number of insufficiently used ones. TARDIS allocates and manages resources from various resource providers such as HPC centers or commercial and public clouds while ensuring dynamic allocation and efficient utilization of these heterogeneous opportunistic resources. Furthermore, TARDIS integrates the allocated opportunistic resources into one overlay batch system, which provides a single point of entry for all users. In order to provide the dedicated HEP software environment, we use virtualization and container technologies. In this contribution, we give an overview of the dynamic integration of opportunistic resources via TARDIS/COBalD at our HEP institute, as well as how user analyses benefit from these additional resources.
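    The utilization-based decision can be sketched as a simple feedback rule (illustrative only, not the real COBalD API; the thresholds and names are assumed): keep requesting more of a resource type while the allocated resources are well used, and release resources whose utilization drops.

        # Utilisation-driven demand feedback in the spirit of COBalD/TARDIS
        # (hypothetical sketch; thresholds and function names are assumptions).
        def new_demand(demand: float, utilisation: float, allocation: float,
                       low: float = 0.5, high: float = 0.9, step: float = 1.0) -> float:
            """Grow demand while allocated resources are busy; shrink it when idle."""
            if allocation >= high and utilisation >= high:
                return demand + step            # resources well used: request more
            if utilisation < low:
                return max(0.0, demand - step)  # resources poorly used: release some
            return demand                       # otherwise: keep the current demand

        print(new_demand(10, utilisation=0.95, allocation=0.95))  # -> 11.0
        print(new_demand(10, utilisation=0.30, allocation=0.80))  # -> 9.0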

    Controlled generation of momentum states in a high-finesse ring cavity

    A Bose-Einstein condensate in a high-finesse ring cavity scatters the photons of a pump beam into counterpropagating cavity modes, populating a two-dimensional momentum lattice. A high-finesse ring cavity with a sub-recoil linewidth allows control of the quantized atomic motion, selecting particular discrete momentum states and generating atom-photon entanglement. The semiclassical and quantum models for 2D collective atomic recoil lasing (CARL) are derived, and the superradiant and good-cavity regimes are discussed. For pump incidence perpendicular to the cavity axis, the momentum lattice is symmetrically populated. Conversely, for oblique pump incidence the motion along the two recoil directions is unbalanced, and different momentum states can be populated on demand by tuning the pump frequency.
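    For orientation (the abstract states the result but not the formula; the following standard recoil bookkeeping and its sign conventions are an assumption): an atom scattering a pump photon of wavevector $\mathbf{k}_p$ into the cavity mode running along $\pm\hat{z}$ with wavevector $\pm k_c\hat{z}$ receives a recoil $\hbar(\mathbf{k}_p \mp k_c\hat{z})$, so repeated scattering events populate the lattice

        \mathbf{p}_{n,m} = n\,\hbar\left(\mathbf{k}_p - k_c\hat{z}\right) + m\,\hbar\left(\mathbf{k}_p + k_c\hat{z}\right), \qquad n, m \in \mathbb{Z}_{\geq 0},

    which is symmetric under $n \leftrightarrow m$ for perpendicular pump incidence ($\mathbf{k}_p \perp \hat{z}$) and unbalanced for oblique incidence.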

    Strategies and performance of the CMS silicon tracker alignment during LHC Run 2

    The strategies for and the performance of the CMS silicon tracking system alignment during the 2015–2018 data-taking period of the LHC are described. The alignment procedures during and after data taking are explained. Alignment scenarios are also derived for use in the simulation of the detector response. Systematic effects, related to intrinsic symmetries of the alignment task or to external constraints, are discussed and illustrated for different scenarios.
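    For context, track-based alignment of this kind determines the alignment parameters by a simultaneous least-squares fit of alignment and track parameters; schematically (standard formulation, not quoted from the paper):

        \chi^2(\mathbf{p}, \mathbf{q}) = \sum_{\text{tracks}} \sum_{\text{hits } i} \left( \frac{m_i - f_i(\mathbf{p}, \mathbf{q})}{\sigma_i} \right)^2,

    where $m_i$ are the measured hit positions, $f_i$ the track-model predictions, $\sigma_i$ the hit uncertainties, $\mathbf{p}$ the alignment parameters, and $\mathbf{q}$ the track parameters. The intrinsic symmetries mentioned in the abstract correspond to detector deformations that leave this $\chi^2$ nearly invariant.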

    Search for a vector-like quark T′ → tH via the diphoton decay mode of the Higgs boson in proton-proton collisions at √s = 13 TeV

    A search for the electroweak production of a vector-like quark T′, decaying to a top quark and a Higgs boson, is presented. The search is based on a sample of proton-proton collision events recorded at the LHC at √s = 13 TeV, corresponding to an integrated luminosity of 138 fb−1. This is the first T′ search that exploits the Higgs boson decay to a pair of photons. For narrow isospin singlet T′ states with masses up to 1.1 TeV, the excellent diphoton invariant mass resolution of 1–2% results in an increased sensitivity compared to previous searches based on the same production mechanism. The electroweak production of a T′ quark with mass up to 960 GeV is excluded at 95% confidence level, assuming a coupling strength κT = 0.25 and a relative decay width Γ/MT′ < 5%.
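    The quoted 1–2% resolution refers to the diphoton invariant mass, which for two photons with energies $E_{\gamma_1}$, $E_{\gamma_2}$ and opening angle $\alpha$ is (standard kinematics, not specific to this paper)

        m_{\gamma\gamma} = \sqrt{2\, E_{\gamma_1} E_{\gamma_2} \left(1 - \cos\alpha\right)},

    so both the photon energy scale and the vertex/angle determination enter the resolution.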

    Combined searches for the production of supersymmetric top quark partners in proton–proton collisions at √s = 13 TeV

    A combination of searches for top squark pair production using proton–proton collision data at a center-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb−1 collected by the CMS experiment, is presented. Signatures with at least 2 jets and large missing transverse momentum are categorized into events with 0, 1, or 2 leptons. New results for regions of parameter space where the kinematical properties of top squark pair production and top quark pair production are very similar are presented. Depending on the model, the combined result excludes a top squark mass up to 1325 GeV for a massless neutralino, and a neutralino mass up to 700 GeV for a top squark mass of 1150 GeV. Top squarks with masses from 145 to 295 GeV, for neutralino masses from 0 to 100 GeV, with a mass difference between the top squark and the neutralino in a window of 30 GeV around the mass of the top quark, are excluded for the first time with CMS data. The results of these searches are also interpreted in an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair. Upper limits are set on the cross section for mediator particle masses of up to 420 GeV.

    A search for new physics in central exclusive production using the missing mass technique with the CMS detector and the CMS-TOTEM precision proton spectrometer

    A generic search is presented for the associated production of a Z boson or a photon with an additional unspecified massive particle X, pp → pp + Z/γ + X, in proton-tagged events from proton–proton collisions at √s = 13 TeV, recorded in 2017 with the CMS detector and the CMS-TOTEM precision proton spectrometer. The missing mass spectrum is analysed in the 600–1600 GeV range and a fit is performed to search for possible deviations from the background expectation. No significant excess in data with respect to the background predictions is observed. Model-independent upper limits on the visible production cross section of pp → pp + Z/γ + X are set.
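    The missing mass follows from four-momentum conservation in the exclusive event (standard definition; the notation here is assumed): with incoming proton four-momenta $p_1$, $p_2$, tagged outgoing protons $p'_1$, $p'_2$, and the reconstructed boson $p_{Z/\gamma}$,

        m_X^2 = \left( p_1 + p_2 - p'_1 - p'_2 - p_{Z/\gamma} \right)^2,

    so measuring the momentum loss of both scattered protons in the spectrometer determines $m_X$ without reconstructing the decay of X.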

    Precision luminosity measurement in proton–proton collisions at √s=13TeV in 2015 and 2016 at CMS

    The measurement of the luminosity recorded by the CMS detector installed at LHC interaction point 5, using proton–proton collisions at √s = 13 TeV in 2015 and 2016, is reported. The absolute luminosity scale is measured for individual bunch crossings using beam-separation scans (the van der Meer method), with a relative precision of 1.3 and 1.0% in 2015 and 2016, respectively. The dominant sources of uncertainty are related to residual differences between the measured beam positions and the ones provided by the operational settings of the LHC magnets, the factorizability of the proton bunch spatial density functions in the coordinates transverse to the beam direction, and the modeling of the effect of electromagnetic interactions among protons in the colliding bunches. When applying the van der Meer calibration to the entire run periods, the integrated luminosities when CMS was fully operational are 2.27 and 36.3 fb−1 in 2015 and 2016, with a relative precision of 1.6 and 1.2%, respectively. These are among the most precise luminosity measurements at bunched-beam hadron colliders.
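    In the van der Meer method, the interaction rate is recorded while the two beams are scanned across each other in x and y; for a single colliding bunch pair the luminosity then takes the standard form

        \mathcal{L}_b = \frac{f \, N_1 N_2}{2\pi \, \Sigma_x \Sigma_y},

    where $f$ is the LHC revolution frequency, $N_1$ and $N_2$ are the bunch populations, and $\Sigma_x$, $\Sigma_y$ are the effective convolved beam widths extracted from the scan curves. Writing the denominator as a product of the two scan widths relies on the factorizability of the bunch density in x and y, which is precisely the uncertainty source discussed above.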

    Search for high-mass exclusive γγ → WW and γγ → ZZ production in proton-proton collisions at √s = 13 TeV

