
    Boosting Performance of Data-intensive Analysis Workflows with Distributed Coordinated Caching

    Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This poses enormous challenges for storage and network infrastructure, especially in view of the sharply increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems in the traditional HEP computing facilities makes this situation more complex. Bringing data close to the computing units is a promising approach to overcome throughput limitations and improve the overall performance. We focus on coordinated distributed caching: workflows are steered to the hosts that are most suitable in terms of cached files. This optimizes the overall processing efficiency of data-intensive workflows and makes efficient use of the limited cache volume by reducing the replication of data across distributed caches. We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD cache proxy server infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis System (TOpAS), which is based on the above-mentioned concept.
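
    The abstract does not spell out how NaviX scores hosts, so the following is only an illustrative sketch of the general idea behind coordinated distributed caching: rank execution hosts by the fraction of a workflow's input files they already hold in their local cache and steer the job there. All names (CACHE_CONTENTS, score_host, choose_host) are hypothetical and not part of NaviX.

        # Illustrative sketch of cache-aware workflow placement; not NaviX code.
        # In practice the cache catalogue would be reported by the XRootD cache
        # proxies and the score fed into the HTCondor matchmaking.
        CACHE_CONTENTS = {
            "worker01": {"/store/data/run1/a.root", "/store/data/run1/b.root"},
            "worker02": {"/store/data/run2/c.root"},
        }

        def score_host(host, input_files):
            """Fraction of the job's input files already cached on this host."""
            cached = CACHE_CONTENTS.get(host, set())
            if not input_files:
                return 0.0
            return len(cached & input_files) / len(input_files)

        def choose_host(hosts, input_files):
            """Pick the host with the largest cached fraction."""
            return max(hosts, key=lambda h: score_host(h, input_files))

        if __name__ == "__main__":
            job_inputs = {"/store/data/run1/a.root", "/store/data/run1/b.root"}
            print(choose_host(["worker01", "worker02"], job_inputs))  # worker01

    In a real deployment such a score would enter the batch system's matchmaking, for example as a rank expression, which is the coordination step the abstract attributes to NaviX.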

    Federation of compute resources available to the German CMS community

    The German CMS community (DCMS) as a whole can benefit from the various compute resources available to its different institutes. While Grid-enabled and National Analysis Facility resources are usually shared within the community, local and recently enabled opportunistic resources such as HPC centers and cloud resources are not. Furthermore, there is no shared submission infrastructure. Via HTCondor's [1] mechanisms for connecting resource pools, several remote pools can be joined transparently for the users and therefore used more efficiently by a multitude of user groups. In addition to the statically provisioned resources, dynamically allocated resources from external cloud providers as well as HPC centers can also be integrated. However, the use of such dynamically allocated resources introduces additional complexity: constraints on the access policies of the resources as well as workflow requirements have to be taken into account. To maintain a well-defined and reliable runtime environment on each resource, virtualization and containerization technologies such as virtual machines, Docker, and Singularity are used.
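
    As a hedged illustration of what a transparent view over several pools can look like, the sketch below queries the collectors of two pools and counts worker slots per state. It assumes the htcondor Python bindings are available; the collector addresses are placeholders, and the actual DCMS federation described above relies on HTCondor's own pool-connection mechanisms rather than on such ad-hoc queries.

        # Sketch: aggregate slot states across several HTCondor pools.
        # Assumes the htcondor Python bindings; pool addresses are placeholders.
        import htcondor

        POOLS = ["collector.site-a.example:9618", "collector.site-b.example:9618"]

        def summarize(pool_address):
            """Count startd (worker) slots per state in one pool."""
            collector = htcondor.Collector(pool_address)
            ads = collector.query(htcondor.AdTypes.Startd, projection=["Name", "State"])
            counts = {}
            for ad in ads:
                state = ad.get("State", "Unknown")
                counts[state] = counts.get(state, 0) + 1
            return counts

        for pool in POOLS:
            print(pool, summarize(pool))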

    HEP Analyses on Dynamically Allocated Opportunistic Computing Resources

    Current experiments in high energy physics (HEP) produce data at a huge rate. Processing the measured data requires an enormous amount of computing resources, and this demand will grow further with upgraded and future experiments. To meet the ever-growing demand, allocating additional, potentially only temporarily available resources that are not dedicated to HEP is important. These so-called opportunistic resources can not only be used for analyses in general but are also well suited to cover the typically unpredictable peak demands for computing resources. For both use cases, the temporary availability of the opportunistic resources requires dynamic allocation, integration, and management, while their heterogeneity requires optimization to maintain high resource utilization by allocating the best-matching resources. Finding the best-matching resources to allocate is challenging due to the unpredictable submission behavior as well as an ever-changing mixture of workflows with different requirements. Instead of predicting the best-matching resource, we base our decisions on the utilization of the resources. For this purpose, we are developing the resource manager TARDIS (Transparent Adaptive Resource Dynamic Integration System), which manages and dynamically requests or releases resources. The decision of how many resources TARDIS has to request is implemented in COBalD (the Opportunistic Balancing Daemon), which ensures further allocation of well-used resources while reducing the amount of insufficiently used ones. TARDIS allocates and manages resources from various resource providers such as HPC centers or commercial and public clouds, ensuring dynamic allocation and efficient utilization of these heterogeneous opportunistic resources. Furthermore, TARDIS integrates the allocated opportunistic resources into one overlay batch system, which provides a single point of entry for all users. To provide the dedicated HEP software environment, we use virtualization and container technologies. In this contribution, we give an overview of the dynamic integration of opportunistic resources via TARDIS/COBalD at our HEP institute as well as how user analyses benefit from these additional resources.
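
    The utilization-based decision described above can be made concrete with a minimal feedback loop. The sketch below is not the COBalD/TARDIS API, only the idea the abstract states: a pool reports how much of its acquired resources are claimed (allocation) and how well the claimed parts are used (utilization); demand is raised for well-used pools and lowered for poorly used ones. Class and threshold names are illustrative.

        # Minimal sketch of utilization-driven scaling in the spirit of
        # COBalD/TARDIS; not their actual API. Thresholds are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Pool:
            demand: float       # resources currently requested from the provider
            allocation: float   # fraction of acquired resources claimed by jobs (0..1)
            utilization: float  # fraction of claimed resources doing useful work (0..1)

        def adjust_demand(pool, low=0.5, high=0.9, step=1.0):
            """Grow well-used pools, shrink insufficiently used ones."""
            if pool.utilization >= high:
                pool.demand += step                         # busy: request more
            elif pool.allocation <= low:
                pool.demand = max(0.0, pool.demand - step)  # idle: release some
            return pool.demand

        pool = Pool(demand=10.0, allocation=0.95, utilization=0.97)
        print(adjust_demand(pool))  # 11.0 -- a well-used pool is grown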

    Controlled generation of momentum states in a high-finesse ring cavity

    A Bose-Einstein condensate in a high-finesse ring cavity scatters the photons of a pump beam into counterpropagating cavity modes, populating a two-dimensional momentum lattice. A high-finesse ring cavity with a sub-recoil linewidth makes it possible to control the quantized atomic motion, selecting particular discrete momentum states and generating atom-photon entanglement. The semiclassical and quantum models for 2D collective atomic recoil lasing (CARL) are derived, and the superradiant and good-cavity regimes are discussed. For pump incidence perpendicular to the cavity axis, the momentum lattice is symmetrically populated. Conversely, for oblique pump incidence the motion along the two recoil directions is unbalanced, and different momentum states can be populated on demand by tuning the pump frequency. Comment: Submitted to EPJ-ST Special Issue. 10 pages and 3 figures.
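
    One way to see the two-dimensional lattice, based only on photon-recoil momentum conservation and not on the paper's full CARL derivation, is the sketch below: each scattering of a pump photon (wavevector k_p) into one of the two counterpropagating cavity modes (wavevectors ±k_c) transfers a fixed recoil.

        % Sketch of the 2D momentum lattice from photon-recoil conservation
        % (not the paper's derivation); n_+ and n_- count scattering events
        % into the two counterpropagating cavity modes.
        \[
          \mathbf{p}_{n_+,n_-} \;=\; \mathbf{p}_0
            \;+\; n_+\,\hbar\,(\mathbf{k}_p - \mathbf{k}_c)
            \;+\; n_-\,\hbar\,(\mathbf{k}_p + \mathbf{k}_c),
          \qquad n_\pm \in \mathbb{Z}_{\ge 0}.
        \]

    For k_p perpendicular to k_c the two recoil vectors are equivalent by symmetry and the lattice is populated symmetrically, whereas for oblique incidence the corresponding kinetic-energy changes differ, which is why tuning the pump frequency can resonantly select one branch of momentum states.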

    Strategies and performance of the CMS silicon tracker alignment during LHC Run 2

    The strategies for and the performance of the CMS silicon tracking system alignment during the 2015–2018 data-taking period of the LHC are described. The alignment procedures during and after data taking are explained. Alignment scenarios are also derived for use in the simulation of the detector response. Systematic effects, related to intrinsic symmetries of the alignment task or to external constraints, are discussed and illustrated for different scenarios.
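
    The alignment procedures referred to here are, in general terms, track based: the alignment constants are obtained from a large least-squares fit to hit residuals. The formulation below is the standard one and is given only as context, not quoted from the paper.

        % Standard track-based alignment least-squares problem: p are the
        % alignment constants, q_j the parameters of track j, r_ij the hit
        % residuals and V_ij their covariance.
        \[
          \chi^2(\mathbf{p}, \mathbf{q}) \;=\;
          \sum_{j}^{\text{tracks}} \; \sum_{i}^{\text{hits}}
          \mathbf{r}_{ij}^{\,T}(\mathbf{p}, \mathbf{q}_j)\,
          V_{ij}^{-1}\,
          \mathbf{r}_{ij}(\mathbf{p}, \mathbf{q}_j).
        \]

    Coherent detector distortions that leave this χ² nearly unchanged are the intrinsic symmetries of the alignment task mentioned above, and they are the reason external constraints and dedicated validation are needed.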

    Measurement of Energy Correlators inside Jets and Determination of the Strong Coupling

    Energy correlators that describe energy-weighted distances between two or three particles in a hadronic jet are measured using an event sample of √s = 13 TeV proton-proton collisions collected by the CMS experiment and corresponding to an integrated luminosity of 36.3 fb−1. The measured distributions are consistent with the trends in the simulation that reveal two key features of the strong interaction: confinement and asymptotic freedom. By comparing the ratio of the measured three- and two-particle energy correlator distributions with theoretical calculations that resum collinear emissions at approximate next-to-next-to-leading-logarithmic accuracy matched to a next-to-leading-order calculation, the strong coupling is determined at the Z boson mass: α_S(m_Z) = 0.1229 +0.0040/−0.0050, the most precise α_S(m_Z) value obtained using jet substructure observables.
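
    For context, the two- and three-particle correlators referred to above are commonly defined as energy-weighted angular distributions; the schematic definitions below follow a common convention (the paper's exact weighting, e.g. energy versus transverse momentum, may differ).

        % Schematic projected two- and three-point energy correlators inside a
        % jet; x_L is the (largest) pairwise angular distance Delta R.
        \[
          \text{E2C}(x_L) \;\propto\; \sum_{\text{jets}} \sum_{i<j}
            \frac{E_i E_j}{E_{\text{jet}}^2}\,
            \delta\!\left(x_L - \Delta R_{ij}\right),
          \qquad
          \text{E3C}(x_L) \;\propto\; \sum_{\text{jets}} \sum_{i<j<k}
            \frac{E_i E_j E_k}{E_{\text{jet}}^3}\,
            \delta\!\left(x_L - \max\!\left(\Delta R_{ij}, \Delta R_{ik}, \Delta R_{jk}\right)\right).
        \]

    The shape of the ratio E3C/E2C as a function of x_L is what is compared with the resummed calculations to extract α_S(m_Z).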

    Search for a vector-like quark T′ → tH via the diphoton decay mode of the Higgs boson in proton-proton collisions at √s = 13 TeV

    A search for the electroweak production of a vector-like quark T′, decaying to a top quark and a Higgs boson, is presented. The search is based on a sample of proton-proton collision events recorded at the LHC at √s = 13 TeV, corresponding to an integrated luminosity of 138 fb−1. This is the first T′ search that exploits the Higgs boson decay to a pair of photons. For narrow isospin singlet T′ states with masses up to 1.1 TeV, the excellent diphoton invariant mass resolution of 1–2% results in an increased sensitivity compared to previous searches based on the same production mechanism. The electroweak production of a T′ quark with mass up to 960 GeV is excluded at 95% confidence level, assuming a coupling strength κT = 0.25 and a relative decay width Γ/MT′ < 5%.
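
    The quoted 1–2% resolution refers to the reconstructed invariant mass of the photon pair from the Higgs boson candidate; as a reminder of the standard two-body kinematics (not specific to this paper):

        % Diphoton invariant mass for massless photons with opening angle alpha.
        \[
          m_{\gamma\gamma} \;=\; \sqrt{\,2\,E_{\gamma_1}\,E_{\gamma_2}\,
            \bigl(1 - \cos\alpha\bigr)\,},
        \]
        % so the resolution is driven by the photon energy resolution and by
        % how precisely the opening angle (i.e. the production vertex) is known.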

    Search for heavy neutral leptons in final states with electrons, muons, and hadronically decaying tau leptons in proton-proton collisions at √s = 13 TeV

    A search for heavy neutral leptons (HNLs) of Majorana or Dirac type using proton-proton collision data at √s = 13 TeV is presented. The data were collected by the CMS experiment at the CERN LHC and correspond to an integrated luminosity of 138 fb−1. Events with three charged leptons (electrons, muons, and hadronically decaying tau leptons) are selected, corresponding to HNL production in association with a charged lepton and decay of the HNL to two charged leptons and a standard model (SM) neutrino. The search is performed for HNL masses between 10 GeV and 1.5 TeV. No evidence for an HNL signal is observed in data. Upper limits at 95% confidence level are found for the squared coupling strength of the HNL to SM neutrinos, considering exclusive coupling of the HNL to a single SM neutrino generation, for both Majorana and Dirac HNLs. The limits exceed previously achieved experimental constraints for a wide range of HNL masses, and limits on tau neutrino coupling scenarios with HNL masses above the W boson mass are presented for the first time.

    Combined searches for the production of supersymmetric top quark partners in proton–proton collisions at √s = 13 TeV

    A combination of searches for top squark pair production using proton–proton collision data at a center-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb−1 collected by the CMS experiment, is presented. Signatures with at least 2 jets and large missing transverse momentum are categorized into events with 0, 1, or 2 leptons. New results for regions of parameter space where the kinematical properties of top squark pair production and top quark pair production are very similar are presented. Depending on the model, the combined result excludes a top squark mass up to 1325 GeV for a massless neutralino, and a neutralino mass up to 700 GeV for a top squark mass of 1150 GeV. Top squarks with masses from 145 to 295 GeV, for neutralino masses from 0 to 100 GeV, with a mass difference between the top squark and the neutralino in a window of 30 GeV around the mass of the top quark, are excluded for the first time with CMS data. The results of these searches are also interpreted in an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair. Upper limits are set on the cross section for mediator particle masses of up to 420 GeV.

    A search for new physics in central exclusive production using the missing mass technique with the CMS detector and the CMS-TOTEM precision proton spectrometer

    A generic search is presented for the associated production of a Z boson or a photon with an additional unspecified massive particle X, pp → pp + Z/γ + X, in proton-tagged events from proton–proton collisions at √s = 13 TeV, recorded in 2017 with the CMS detector and the CMS-TOTEM precision proton spectrometer. The missing mass spectrum is analysed in the 600–1600 GeV range, and a fit is performed to search for possible deviations from the background expectation. No significant excess in data with respect to the background predictions has been observed. Model-independent upper limits on the visible production cross section of pp → pp + Z/γ + X are set.
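
    The missing mass technique mentioned above amounts to four-momentum balance with the two tagged protons; schematically (generic kinematics, not quoted from the paper):

        % Missing mass of the unspecified particle X: subtract the two tagged
        % outgoing protons and the reconstructed Z/gamma from the initial state.
        \[
          M_X^2 \;=\; \bigl( p_{\text{beam},1} + p_{\text{beam},2}
              - p'_{p,1} - p'_{p,2} - p_{Z/\gamma} \bigr)^2 ,
        \]
        % while the mass of the whole centrally produced system follows from the
        % fractional momentum losses xi_1, xi_2 measured for the tagged protons:
        \[
          M_{\text{central}} \;=\; \sqrt{\xi_1\,\xi_2\,s}\,.
        \]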
