The LHCb Distributed computing model and operations during LHC runs 1, 2 and 3

By Stefan Roiser, Adrian Casajus, Marco Cattaneo, Philippe Charpentier, Peter Clarke, Joel Closier, Marco Corvo, Antonio Falabella, Jose Flix Molina, Joao Victor De Franca Messias Medeiros, Ricardo Graciani Diaz, Christophe Haen, Mikhail Hushchyn, Cinzia Luzzi, Zoltan Mathe, Andrew Mcnab, Raja Nandakumar, Stefano Perazzini, Daniela Remenska, Vladimir Romanovskiy, Michail Salichos, Renato Santana, Mark Slater, Luca Tomassetti, Andrei Tsaregorodsev, Andrey Ustyuzhanin, Vincenzo Vagnoni, Aresh Vedaee and Alexey Zhelezov


LHCb is one of the four main high-energy physics experiments currently in operation at the Large Hadron Collider at CERN, Switzerland. This contribution reports on the experience of the computing team during LHC Run 1, the current preparations for Run 2, and a brief outlook on plans for data taking and their implications for Run 3. It also gives a brief introduction to LHCbDIRAC, the tool that interfaces the experiment's distributed computing resources for its data processing and data management operations. During Run 1, several changes in the online filter farm affected computing operations and the computing model, such as the replication of physics data, the data processing workflows, and the organisation of processing campaigns; the strict MONARC model originally foreseen for LHC distributed computing was relaxed. Furthermore, several changes and simplifications were made to the tools for distributed computing, e.g. for software distribution, the replica catalog service, and the deployment of conditions data. The reasons for, implementations of, and implications of all these changes will be discussed. For Run 2 the running conditions of the LHC will change, which will also affect distributed computing, as the output rate of the high level trigger (HLT) will approximately double. This increased load on computing resources, together with changes in the HLT farm that will allow a final calibration of the data, will have a direct impact on the computing model. In addition, further simplifications in the usage of tools are foreseen for Run 2, such as the consolidation of data access protocols, the usage of a new replica catalog, and several adaptations in the core of the distributed computing framework to serve the additional load. In Run 3 the trigger output rate is foreseen to increase further.
One of the changes to the HLT, to be tested during Run 2 and taken further in Run 3, which allows direct output of physics data without offline reconstruction, will be discussed. LHCb also strives to include cloud and virtualised infrastructures for its distributed computing needs, including running on IaaS infrastructures such as OpenStack or on hypervisor-only systems using Vac, a self-organising cloud infrastructure. The usage of BOINC for volunteer computing is currently being prepared and tested. All these infrastructures, in addition to classical grid computing, can be served by a single service and pilot system. The details of these different approaches will be discussed.

Topics: Multidisciplinary
Publisher: Proceedings of Science (PoS)
Year: 2015