10 research outputs found
CRIC: a unified information system for WLCG and beyond
The Worldwide LHC Computing Grid (WLCG) is an innovative distributed environment which is deployed through the use of grid computing technologies in order to provide computing and storage resources to the LHC experiments for data processing and physics analysis. Following the increasing demands of LHC computing towards the high-luminosity era, the experiments are engaged in an ambitious program to extend the capability of the WLCG distributed environment, for instance by including opportunistically used resources such as High-Performance Computers (HPCs), cloud platforms and volunteer computers. In order to be used effectively by the LHC experiments, all these diverse distributed resources should be described in detail. This implies that easy service discovery of shared physical resources, detailed descriptions of service configurations and experiment-specific data structures are needed. In this contribution, we present a high-level information component of a distributed computing environment, the Computing Resource Information Catalogue (CRIC), which aims to facilitate distributed computing operations for the LHC experiments and to consolidate WLCG topology information. In addition, CRIC performs data validation and provides a coherent view and topology description to the LHC VOs for service discovery and configuration. CRIC represents the evolution of the ATLAS Grid Information System (AGIS) into a common, experiment-independent, high-level information framework. CRIC's mission is to serve not just the ATLAS Collaboration's needs for the description of the distributed environment, but also those of any other virtual organization relying on large-scale distributed infrastructure, as well as the WLCG at the global scope. The contribution describes the CRIC architecture and the implementation of its data models, collectors, user interfaces, and advanced authentication and access control components.
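As a rough illustration of what a consumer of such a topology catalogue does, the sketch below filters active service endpoints out of a CRIC-style JSON description. The JSON structure, field names (state, services, type, endpoint) and values are assumptions made for this example, not the actual CRIC schema or API.

```python
import json

# Illustrative topology snippet in the spirit of a CRIC-style JSON export.
# The structure, field names and values are assumptions for this sketch,
# not the actual CRIC schema or API.
TOPOLOGY_JSON = """
{
  "SITE-A": {"state": "ACTIVE",
             "services": [{"type": "SE", "endpoint": "root://se-a.example.org:1094"},
                          {"type": "CE", "endpoint": "ce-a.example.org:9619"}]},
  "SITE-B": {"state": "DISABLED", "services": []}
}
"""

def active_endpoints(topology, service_type):
    """Yield (site, endpoint) pairs for active sites offering the given service type."""
    for site, info in topology.items():
        if info.get("state") != "ACTIVE":
            continue
        for service in info.get("services", []):
            if service.get("type") == service_type:
                yield site, service["endpoint"]

if __name__ == "__main__":
    topology = json.loads(TOPOLOGY_JSON)
    for site, endpoint in active_endpoints(topology, "SE"):
        print(site, endpoint)   # -> SITE-A root://se-a.example.org:1094
```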
Computing activities at the Spanish Tier-1 and Tier-2s for the ATLAS experiment towards the LHC Run3 and High-Luminosity periods
The ATLAS Spanish Tier-1 and Tier-2s have more than 15 years of experience in the deployment and development of LHC computing components and their successful operation. The sites are already actively participating in, and even coordinating, emerging R&D computing activities and developing the new computing models needed for the Run 3 and High-Luminosity LHC periods. In this contribution, we present details on the integration of new components, such as High-Performance Computing resources to execute ATLAS simulation workflows. The development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations, is also shown. Improvements in data organization, management and access through storage consolidations ("data-lakes"), the use of data caches, and improvements to experiment data catalogs, like the Event Index, are explained in this proceeding. The design and deployment of new analysis facilities using GPUs together with CPUs, and techniques like Machine Learning, will also be presented. Tier-1 and Tier-2 sites are, and will be, contributing to significant R&D in computing, evaluating different models for improving the performance of computing and data storage capacity in the High-Luminosity LHC era.
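As a toy illustration of the data-lake-plus-cache layout mentioned above, the sketch below implements a read-through cache: a file is copied from a consolidated "lake" store on the first access and served locally afterwards. The directory names and layout are invented for the example and do not represent any ATLAS service.

```python
import shutil
from pathlib import Path

# Toy read-through cache in the spirit of the "data-lake plus site cache" layout.
LAKE = Path("/tmp/datalake")     # stands in for the remote consolidated storage
CACHE = Path("/tmp/site-cache")  # stands in for a site-local cache

def open_cached(lfn: str):
    """Return a local file handle, copying from the 'lake' on a cache miss."""
    cached = CACHE / lfn
    if not cached.exists():                      # cache miss
        cached.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(LAKE / lfn, cached)         # fetch once from remote storage
    return cached.open("rb")                     # later reads are served locally

if __name__ == "__main__":
    (LAKE / "run1234").mkdir(parents=True, exist_ok=True)
    (LAKE / "run1234" / "events.dat").write_bytes(b"\x00" * 16)  # fake payload
    with open_cached("run1234/events.dat") as f:
        print(len(f.read()), "bytes read via the cache")
```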
The LHCb Distributed computing model and operations during LHC runs 1, 2 and 3
LHCb is one of the four main high-energy physics experiments currently in operation at the Large Hadron Collider at CERN, Switzerland. This contribution reports on the experience of the computing team during LHC Run 1, the current preparation for Run 2 and a brief outlook on plans for data taking and its implications for Run 3. Furthermore, a brief introduction to LHCbDIRAC, i.e. the tool that interfaces the experiment's distributed computing resources for its data processing and data management operations, is given. During Run 1, several changes in the online filter farms had an impact on computing operations and on the computing model, such as the replication of physics data, the data processing workflows and the organisation of processing campaigns. The strict MONARC model originally foreseen for LHC distributed computing was changed. Furthermore, several changes and simplifications in the tools for distributed computing were made, e.g. for software distribution, the replica catalog service or the deployment of conditions data. The reasons, implementations and implications of all these changes will be discussed. For Run 2, the running conditions of the LHC will change, which will also have an impact on distributed computing, as the output rate of the high-level trigger (HLT) will approximately double. This increased load on computing resources, together with changes in the high-level trigger farm that will allow a final calibration of the data, will have a direct impact on the computing model. In addition, more simplifications in the usage of tools are foreseen for Run 2, such as the consolidation of data access protocols, the usage of a new replica catalog and several adaptations in the core of the distributed computing framework to serve the additional load. In Run 3 the trigger output rate is foreseen to increase. One of the changes in the HLT, to be tested during Run 2 and taken further in Run 3, which allows direct output of physics data without offline reconstruction, will be discussed. LHCb also strives for the inclusion of cloud and virtualised infrastructures for its distributed computing needs, including running on IaaS infrastructures such as OpenStack or on hypervisor-only systems using Vac, a self-organising cloud infrastructure. The usage of BOINC for volunteer computing is currently being prepared and tested. All these infrastructures, in addition to classical grid computing, can be served by a single service and pilot system. The details of these different approaches will be discussed.
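The "single service and pilot system" idea above can be pictured as a pull model: the same pilot logic runs on a grid worker node, a cloud VM or a volunteer host, and pulls from a central task queue whatever payload its resource can satisfy. The sketch below is a minimal, self-contained rendition of that pattern; the task fields and the capability-matching rule are invented for illustration and do not reflect the DIRAC internals.

```python
import queue

# Central task queue; each task declares what it needs from the resource.
task_queue = queue.Queue()
task_queue.put({"id": 1, "needs": {"cvmfs"}, "payload": lambda: print("reconstruction job")})
task_queue.put({"id": 2, "needs": set(),     "payload": lambda: print("MC simulation job")})

def run_pilot(capabilities: set) -> None:
    """Pull tasks and run those whose requirements this resource can satisfy."""
    skipped = []
    while not task_queue.empty():
        task = task_queue.get()
        if task["needs"] <= capabilities:   # simple capability matching
            task["payload"]()
        else:
            skipped.append(task)            # put back tasks we cannot serve
    for task in skipped:
        task_queue.put(task)

if __name__ == "__main__":
    # The same pilot code could run on a grid node, a cloud VM or a volunteer host;
    # only the advertised capabilities differ.
    run_pilot(capabilities={"cvmfs"})
```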
Spanish ATLAS Tier-1 & Tier-2 perspective on computing over the next years
Since the beginning of the WLCG Project, the Spanish ATLAS computing centers have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPUs) over the last 10 years has been around 4-5%. In 2016 an international advisory committee recommended revising our contribution in line with our participation in the ATLAS experiment. In this scenario, we are optimizing the federation of three sites located in Barcelona, Madrid and Valencia, considering that the ATLAS collaboration has developed workflows and tools to flexibly use all the resources available to the collaboration, where the tiered structure is gradually vanishing. In this contribution, we would like to show the evolution and technical updates in the ATLAS Spanish Federated Tier-2 and Tier-1. Some developments we are involved in, like the Event Index project, as well as the use of opportunistic resources, will be useful to reach our goal. We discuss the foreseen/proposed scenario towards a sustainable computing environment for the Spanish ATLAS community in the HL-LHC period.
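A caricature of the "vanishing tiers" point: once the workflow tools can use any site, a broker simply sends jobs wherever free capacity exists, regardless of the Tier-1/Tier-2 label. The sketch below does exactly that for the Spanish federated sites; the slot counts (and the simplistic brokering rule) are invented for the example.

```python
# Toy broker: jobs go to whichever federated site has the most free slots,
# ignoring the tier label entirely. Numbers are illustrative only.
SITES = [
    {"name": "PIC",  "tier": 1, "free_slots": 120},
    {"name": "IFAE", "tier": 2, "free_slots": 80},
    {"name": "UAM",  "tier": 2, "free_slots": 250},
    {"name": "IFIC", "tier": 2, "free_slots": 400},
]

def broker(jobs: int) -> dict:
    """Assign jobs one by one to the least-loaded site; return jobs per site."""
    assignment = {}
    for _ in range(jobs):
        site = max(SITES, key=lambda s: s["free_slots"])
        if site["free_slots"] == 0:
            break                       # federation is full
        site["free_slots"] -= 1
        assignment[site["name"]] = assignment.get(site["name"], 0) + 1
    return assignment

if __name__ == "__main__":
    print(broker(500))   # jobs spread across Tier-1 and Tier-2 sites alike
```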
Spanish ATLAS Tier-1 & Tier-2 perspective on computing over the next years
Since the beginning of the WLCG Project, the Spanish ATLAS computing centres have contributed reliable and stable resources as well as personnel to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPUs) over the last 10 years has been around 5%, even though the Spanish contribution to the ATLAS detector construction as well as the number of authors are both close to 3%. In 2015 an international advisory committee recommended revising our contribution in line with our participation in the ATLAS experiment. In this scenario, we are optimising the federation of three sites located in Barcelona, Madrid and Valencia, taking into account that the ATLAS collaboration has developed workflows and tools to flexibly use all the resources available to the collaboration, where the tiered structure is gradually vanishing. In this contribution, we would like to show the evolution and technical updates in the ATLAS Spanish Federated Tier-2 and Tier-1. Some developments we are involved in, like the Event Index and Event WhiteBoard projects, as well as the use of opportunistic resources, will be useful to reach our goal. We discuss the foreseen/proposed scenario towards a sustainable computing environment for the Spanish ATLAS community in the HL-LHC period.
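For readers unfamiliar with the Event Index mentioned above, its role can be caricatured as a catalogue mapping (run, event) pairs to the dataset and file that hold the event. The sketch below is a deliberately tiny, in-memory stand-in; the schema and values are invented and bear no relation to the real EventIndex implementation.

```python
# Toy event catalogue: look up which dataset/file GUID contains a given event.
# Keys are (run number, event number); all values are invented for the sketch.
EVENT_INDEX = {
    (358031, 1274): {"dataset": "data18_13TeV.physics_Main", "guid": "A1B2-C3D4"},
    (358031, 9981): {"dataset": "data18_13TeV.physics_Main", "guid": "E5F6-0718"},
}

def locate_event(run: int, event: int):
    """Return the dataset/GUID record for an event, or None if it is not indexed."""
    return EVENT_INDEX.get((run, event))

if __name__ == "__main__":
    print(locate_event(358031, 1274))   # -> which dataset/file to fetch for this event
    print(locate_event(358031, 42))     # -> None (event not in the catalogue)
```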
Computing Activities at the Spanish Tier-1 and Tier-2s for the ATLAS experiment towards the LHC Run3 and High Luminosity (HL-LHC) periods
The ATLAS Spanish Tier-1 and Tier-2s have more than 15 years of experience in the deployment and development of LHC computing components and their successful operation. The sites are already actively participating in, and even coordinating, emerging R&D computing activities, developing the new computing models needed in the LHC Run 3 and HL-LHC periods. In this contribution, we present details on the integration of new components, such as HPC computing resources to execute ATLAS simulation workflows; the development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations; and improvements in Data Organization, Management and Access through storage consolidations ("data-lakes"), the use of data caches, and improvements to experiment data catalogues, like the Event Index. The design and deployment of novel analysis facilities using GPUs together with CPUs, and techniques like Machine Learning, will also be presented. ATLAS Tier-1 and Tier-2 sites in Spain are, and will be, contributing to significant R&D in computing, evaluating different models for improving the performance of computing and data storage capacity in the LHC High-Luminosity era.
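One way to picture the GPU-plus-CPU analysis facility mentioned above is as a simple routing decision: ML inference work goes to a GPU queue when a GPU is present, everything else stays on CPU. The sketch below is such a toy router; the task kinds, queue names, and the use of a PyTorch probe for GPU detection are assumptions made for this example, not an ATLAS interface.

```python
def gpu_available() -> bool:
    """Probe for a usable GPU; falls back to False if PyTorch is not installed."""
    try:
        import torch            # optional dependency, used here only as a probe
        return torch.cuda.is_available()
    except ImportError:
        return False

def route(task: dict) -> str:
    """Pick a queue for a task based on its kind and on GPU availability."""
    if task.get("kind") == "ml-inference" and gpu_available():
        return "gpu-queue"
    return "cpu-queue"

if __name__ == "__main__":
    for task in [{"kind": "ml-inference"}, {"kind": "histogramming"}]:
        print(task["kind"], "->", route(task))
```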
Distributed Analysis in CMS
The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the users' experience in their current analysis activities.
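The placement rule underlying this distributed-analysis model is that user jobs are sent to sites hosting a replica of the requested dataset. The sketch below is a minimal illustration of that rule; the replica catalogue contents, dataset names and site names are invented for the example and do not describe the actual CMS tools.

```python
# Toy "jobs go to the data" placement: split an analysis task into jobs and
# spread them round-robin over the sites that host the dataset.
REPLICA_CATALOGUE = {
    "/ZMuMu/Run2010A/AOD": ["T2_ES_IFCA", "T2_IT_Pisa"],
    "/MinBias/Summer10/GEN-SIM": ["T1_ES_PIC"],
}

def submit(dataset: str, n_jobs: int):
    """Return job descriptions assigned to sites hosting a replica of the dataset."""
    sites = REPLICA_CATALOGUE.get(dataset)
    if not sites:
        raise LookupError(f"no replica found for {dataset}")
    return [{"job": i, "dataset": dataset, "site": sites[i % len(sites)]}
            for i in range(n_jobs)]

if __name__ == "__main__":
    for job in submit("/ZMuMu/Run2010A/AOD", 4):
        print(job)
```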