    Performance Improvements of EventIndex Distributed System at CERN

    The work presented in this thesis is framed within the EventIndex project of the ATLAS experiment, a large particle detector at the LHC (Large Hadron Collider) at CERN. The objective of the project is to catalogue all particle collisions, or events, recorded by the ATLAS detector, as well as those simulated over the lifetime of the experiment. With this catalogue, the data can be characterized at event granularity, which is essential for end users searching for and locating events. It also enables automated checks along the data recording and reprocessing chain, to verify data correctness and to optimize future processing campaigns. Because of the increase in data rates and total data volume expected for Run 3 (2022-2025) and the HL-LHC (late 2020s), a scalable system is required that also simplifies previous implementations. This thesis presents contributions to the project in the areas of distributed data collection, storage of massive data volumes, and data access. A small amount of information (metadata) per event is indexed at CERN (Tier-0) and, in a distributed fashion, on the grid at all the computing centres that are part of the ATLAS experiment (10 Tier-1 sites and around 70 Tier-2 sites). A new pull model for data collection on the grid is presented, based on an object store as temporary storage, from which data are dynamically selected for ingestion into the final backend. Contributions are also presented to a new single large data store based on Big Data technologies such as HBase/Phoenix, capable of sustaining the required ingestion rates and data volumes, and which simplifies and resolves the limitations of the previous hybrid solutions. Finally, a Spark-based computing framework and set of tools are presented for data access and for analytic workloads that read large amounts of data, such as computing the overlaps between events of different datasets or detecting duplicate events.
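
    As an illustration of the kind of analytic workload mentioned above, the following is a minimal PySpark sketch, not taken from the thesis, that counts duplicate events within one dataset and computes the overlap between two datasets from event-level metadata. The input paths and the column names (runnumber, eventnumber) are assumptions for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical event-level metadata with (runnumber, eventnumber) per record;
    # paths and column names are placeholders for this sketch.
    spark = SparkSession.builder.appName("eventindex-analytics-sketch").getOrCreate()

    ds_a = spark.read.parquet("/data/datasetA")   # placeholder path
    ds_b = spark.read.parquet("/data/datasetB")   # placeholder path

    # Duplicate detection: (run, event) pairs appearing more than once in a dataset.
    duplicates = (ds_a.groupBy("runnumber", "eventnumber")
                      .count()
                      .filter(F.col("count") > 1))

    # Overlap computation: number of distinct events present in both datasets.
    overlap = (ds_a.select("runnumber", "eventnumber").distinct()
                   .join(ds_b.select("runnumber", "eventnumber").distinct(),
                         on=["runnumber", "eventnumber"], how="inner")
                   .count())

    print("duplicate (run, event) pairs:", duplicates.count())
    print("events common to both datasets:", overlap)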

    Software provision process for EGI

    The European Grid Initiative (EGI) provides a sustainable pan-European Grid computing infrastructure for e-Science, based on a network of regional and national Grids. The middleware driving this production infrastructure is constantly adapted to the changing needs of the EGI community by deploying new features and phasing out features and components that are no longer needed. Unlike previous e-Infrastructure projects, EGI does not develop its own middleware solution; instead, it sources the required components from Technology Providers and integrates them into the Unified Middleware Distribution (UMD). In order to guarantee high-quality and reliable operation of the infrastructure, all UMD software must undergo a release process that covers the definition of functional, performance and quality requirements, the verification of those requirements, and testing in production environments. This work is partially funded by EGI-InSPIRE (European Grid Initiative: Integrated Sustainable Pan-European Infrastructure for Researchers in Europe), a project co-funded by the European Commission (contract number INFSO-RI-261323) as an Integrated Infrastructure Initiative within the 7th Framework Programme.

    Distributed Data Collection for the Next Generation ATLAS EventIndex Project

    The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at CERN Tier-0 and at hundreds of grid sites, with a distributed data collection architecture that uses Object Stores to temporarily hold the conveyed information, with references to the objects sent through a Messaging System. The final backend for all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database is used for faster access to a subset of this information. In the future of ATLAS, the event, rather than the file, should be the atomic information unit for metadata, in order to accommodate future data processing and storage technologies. Files will no longer be static quantities: data may be aggregated dynamically, also allowing event-level granularity of processing in heavily parallel computing environments and simplifying the handling of loss and/or extension of data. In this sense the EventIndex may evolve towards a generalized whiteboard, with the ability to build collections and virtual datasets for end users. This paper describes the current distributed data collection architecture of the ATLAS EventIndex project, with details of the Producer, Consumer and Supervisor entities, and of the protocol and the information temporarily stored in the Object Store. It also shows the data flow rates and performance achieved since the new approach, with an Object Store as temporary store, was put in production in July 2017. We review the challenges imposed by the expected increasing rates, which will reach 35 billion new real events per year in Run 3 and 100 billion new real events per year in Run 4. For simulated events the numbers are even higher: 100 billion events per year in Run 3 and 300 billion events per year in Run 4. We also outline the challenges we face in order to accommodate future use cases in the EventIndex.
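
    As a rough illustration of the pull model described above (producers staging indexing payloads in an Object Store, from which a consumer later retrieves them for ingestion into the backend), here is a minimal sketch using an S3-compatible object store through boto3. The endpoint, bucket name, object-key scheme and payload format are assumptions for the example, not the actual EventIndex protocol; the Supervisor and the Messaging System are omitted.

    import json
    import boto3

    # Assumed S3-compatible endpoint and bucket; not the real EventIndex configuration.
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.org")
    BUCKET = "eventindex-staging"

    def produce(dataset, guid, events):
        """Producer side: write one indexing payload per processed file to the object store."""
        key = f"{dataset}/{guid}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(events).encode())
        return key   # in the real system a reference would be sent via the Messaging System

    def consume(key):
        """Consumer side: pull a payload when it is selected for ingestion, then clean up."""
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        events = json.loads(body)
        # ... ingest `events` into the final backend here ...
        s3.delete_object(Bucket=BUCKET, Key=key)
        return events

    # Example flow: a producer stages metadata for one file, a consumer later ingests it.
    ref = produce("data18_13TeV.some_dataset", "FILE-GUID-0001", [{"run": 348885, "event": 1}])
    consume(ref)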

    The ATLAS EventIndex for LHC Run 3

    The ATLAS EventIndex was designed in 2012-2013 to provide a global event catalogue and limited event-level metadata for ATLAS analysis groups and users during the LHC Run 2 (2015-2018). It provides a good and reliable service for the initial use cases (mainly event picking) and for several additional ones, such as production consistency checks, duplicate event detection and measurements of the overlaps of trigger chains and derivation datasets. The LHC Run 3, starting in 2021, will see increased data-taking and simulation production rates, with which the current infrastructure could still cope but may be stretched to its limits by the end of Run 3. This paper describes the implementation of a new core storage service that will be able to provide at least the same functionality as the current one, for increased data ingestion and search rates and with increasing volumes of stored data. It is based on a set of HBase tables, with schemas derived from the current Oracle implementation, coupled to Apache Phoenix for data access; in this way the advantages of a Big Data-based storage system are combined with the possibility of both SQL and NoSQL data access, allowing most of the existing code for metadata integration to be re-used.
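
    To illustrate the kind of SQL access that Phoenix adds on top of the HBase tables, here is a minimal sketch of an event-picking lookup using the phoenixdb Python client. The Phoenix Query Server URL, the table name and the column names are assumptions for the example; the actual EventIndex schema is derived from the Oracle implementation and is not reproduced here.

    import phoenixdb

    # Assumed Phoenix Query Server endpoint; not the actual EventIndex deployment.
    conn = phoenixdb.connect("http://phoenix-queryserver.example.org:8765/", autocommit=True)
    cur = conn.cursor()

    # Hypothetical event-picking query: given run and event numbers, return the GUIDs
    # of the files containing that event. Table and column names are illustrative only.
    cur.execute(
        "SELECT dataset, guid, trigger_info "
        "FROM event_index "
        "WHERE runnumber = ? AND eventnumber = ?",
        (348885, 1234567),
    )
    for dataset, guid, trigger_info in cur.fetchall():
        print(dataset, guid, trigger_info)

    conn.close()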
