7 research outputs found

    Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service

    The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, such as a beam dump or a magnet quench. The main purpose of this framework is to provide fast and reliable diagnostics to the equipment experts and operation crews so they can decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also include External Post Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of both storage capacity and I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, as well as the proposed strategy to ensure an entirely smooth transition for the current Post Mortem users.
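
    At its core, the service described above is an event-keyed store of transient recordings with a simple "can operation continue?" verdict on top. As a rough illustration only (the actual Post Mortem data model and API are not given in the abstract), the sketch below shows how such an event record and a toy safety check might look; all class names, signal names and limits are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical, simplified model of a Post Mortem event record:
# one accelerator event (e.g. a beam dump) with transient buffers
# collected from several equipment systems.
@dataclass
class TransientRecording:
    system: str                 # e.g. "QPS", "BLM" (illustrative names)
    timestamps_ns: List[int]    # sample times around the event
    values: List[float]         # recorded signal values

@dataclass
class PostMortemEvent:
    event_id: str
    trigger: str                # e.g. "beam_dump", "magnet_quench"
    recordings: Dict[str, TransientRecording] = field(default_factory=dict)

def operation_can_continue(event: PostMortemEvent,
                           limits: Dict[str, float]) -> bool:
    """Toy analysis: operation may continue only if every system's
    recorded maximum stays below its configured limit."""
    for name, rec in event.recordings.items():
        limit = limits.get(name)
        if limit is not None and rec.values and max(rec.values) > limit:
            return False
    return True

# Example verdict for a hypothetical beam-dump event.
event = PostMortemEvent(
    event_id="evt-001",
    trigger="beam_dump",
    recordings={"BLM": TransientRecording("BLM", [0, 1, 2], [0.1, 0.4, 0.2])},
)
print(operation_can_continue(event, {"BLM": 1.0}))  # True
```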

    Second Generation LHC Analysis Framework: Workload-based and User-oriented Solution

    Consolidation and upgrades of accelerator equipment during the first long LHC shutdown period enabled particle collisions at energy levels almost twice as high as in the first operational phase. Consequently, the software infrastructure providing vital information for machine operation and its optimisation needs to be updated to keep up with the challenges imposed by the increasing amount of collected data and the complexity of analysis. Current tools, designed more than a decade ago, have proven their reliability by significantly outperforming the initially provisioned workloads, but are unable to scale efficiently to satisfy the growing needs of operators and hardware experts. In this paper we present our progress towards the development of a new workload-driven solution for LHC transient data analysis, based on identified user requirements. An initial setup and study of modern data storage and processing engines appropriate for accelerator data analysis was conducted. First simulations of the proposed novel partitioning and replication approach, targeting a highly efficient service for heterogeneous analysis requests, were designed and performed.
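
    The abstract refers to a workload-driven partitioning and replication approach but does not spell it out. Purely as an illustrative sketch, the snippet below partitions recordings by system and time bucket and assigns replicas to storage nodes by hashing the partition key; the bucket size, replication factor and node names are assumptions, not the authors' actual design.

```python
import hashlib

def partition_key(system: str, timestamp_s: int, bucket_s: int = 3600) -> str:
    """Group recordings of one system into hourly buckets (illustrative)."""
    return f"{system}:{timestamp_s // bucket_s}"

def assign_replicas(key: str, nodes: list, replication_factor: int = 2) -> list:
    """Pick `replication_factor` storage nodes for a partition by hashing
    the partition key onto the node list (a simple stand-in for whatever
    replication strategy the paper actually proposes)."""
    start = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication_factor)]

# Example: where would a recording from system "QPS" at t = 1_400_000_000 s land?
nodes = ["node-a", "node-b", "node-c", "node-d"]
key = partition_key("QPS", 1_400_000_000)
print(key, "->", assign_replicas(key, nodes))
```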

    First Operational Experience of DSL Based Analysis Modules for LHC Hardware Commissioning

    The Large Hadron Collider powering systems were tested and commissioned before the start of the second run of physics production. For the first time, this commissioning used analysis modules defined directly by system experts in an English-like domain-specific language. In these modules, the experts defined assertions that the data generated by the powering tests must satisfy in order for a test to pass. These modules covered 4 tests executed on more than 1000 systems. They allowed experts to identify issues that had been hidden by the repetitive manual analysis performed during previous campaigns. This paper describes this first operational experience with the analysis modules, as well as the replay of all the previous campaigns with them. It also presents a critical view of these modules, identifying their drawbacks and the next steps to improve the system.
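
    The actual modules are written in an English-like DSL that the abstract does not reproduce. Purely for illustration, the sketch below expresses the same idea, expert-authored assertions that powering-test data must satisfy for the test to pass, in Python; the signal names, thresholds and helper functions are invented.

```python
# Illustrative only: expert-style assertions over powering-test data.
# Signal names, limits and this API are hypothetical; the real analysis
# modules are written in an English-like DSL, not Python.

def max_value_stays_below(test_data: dict, signal: str, limit: float) -> bool:
    """Assertion: every sample of `signal` recorded during the test
    must stay below `limit`."""
    return all(v < limit for v in test_data.get(signal, []))

def analysis_module(test_data: dict) -> bool:
    """A powering test passes only if all expert assertions hold."""
    checks = [
        max_value_stays_below(test_data, "U_RES", 0.1),    # residual voltage
        max_value_stays_below(test_data, "U_DIODE", 6.0),  # diode voltage
    ]
    return all(checks)

# Example run on fake data for one powering test.
print(analysis_module({"U_RES": [0.01, 0.02], "U_DIODE": [2.3, 4.1]}))  # True
```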

    Towards a Second Generation Data Analysis Framework for LHC Transient Data Recording

    During the last two years, CERN's Large Hadron Collider (LHC) and most of its equipment systems were upgraded to collide particles at an energy level twice as high as during the first operational period between 2010 and 2013. The system upgrades and the increased machine energy pose new challenges for the analysis of transient data recordings, which has to be both dependable and fast. With the LHC having operated for many years already, statistical and trend analysis across the collected data sets is a growing requirement, highlighting several constraints and limitations imposed by the current software and data storage ecosystem. Based on several analysis use cases, this paper highlights the most important aspects and ideas towards an improved, second-generation data analysis framework to serve a large variety of equipment experts and operation crews in their daily work.
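
    One of the growing requirements named above is statistical and trend analysis across many years of recordings. As a minimal sketch under assumed inputs (not the framework's actual interface), the snippet below fits a least-squares trend to a per-event summary quantity extracted from transient recordings.

```python
# Minimal illustration of a trend analysis over per-event summary values
# (e.g. one quantity extracted from each transient recording). The data
# and names are invented.
from statistics import mean

def linear_trend(xs, ys):
    """Least-squares slope and intercept of y over x."""
    x_bar, y_bar = mean(xs), mean(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

# Example: is an extracted quantity drifting upwards across events?
event_index = [1.0, 2.0, 3.0, 4.0, 5.0]
quantity    = [0.90, 0.93, 0.95, 0.99, 1.02]
slope, intercept = linear_trend(event_index, quantity)
print(f"trend: {slope:+.3f} per event")
```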

    International Nosocomial Infection Control Consortium (INICC) report, data summary of 43 countries for 2007-2012. Device-associated module

    We report the results of an International Nosocomial Infection Control Consortium (INICC) surveillance study from January 2007-December 2012 in 503 intensive care units (ICUs) in Latin America, Asia, Africa, and Europe. During the 6-year study using the Centers for Disease Control and Prevention's (CDC) U.S. National Healthcare Safety Network (NHSN) definitions for device-associated health care–associated infection (DA-HAI), we collected prospective data from 605,310 patients hospitalized in the INICC's ICUs for an aggregate of 3,338,396 days. Although device utilization in the INICC's ICUs was similar to that reported from ICUs in the U.S. in the CDC's NHSN, rates of device-associated nosocomial infection were higher in the ICUs of the INICC hospitals: the pooled rate of central line–associated bloodstream infection in the INICC's ICUs, 4.9 per 1,000 central line days, is nearly 5-fold higher than the 0.9 per 1,000 central line days reported from comparable U.S. ICUs. The overall rate of ventilator-associated pneumonia was also higher (16.8 vs 1.1 per 1,000 ventilator days), as was the rate of catheter-associated urinary tract infection (5.5 vs 1.3 per 1,000 catheter days). Frequencies of resistance of Pseudomonas isolates to amikacin (42.8% vs 10%) and imipenem (42.4% vs 26.1%) and of Klebsiella pneumoniae isolates to ceftazidime (71.2% vs 28.8%) and imipenem (19.6% vs 12.8%) were also higher in the INICC's ICUs compared with the ICUs of the CDC's NHSN.
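
    The device-associated rates quoted above follow the conventional definition of infections per 1,000 device-days, and the comparisons are simple rate ratios. As a worked illustration of that arithmetic only, using the published rates from the abstract and hypothetical helper functions:

```python
# Conventional device-associated infection rate arithmetic.
# The rates fed in below are the ones quoted in the abstract; the
# function names are just illustrative definitions.

def rate_per_1000(infections: int, device_days: int) -> float:
    """Device-associated infection rate per 1,000 device-days."""
    return 1000.0 * infections / device_days

def rate_ratio(rate_a: float, rate_b: float) -> float:
    """How many times higher rate_a is compared with rate_b."""
    return rate_a / rate_b

# Comparisons quoted in the abstract (INICC vs comparable U.S. NHSN ICUs).
print(rate_ratio(4.9, 0.9))    # central line-associated bloodstream infection
print(rate_ratio(16.8, 1.1))   # ventilator-associated pneumonia
print(rate_ratio(5.5, 1.3))    # catheter-associated urinary tract infection
```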