
    Real-time Java to support the device property model

    Today's front-end controllers, which are widely used in CERN's controls environment, feature CPUs with high clock frequencies and extensive memory, with specifications comparable to low-end servers or even smartphones. The Java Virtual Machine (JVM) has been running on similar configurations for years, so it is natural to evaluate the behaviour of JVMs in this environment and to characterize whether firm or soft real-time constraints can be addressed efficiently. Using Java at this low level offers the opportunity to refactor CERN's current implementation of the device/property model and to evolve from a monolithic architecture to a promising and scalable separation of concerns, where the front-end may publish raw data that other layers decode and re-publish. This paper first presents an evaluation of Machine Protection control system requirements in terms of real-time constraints and a comparison of the performance of different JVMs. In a second part, we detail the efforts towards a first prototype of a minimal real-time Java supervision layer providing access to the hardware layer.
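
    As a rough illustration of this separation of concerns, the sketch below (plain Java using only the JDK Flow API; all device and property names are hypothetical, and the abstract does not specify the real implementation) shows a front-end publishing raw, undecoded bytes while a separate layer decodes and re-publishes them:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.SubmissionPublisher;

public class DevicePropertySketch {

    /** Raw acquisition as published by the front-end: undecoded bytes only. */
    record RawAcquisition(String device, String property, byte[] payload) {}

    /** Decoded value re-published by a higher, non-real-time layer. */
    record DecodedAcquisition(String device, String property, double value) {}

    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<RawAcquisition> frontEnd = new SubmissionPublisher<>();
             SubmissionPublisher<DecodedAcquisition> decodingLayer = new SubmissionPublisher<>()) {

            // Decoding layer: subscribes to raw data, decodes it, re-publishes.
            frontEnd.consume(raw -> decodingLayer.submit(new DecodedAcquisition(
                    raw.device(), raw.property(),
                    ByteBuffer.wrap(raw.payload()).getDouble())));

            // Final consumer, e.g. a supervision GUI.
            decodingLayer.consume(d ->
                    System.out.printf("%s/%s = %.3f%n", d.device(), d.property(), d.value()));

            // The front-end publishes raw bytes; it never decodes them itself.
            frontEnd.submit(new RawAcquisition("DEV1", "Acquisition",
                    ByteBuffer.allocate(8).putDouble(42.0).array()));

            Thread.sleep(200); // let the asynchronous pipeline drain before closing
        }
    }
}
```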

    Evaluation of an SFP Based Test Loop for a Future Upgrade of the Optical Transmission for CERN’s Beam Interlock System

    The Beam Interlock System (BIS) is the backbone of CERN's machine protection system. The BIS is responsible for relaying the so-called Beam Permit signal, initiating, when needed, the controlled removal of the beam by the LHC Beam Dumping System. The Beam Permit is encoded as a specific frequency traveling over a more than 30 km long network of optical fibers all around the LHC ring. The progressive degradation of the optical fibers and the aging of electronics affect the decoding of the Beam Permit, potentially resulting in an undesired beam dump event and thereby reducing machine availability. Commercial off-the-shelf SFP transceivers were studied with the aim of improving the performance of the optical transmission of the Beam Permit Network. This paper describes the tests carried out in the LHC accelerator to evaluate the selected SFP transceivers and reports the results of the test loop reaction time measurements during operation. The use of SFPs to optically transmit safety-critical signals is being considered as an interesting option not only for the planned major upgrade of the BIS for the HL-LHC era but also for other protection systems.
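
    To illustrate the frequency-encoded Beam Permit, a minimal sketch of the receiving side's decision logic is given below; the nominal frequency and tolerance band are made-up values, not the actual BIS parameters:

```java
/** Illustrative check of a frequency-encoded permit signal; values are invented. */
public final class BeamPermitDecoder {

    private static final double PERMIT_FREQ_HZ = 8_000_000.0; // hypothetical nominal frequency
    private static final double TOLERANCE_HZ   = 10_000.0;    // hypothetical acceptance band

    /** True only if the measured carrier lies inside the permit band. */
    public static boolean beamPermit(double measuredFreqHz) {
        return Math.abs(measuredFreqHz - PERMIT_FREQ_HZ) <= TOLERANCE_HZ;
    }

    public static void main(String[] args) {
        // A degraded fiber attenuates the signal; a failed decode must fail safe.
        System.out.println(beamPermit(8_000_500.0)); // true: permit present
        System.out.println(beamPermit(7_500_000.0)); // false: permit absent -> dump beam
    }
}
```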

    Phase Advance Interlocking Throughout the Whole LHC Cycle

    Each beam of CERN's Large Hadron Collider (LHC) stores 360 MJ at design energy and design intensity. In the unlikely event of an asynchronous beam dump, not all particles would be extracted immediately: they would take one more turn around the ring, oscillating with potentially high amplitudes. If the beam were to hit one of the experimental detectors or the collimators close to the interaction points, severe damage could occur. To minimize the risk in such a scenario, a new interlock system was put in place in 2016. This system guarantees a phase advance of zero degrees (within tolerances) between the extraction kicker and the interaction point. This contribution describes the motivation for this new system as well as its technical implementation and the strategies used to derive tolerances that provide sufficient protection without risking false beam dump triggers.
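
    The interlock condition itself reduces to a simple check, sketched below with an illustrative tolerance (the real tolerances are derived as described in the paper, not assumed here):

```java
/** Sketch of the phase-advance interlock condition; the tolerance is illustrative. */
public final class PhaseAdvanceInterlock {

    /** Hypothetical tolerance around the required zero-degree phase advance. */
    private static final double TOLERANCE_DEG = 5.0;

    /**
     * The interlock requires the betatron phase advance between the extraction
     * kicker and the interaction point to be zero (mod 360) within tolerance.
     */
    public static boolean phaseOk(double phaseAdvanceDeg) {
        double folded = Math.abs(phaseAdvanceDeg % 360.0);   // fold into [0, 360)
        double distanceToZero = Math.min(folded, 360.0 - folded);
        return distanceToZero <= TOLERANCE_DEG;
    }

    public static void main(String[] args) {
        System.out.println(phaseOk(358.0)); // true: within 5 deg of zero
        System.out.println(phaseOk(90.0));  // false: interlock trips
    }
}
```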

    Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service

    The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, such as a beam dump or a magnet quench. The main purpose of this framework is to provide fast and reliable diagnostics to equipment experts and operation crews, so they can decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also include External Post Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of both storage capacity and I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, and the proposed strategy to ensure an entirely smooth transition for current Post Mortem users.
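
    A minimal sketch of what such a storage API might look like is shown below; all type and method names are hypothetical and only illustrate the separated write path (bursty, after events) and read path (expert queries over time windows):

```java
import java.time.Instant;
import java.util.List;

/** Hypothetical storage API for a horizontally scalable Post Mortem backend. */
public interface PostMortemStore {

    /** A transient recording captured by an equipment system after an event. */
    record EventRecording(String system, String device, Instant eventTime, byte[] data) {}

    /** Write path: must sustain bursts of recordings, e.g. after a beam dump. */
    void store(EventRecording recording);

    /** Read path: experts and operators query by system and time window. */
    List<EventRecording> query(String system, Instant from, Instant to);
}
```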

    Renovation and extension of supervision software leveraging reactive streams

    Inspired by the recent development of reactive programming and the ubiquity of the concept of streams in the modern software industry, we assess the relevance of a reactive streams solution in the context of accelerator controls. The promise of reactive streams, to govern the exchange of data across asynchronous boundaries at a rate sustainable for both the sender and the receiver, is alluring to most data-centric processes of CERN's accelerators. Taking advantage of the renovation of one key piece of our supervision layer, the Beam Interlock System GUI, we look at the architecture, design and implementation of a solution based on reactive streams. Additionally, we show how this model allows us to reuse components and contributes naturally to the extension of our tool set. Lastly, we detail what hindered our progress and how our solution can be taken further.
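
    As a flavour of the approach, the sketch below uses Project Reactor (one implementation of the reactive streams standard; the paper does not prescribe a specific library) to show how a GUI-side consumer can bound the rate of incoming status updates; the update source is a placeholder, not the BIS GUI's actual API:

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class SupervisionStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        Flux.interval(Duration.ofMillis(10))      // simulated 100 Hz device updates
            .map(i -> "BIC status update #" + i)
            .onBackpressureDrop()                 // shed load instead of buffering forever
            .sample(Duration.ofMillis(500))       // the GUI only needs ~2 refreshes/second
            .subscribe(System.out::println);

        Thread.sleep(3_000);                      // keep the demo alive briefly
    }
}
```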

    Streaming Pool - managing long-living reactive streams for Java

    A common use case in accelerator control systems is subscribing to many properties on multiple devices and combining the resulting data. A technology standardized in the software industry in recent years is so-called reactive streams. Libraries implementing this standard provide a rich set of operators to manipulate, combine and subscribe to streams of data. However, such streaming libraries usually focus on applications in which streams complete within a limited amount of time or collapse due to errors. In a control system, by contrast, we want streams to live for a very long time (ideally infinitely) and to handle errors gracefully. In this paper we describe an approach that allows two reactive stream styles, ephemeral and long-living, letting developers profit both from the extensive features of reactive stream libraries and from keeping streams alive continuously. Further plans and ideas are also discussed.
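
    The long-living style can be approximated with standard operators, as in the sketch below, where errors do not terminate the stream but trigger a delayed resubscription (Project Reactor is used as an example implementation; deviceValues() is a placeholder, not the actual Streaming Pool API):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class LongLivingStreamSketch {

    /** Placeholder source that fails periodically, as device subscriptions can. */
    static Flux<Double> deviceValues() {
        return Flux.interval(Duration.ofMillis(100))
                   .map(i -> {
                       if (i % 10 == 9) throw new IllegalStateException("transient device error");
                       return Math.sin(i / 10.0);
                   });
    }

    public static void main(String[] args) throws InterruptedException {
        deviceValues()
            // Long-living style: never let an error terminate the stream;
            // resubscribe after a short delay instead.
            .retryWhen(Retry.fixedDelay(Long.MAX_VALUE, Duration.ofMillis(200)))
            .subscribe(v -> System.out.printf("value: %.3f%n", v));

        Thread.sleep(3_000); // keep the demo running briefly
    }
}
```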

    Data acquisition and supervision systems for the HL-LHC quench protection system - part I: the hardware

    Protection of the superconducting circuits of the High Luminosity upgrade of the LHC (HL-LHC) will be ensured by a new generation of quench detection systems and various quench protection systems for the superconducting circuits and magnets. The HL-LHC quench detection systems also serve as high-performance data acquisition systems, providing essential input for the automatic analysis of events such as a superconducting magnet quench. The supervision of the quench protection systems required the development of data acquisition and monitoring systems adapted to the specific characteristics of this equipment. Of particular importance are the protection device supervision units (PDSU), which monitor and interlock the quench heater circuits and the Coupling Loss Induced Quench (CLIQ) systems. All data acquisition and monitoring systems use Ethernet-based communication with precise timing instead of a classic serial fieldbus solution; this approach ensures the required data transfer rates and time synchronisation. The contribution discusses the specific functional requirements, the status of development and the results of extensive system validation testing. It also reports on system integration and the preparation for the first deployment in the upcoming IT-String project.

    Second Generation LHC Analysis Framework: Workload-based and User-oriented Solution

    Consolidation and upgrades of accelerator equipment during the first long LHC shutdown enabled particle collisions at energy levels almost twice as high as in the first operational phase. Consequently, the software infrastructure providing vital information for machine operation and optimisation needs to be updated to keep up with the challenges imposed by the increasing amount of collected data and the complexity of its analysis. The current tools, designed more than a decade ago, have proven their reliability by significantly outperforming the initially provisioned workloads, but they are unable to scale efficiently to satisfy the growing needs of operators and hardware experts. In this paper we present our progress towards a new workload-driven solution for LHC transient data analysis, based on identified user requirements. An initial setup and study of modern data storage and processing engines appropriate for accelerator data analysis was conducted, and first simulations of the proposed novel partitioning and replication approach, targeting a highly efficient service for heterogeneous analysis requests, were designed and performed.
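
    A minimal sketch of what a workload-driven partitioning key could look like is given below; the bucket size and key layout are illustrative assumptions, not the scheme proposed in the paper:

```java
import java.time.Instant;

/** Hypothetical partitioning key: system plus coarse time bucket, so typical
 *  expert queries (one system, one time window) touch few partitions. */
public final class PartitionKey {

    private static final long BUCKET_SECONDS = 24 * 3600; // assumed one-day buckets

    /** Key for a recording from one system at one timestamp. */
    public static String of(String system, Instant timestamp) {
        long bucket = timestamp.getEpochSecond() / BUCKET_SECONDS;
        return system + ":" + bucket;
    }

    public static void main(String[] args) {
        // Two recordings from the same system on the same day share a partition,
        // so a daily trend query for that system is served by one node.
        System.out.println(of("QPS", Instant.parse("2016-05-01T10:15:30Z")));
        System.out.println(of("QPS", Instant.parse("2016-05-01T23:59:59Z")));
    }
}
```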

    Towards a Second Generation Data Analysis Framework for LHC Transient Data Recording

    During the last two years, CERN's Large Hadron Collider (LHC) and most of its equipment systems were upgraded to collide particles at an energy level twice as high as in the first operational period between 2010 and 2013. The system upgrades and the increased machine energy present new challenges for the analysis of transient data recordings, which has to be both dependable and fast. With the LHC having operated for many years already, statistical and trend analysis across the collected data sets is a growing requirement, highlighting several constraints and limitations imposed by the current software and data storage ecosystem. Based on several analysis use cases, this paper highlights the most important aspects and ideas towards an improved, second-generation data analysis framework to serve a large variety of equipment experts and operation crews in their daily work.

    Summary of the Post-Long Shutdown 2 LHC Hardware Commissioning Campaign

    In this contribution we provide a summary of the LHC hardware commissioning campaign following the second CERN Long Shutdown (LS2), initially targeting the nominal LHC energy of 7 TeV. A summary of the test procedures and tools used for testing the LHC superconducting circuits is given, together with statistics on successful test execution. The paper then focuses on the experience and observations during the main dipole training campaign, describing the problems encountered, the related analysis and mitigation measures, ultimately leading to the decision to reduce the energy target to 6.8 TeV. The re-commissioning of two powering sectors, following the identified problems, is discussed in detail. The paper concludes with an outlook on future hardware commissioning campaigns, discussing the lessons learnt and possible strategies moving forward.