4,019 research outputs found

    Evaluation of standard monitoring tools (including log analysis) for control systems at CERN

    Get PDF
    Project Specification: The goal of this Openlab Summer Student project was to assess the implications and benefits of integrating two standard IT tools, namely Icinga and Splunkstorm, with the existing production setup for monitoring and management of control systems at CERN. Icinga, an open-source monitoring tool based on Nagios, would need to be integrated with MOON, an in-house WinCC OA application currently used for monitoring and managing all the components that make up the control systems. Splunkstorm, an online data analysis and log management application, would be used standalone, so it required no integration with other software, only an understanding of its features and installation procedure. Abstract: The aim of this document is to provide insight into the installation procedures, key features and functionality, and projected implementation effort of the Icinga and Splunkstorm IT tools. The focus is on presenting the most feasible implementation paths that surfaced once both tools were well understood.
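    Icinga, like Nagios, executes small check plugins that report state through their exit code. As a minimal sketch of what a MOON-to-Icinga bridge could look like, the Python plugin below polls a status endpoint and maps the result onto the standard plugin exit codes; the endpoint URL, JSON field name, and thresholds are hypothetical, not part of the actual MOON interface.

```python
#!/usr/bin/env python3
"""Minimal Icinga/Nagios-style check plugin (illustrative sketch)."""
import json
import sys
import urllib.request

# Standard Nagios/Icinga plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

MOON_STATUS_URL = "http://moon.example.cern.ch/api/status"  # hypothetical

def main() -> int:
    try:
        with urllib.request.urlopen(MOON_STATUS_URL, timeout=10) as resp:
            status = json.load(resp)
    except Exception as exc:
        print(f"UNKNOWN - cannot reach MOON endpoint: {exc}")
        return UNKNOWN

    down = status.get("components_down", 0)  # hypothetical field name
    if down == 0:
        print("OK - all control-system components up")
        return OK
    if down < 3:  # hypothetical warning threshold
        print(f"WARNING - {down} components down")
        return WARNING
    print(f"CRITICAL - {down} components down")
    return CRITICAL

if __name__ == "__main__":
    sys.exit(main())
```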

    BDAQ53, a versatile pixel detector readout and test system for the ATLAS and CMS HL-LHC upgrades

    Full text link
    BDAQ53 is a readout system and verification framework for hybrid pixel detector readout chips of the RD53 family. These chips are designed for the upgrade of the inner tracking detectors of the ATLAS and CMS experiments. BDAQ53 is used in applications where versatility and rapid customization are required, such as laboratory testing environments, test beam campaigns, and permanent setups for quality control measurements. It consists of custom and commercial hardware, a Python-based software framework, and FPGA firmware. BDAQ53 is developed as open source, with both software and firmware hosted in a public repository. Comment: 6 pages, 6 figures.
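    The division of labour described above, Python software talking to FPGA firmware, typically rests on a simple register read/write protocol. The sketch below illustrates that general pattern only; the packet format, addresses, and host are invented for illustration and are not the actual BDAQ53 interface.

```python
import socket
import struct

class RegisterInterface:
    """Toy register-access layer: Python test software exchanging
    fixed-size commands with FPGA firmware over TCP. The 9-byte
    command format (1 flag byte, 4-byte address, 4-byte value,
    big-endian) is invented for this sketch."""

    def __init__(self, host: str, port: int):
        self.sock = socket.create_connection((host, port), timeout=5)

    def write(self, addr: int, value: int) -> None:
        # Flag 0x01 = write request.
        self.sock.sendall(struct.pack(">BII", 0x01, addr, value))

    def read(self, addr: int) -> int:
        # Flag 0x00 = read request; firmware replies with 4 bytes.
        self.sock.sendall(struct.pack(">BII", 0x00, addr, 0))
        (value,) = struct.unpack(">I", self.sock.recv(4))
        return value

# Usage with hypothetical addresses:
#   regs = RegisterInterface("fpga.local", 10000)
#   regs.write(0x1000, 1)      # e.g. enable an injection circuit
#   print(regs.read(0x1004))   # e.g. read back a status register
```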

    The Joint COntrols Project Framework

    Full text link
    The Framework is one of the subprojects of the Joint COntrols Project (JCOP), which is a collaboration between the four LHC experiments and CERN. By sharing development, this will reduce the overall effort required to build and maintain the experiment control systems. As such, the main aim of the Framework is to deliver a common set of software components, tools and guidelines that can be used by the four LHC experiments to build their control systems. Although commercial components are used wherever possible, further added value is obtained by customisation for HEP-specific applications. The supervisory layer of the Framework is based on the SCADA tool PVSS, which was selected after a detailed evaluation. This is integrated with the front-end layer via both OPC (OLE for Process Control), an industrial standard, and the CERN-developed DIM (Distributed Information Management System) protocol. Several components are already in production and being used by running fixed-target experiments at CERN as well as for the LHC experiment test beams. The paper will give an overview of the key concepts behind the project as well as the state of the current development and future plans. Comment: Paper from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, PDF. PSN THGT00
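    DIM follows a publish/subscribe model: servers publish named services and clients receive a callback on every update. The self-contained Python sketch below illustrates that concept only; it is not the real DIM (or PVSS/OPC) API, and the service name is invented.

```python
from collections import defaultdict
from typing import Any, Callable

class InfoManager:
    """In-process illustration of a DIM-style publish/subscribe model."""

    def __init__(self) -> None:
        self._values: dict[str, Any] = {}
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, service: str, callback: Callable[[Any], None]) -> None:
        # A supervisory client registers interest in a named service.
        self._subscribers[service].append(callback)

    def publish(self, service: str, value: Any) -> None:
        # A front-end publishes a new value; every subscriber is notified.
        self._values[service] = value
        for callback in self._subscribers[service]:
            callback(value)

dim = InfoManager()
dim.subscribe("HV/channel01/voltage", lambda v: print("update:", v))
dim.publish("HV/channel01/voltage", 1450.0)  # prints: update: 1450.0
```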

    The AliEn system, status and perspectives

    Full text link
    AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components and uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, Word, 10 figures. PSN MOAT00

    Designing Reusable Systems that Can Handle Change - Description-Driven Systems : Revisiting Object-Oriented Principles

    Full text link
    In the age of the Cloud and so-called Big Data, systems must be increasingly flexible, reconfigurable and adaptable to change, in addition to being developed rapidly. As a consequence, designing systems to cater for evolution is becoming critical to their success. To be able to cope with change, systems must have the capability of reuse and the ability to adapt as and when necessary to changes in requirements. Allowing systems to be self-describing is one way to facilitate this. To address the issues of reuse in designing evolvable systems, this paper proposes a so-called description-driven approach to systems design. This approach enables new versions of data structures and processes to be created alongside the old, thereby providing a history of changes to the underlying data models and enabling the capture of provenance data. The efficacy of the description-driven approach is exemplified by the CRISTAL project. CRISTAL is based on description-driven design principles; it uses versions of stored descriptions to define various versions of data which can be stored in diverse forms. This paper discusses the need for capturing a holistic system description when modelling large-scale distributed systems. Comment: 8 pages, 1 figure and 1 table. Accepted by the 9th Int Conf on the Evaluation of Novel Approaches to Software Engineering (ENASE'14), Lisbon, Portugal, April 2014.
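    The core mechanism, keeping new versions of a description alongside the old ones so that existing data remains interpretable, can be made concrete in a few lines of Python. The sketch below uses invented class and field names; it illustrates the idea, not CRISTAL's actual design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Description:
    """A versioned description of a data item's structure."""
    name: str
    version: int
    fields: tuple[str, ...]

class DescriptionStore:
    """New description versions are added alongside old ones, so items
    recorded under an old version stay interpretable and the history
    of changes (provenance) is preserved."""

    def __init__(self) -> None:
        self._versions: dict[str, list[Description]] = {}

    def add(self, desc: Description) -> None:
        self._versions.setdefault(desc.name, []).append(desc)

    def latest(self, name: str) -> Description:
        return max(self._versions[name], key=lambda d: d.version)

    def get(self, name: str, version: int) -> Description:
        return next(d for d in self._versions[name] if d.version == version)

store = DescriptionStore()
store.add(Description("Sensor", 1, ("id", "voltage")))
store.add(Description("Sensor", 2, ("id", "voltage", "temperature")))
assert store.latest("Sensor").version == 2
assert store.get("Sensor", 1).fields == ("id", "voltage")  # old version intact
```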

    From Design to Production Control Through the Integration of Engineering Data Management and Workflow Management Systems

    Full text link
    At a time when many companies are under pressure to reduce "times-to-market", the management of product information from the early stages of design through assembly to manufacture and production has become increasingly important. Similarly, in the construction of high-energy physics devices, the collection of (often evolving) engineering data is central to the subsequent physics analysis. Traditionally, design engineers in industry have employed Engineering Data Management Systems (also called Product Data Management Systems) to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed in industry to coordinate and support the more complex and repeatable work processes of the production environment. Commercial workflow products cannot support the highly dynamic activities found both in the design stages of product development and in rapidly evolving workflow definitions. The integration of Product Data Management with Workflow Management can provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates this integration, proposes a philosophy for the support of product data throughout the full development and production lifecycle, and demonstrates its usefulness in the construction of CMS detectors. Comment: 18 pages, 13 figures.
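    One way to picture the proposed integration is that each production workflow activity is bound to an approved, versioned design from the engineering data management side, so the design history follows the part into production. The short Python sketch below illustrates this coupling; all class, step, and file names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DesignVersion:
    """A released design as an EDM/PDM system might record it."""
    part: str
    version: int
    document: str  # e.g. a reference to a CAD file

@dataclass(frozen=True)
class WorkflowActivity:
    """A production step bound to one specific design version."""
    name: str
    design: DesignVersion

def release_to_production(design: DesignVersion) -> list[WorkflowActivity]:
    # The step names are invented; a real system would derive them
    # from stored process (workflow) definitions.
    steps = ("assemble", "test", "calibrate")
    return [WorkflowActivity(f"{step}:{design.part}", design) for step in steps]

v2 = DesignVersion("tracker-module", 2, "cad/module_v2.step")
for activity in release_to_production(v2):
    print(activity.name, "-> built against design version", activity.design.version)
```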

    BDII Documentation

    Get PDF
    BDII Core contains the elements common to the top-level and site BDIIs. The top-level BDII is an instance that fetches information from the site BDIIs.
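    A BDII publishes its information over LDAP, conventionally on port 2170 under the base "o=grid" (GLUE schema). As a sketch, the query below uses the third-party ldap3 package to ask a top-level BDII for the computing elements it has aggregated from the site BDIIs; the hostname is a placeholder.

```python
# pip install ldap3
from ldap3 import ALL, Connection, Server

# Placeholder host; 2170 and "o=grid" are the conventional BDII
# LDAP port and search base.
server = Server("top-bdii.example.org", port=2170, get_info=ALL)
conn = Connection(server, auto_bind=True)  # BDII allows anonymous reads

# GlueCE is a GLUE 1.x object class describing computing elements.
conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    attributes=["GlueCEUniqueID"],
)
for entry in conn.entries:
    print(entry.GlueCEUniqueID)
```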