
    A scalable monitoring for the CMS Filter Farm based on elasticsearch

    A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way down to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
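
    As a rough illustration of the indexing path described above, the Python sketch below pushes one small JSON monitoring document into a local Elasticsearch node over its standard REST API. The index name, field names, host and values are hypothetical placeholders rather than the actual CMS document schema or cluster layout, and the single `_doc` endpoint assumes a recent Elasticsearch version.

        import json
        import urllib.request

        # Hypothetical monitoring document; the real CMS documents differ.
        doc = {
            "host": "fu-c2e34-12-01",            # illustrative filter-unit host name
            "run": 123456,
            "lumisection": 42,
            "hlt_rate_hz": 870.5,
            "timestamp": "2015-06-01T12:00:00Z",
        }

        # Index the document into a (leaf) Elasticsearch node via the REST API.
        req = urllib.request.Request(
            url="http://localhost:9200/hlt-monitoring/_doc",
            data=json.dumps(doc).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read().decode())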

    A New Event Builder for CMS Run II

    The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for reliable transport between custom electronics and commercial computing hardware. A 56 Gbps InfiniBand FDR Clos network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
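
    The event-building step itself can be pictured independently of the transport layer: fragments from all readout sources are keyed by event number, and an event is complete once every source has contributed. The Python sketch below shows only this bookkeeping logic under simplified, assumed conditions (a handful of sources, in-memory tuples); it does not model the FPGA TCP/IP or InfiniBand transport.

        from collections import defaultdict

        N_SOURCES = 4  # illustrative; the real system has many more readout units

        def build_events(fragment_stream):
            """fragment_stream yields (event_id, source_id, payload) tuples."""
            pending = defaultdict(dict)
            for event_id, source_id, payload in fragment_stream:
                pending[event_id][source_id] = payload
                if len(pending[event_id]) == N_SOURCES:      # all fragments arrived
                    fragments = pending.pop(event_id)
                    yield event_id, b"".join(fragments[s] for s in sorted(fragments))

        # Two events whose fragments arrive interleaved from the four sources.
        demo = [(eid, s, bytes([s])) for s in range(N_SOURCES) for eid in (1, 2)]
        for event_id, data in build_events(iter(demo)):
            print(event_id, data.hex())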

    SnapFuzz: High-throughput fuzzing of network applications

    In recent years, fuzz testing has benefited from increased computational power and important algorithmic advances, leading to systems that have discovered many critical bugs and vulnerabilities in production software. Despite these successes, not all applications can be fuzzed efficiently. In particular, stateful applications such as network protocol implementations are constrained by a low fuzzing throughput and the need to develop complex fuzzing harnesses that involve custom time delays and clean-up scripts. In this paper, we present SnapFuzz, a novel fuzzing framework for network applications. SnapFuzz offers a robust architecture that transforms slow asynchronous network communication into fast synchronous communication, snapshots the target at the latest point at which it is safe to do so, speeds up file operations by redirecting them to a custom in-memory filesystem, and removes the need for many fragile modifications, such as configuring time delays or writing clean-up scripts. Using SnapFuzz, we fuzzed five popular networking applications: LightFTP, TinyDTLS, Dnsmasq, LIVE555 and Dcmqrscp. We report impressive performance speedups of 62.8x, 41.2x, 30.6x, 24.6x, and 8.4x, respectively, with significantly simpler fuzzing harnesses in all cases. Due to these advantages, SnapFuzz has also found 12 extra crashes compared to AFLNet in these applications.
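
    The in-memory filesystem idea can be conveyed with a toy sketch: file contents live in a dictionary, so each fuzzing iteration pays no disk I/O and "clean-up" is just clearing the dictionary. This is a conceptual Python illustration with invented names, not SnapFuzz's actual implementation, which works at the binary level.

        import io

        class MemFile(io.BytesIO):
            """Toy in-memory file object; contents are kept in a shared dict."""
            def __init__(self, store, path):
                super().__init__(store.get(path, b""))
                self._store, self._path = store, path

            def close(self):
                self._store[self._path] = self.getvalue()   # persist into the dict
                super().close()

        class InMemoryFS:
            def __init__(self):
                self._store = {}                            # path -> bytes

            def open(self, path):
                return MemFile(self._store, path)

            def reset(self):
                self._store.clear()                         # replaces clean-up scripts

        fs = InMemoryFS()
        with fs.open("/var/lib/target/session.db") as f:    # path is illustrative
            f.write(b"state written by the fuzzed server")
        with fs.open("/var/lib/target/session.db") as f:
            print(f.read())                                 # b'state written by ...'
        fs.reset()                                          # fresh state for the next input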

    FreeDA: deploying incompatible stock dynamic analyses in production via multi-version execution

    Dynamic analyses such as those implemented by compiler sanitizers and Valgrind are effective at finding and diagnosing challenging bugs and security vulnerabilities. However, most analyses cannot be combined on the same program execution, and they incur a high overhead, which typically prevents them from being used in production. This paper addresses the ambitious goal of running multiple incompatible stock dynamic analysis tools concurrently in production, without requiring any modifications to the tools themselves or adding significant runtime overhead to the deployed system. This is accomplished using multi-version execution, in which the dynamic analyses are run concurrently with the native version, all on the same program execution. We implement our approach in a system called FreeDA and show that it is applicable to several common scenarios, involving network servers and interactive applications. In particular, we show how incompatible stock dynamic analyses implemented by Clang’s sanitizers and Valgrind can be used to check high-performance servers such as Memcached, Nginx and Redis, and interactive applications such as Git, HTop and OpenSSH.
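
    A minimal way to picture multi-version execution, under assumptions invented here for illustration (this is not FreeDA's code): a leader version performs each external interaction once and logs the result, while follower versions, e.g. sanitizer-instrumented builds, consume the logged result instead of repeating the side effect, so every version observes the same execution.

        import time

        class SyscallLog:
            """Toy leader/follower log; stands in for syscall-level synchronization."""
            def __init__(self):
                self._log = []

            def leader(self, name, action):
                result = action()                 # real side effect happens exactly once
                self._log.append((name, result))
                return result

            def follower(self, index, name):
                logged_name, result = self._log[index]
                assert logged_name == name, "system-call divergence detected"
                return result                     # replayed; no external side effect

        log = SyscallLog()
        t_native = log.leader("clock_gettime", time.time)   # native version runs the call
        t_checked = log.follower(0, "clock_gettime")        # analysed version replays it
        assert t_native == t_checked                        # both versions see one execution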

    A DSL approach to reconcile equivalent divergent program executions

    Multi-Version Execution (MVE) deploys multiple versions of the same program, typically synchronizing their execution at the level of system calls. By default, MVE requires all deployed versions to issue the same sequence of system calls, which limits the types of versions that can be deployed. In this paper, we propose a Domain-Specific Language (DSL) to reconcile expected divergences between different program versions deployed through MVE. We evaluate the DSL by adding it to an existing MVE system (Varan) and testing it via three scenarios: (1) deploying the same program under different configurations, (2) deploying different releases of the same program, and (3) deploying dynamic analyses in parallel with the native execution. We also present an algorithm to automatically extract DSL rules from pairs of system call traces. Our results show that each scenario requires a small number of simple rules (at most 14 rules in each case) and that writing DSL rules can be partially automated.
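
    The flavour of such reconciliation rules can be sketched in Python as follows; the rule kinds and syntax below are invented for illustration and are not Varan's actual DSL. One rule treats `open` and `openat` as equivalent across versions, another tolerates an extra `mmap` issued only by the newer version.

        # Invented rule set: declare which renamed or extra system calls are expected.
        RULES = [
            {"kind": "equivalent", "old": "open", "new": "openat"},
            {"kind": "skip_extra", "version": "new", "syscall": "mmap"},
        ]

        def reconcile(old_trace, new_trace, rules=RULES):
            equiv = {(r["old"], r["new"]) for r in rules if r["kind"] == "equivalent"}
            skippable = {r["syscall"] for r in rules
                         if r["kind"] == "skip_extra" and r["version"] == "new"}
            i = j = 0
            while i < len(old_trace) and j < len(new_trace):
                a, b = old_trace[i], new_trace[j]
                if a == b or (a, b) in equiv:
                    i += 1; j += 1
                elif b in skippable:
                    j += 1                        # tolerated extra call in the new version
                else:
                    return False                  # genuine divergence
            return i == len(old_trace) and all(s in skippable for s in new_trace[j:])

        print(reconcile(["open", "read", "close"],
                        ["openat", "mmap", "read", "close"]))   # True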

    ARGOeu/argo-web-api: Version 1.9.1

    Added: Implement filter recomputations by date and report; implement weights feed resource; show list of problematic endpoints; get user information; list tenant users; remove user; refresh user's token; update user in tenant; tenant create user; provide topology feed parameters; provide postman tests for results.
    Fixes: Minor fixes and cleanups in weights resource; update to latest docusaurus; minor fixes in topology/downtime docs; fix feeds routing issue.
    Changed: Remove name,id namespacing from daily downtimes; migrate argo-web-api docs to docusaurus2.

    SaBRe: load-time selective binary rewriting

    Binary rewriting consists of disassembling a program in order to modify its instructions. However, existing solutions suffer from shortcomings in terms of soundness and performance. We present SaBRe, a load-time system for selective binary rewriting. SaBRe rewrites specific constructs—particularly system calls and functions—when the program is loaded into memory, and intercepts them using plugins through a simple API. We also discuss the theoretical underpinnings of disassembling and rewriting. We developed two backends—for x86_64 and RISC-V—which were used to implement three plugins: a fast system call tracer, a multi-version executor, and a fault injector. Our evaluation shows that SaBRe imposes little overhead, typically below 3%.
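
    The interception model can be illustrated at a very high level, in Python rather than SaBRe's C plugin API and with invented names: only the selected constructs are routed through a plugin handler, which may observe the call, short-circuit it, or let it proceed natively.

        def trace_openat(path, flags):
            # System-call tracer plugin: observe the call, then let it proceed.
            print(f"[tracer] openat {path!r} flags={flags:#x}")
            return ("passthrough", None)

        def inject_enoent(path):
            # Fault-injector plugin: short-circuit the call with a fake error code.
            return ("short-circuit", -2)

        PLUGIN_HOOKS = {"openat": trace_openat, "unlink": inject_enoent}

        def dispatch(syscall, *args, native=lambda *a: 0):   # `native` stands in for the real kernel call
            hook = PLUGIN_HOOKS.get(syscall)
            if hook:
                action, value = hook(*args)
                if action == "short-circuit":
                    return value                  # rewritten call never reaches the kernel
            return native(*args)                  # unselected calls keep the native path

        print(dispatch("openat", "/etc/hosts", 0))   # traced, then runs "natively" -> 0
        print(dispatch("unlink", "/etc/hosts"))      # injected failure -> -2
        print(dispatch("close", 3))                  # not selected: no interception cost -> 0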

    File-Based Data Flow in the CMS Filter Farm

    During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
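
    A minimal sketch of the document-writing side, with hypothetical field names and file naming, is shown below: a service writes a small JSON bookkeeping document for one luminosity section and publishes it atomically (write to a temporary file, then rename), so that watchdog or aggregation processes never read a half-written document.

        import json, os, tempfile

        def write_bookkeeping_doc(directory, run, lumisection, processed, accepted):
            # Field names and the file-naming scheme below are hypothetical.
            doc = {
                "run": run,
                "lumisection": lumisection,
                "events_processed": processed,
                "events_accepted": accepted,
            }
            name = f"run{run}_ls{lumisection:04d}_monitoring.jsn"
            fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
            with os.fdopen(fd, "w") as f:
                json.dump(doc, f)
            os.rename(tmp, os.path.join(directory, name))    # atomic publish of the document

        write_bookkeeping_doc("/tmp", run=123456, lumisection=42,
                              processed=100000, accepted=870)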

    Online data handling and storage at the CMS experiment

    During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small 'documents' using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources, produced at an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices for the three components of the STS: the distributed file system, the merger service and the transfer system.
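
    The merger step can be sketched conceptually as follows; the file layout, naming scheme and JSON field names are assumptions for illustration only, and the real STS merger handles many details (checksums, partial lumisections, retries) that are omitted here.

        import glob, json, shutil

        def merge_lumisection(input_dir, output_stem, run, ls):
            """Merge the per-source files of one (run, lumisection) into a single output."""
            pattern = f"{input_dir}/run{run}_ls{ls:04d}_*"       # one file per HLT source (~62)
            with open(output_stem + ".dat", "wb") as out:
                for data_file in sorted(glob.glob(pattern + ".dat")):
                    with open(data_file, "rb") as src:
                        shutil.copyfileobj(src, out)             # streaming concatenation
            total_events = 0
            for json_file in sorted(glob.glob(pattern + ".jsn")):
                with open(json_file) as f:
                    total_events += json.load(f).get("events", 0)
            with open(output_stem + ".jsn", "w") as f:
                json.dump({"run": run, "ls": ls, "events": total_events}, f)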