
    ROOT Status and Future Developments

    In this talk we will review the major additions and improvements made to the ROOT system in the last 18 months and present our plans for future developments. The additions and improvements range from modifications to the I/O sub-system to allow users to save and restore objects of classes that have not been instrumented by special ROOT macros, to the addition of a geometry package designed for building, browsing, tracking and visualizing detector geometries. Other improvements include enhancements to the quick-analysis sub-system (TTree::Draw()), the addition of classes that allow inter-file object references (TRef, TRefArray), better support for templated and STL classes, amelioration of the Automatic Script Compiler and the incorporation of new fitting and mathematical tools. Efforts have also been made to increase the modularity of the ROOT system with the introduction of more abstract interfaces and the development of a plug-in manager. In the near future, we intend to continue the development of PROOF and its interfacing with GRID environments. We plan on providing an interface between Geant3, Geant4 and Fluka and the new geometry package. The ROOT GUI classes will finally be available on Windows and we plan to release a GUI inspector and builder. In the last year, ROOT has drawn the endorsement of additional experiments and institutions. It is now officially supported by CERN and used as a key I/O component by the LCG project.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 5 pages, MSWord, PSN MOJT00
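
    As a concrete illustration of two of the additions mentioned above, the following is a minimal ROOT macro sketch using the quick-analysis call TTree::Draw() and a TRef object reference. The file name "events.root", the tree name "Events" and the branch expressions are placeholders, not taken from the talk.

        #include "TFile.h"
        #include "TTree.h"
        #include "TH1.h"
        #include "TRef.h"

        void quick_analysis()
        {
           // Open an existing ROOT file and retrieve a TTree from it
           // (file and tree names are hypothetical).
           TFile *f = TFile::Open("events.root");
           TTree *t = nullptr;
           f->GetObject("Events", t);

           // Quick analysis: histogram a branch expression with a selection cut.
           t->Draw("sqrt(px*px + py*py)", "pz > 1.0");

           // A TRef keeps a lightweight, persistable reference to another object.
           TH1  *h   = t->GetHistogram();        // histogram produced by Draw()
           TRef  ref = h;                        // store a reference to it
           TH1  *back = (TH1 *)ref.GetObject();  // resolve the reference later
           if (back) back->SetTitle("Transverse momentum");
        }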

    The PROOF Distributed Parallel Analysis Framework based on ROOT

    The development of the Parallel ROOT Facility, PROOF, enables a physicist to analyze and understand much larger data sets on a shorter time scale. It makes use of the inherent parallelism in event data and implements an architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabyte-scale data sets today. Being part of the ROOT framework, PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the key principles of the PROOF architecture and the implementation of the system. We will illustrate its features using a simple example and present measurements of the scalability of the system. Finally, we will discuss how PROOF can be interfaced with, and make use of, the different Grid solutions.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 5 pages, LaTeX, 4 eps figures. PSN TULT00
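
    The sketch below illustrates the kind of interactive PROOF usage described in this paper, using the TProof/TChain interface of later ROOT releases; the session URL, file names and the selector are placeholders.

        #include "TChain.h"
        #include "TProof.h"

        void run_proof()
        {
           // Open a PROOF session; "lite://" starts workers on the local
           // machine, a cluster master URL could be given instead.
           TProof::Open("lite://");

           // Chain several files that contain the same TTree
           // (tree and file names are illustrative).
           TChain chain("Events");
           chain.Add("data/run1.root");
           chain.Add("data/run2.root");

           // Attach the chain to the PROOF session; Process() distributes
           // the entries of the chain over the available workers.
           chain.SetProof();
           chain.Process("MySelector.C+");
        }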

    Enhancing data integrity and resilience: Extending the CERN backup system with a tape-based backend

    The CERN IT Department is responsible for ensuring the integrity and security of data stored in the IT Storage Services. General storage backends such as EOS, CERNBox and CephFS are used to store data for a wide range of use cases for all stakeholders at CERN, including experiment project spaces and user home directories. In recent years a backup system, cback, was developed based on the open source backup program restic. cback is currently used to back up about 2.5 billion files and 18 PB stored on disks in the CERN Computing Center. To significantly increase the reliability and security of the backups and to reduce storage costs by limiting the amount of data kept on disk, we have added a tape storage backend to cback. With this addition, cback can reliably be extended to new use cases, such as backing up any locally mountable file system, including EOS, CephFS, NFS or DFS. In this paper we describe the architecture and implementation of cback with the new tape storage backend, as well as a number of developments planned for the near future.

    Software Challenges For HL-LHC Data Analysis

    The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, in order to make best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, which gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations.

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally distributing the work over the available resources in a transparent way.
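
    As a minimal sketch of the workflow described above (file, histogram and function names are illustrative, not taken from the paper): fill a histogram, fit it with one of ROOT's predefined functions, and store the result in a ROOT file in the machine-independent compressed format.

        #include "TFile.h"
        #include "TH1F.h"
        #include "TRandom3.h"

        void write_and_fit()
        {
           // Output file; objects written below are stored in ROOT's
           // machine-independent compressed binary format.
           TFile out("analysis.root", "RECREATE");

           // Fill a one-dimensional histogram with Gaussian random numbers.
           TH1F h("h_gaus", "Gaussian sample;x;entries", 100, -5, 5);
           TRandom3 rng(0);
           for (int i = 0; i < 10000; ++i)
              h.Fill(rng.Gaus(0.0, 1.0));

           // Fit with the predefined "gaus" function ("Q" = quiet mode).
           h.Fit("gaus", "Q");

           // Write the histogram (including the fit result) into the file.
           h.Write();
           out.Close();
        }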

    CERN Tape Archive Run 3 Production Experience: CTA Tier-0 service performance during the start of LHC Run 3 and the various lessons learnt

    The EOS disk + CERN Tape Archive (EOSCTA) service is CERN’s primary long-term storage and archival solution for LHC Run 3 physics data. It entered production at CERN during summer 2020 and has since been serving all the LHC and non-LHC workflows involving archival to, and retrieval from, tape. The CTA system is a complete redesign of the previous tape software, tape cache and tape workflows, which will need to scale to the data rate requirements of the present LHC activity period as well as the next one. At the time of writing, it has already set new records for monthly tape archival volume at CERN and has reached write efficiencies equalling those demonstrated during earlier data challenges.

    The CERN Tape Archive Beyond CERN: An Open Source Data Archival System for HEP

    The CERN Tape Archive (CTA) is the successor to CASTOR and the tape backend to EOS. It was designed to meet the needs of archival storage of data from LHC Run 3 and other experimental programmes at CERN. In the wider Worldwide LHC Computing Grid (WLCG), the tape software landscape is quite heterogeneous, but we are entering a period of consolidation. A number of sites have reevaluated their options and are choosing CTA for their future tape archival storage needs. However, CTA’s original mandate imposed several design constraints which are not necessarily optimal for external sites. In this contribution, we show how CERN has engaged with the wider HEP community and collaborated on improvements which allow CTA to be adopted more widely. We detail community contributions that allow CTA to be used as the tape backend for dCache, that facilitate migrations from other tape systems such as OSM and Enstore, and that improve CTA’s build and packaging to remove CERN-specific dependencies and allow easy distribution to external sites. Finally, we present a roadmap for the community edition of CTA.

    CERN openlab Summer Student Lightning Talks


    openlab summer students' lightning talks 1
