
    The HSF Conditions Database Reference Implementation

    Conditions data is the subset of non-event data that is necessary to process event data. It poses a unique set of challenges, namely a heterogeneous structure and high access rates by distributed computing. The HSF Conditions Databases activity is a forum for cross-experiment discussions inviting as broad a participation as possible. It grew out of the HSF Community White Paper work to study conditions data access, where experts from ATLAS, Belle II, and CMS converged on a common language and proposed a schema that represents best practice. Following discussions with a broader community, including NP as well as HEP experiments, a core set of use cases, functionality and behaviour was defined with the aim of describing a core conditions database API. This paper will describe the reference implementation of both the conditions database service and the client, which together encapsulate HSF best-practice conditions data handling. Django was chosen for the service implementation, which uses an ORM rather than direct SQL for all but one method. The simple relational database schema used to organise conditions data is implemented in PostgreSQL. The task of storing the conditions data payloads themselves is outsourced to any POSIX-compliant filesystem, allowing for transparent relocation and redundancy. Crucially, this design provides a clear separation between retrieving the metadata describing which conditions data are needed for a data processing job and retrieving the actual payloads from storage. The service deployment using Helm on OKD will be described, together with scaling tests and operations experience from the sPHENIX experiment, which runs more than 25k cores at BNL.
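
    As a rough illustration of the design described above, the sketch below shows how such conditions metadata might be modelled with the Django ORM. The model and field names (GlobalTag, PayloadType, PayloadIOV, payload_url) are hypothetical, chosen only to mirror the concepts in the abstract, and are not the actual schema of the HSF reference implementation.

    # Hypothetical Django ORM sketch of a conditions-metadata schema.
    # The database stores only metadata; payload files live on a
    # POSIX-compliant filesystem and are referenced by URL/path.
    from django.db import models

    class GlobalTag(models.Model):
        """A named collection of conditions payload intervals."""
        name = models.CharField(max_length=255, unique=True)
        created = models.DateTimeField(auto_now_add=True)

    class PayloadType(models.Model):
        """The kind of conditions data (e.g. a detector calibration)."""
        name = models.CharField(max_length=255, unique=True)

    class PayloadIOV(models.Model):
        """Maps an interval of validity (IOV) to a payload kept outside the DB."""
        global_tag = models.ForeignKey(GlobalTag, on_delete=models.CASCADE)
        payload_type = models.ForeignKey(PayloadType, on_delete=models.CASCADE)
        iov_start = models.BigIntegerField()
        iov_end = models.BigIntegerField()
        payload_url = models.CharField(max_length=1024)  # path on POSIX storage

    # Client-side use (sketch): step 1 resolves metadata, step 2 reads the
    # payload directly from storage; the two steps are deliberately independent.
    def fetch_payload(tag_name, type_name, run):
        iov = (PayloadIOV.objects
               .filter(global_tag__name=tag_name,
                       payload_type__name=type_name,
                       iov_start__lte=run, iov_end__gte=run)
               .latest("iov_start"))
        with open(iov.payload_url, "rb") as f:  # plain POSIX read, no SQL involved
            return f.read()

    In practice the client would resolve the payload URL through the service's HTTP API rather than querying the ORM directly; the sketch collapses that step for brevity, but the two-step pattern (metadata lookup first, payload read second) is the separation the abstract emphasises.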

    Implementation of ACTS into sPHENIX track reconstruction

    sPHENIX is a high energy nuclear physics experiment under construction at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory (BNL). The primary physics goals of sPHENIX are to study the quark-gluon plasma, as well as the partonic structure of protons and nuclei, by measuring jets, their substructure, and heavy flavor hadrons in p+p, p+Au, and Au+Au collisions. sPHENIX will collect approximately 300 PB of data over three run periods, to be analyzed using available computing resources at BNL; thus, performing track reconstruction in a timely manner is a challenge due to the high occupancy of heavy ion collision events. The sPHENIX experiment has recently implemented the A Common Tracking Software (ACTS) track reconstruction toolkit with the goal of reconstructing tracks with high efficiency and within a computational budget of 5 seconds per minimum bias event. This paper reports the performance status of ACTS as the default track fitting tool within sPHENIX, including discussion of the first implementation of a time projection chamber geometry within ACTS.

    Nuclear Physics Exascale Requirements Review: An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Nuclear Physics, June 15 - 17, 2016, Gaithersburg, Maryland
