    DUNE Database Development

    The DUNE experiment will produce vast amounts of metadata, which describe the data coming from the read-out of the primary DUNE detectors. Various databases will make up the overall DB architecture for this metadata. ProtoDUNE at CERN is the largest existing prototype for DUNE and serves as a testing ground for, among other things, possible database solutions for DUNE. The subset of all metadata that is accessed during offline data reconstruction and analysis is referred to as ‘conditions data’, and it is stored in a dedicated database. As offline data reconstruction and analysis will be deployed on HTC and HPC resources, conditions data is expected to be accessed at very high rates. It is therefore crucial to store it at a granularity that matches the expected access patterns, allowing for extensive caching. This requires a good understanding of the sources and use cases of conditions data. This contribution will briefly summarize the database architecture deployed at ProtoDUNE and explain the various sources of conditions data. We will present how the conditions data is retrieved and streamed from the databases and how it is handled to match expected access patterns.
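    The granularity argument above lends itself to a short illustration. The following is a minimal sketch, not DUNE code: the service URL, parameter names, and the fixed-length interval-of-validity (IoV) scheme are all assumptions made for the example. The point it demonstrates is that when the IoV boundary is part of the lookup key, every job processing events from the same interval reuses one cached response rather than issuing a fresh query per event.

```python
# Minimal sketch of IoV-aligned conditions access with caching.
# Hypothetical service and URL scheme -- not the actual DUNE/ProtoDUNE API.
import json
import urllib.request
from functools import lru_cache

BASE_URL = "https://conditions.example.org/api"  # assumed endpoint

@lru_cache(maxsize=1024)
def fetch_iov_chunk(source: str, iov_start: int) -> dict:
    """Fetch one IoV-aligned chunk of conditions data.

    The chunk boundary (iov_start) is part of the cache key, so all events
    falling inside the same interval hit the cache instead of the database.
    """
    url = f"{BASE_URL}/{source}?iov_start={iov_start}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def conditions_for_event(source: str, timestamp: int, iov_length: int = 3600) -> dict:
    # Align the event timestamp to its IoV boundary before looking it up.
    iov_start = timestamp - (timestamp % iov_length)
    return fetch_iov_chunk(source, iov_start)
```

    With the assumed one-hour IoV, a job reading millions of events from one run issues on the order of one request per conditions source rather than one per event, which is what makes high-rate access from HTC and HPC resources tractable.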

    The HSF Conditions Database Reference Implementation

    Conditions data is the subset of non-event data that is necessary to process event data. It poses a unique set of challenges, namely a heterogeneous structure and high access rates by distributed computing. The HSF Conditions Databases activity is a forum for cross-experiment discussions inviting as broad a participation as possible. It grew out of the HSF Community White Paper work to study conditions data access, where experts from ATLAS, Belle II, and CMS converged on a common language and proposed a schema that represents best practice. Following discussions with a broader community, including NP as well as HEP experiments, a core set of use cases, functionality and behaviour was defined with the aim to describe a core conditions database API. This paper will describe the reference implementation of both the conditions database service and the client which together encapsulate HSF best practice conditions data handling. Django was chosen for the service implementation, which uses an ORM instead of the direct use of SQL for all but one method. The simple relational database schema to organise conditions data is implemented in PostgreSQL. The task of storing conditions data payloads themselves is outsourced to any POSIX-compliant filesystem, allowing for transparent relocation and redundancy. Crucially, this design provides a clear separation between retrieving the metadata describing which conditions data are needed for a data processing job, and retrieving the actual payloads from storage. The service deployment using Helm on OKD will be described together with scaling tests and operations experience from the sPHENIX experiment running more than 25k cores at BNL.
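    The separation the abstract highlights is easy to show in miniature. The sketch below is illustrative only; the endpoint, query parameters, payload descriptor fields, and mount point are assumptions, not the reference client's actual API. Step 1 asks the service which payloads apply to a global tag and interval of validity (pure metadata); step 2 reads the payload bytes from a POSIX filesystem, so storage can be relocated or replicated without touching the service.

```python
# Illustrative two-step conditions lookup -- assumed names throughout,
# not the HSF reference implementation's real interface.
import json
import pathlib
import urllib.request

SERVICE = "https://conditions.example.org"        # assumed service URL
PAYLOAD_ROOT = pathlib.Path("/data/conditions")   # any POSIX-compliant mount

def resolve_payloads(global_tag: str, iov: int) -> list:
    """Step 1: metadata query. Returns payload descriptors, not the data."""
    url = f"{SERVICE}/api/payloads?gtag={global_tag}&iov={iov}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def load_payload(descriptor: dict) -> bytes:
    """Step 2: payload fetch. A plain file read, independent of the service."""
    return (PAYLOAD_ROOT / descriptor["path"]).read_bytes()

# Usage: one metadata round-trip, then filesystem reads only.
# for d in resolve_payloads("my_global_tag_v1", iov=1700000000):
#     blob = load_payload(d)
```

    Because the metadata answer is small and highly cacheable while the payloads are served by the filesystem, the two access paths can scale independently, which is the property exercised by the sPHENIX scaling tests mentioned above.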

    Search for BSM A/H → τ τ in the Fully Hadronic Decay Channel with ATLAS

    In 2012, a scalar boson was found at CERN that is consistent with the properties of the Higgs boson predicted by the Standard Model (SM) of particle physics. Some theories, in particular supersymmetric models, also predict the existence of additional heavier neutral Higgs bosons. The decays of these heavy Higgs bosons to a pair of τ-leptons can be significant because of the relatively large mass of the τ-lepton and additional effects of two-Higgs-doublet models that can enhance the coupling to down-type fermions. Searches for heavy neutral Higgs bosons, A/H, as predicted by the MSSM are performed using a 139 fb−1 dataset recorded with the ATLAS detector between 2015 and 2018 at a centre-of-mass energy of 13 TeV. The particle is assumed to decay into a pair of τ-leptons and the all-hadronic final state is considered for this search. The results are interpreted in different benchmark scenarios, such as the hMSSM. No excess over the SM background was observed.

    Experimental Results in the Extended Higgs Sector

    An extended Higgs sector could solve many problems of the Standard Model and is therefore investigated by several analyses at ATLAS. Selected searches for additional Higgs bosons and other predicted effects of an extended Higgs sector, based on 139 fb−1 of data taken between 2015 and 2018 at a center-of-mass energy of 13 TeV, are summarized.

    Experimental results in the Extended Higgs sector

    The latest experimental results from ATLAS in the Extended Higgs sector are presented.

    Benthic Estuarine Assemblages of the Southeastern Brazil Marine Ecoregion (SBME)

    We assess the current knowledge of the benthic assemblages in the Southeastern Brazil Marine Ecoregion (SBME), which extends for approximately 1200 km of coastline and includes seven major estuarine systems, from Guanabara Bay in Rio de Janeiro to Babitonga Bay (or Sao Francisco do Sul) in Santa Catarina. The high ecosystem diversity of the SBME putatively accounts for the high levels of endemism of the regional marine invertebrate fauna. However, until more taxonomic and biogeographical evidence is available, the SBME should be treated as a working biogeographical hypothesis rather than a cohesive unit identified by endemic fauna. As a consequence of urban, agricultural, and industrial development, the coastal areas of the SBME have been the most altered in the country over the last 500 years. Some of the largest cities and busiest harbors of the country are in or near the regional estuarine areas. The rapid environmental changes over the last several decades do not allow us to assess whether current similarities and dissimilarities in the benthic assemblages express pristine conditions or whether they are already the result of major human interventions, especially in the case of the Guanabara, Sepetiba, and Santos estuaries.