
    Data Access for LIGO on the OSG

    During 2015 and 2016, the Laser Interferometer Gravitational-Wave Observatory (LIGO) conducted a three-month observing campaign. These observations delivered the first direct detection of gravitational waves from binary black hole mergers. To search for these signals, the LIGO Scientific Collaboration uses the PyCBC search pipeline. To deliver science results in a timely manner, LIGO collaborated with the Open Science Grid (OSG) to distribute the required computation across a series of dedicated, opportunistic, and allocated resources. To deliver the petabytes necessary for such a large-scale computation, our team deployed a distributed data access infrastructure based on the XRootD server suite and the CernVM File System (CVMFS). This data access strategy grew from simply accessing remote storage to a POSIX-based interface underpinned by distributed, secure caches across the OSG.
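
As a rough illustration of the two access modes described above (explicit remote copies versus POSIX reads through a CVMFS-mounted cache), here is a minimal Python sketch; the hostname and file paths are invented for the example and are not the collaboration's actual namespace.

```python
# Sketch of the two data-access modes described above; the hostname and
# paths are invented for the example, not the collaboration's real namespace.
import os
import subprocess

REMOTE_URL = "root://stash.example.org//ligo/frames/O1/H-H1_EXAMPLE.gwf"  # hypothetical
CVMFS_PATH = "/cvmfs/ligo.example.org/frames/O1/H-H1_EXAMPLE.gwf"         # hypothetical

def fetch_via_xrootd(url: str, dest: str) -> None:
    """Explicit remote copy with the XRootD command-line client."""
    subprocess.run(["xrdcp", url, dest], check=True)

def read_via_cvmfs(path: str, nbytes: int = 4096) -> bytes:
    """Plain POSIX read through the CVMFS mount; the distributed cache layer
    fetches and stores the data transparently on first access."""
    with open(path, "rb") as fh:
        return fh.read(nbytes)

if __name__ == "__main__":
    if os.path.exists(CVMFS_PATH):
        header = read_via_cvmfs(CVMFS_PATH)
    else:
        fetch_via_xrootd(REMOTE_URL, "/tmp/H-H1_EXAMPLE.gwf")
```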

    Any Data, Any Time, Anywhere: Global Data Access for Science

    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and with keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems (a "data federation"), a global filesystem for software delivery, and a workflow management system. We describe how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, and present some simple performance metrics.
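
A minimal sketch of what the federated, location-transparent view looks like from a user's code: a file is opened by its logical name through a single redirector endpoint, which resolves it to whichever site holds a replica. The redirector hostname and dataset path below are illustrative, and uproot with an XRootD backend is assumed.

```python
# Sketch of federated, location-transparent access through one redirector.
# The redirector host and dataset path are invented for the example; opening
# root:// URLs with uproot assumes an XRootD backend is installed.
import uproot

REDIRECTOR = "root://xrootd-redirector.example.org/"
LOGICAL_FILE = "/store/data/Run2015/DoubleMuon/AOD/example.root"  # hypothetical logical name

def open_anywhere(lfn: str):
    """The analysis code only knows the logical name; the redirector
    resolves it to whichever storage site actually serves a replica."""
    return uproot.open(REDIRECTOR + lfn)

# events = open_anywhere(LOGICAL_FILE)["Events"]
```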

    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
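
A hedged sketch of the capability-token pattern: a short-lived token carries scoped authorizations (e.g., read access to one path) and is presented as a bearer credential. The claims follow the SciTokens style, but the issuer, signing key, paths, and storage endpoint are invented for illustration; this is not the projects' actual configuration.

```python
# Illustrative capability-style token built with PyJWT: the claims follow the
# SciTokens pattern (issuer, audience, expiry, scoped authorizations), but the
# issuer URL, signing key, paths, and storage endpoint are invented here.
import time
import jwt        # PyJWT
import requests

PRIVATE_KEY = open("demo_private_key.pem").read()  # hypothetical RSA signing key

claims = {
    "iss": "https://tokens.example.org",             # token issuer (assumed)
    "aud": "https://storage.example.org",            # intended storage service (assumed)
    "exp": int(time.time()) + 600,                   # short-lived by design
    "scope": "read:/frames write:/results/user",     # capabilities, not an identity
}
token = jwt.encode(claims, PRIVATE_KEY, algorithm="RS256")

# The workflow presents the token as a bearer credential; the storage service
# can authorize the request from the scopes alone, without impersonation.
resp = requests.get(
    "https://storage.example.org/frames/example_file.gwf",
    headers={"Authorization": f"Bearer {token}"},
)
```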

    Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library

    Remote data access for data analysis in high performance computing is commonly done with specialized data access protocols and storage systems. These protocols are highly optimized for high throughput on very large datasets, multi-stream transfers, high availability, low latency, and efficient parallel I/O. The purpose of this paper is to describe how we have adapted a generic protocol, the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative for high performance I/O and data analysis applications in a global computing grid: the Worldwide LHC Computing Grid. In this work, we first analyze the design differences between the HTTP protocol and the most common high performance I/O protocols, pointing out the main performance weaknesses of HTTP. Then, we describe in detail how we solved these issues. Our solutions have been implemented in a toolkit called davix, available through several recent Linux distributions. Finally, we describe the results of our benchmarks, where we compare the performance of davix against an HPC-specific protocol for a data analysis use case.
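
One of the basic mechanisms that makes a generic protocol viable for this workload is the HTTP range request, which allows random partial reads of very large remote files. The sketch below shows only this plain single-range case with an invented URL; davix layers vector reads, session reuse, and parallelism on top of it.

```python
# Minimal sketch of HTTP partial reads, the building block that lets a generic
# protocol serve random I/O on very large remote files. The URL is invented;
# davix adds vector (multi-range) reads, session reuse, and parallel I/O on top.
import requests

URL = "https://storage.example.org/datasets/large_file.root"  # hypothetical

def read_range(url: str, offset: int, length: int) -> bytes:
    """Fetch `length` bytes starting at `offset` via an HTTP Range request."""
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()  # expect 206 Partial Content from a capable server
    return resp.content

# Example: read a 1 KiB block 4 GiB into the file without downloading it all.
# block = read_range(URL, 4 * 1024**3, 1024)
```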

    Boosting Performance of Data-intensive Analysis Workflows with Distributed Coordinated Caching

    Data-intensive end-user analyses in high energy physics require high data throughput to reach short turnaround cycles. This leads to enormous challenges for storage and network infrastructure, especially when facing the tremendously increasing amount of data to be processed during High-Luminosity LHC runs. Including opportunistic resources with volatile storage systems in the traditional HEP computing facilities makes this situation more complex. Bringing data close to the computing units is a promising approach to solve throughput limitations and improve the overall performance. We focus on coordinated distributed caching by matching workflows to the hosts that are most suitable in terms of cached files. This allows optimizing the overall processing efficiency of data-intensive workflows and using the limited cache volume efficiently by reducing the replication of data across distributed caches. We developed the NaviX coordination service at KIT, which realizes coordinated distributed caching using an XRootD caching-proxy infrastructure and the HTCondor batch system. In this paper, we present the experience gained in operating coordinated distributed caches on cloud and HPC resources. Furthermore, we show benchmarks of a dedicated high-throughput cluster, the Throughput-Optimized Analysis-System (TOpAS), which is based on the above-mentioned concept.
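
The coordination idea can be sketched as a simple scoring problem: send a workflow to the host whose cache already holds the largest share of its inputs. The toy function below illustrates that idea only; it is not the NaviX service or its XRootD/HTCondor integration.

```python
# Toy sketch of cache-aware workflow placement: score each host by the share
# of the workflow's inputs it already caches. Illustration of the idea only,
# not the NaviX service or its XRootD/HTCondor integration.
from typing import Dict, List, Set

def best_host(inputs: List[str], caches: Dict[str, Set[str]]) -> str:
    """Return the host whose cache overlaps most with the required inputs."""
    def cached_fraction(host: str) -> float:
        hits = sum(1 for f in inputs if f in caches[host])
        return hits / len(inputs) if inputs else 0.0
    return max(caches, key=cached_fraction)

caches = {
    "worker01": {"/store/a.root", "/store/b.root"},
    "worker02": {"/store/c.root"},
}
print(best_host(["/store/a.root", "/store/b.root", "/store/d.root"], caches))
# -> worker01 (2 of the 3 inputs are already cached there)
```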

    Cloud native approach for Machine Learning as a Service for High Energy Physics

    Nowadays, Machine Learning (ML) techniques are widely adopted in many areas of High Energy Physics (HEP) and will certainly also play a significant role in the upcoming High-Luminosity LHC (HL-LHC) upgrade foreseen at CERN. A huge amount of data will be produced by the LHC and collected by the experiments, posing challenges at the exascale. Here, we present a Machine Learning as a Service solution for HEP (MLaaS4HEP) that performs an entire ML pipeline (reading data, processing data, training ML models, and serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. With the new version of the MLaaS4HEP code, based on uproot4, we provide new features to improve the users' experience with the framework and their workflows; for example, users can specify preprocessing operations to be applied to the ROOT data before starting the ML pipeline. Our approach is then extended to use local and cloud resources via an HTTP proxy, which allows physicists to submit their workflows using the HTTP protocol. We discuss how this pipeline could be enabled in the INFN Cloud Provider and what the final architecture could be.
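
A minimal sketch of the model-agnostic reading step, assuming uproot: ROOT branches are streamed in chunks and handed to a user-supplied training callback, so files never need to fit in memory. The file, tree, and branch names are invented, and train_step stands in for any framework-specific model update.

```python
# Sketch of a model-agnostic training loop over ROOT files in the spirit of
# the pipeline described above, assuming uproot. File, tree, and branch names
# are invented, and train_step stands in for any user-provided model update.
import uproot

FILES = {"root://eos.example.org//store/user/sample.root": "Events"}  # hypothetical

def train_step(model, batch: dict) -> None:
    """Placeholder for a framework-specific update (Keras, PyTorch, XGBoost, ...)."""
    ...

def run_pipeline(model, branches=("pt", "eta", "phi"), step="100 MB"):
    # uproot.iterate streams the input in chunks, so arbitrarily large ROOT
    # files never have to fit in memory at once.
    for batch in uproot.iterate(FILES, expressions=list(branches),
                                step_size=step, library="np"):
        # user-supplied preprocessing could be applied to `batch` here
        train_step(model, batch)
```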

    Prototype of a cloud native solution of Machine Learning as Service for HEP

    To favor the use of Machine Learning (ML) techniques in High-Energy Physics (HEP) analyses, it would be useful to have a service that allows performing the entire ML pipeline (reading the data, training an ML model, and serving predictions) directly on ROOT files of arbitrary size from local or remote distributed data sources. The MLaaS4HEP framework aims to provide such a solution. It was successfully validated with a CMS physics use case, which gave important feedback about the needs of analysts. For instance, we introduced the possibility for the user to provide pre-processing operations, such as defining new branches and applying cuts. To provide a real service for the user and to integrate it into the INFN Cloud, we started working on the cloudification of MLaaS4HEP. This would allow the use of cloud resources and work in a distributed environment. In this work, we provide updates on this topic; in particular, we discuss our first working prototype of the service. It includes an OAuth2 proxy server as the authentication/authorization layer, an MLaaS4HEP server, an XRootD proxy server for enabling access to remote ROOT data, and the TensorFlow as a Service (TFaaS) service in charge of the inference phase. With this architecture, the user is able to submit ML pipelines, after being authenticated and authorized, using local or remote ROOT files, simply via HTTP calls.
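
A sketch of how a client might submit a pipeline to such a service over HTTP after authenticating: the service URL, endpoint path, and JSON fields below are assumptions made for illustration, not the prototype's actual API.

```python
# Sketch of a client-side submission to an MLaaS-style service behind an
# OAuth2 proxy. The service URL, endpoint path, and JSON fields are assumptions
# made for illustration; they are not the prototype's actual API.
import requests

SERVICE = "https://mlaas.example.org"   # hypothetical service endpoint
TOKEN = "eyJhbGciOi..."                 # bearer token obtained from the OAuth2 flow

pipeline = {
    "files": ["root://eos.example.org//store/user/sample.root"],  # remote ROOT data
    "tree": "Events",
    "branches": ["pt", "eta", "phi"],
    "model": "keras_sequential.json",   # user-supplied model definition
}

resp = requests.post(
    f"{SERVICE}/submit",                # hypothetical endpoint
    json=pipeline,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. an identifier to poll for training status or TFaaS inference
```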

    Jet Momentum Resolution for the CMS Experiment and Distributed Data Caching Strategies

    Accurately measured jets are mandatory for precision measurements of the Standard Model of particle physics as well as for searches for new physics. The increased instantaneous luminosity and center-of-mass energy of LHC Run 2 pose challenges for pileup mitigation and for the measurement of jet characteristics. This thesis concentrates on using Z + jets events to calibrate the energy scale of jets recorded by the CMS detector in 2018. Furthermore, it proposes a new procedure for determining the jet momentum resolution using Z + jets events. This procedure is expected to allow cross-checking complementary measurement approaches and to increase the accuracy of the jet momentum resolution at the CMS experiment. Data-intensive end-user analyses in High Energy Physics, such as the presented jet calibration, pose enormous challenges for the computing infrastructure because they require high data throughput. Besides the particle physics analysis, this thesis therefore also focuses on accelerating data processing within a distributed computing infrastructure via a coordinated distributed caching approach. Coordinated placement of critical data within distributed caches and matching workflows to the most suitable hosts in terms of cached data allow for optimizing processing efficiency. Improving the processing of data-intensive workflows aims at shortening turnaround cycles and thus delivering physics results, e.g. the jet calibration results, faster.
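
For the calibration part, a common approach with Z + jets events is the pT-balance method, where the jet response is estimated from the ratio of jet to Z transverse momentum; the sketch below assumes that standard method and is not necessarily the exact procedure used in the thesis.

```python
# Toy sketch of the pT-balance idea for Z + jets calibration: in a balanced
# event the leading jet recoils against the well-measured Z boson, so the
# mean ratio pT(jet)/pT(Z) estimates the jet energy response. This assumes
# the standard balance method, not the thesis's exact procedure.
import numpy as np

def pt_balance_response(jet_pt: np.ndarray, z_pt: np.ndarray) -> float:
    """Mean jet response R = <pT(jet) / pT(Z)> in a given pT(Z) bin."""
    return float(np.mean(jet_pt / z_pt))

# Toy numbers: a response below 1 indicates the jet energy scale reads low.
print(pt_balance_response(np.array([95.0, 102.0]), np.array([100.0, 100.0])))
```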

    A Grid architectural approach applied for backward compatibility to a production system for events simulation.

    The distributed-systems paradigm has gained popularity during the last 15 years, thanks also to the broad diffusion of distributed frameworks proposed for the Internet platform. In the late '90s a new concept started to play a major role in the field of distributed computing: the Grid. This thesis presents a study of the integration between the framework of BaBar, a High Energy Physics experiment, and a grid system such as the one implemented by the Italian National Institute for Nuclear Physics (INFN), the INFNGrid project, which provides support for several research domains. The main goal was to adapt an already well-established system, such as the BaBar production pipeline, based on local centers not interconnected with each other, to a kind of technology that was not yet available when the experiment's framework was designed. Although this new approach concerned only one aspect of the experiment, the production of simulated events using Monte Carlo methods, the efforts described here are an example of how an older experiment can bridge the gap toward Grid computing, even by adopting solutions designed for more recent projects. The complete evolution of this integration is explained from the earliest stages up to the current development, stating the progress achieved and presenting results that are comparable with the production rates obtained using the conventional BaBar approach, in order to examine the potential benefits and drawbacks in a concrete case study.

    Improved operation for the ALICE Tier2 Centre at GSI
