Testing performance of standards-based protocols in DPM
To promote the increased use of non-proprietary protocols in grid storage systems, we test the performance of WebDAV and pNFS transport with the DPM storage solution. We find that the standards-based protocols perform similarly to the proprietary protocols currently in use, despite some issues with the maturity of the implementations themselves. We therefore conclude that there is no performance-based reason to avoid such protocols for data management in future.
Analysis and improvement of data-set level file distribution in Disk Pool Manager
Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager [1, 2] (DPM) has the simplest implementation of file placement balancing (StoRM does not attempt this at all, leaving it to the underlying filesystem, which can itself be very sophisticated). DPM uses a round-robin algorithm, with optional filesystem weighting, to place files across filesystems and servers. This does a reasonable job of distributing files evenly across the storage array provided to it. However, it offers no guarantees about the evenness of distribution of the subset of files associated with a given "dataset" (which often maps onto a "directory" in the DPM namespace, DPNS). It is useful to consider a concept of "balance", where an optimally balanced set of files is one distributed evenly across all of the pool nodes. At best, the round-robin algorithm maintains balance; it has no mechanism to improve it.
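The weighted round-robin placement described above can be sketched as follows. This is a minimal illustration, not DPM's actual implementation; the filesystem names and integer weights are hypothetical.

```python
import itertools

def weighted_round_robin(filesystems, weights):
    """Yield filesystem names in proportion to their integer weights.

    A filesystem with weight 2 is offered twice per cycle, mirroring
    the idea of DPM's optional filesystem weighting (names here are
    illustrative, not DPM configuration).
    """
    expanded = [fs for fs, w in zip(filesystems, weights) for _ in range(w)]
    return itertools.cycle(expanded)

# Place 8 files across three filesystems, with fs2 weighted double.
placer = weighted_round_robin(["fs1", "fs2", "fs3"], [1, 2, 1])
placements = [next(placer) for _ in range(8)]
# Over full cycles, fs2 receives twice as many files as fs1 or fs3.
```

Note that this scheme only keeps the *overall* distribution even; as the abstract observes, it says nothing about how any one dataset's files end up spread across the nodes.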
In the past year or more, larger DPM sites have noticed load spikes on individual disk servers, and suspected that these were exacerbated by concentrations of files from popular datasets on those servers. We present here a software tool which analyses file distribution for all datasets in a DPM SE, providing a measure of how poorly each dataset's files are distributed. Further, the tool produces a list of file movement actions which will improve dataset-level file distribution, and can perform those file movements itself. We present results of such an analysis on the UKI-SCOTGRID-GLASGOW Production DPM.
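The two pieces of functionality described above, measuring imbalance and proposing corrective moves, might be sketched as below. This is a simplified stand-in for the tool, under the assumption that imbalance is scored as the spread between the most- and least-loaded pool nodes; the function names and data layout are illustrative.

```python
from collections import Counter

def dataset_balance(file_locations):
    """Score the imbalance of one dataset's files across pool nodes.

    file_locations maps file name -> pool node holding it. Returns the
    gap between the most- and least-loaded occupied nodes; 0 means the
    dataset is optimally balanced. (A simplified stand-in metric.)
    """
    counts = Counter(file_locations.values())
    return max(counts.values()) - min(counts.values())

def rebalance_moves(file_locations, nodes):
    """Propose (file, src, dst) moves that even out the distribution.

    Greedily shifts one file at a time from the fullest node to the
    emptiest until no node differs from another by more than one file.
    """
    counts = Counter({n: 0 for n in nodes})
    counts.update(file_locations.values())
    locations = dict(file_locations)
    moves = []
    while max(counts.values()) - min(counts.values()) > 1:
        src = max(counts, key=counts.get)
        dst = min(counts, key=counts.get)
        fname = next(f for f, n in locations.items() if n == src)
        moves.append((fname, src, dst))
        locations[fname] = dst
        counts[src] -= 1
        counts[dst] += 1
    return moves
```

In practice such a tool would run this per dataset (per DPNS directory) and then enact the proposed moves; here the greedy loop simply demonstrates that a move list restoring balance always exists.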
Revealing Fundamental Physics from the Daya Bay Neutrino Experiment using Deep Neural Networks
Experiments in particle physics produce enormous quantities of data that must
be analyzed and interpreted by teams of physicists. This analysis is often
exploratory, where scientists are unable to enumerate the possible types of
signal prior to performing the experiment. Thus, tools for summarizing,
clustering, visualizing and classifying high-dimensional data are essential. In
this work, we show that meaningful physical content can be revealed by
transforming the raw data into a learned high-level representation using deep
neural networks, with measurements taken at the Daya Bay Neutrino Experiment as
a case study. We further show how convolutional deep neural networks can
provide an effective classification filter with greater than 97% accuracy
across different classes of physics events, significantly better than other
machine learning approaches.
Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in sqrt(s) = 7 TeV pp collisions with the ATLAS detector
A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb-1 of sqrt(s) = 7 TeV proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.