88 research outputs found

    Optimization and performance measurements of ROOT-based data formats in the ATLAS experiment

    No full text
    The interplay of the ATLAS persistent event data model and the ROOT-based I/O backend was studied in order to improve the read performance and reduce the on-disk size of ATLAS data formats for simulation, reconstruction and data analysis. Enabling several native ROOT features, such as basket ordering and the tree cache, has led to significant improvements in dedicated test setups using local disks and file servers managed by dCache, DPM and xrootd. After implementation in the ATLAS Athena framework, tests were carried out in more realistic environments. The functionality of the improvements and the results of the performance tests are reported.
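    The basket ordering and tree cache referred to above are standard ROOT mechanisms, so a minimal read-side sketch can illustrate what was enabled; the file name, tree name and cache size below are placeholders, not details of the ATLAS formats or of the Athena framework.

    // Standalone ROOT macro sketching a cached sequential read.  The TTree
    // cache prefetches the baskets needed for the next range of entries in a
    // few large reads instead of many small ones; TTreePerfStats records the
    // resulting I/O pattern.
    #include "TFile.h"
    #include "TTree.h"
    #include "TTreePerfStats.h"

    void read_with_cache(const char* path = "sample.root",
                         const char* treename = "EventTree")
    {
       TFile* f = TFile::Open(path);
       if (!f || f->IsZombie()) return;

       TTree* tree = static_cast<TTree*>(f->Get(treename));
       if (!tree) return;

       tree->SetCacheSize(30 * 1024 * 1024);   // 30 MB TTreeCache
       tree->AddBranchToCache("*", true);      // cache all branches

       TTreePerfStats ps("ioperf", tree);      // collect read statistics

       const Long64_t n = tree->GetEntries();
       for (Long64_t i = 0; i < n; ++i)
          tree->GetEntry(i);                   // baskets served from the cache

       ps.Print();                // bytes read, number of disk read calls
       tree->PrintCacheStats();   // cache efficiency summary
    }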

    New Developments in FormCalc 8.4

    Full text link
    We present new developments in FeynArts 3.9 and FormCalc 8.4, in particular the MSSMCT model file including the complete one-loop renormalization, vectorization/parallelization issues, and the interface to the Ninja library for tensor reduction. Comment: 7 pages, proceedings contribution to Loops & Legs 2014, April 27-May 2, 2014, Weimar, Germany.

    Authenticated storage using small trusted hardware

    Get PDF
    A major security concern with outsourcing data storage to third-party providers is authenticating the integrity and freshness of data. State-of-the-art software-based approaches require clients to maintain state and cannot immediately detect forking attacks, while approaches that introduce limited trusted hardware (e.g., a monotonic counter) at the storage server achieve low throughput. This paper proposes a new design for authenticating data storage using a small piece of high-performance trusted hardware attached to an untrusted server. The proposed design achieves significantly higher throughput than previous designs. The server-side trusted hardware allows clients to authenticate data integrity and freshness without keeping any mutable client-side state. Our design achieves high performance by parallelizing server-side authentication operations and permitting the untrusted server to maintain caches and schedule disk writes, while enforcing precise crash recovery and write access control.
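    The abstract does not spell out the authentication scheme, so the following is only a generic sketch of the building block such designs rest on: a Merkle hash tree whose root digest, held by the client or protected by the trusted hardware, lets any block returned by the untrusted server be checked for integrity. All names are illustrative, and std::hash stands in for a cryptographic hash such as SHA-256.

    // Illustrative Merkle-tree integrity check (not the paper's design).
    // A client holding only the trusted root digest can verify any block
    // returned by an untrusted server.
    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    using Digest = std::size_t;

    static Digest hash_leaf(const std::string& block) {
        return std::hash<std::string>{}(block);
    }

    static Digest hash_node(Digest left, Digest right) {
        // Combine child digests; the order matters so siblings cannot be swapped.
        return std::hash<std::string>{}(std::to_string(left) + ":" + std::to_string(right));
    }

    // Build the tree bottom-up and return all levels (level 0 = leaves).
    std::vector<std::vector<Digest>> build_tree(const std::vector<std::string>& blocks) {
        std::vector<std::vector<Digest>> levels;
        std::vector<Digest> cur;
        for (const auto& b : blocks) cur.push_back(hash_leaf(b));
        levels.push_back(cur);
        while (cur.size() > 1) {
            std::vector<Digest> next;
            for (std::size_t i = 0; i < cur.size(); i += 2) {
                Digest r = (i + 1 < cur.size()) ? cur[i + 1] : cur[i]; // odd node pairs with itself
                next.push_back(hash_node(cur[i], r));
            }
            levels.push_back(next);
            cur = next;
        }
        return levels;
    }

    // Server side: sibling digests on the path from a leaf to the root.
    std::vector<Digest> prove(const std::vector<std::vector<Digest>>& levels, std::size_t idx) {
        std::vector<Digest> proof;
        for (std::size_t lvl = 0; lvl + 1 < levels.size(); ++lvl) {
            std::size_t sib = (idx % 2 == 0) ? idx + 1 : idx - 1;
            if (sib >= levels[lvl].size()) sib = idx;   // odd node pairs with itself
            proof.push_back(levels[lvl][sib]);
            idx /= 2;
        }
        return proof;
    }

    // Client side: recompute the root from the block and the proof, then
    // compare it with the trusted root digest.
    bool verify(const std::string& block, std::size_t idx,
                const std::vector<Digest>& proof, Digest trusted_root) {
        Digest d = hash_leaf(block);
        for (Digest sibling : proof) {
            d = (idx % 2 == 0) ? hash_node(d, sibling) : hash_node(sibling, d);
            idx /= 2;
        }
        return d == trusted_root;
    }

    A client would request a block together with its index and proof and call verify against the trusted root; keeping that root fresh across forks and crashes is precisely the part the paper delegates to the server-side trusted hardware, which this sketch omits.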

    Streaming visualisation of quantitative mass spectrometry data based on a novel raw signal decomposition method

    Get PDF
    As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick, but panoramas, zooming and panning across whole datasets necessitate processing and memory overheads that are impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. In order to tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and the overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/.
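    As a rough illustration of the spatial-partitioning idea, the sketch below indexes weighted coefficients by their (m/z, retention time) location in a Boost.Geometry R-tree and fetches only those intersecting the current viewport; the coefficient layout and all names are assumptions for illustration, not the seaMass implementation.

    // Viewport-driven retrieval of weighted basis-function coefficients from
    // an R-tree: only coefficients inside the visible region are fetched,
    // which is what makes panning and zooming over a whole run cheap.
    #include <boost/geometry.hpp>
    #include <boost/geometry/index/rtree.hpp>
    #include <iostream>
    #include <iterator>
    #include <utility>
    #include <vector>

    namespace bg  = boost::geometry;
    namespace bgi = boost::geometry::index;

    // A coefficient lives at an (m/z, retention time) location and carries
    // the weight of its B-spline basis function.
    using Point = bg::model::point<double, 2, bg::cs::cartesian>;
    using Box   = bg::model::box<Point>;
    using Coef  = std::pair<Point, double>;   // (location, weight)

    int main() {
        // Toy coefficients; in practice these would be streamed from disk in
        // R-tree page order.
        std::vector<Coef> coefs = {
            {Point(400.2, 35.1), 1.3},
            {Point(512.7, 36.4), 0.7},
            {Point(899.9, 80.0), 2.1},
        };

        // Bulk-load the spatial index (R*-tree splitting, 16 entries per node).
        bgi::rtree<Coef, bgi::rstar<16>> index(coefs.begin(), coefs.end());

        // Current viewport: m/z 350-600, retention time 30-40.  Only the
        // coefficients intersecting it need to be fetched and rendered.
        Box viewport(Point(350.0, 30.0), Point(600.0, 40.0));
        std::vector<Coef> visible;
        index.query(bgi::intersects(viewport), std::back_inserter(visible));

        for (const auto& c : visible)
            std::cout << "m/z " << bg::get<0>(c.first)
                      << "  rt " << bg::get<1>(c.first)
                      << "  weight " << c.second << '\n';
    }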