
    Optimised access to user analysis data using the gLite DPM

    The ScotGrid distributed Tier-2 now provides more than 4MSI2K and 500TB for LHC computing, spread across three sites at Durham, Edinburgh and Glasgow. Tier-2 sites have a dual role to play in the computing models of the LHC VOs. Firstly, their CPU resources are used for the generation of Monte Carlo event data. Secondly, end user analysis data is distributed across the grid to each site's storage system and held on disk ready for processing by physicists' analysis jobs. In this paper we show how we have designed the ScotGrid storage and data management resources to optimise access by physicists to LHC data. Within ScotGrid, all sites use the gLite DPM storage manager middleware. Using the EGEE grid to submit real ATLAS analysis code to process VO data stored on the ScotGrid sites, we present an analysis of the performance of the architecture at one site, and the procedures that may be undertaken to improve it. The results are presented both from the point of view of the end user (in terms of the number of events processed per second) and from the point of view of the site, which wishes to minimise load and the impact that analysis activity has on other users of the system.
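    The abstract's end-user metric is events processed per second across a set of analysis jobs. As a minimal illustrative sketch (not taken from the paper; the job records are hypothetical), aggregate throughput could be computed like this:

    ```python
    # Illustrative sketch: aggregate end-user throughput (events/second)
    # from hypothetical per-job records of (events_processed, wallclock_seconds).
    def events_per_second(jobs):
        """Return total events divided by total wall-clock time for a job set."""
        total_events = sum(events for events, _ in jobs)
        total_time = sum(seconds for _, seconds in jobs)
        return total_events / total_time if total_time else 0.0

    # Three hypothetical analysis jobs reading VO data from site storage.
    jobs = [(120000, 600), (90000, 450), (150000, 750)]
    print(f"{events_per_second(jobs):.1f} events/s")  # prints "200.0 events/s"
    ```

    Summing over jobs (rather than averaging per-job rates) weights the metric by wall-clock time, which matches the site's view of sustained load on the storage system.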

    ScotGrid: Providing an Effective Distributed Tier-2 in the LHC Era

    ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion in hardware in anticipation of the LHC and now provides more than 4MSI2K and 500TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2, and we show in this paper how we have adopted new methods of organising the centres, from fabric management and monitoring to remote management of sites to management and operational procedures, to meet these challenges. We describe how we have coped with different operational models at the sites, where the Glasgow and Durham sites are managed "in house" but resources at Edinburgh are managed as a central university resource. This required the adoption of a different fabric management model at Edinburgh and a special engagement with the cluster managers. Challenges arose from the different job models of local and grid submission that required special attention to resolve. We show how ScotGrid has successfully provided an infrastructure for ATLAS and LHCb Monte Carlo production. Special attention has been paid to ensuring that user analysis functions efficiently, which has required optimisation of local storage and networking to cope with the demands of user analysis. Finally, although these Tier-2 resources are pledged to the whole VO, we have established close links with our local physics user communities as the best way to ensure that the Tier-2 functions effectively as a part of the LHC grid computing framework. Comment: Preprint for 17th International Conference on Computing in High Energy and Nuclear Physics, 7 pages, 1 figure

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade. Peer reviewed

    Weight-bearing in ankle fractures: An audit of UK practice.

    INTRODUCTION: The purpose of this national study was to audit the weight-bearing practice of orthopaedic services in the National Health Service (NHS) in the treatment of operatively and non-operatively treated ankle fractures. METHODS: A multicentre prospective two-week audit of all adult ankle fractures was conducted between July 3rd 2017 and July 17th 2017. Fractures were classified using the AO/OTA classification. Fractures fixed with syndesmosis screws, or unstable fractures (>1 malleolus fractured or talar shift present) treated conservatively, were excluded. No outcome data were collected. In line with NICE (The National Institute for Health and Care Excellence) criteria, "early" weight-bearing was defined as unrestricted weight-bearing on the affected leg within 3 weeks of injury or surgery, and "delayed" weight-bearing as unrestricted weight-bearing permitted after 3 weeks. RESULTS: 251 collaborators from 81 NHS hospitals collected data: 531 patients were managed non-operatively and 276 operatively, with mean ages of 52.6 and 50.5 years respectively. 81% of non-operatively managed patients were instructed for early weight-bearing as recommended by NICE. In contrast, only 21% of operatively managed patients were instructed for early weight-bearing. DISCUSSION: The majority of patients with uni-malleolar ankle fractures managed non-operatively are treated in accordance with NICE guidance. There is notable variability amongst and within NHS hospitals in the weight-bearing instructions given to patients with operatively managed ankle fractures. CONCLUSION: This study demonstrates community equipoise and suggests that the randomized study to determine the most effective strategy for postoperative weight-bearing in ankle fractures described in the NICE research recommendation is feasible.
