
    ATLAS Great Lakes Tier-2 Computing and Muon Calibration Center Commissioning

    Large-scale computing in ATLAS is based on a grid-linked system of tiered computing centers. The ATLAS Great Lakes Tier-2 came online in September 2006 and is now being commissioned at full capacity to provide significant computing power and services to the USATLAS community. Our Tier-2 center also hosts the Michigan Muon Calibration Center, which is responsible for the daily calibration of the ATLAS Monitored Drift Tubes of the ATLAS endcap muon system. During the first LHC beam period in 2008 and the subsequent ATLAS global cosmic-ray data-taking period, the Calibration Center received a large data stream from the muon detector, from which it derived the drift-tube timing offsets and time-to-space functions with a turn-around time of 24 hours. We will present the Calibration Center commissioning status and our plan for the first LHC beam collisions in 2009.
    Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
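    In outline, the per-tube timing offset (t0) referred to above is obtained by fitting the rising edge of the measured drift-time spectrum. The sketch below only illustrates that idea and is not the Calibration Center's software; the fit function, binning, and time window are assumptions.

```python
# Illustrative sketch (not the ATLAS calibration code): estimate a drift-tube t0
# offset by fitting a Fermi-like function to the rising edge of the drift-time spectrum.
import numpy as np
from scipy.optimize import curve_fit

def rising_edge(t, a0, a_max, t0, slope):
    """Flat background a0 rising to a0 + a_max around t0 with a softness 'slope'."""
    return a0 + a_max / (1.0 + np.exp(-(t - t0) / slope))

def fit_t0(drift_times_ns, bin_width_ns=1.0, edge_window_ns=(-50.0, 150.0)):
    """Histogram raw drift times (ns) and fit the rising edge to estimate t0 (ns)."""
    lo, hi = edge_window_ns
    bins = np.arange(lo, hi + bin_width_ns, bin_width_ns)
    counts, edges = np.histogram(drift_times_ns, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Rough starting values: background, amplitude, steepest-rise position, edge width.
    p0 = [counts.min(), counts.max(), centers[np.argmax(np.diff(counts))], 3.0]
    popt, _ = curve_fit(rising_edge, centers, counts, p0=p0, maxfev=10000)
    return popt[2]  # fitted t0 in ns
```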

    Networking Areas of Interest for HEP (White Paper)

    Networking Areas of Interest for HEP (Presentation)

    Preparing for the next WLCG Network Data Challenge: Site Network Monitoring

    During the first WLCG Network Data Challenge in the fall of 2021 (DC21), we identified shortcomings in the monitoring that impeded our ability to fully understand the results collected during the data challenge. One of the simplest missing components was site-specific network information, especially information about traffic entering and leaving each of the participating sites. Without this information, it is very difficult to understand which sites are experiencing bottlenecks or might be misconfigured or under-used relative to their capacity. The WLCG Monitoring Task Force, formed at the end of 2021, was tasked with three main work areas, one of which was site network monitoring. We describe the work carried out by the task force to enhance our knowledge of network use in WLCG by enabling the documentation and monitoring of site networks, the status of the deployment, and the implications for the next data challenge.
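    As an illustration of the kind of site-specific information discussed above, the sketch below turns two readings of per-interface byte counters (for example SNMP ifHCInOctets/ifHCOutOctets on a site border router) into ingress and egress throughput. The counter source, field names, and report format are assumptions for illustration, not the WLCG site-network schema.

```python
# Illustrative sketch: derive site ingress/egress throughput (Gbps) from two
# successive readings of cumulative byte counters on a border interface.
import json
import time
from dataclasses import dataclass

@dataclass
class CounterSample:
    timestamp: float   # seconds since epoch
    in_octets: int     # cumulative bytes received on the border interface
    out_octets: int    # cumulative bytes sent

def throughput_gbps(prev: CounterSample, curr: CounterSample) -> dict:
    """Convert the difference of two cumulative counter readings into Gbps."""
    dt = curr.timestamp - prev.timestamp
    if dt <= 0:
        raise ValueError("samples must be strictly ordered in time")
    return {
        "ingress_gbps": (curr.in_octets - prev.in_octets) * 8 / dt / 1e9,
        "egress_gbps": (curr.out_octets - prev.out_octets) * 8 / dt / 1e9,
    }

if __name__ == "__main__":
    # Hypothetical readings taken 60 s apart on a site border router.
    a = CounterSample(time.time() - 60, in_octets=10**12, out_octets=2 * 10**12)
    b = CounterSample(time.time(), in_octets=10**12 + 75 * 10**9,
                      out_octets=2 * 10**12 + 150 * 10**9)
    print(json.dumps({"site": "EXAMPLE-T2", **throughput_gbps(a, b)}, indent=2))
```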

    The Ultralight project: the network as an integrated and managed resource for data-intensive science

    This paper looks at the UltraLight project, which treats the network interconnecting globally distributed data sets as a dynamic, configurable, and closely monitored resource, in order to construct a next-generation system that can meet the high-energy physics community's data-processing, distribution, access, and analysis needs.

    Analyzing, Identifying & Alerting on Network Issues

    The Worldwide LHC Computing Grid (WLCG) relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing problems. In this paper, we will describe our ongoing work to proactively analyze, correlate, and alert on various network and infrastructure issues. We will discuss the methods and techniques applied, the systems developed, and the challenges with the measurements that make it difficult to identify problems or assign them to the appropriate location(s).
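    As a minimal illustration of proactive alerting on network measurements, the sketch below flags a source/destination pair whose recent packet-loss samples are persistently above a threshold. The measurement feed, window size, thresholds, and alert format are assumptions, not the production WLCG analytics.

```python
# Illustrative sketch: raise an alert when a src/dst pair shows sustained packet loss.
from collections import deque
from statistics import mean

class PacketLossAlerter:
    def __init__(self, window: int = 12, loss_threshold: float = 0.02, min_bad: int = 9):
        self.window = window                    # recent measurements to keep per pair
        self.loss_threshold = loss_threshold    # fractional loss considered "bad" (2%)
        self.min_bad = min_bad                  # bad samples in the window that trigger an alert
        self.history: dict[tuple[str, str], deque] = {}

    def observe(self, src: str, dst: str, loss_fraction: float) -> dict | None:
        """Record one measurement; return an alert dict if the pair looks unhealthy."""
        buf = self.history.setdefault((src, dst), deque(maxlen=self.window))
        buf.append(loss_fraction)
        bad = sum(1 for x in buf if x > self.loss_threshold)
        if len(buf) == buf.maxlen and bad >= self.min_bad:
            return {
                "type": "packet_loss",
                "src": src,
                "dst": dst,
                "mean_loss": round(mean(buf), 4),
                "bad_samples": bad,
                "window": self.window,
            }
        return None
```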

    The Design and Demonstration of the Ultralight Testbed

    In this paper we present the motivation, the design, and a recent demonstration of the UltraLight testbed at SC|05. The goal of the UltraLight testbed is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it, as is traditional, as a static, unchanging, and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. To achieve this goal we are constructing a next-generation global system able to meet the data processing, distribution, access, and analysis needs of the particle physics community. We first present early results in the various working areas of the project, and then describe our experience with the network architecture, kernel setup, application tuning, and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many Grid computing sites.
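    The kernel setup and application tuning mentioned above typically amount to enlarging TCP buffers and selecting a congestion-control algorithm suited to long, high-bandwidth paths. The sketch below applies settings of that kind; the sysctl keys are standard Linux parameters, but the values and the choice of congestion control are assumptions, not the SC|05 configuration.

```python
# Illustrative sketch of Linux kernel tuning for high-throughput, high-latency WAN
# transfers: enlarge TCP buffers and pick a congestion-control algorithm.
# The specific values are assumptions, not the SC|05 settings.
import subprocess

TCP_TUNING = {
    "net.core.rmem_max": "268435456",             # max socket receive buffer (256 MiB)
    "net.core.wmem_max": "268435456",             # max socket send buffer (256 MiB)
    "net.ipv4.tcp_rmem": "4096 87380 268435456",  # min / default / max TCP receive buffer
    "net.ipv4.tcp_wmem": "4096 65536 268435456",  # min / default / max TCP send buffer
    "net.ipv4.tcp_congestion_control": "htcp",    # one high-speed variant; cubic/bbr are alternatives
}

def apply_tuning(settings: dict[str, str]) -> None:
    """Apply sysctl settings (requires root); print each change for auditing."""
    for key, value in settings.items():
        subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)
        print(f"set {key} = {value}")

if __name__ == "__main__":
    apply_tuning(TCP_TUNING)
```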

    The Motivation, Architecture and Demonstration of Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded UltraLight project and a recent demonstration of UltraLight technologies at SuperComputing 2005 (SC|05). The goal of the UltraLight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it, as is traditional, as a static, unchanging, and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. We are thus constructing a next-generation global system able to meet the data processing, distribution, access, and analysis needs of the particle physics community. We present the motivation for, and an overview of, the UltraLight project, and then cover early results in its various working areas. The remainder of the paper describes our experience with the UltraLight network architecture, kernel setup, application tuning, and configuration used during the bandwidth challenge event at SC|05, during which we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the UltraLight backbone network. The exercise highlighted the benefits of UltraLight's research and development efforts, which are enabling new and advanced methods of distributed scientific data analysis.

    New XrootD Monitoring Implementation

    Complete and reliable monitoring of WLCG data transfers is an important condition for effective computing operations of the LHC experiments. The WLCG data challenges organized in 2021 and 2022 highlighted the need for improvements in WLCG data-traffic monitoring, in particular in the monitoring of remote data access via the root protocol, which covers access both to native XRootD storage and to other storage solutions; we refer to this as XRootD monitoring. This contribution describes the new implementation of the XRootD monitoring flow: the overall architecture, the deployment scenario, and the integration with the WLCG global monitoring system.
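    XRootD servers report monitoring information as UDP datagrams. As a rough illustration of the kind of collection flow described above, the sketch below receives such datagrams, decodes the common 8-byte header (record code, sequence number, packet length, server start time), and hands the record on for forwarding. The port number, the forwarding step, and the record layout beyond the header are assumptions for illustration, not the production WLCG implementation.

```python
# Illustrative sketch of an XRootD monitoring collector: listen for the UDP
# monitoring packets a server emits, decode the 8-byte header, and pass the raw
# payload on. The forwarding step is a placeholder, not the production flow.
import socket
import struct

HEADER = struct.Struct("!BBHI")   # code, pseq, plen, stod (network byte order)

def run_collector(host: str = "0.0.0.0", port: int = 9930) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(65536)
        if len(data) < HEADER.size:
            continue                               # too short to be a monitoring packet
        code, pseq, plen, stod = HEADER.unpack_from(data)
        record = {
            "sender": addr[0],
            "code": chr(code),       # record type indicator from the header
            "sequence": pseq,
            "length": plen,
            "server_start": stod,
            "payload": data[HEADER.size:plen],
        }
        forward(record)

def forward(record: dict) -> None:
    """Placeholder for shipping the decoded record to a message bus / monitoring store."""
    print(record["sender"], record["code"], record["length"])

if __name__ == "__main__":
    run_collector()
```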