13 research outputs found

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    No full text
    The ATLAS experiment at CERN is constructing an upgraded system for the "High Luminosity LHC", with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 4.6 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction are designed to combat pileup in the Trigger and Event Filter respectively. This contribution will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition Upgrades for the High-Luminosity LHC

    The ATLAS experiment at CERN is constructing an upgraded system for the "High Luminosity LHC", with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 4.6 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction are designed to combat pileup in the Trigger and Event Filter respectively. This contribution will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition Upgrades for the High-Luminosity LHC

    The ATLAS experiment at CERN is constructing an upgraded system for the "High-Luminosity LHC" (HL-LHC), with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pile-up and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the Trigger and Data Acquisition (TDAQ) system. The design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition, which combines custom readout with commodity hardware and networking to deal with 4.6 TB/s input, and an Event Filter running at 1 MHz, which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction are designed to combat pile-up in the Trigger and Event Filter, respectively. An overview of the planned Phase-II TDAQ system is provided, followed by a more detailed description of recent progress on the design, technology and construction of the system.
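    The rate and throughput figures quoted in these abstracts can be cross-checked with a little arithmetic. The sketch below uses only the quoted numbers (40 MHz trigger input, 1 MHz Event Filter input, 4.6 TB/s DAQ input); the rejection factor and mean event size it prints are derived quantities, not values stated in the abstracts, and assume the 4.6 TB/s applies at the 1 MHz readout rate.

    ```python
    # Back-of-envelope check of the HL-LHC TDAQ data-flow figures quoted above.
    # Quoted in the abstracts: 40 MHz, 1 MHz, 4.6 TB/s. Derived here (assumption,
    # not quoted): the rejection factor and the implied mean event size.

    L0_INPUT_RATE_HZ = 40e6          # bunch-crossing rate seen by the hardware Trigger
    L0_ACCEPT_RATE_HZ = 1e6          # Event Filter input rate after the trigger decision
    READOUT_THROUGHPUT_BPS = 4.6e12  # DAQ input, bytes per second

    # Rejection factor achieved by the hardware trigger alone.
    rejection = L0_INPUT_RATE_HZ / L0_ACCEPT_RATE_HZ

    # Mean event size implied by reading out 4.6 TB/s at 1 MHz.
    event_size_mb = READOUT_THROUGHPUT_BPS / L0_ACCEPT_RATE_HZ / 1e6

    print(f"Hardware-trigger rejection factor: {rejection:.0f}x")
    print(f"Implied mean event size: {event_size_mb:.1f} MB")
    ```

    The same arithmetic applied to the 5.2 TB/s figure quoted in the earlier records gives a slightly larger implied event size, consistent with the design having evolved between reports.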

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realize the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The approved baseline design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 5.2 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service augmented by hardware tracking. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and track reconstruction with Associative Memory ASICs and FPGAs are designed to combat pileup in the Trigger and Event Filter respectively. This paper will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The approved baseline design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 5.2 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service augmented by hardware tracking. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and track reconstruction with Associative Memory ASICs and FPGAs are designed to combat pileup in the Trigger and Event Filter respectively. This paper will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN has started the construction of upgrades for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The approved baseline design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 5.2 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service augmented by hardware tracking. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and track reconstruction with Associative Memory ASICs and FPGAs are designed to combat pileup in the Trigger and Event Filter respectively. This poster will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN is constructing an upgraded system for the “High Luminosity LHC”, with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 7 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics program, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The design of the Trigger and Data Acquisition (TDAQ) system upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz input rate, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 4.6 TB/s input, and an Event Filter running at 1 MHz input rate which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction are designed to combat pileup in the Trigger and Event Filter respectively. This document reports recent progress on the design, technology and construction of the system.

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN is planning a second phase of upgrades to prepare for the "High Luminosity LHC", with collisions due to start in 2026. In order to deliver an order of magnitude more data than previous runs, 14 TeV protons will collide with an instantaneous luminosity of 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it is a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. With the Technical Design Report written and construction due to start soon, the baseline design of the TDAQ upgrade will be described. The system comprises: a hardware-based low-latency real-time Trigger, Data Acquisition which combines custom readout with commodity hardware and networking, and an Event Filter which combines offline-like algorithms on a large commodity compute service augmented by fast hardware tracking. Throughout the system, the use of precision algorithms running on FPGAs or commodity hardware is pushed to lower latencies and higher rates than before. Precision calorimeter reconstruction with offline-style clustering and jet-finding in FPGAs, and track reconstruction in Associative Memory and FPGAs are used to combat pileup in the Trigger and Event Filter respectively. The physics motivation and expected performance will be shown for key physics processes.

    ATLAS Trigger and Data Acquisition Upgrades for the High Luminosity LHC

    The ATLAS experiment at CERN is constructing an upgraded system for the "High Luminosity LHC", with collisions due to start in 2029. In order to deliver an order of magnitude more data than previous LHC runs, 14 TeV protons will collide with an instantaneous luminosity of up to 7.5 × 10^34 cm^-2 s^-1, resulting in much higher pileup and data rates than the current experiment was designed to handle. While this is essential to realise the physics programme, it presents a huge challenge for the detector, trigger, data acquisition and computing. The detector upgrades themselves also present new requirements and opportunities for the trigger and data acquisition system. The design of the TDAQ upgrade comprises: a hardware-based low-latency real-time Trigger operating at 40 MHz, Data Acquisition which combines custom readout with commodity hardware and networking to deal with 4.6 TB/s input, and an Event Filter running at 1 MHz which combines offline-like algorithms on a large commodity compute service with the potential to be augmented by commercial accelerators. Commodity servers and networks are used as far as possible, with custom ATCA boards, high speed links and powerful FPGAs deployed in the low-latency parts of the system. Offline-style clustering and jet-finding in FPGAs, and accelerated track reconstruction are designed to combat pileup in the Trigger and Event Filter respectively. This contribution will report recent progress on the design, technology and construction of the system. The physics motivation and expected performance will be shown for key physics processes.