US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community
US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Laboratories and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis takes place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers, as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet plays the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10 Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design, including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, and facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet), and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just in time to meet the needs, as demonstrated in past years during the changing LHC start-up plans.
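As a rough illustration of what these figures imply (a back-of-the-envelope Python sketch, not part of the original abstract), a 99.95% availability target corresponds to only a few hours of downtime per year, and the six leased OC-192 circuits give a nominal aggregate of 60 Gbps:

    # Illustrative arithmetic only: the availability target and the six
    # OC-192 (10 Gbps) circuits are quoted in the abstract above.
    HOURS_PER_YEAR = 365 * 24  # 8760 h, ignoring leap years

    availability = 0.9995                               # target (>= 99.95%)
    max_downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"Allowed downtime per year: {max_downtime_h:.1f} h "
          f"(~{max_downtime_h * 60:.0f} minutes)")      # ~4.4 h, ~263 min

    links = 6                                           # transatlantic OC-192 circuits
    per_link_gbps = 10                                  # nominal rate per circuit
    print(f"Aggregate leased capacity: {links * per_link_gbps} Gbps")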
Disk-to-Disk network transfers at 100 Gb/s
A 100 Gbps network was established between the California Institute of Technology conference booth at the Supercomputing 2011 (SC11) conference in Seattle, Washington, and the computing center at the University of Victoria in Canada. A circuit was established over the BCNET, CANARIE, and Supercomputing (SCinet) networks using dedicated equipment. The small set of servers at the endpoints used a combination of 10GE and 40GE technologies, and SSD drives for data storage. The configuration of the network and of the servers is discussed. We show that the system was able to achieve disk-to-disk transfer rates of 60 Gbps and memory-to-memory rates in excess of 180 Gbps across the WAN. We discuss the transfer tools, disk configurations, and monitoring tools used in the demonstration.
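A back-of-the-envelope sketch (Python, not part of the original abstract; the 1 TB dataset size is an arbitrary example value) of what the quoted rates mean for a bulk transfer, and one possible host NIC mix matching the 100 Gbps circuit:

    # The 60 Gbps (disk-to-disk) and 180 Gbps (memory-to-memory) figures are
    # quoted in the abstract; the dataset size below is purely illustrative.
    def transfer_time_seconds(size_bytes: float, rate_gbps: float) -> float:
        """Time to move size_bytes at a sustained rate of rate_gbps (decimal Gb/s)."""
        return (size_bytes * 8) / (rate_gbps * 1e9)

    dataset = 1e12  # 1 TB, hypothetical example

    for label, rate in [("disk-to-disk", 60), ("memory-to-memory", 180)]:
        t = transfer_time_seconds(dataset, rate)
        print(f"{label:18s}: {rate:3d} Gbps -> {t:6.1f} s for 1 TB")

    # One possible mix of 10GE and 40GE host interfaces adding up to the
    # 100 Gbps circuit capacity (an assumption, not the demo's actual layout).
    print("Example NIC mix: 2 x 40GE + 2 x 10GE =", 2 * 40 + 2 * 10, "Gbps")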
BioMAX the first macromolecular crystallography beamline at MAX IV Laboratory
BioMAX is the first macromolecular crystallography beamline at the MAX IV Laboratory 3 GeV storage ring, which is the first operational multi-bend achromat storage ring. Due to the low-emittance storage ring, BioMAX has a parallel, high-intensity X-ray beam, even when focused down to 20 μm × 5 μm using the bendable focusing mirrors. The beam is tunable in the energy range 5-25 keV using the in-vacuum undulator and the horizontally deflecting double-crystal monochromator. BioMAX is equipped with an MD3 diffractometer, an ISARA high-capacity sample changer and an EIGER 16M hybrid pixel detector. Data collection at BioMAX is controlled using the newly developed MXCuBE3 graphical user interface, and sample tracking is handled by ISPyB. The computing infrastructure includes data storage and processing both at MAX IV and the Lund University supercomputing center LUNARC. With state-of-the-art instrumentation, a high degree of automation, a user-friendly control system interface and remote operation, BioMAX provides an excellent facility for most macromolecular crystallography experiments. Serial crystallography using either a high-viscosity extruder injector or the MD3 as a fixed-target scanner is already implemented. The serial crystallography activities at MAX IV Laboratory will be further developed at the microfocus beamline MicroMAX, when it comes into operation in 2022. MicroMAX will have a 1 μm × 1 μm beam focus and a flux of up to 10¹⁵ photons s⁻¹, with main applications in serial crystallography, room-temperature structure determinations and time-resolved experiments.
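For scale, a short illustrative Python sketch using only the figures quoted above (the BioMAX flux itself is not stated in the abstract, so only focal-spot areas are compared):

    # Figures from the abstract: BioMAX focus 20 um x 5 um, MicroMAX focus
    # 1 um x 1 um with a flux of up to 1e15 photons/s.
    biomax_focus_um2   = 20 * 5   # BioMAX bendable-mirror focus area
    micromax_focus_um2 = 1 * 1    # planned MicroMAX focus area
    micromax_flux      = 1e15     # photons/s, "up to" figure for MicroMAX

    print(f"Focal-spot area ratio (BioMAX / MicroMAX): "
          f"{biomax_focus_um2 / micromax_focus_um2:.0f}x")
    print(f"MicroMAX flux density: "
          f"{micromax_flux / micromax_focus_um2:.2e} photons s^-1 um^-2")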