31 research outputs found

    World-wide Networking for LHC Data Processing

    CERN’s Large Hadron Collider is producing several petabytes of physics data per year. We present the network environment used for LHC data processing and provide an outlook on the evolution of the computing models and the networks supporting them.


    US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community

    US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis takes place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers, as well as of Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet plays the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six leased transatlantic OC-192 (10 Gbps) links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design: path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, and facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just in time to meet needs, as demonstrated in past years during the changing LHC start-up plans.
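    The resilience figure above follows from path diversity: with facility-diverse links, the service is down only when every link fails at once. A minimal sketch of that arithmetic, using hypothetical per-link availability figures (the abstract does not give the actual link availabilities):

```python
# Availability of N independent, facility-diverse links in parallel:
# the service fails only if every link fails simultaneously.
def parallel_availability(link_availabilities):
    unavailability = 1.0
    for a in link_availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# Hypothetical figures: two diverse transatlantic links, each available
# 99.5% of the time (submarine cable cuts, maintenance windows, ...).
combined = parallel_availability([0.995, 0.995])
print(f"{combined:.6%}")  # 99.997500%
```

    With these illustrative numbers, two 99.5%-available diverse paths already exceed the 99.95% target, which is why diversity rather than any single gold-plated circuit is the design lever.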

    The Dynamics of Network Topology

    Network monitoring is vital to ensure proper network operation over time, and is tightly integrated with all the data-intensive processing tasks used by the LHC experiments. In order to build a coherent set of network management services it is very important to collect, in near real time, information about the network topology, the main data flows, traffic volume and the quality of connectivity. A set of dedicated modules was developed in the MonALISA framework to periodically perform network measurement tests between all sites. We developed global services to present in near real time the entire network topology used by a community. For any LHC experiment such a topology includes several hundred routers and tens of Autonomous Systems. Any change in the global topology is recorded, and this information can easily be correlated with traffic patterns. The evolution in time of the global network topology is shown in a dedicated GUI. Changes in the global topology at this level occur quite frequently, and even small modifications in the connectivity map may significantly affect network performance. The global topology graphs are correlated with active end-to-end network performance measurements, performed with the Fast Data Transfer application between all sites. Access to both real-time and historical data, as provided by MonALISA, is also important for developing services able to predict usage patterns and to aid in efficiently allocating resources globally.
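    The topology-change bookkeeping described above can be sketched minimally as follows; this is a hypothetical illustration, not the actual MonALISA API (the class and router names are invented):

```python
# Minimal sketch (hypothetical, not the MonALISA API): record periodic
# topology snapshots as sets of undirected links and log the changes
# between rounds, so they can later be correlated with traffic data.
from datetime import datetime, timezone

class TopologyTracker:
    def __init__(self):
        self.current = frozenset()
        self.history = []  # (timestamp, added_links, removed_links)

    def record(self, links):
        """links: iterable of (router_a, router_b) pairs from traceroutes."""
        snapshot = frozenset(frozenset(pair) for pair in links)
        added = snapshot - self.current
        removed = self.current - snapshot
        if added or removed:
            self.history.append((datetime.now(timezone.utc), added, removed))
        self.current = snapshot

tracker = TopologyTracker()
tracker.record([("cern-rtr1", "fnal-rtr2"), ("cern-rtr1", "bnl-rtr1")])
tracker.record([("cern-rtr1", "fnal-rtr2")])  # one link disappeared
print(len(tracker.history))  # 2 change events recorded
```

    Storing links as unordered pairs makes the comparison direction-independent, and keeping only the deltas keeps the history compact even when measurements run between all site pairs.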

    Soil contamination with zinc, cadmium and lead in the Zabrze area

    Heavy metal concentrations were evaluated in topsoil (0–10 cm) in the city of Zabrze. Soil samples were taken from 71 sites distributed evenly throughout the city, in the vicinity of emitters, roads, residential areas and parks, representing various biotopes – mainly green belts, squares, fields, brownfields, lawns, forests and meadows. Average Zn concentrations ranged from 31.7 mg·kg⁻¹ (meadows) to 2057.1 mg·kg⁻¹ (brownfields). The highest Cd concentrations were also found in brownfields; average Cd concentrations ranged from 0.15 up to 13.1 mg·kg⁻¹. Pb concentrations ranged from 31.5 to 520 mg·kg⁻¹ and were lowest in meadows. The highest heavy metal pollution was found in soil samples collected in the vicinity of roads and industrial plants. The results indicate the necessity of soil pollution mapping in cities, especially for proper human health risk assessment and for the prevention of further pollution spread.

    Named Data Networking in Climate Research and HEP Applications

    The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC [2] model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying both on large-scale file transfers with strategic data placement and on an increased use of remote access to object collections with caching, through CMS's AAA, ATLAS' FAX and ALICE's AliEn projects, for example. The challenges presented by expanding needs for CPU, storage and network capacity, as well as by the rapid handling of large datasets of file and object collections, have pointed the way towards future, more agile, pervasive models that make the best use of highly distributed heterogeneous resources. In this paper, we explore the use of Named Data Networking (NDN), a new Internet architecture that focuses on content rather than on the location of the data collections. As NDN has shown considerable promise in another data-intensive field, Climate Science, we discuss the similarities and differences between the Climate and HEP use cases, along with the specific issues HEP faces and will face during LHC Run 2 and beyond, which NDN could address.
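    NDN's core idea, addressing content by hierarchical name rather than by host location, can be illustrated with a short sketch; the naming scheme below is hypothetical, not an actual CMS or NDN convention:

```python
# Hypothetical sketch of NDN-style hierarchical content names for HEP
# data: a consumer requests a name, and any cache holding data under
# that name prefix can answer, regardless of where it is stored.
def make_name(experiment, run, dataset, obj):
    return "/" + "/".join([experiment, run, dataset, obj])

def matches_prefix(name, prefix):
    parts = name.strip("/").split("/")
    pre = prefix.strip("/").split("/")
    return parts[:len(pre)] == pre

name = make_name("cms", "run2", "AOD", "event-0042")
print(name)                               # /cms/run2/AOD/event-0042
print(matches_prefix(name, "/cms/run2"))  # True
print(matches_prefix(name, "/atlas"))     # False
```

    Prefix matching is what lets in-network caches and strategically placed replicas serve requests transparently, the property that makes NDN attractive for both the Climate and HEP data distribution patterns discussed above.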

    CT enteroclysis and CT enterography — new approaches to assessing pathology of the small intestine

    CT enteroclysis and CT enterography are modern diagnostic methods that allow a detailed assessment of the small intestine wall combined with an evaluation of extraintestinal lesions and a full examination of the remaining organs of the abdominal cavity and the pelvis. A major difference in examination methodology between enterography and enteroclysis is the way the contrast medium is administered: in enteroclysis it is delivered through a catheter inserted into a loop of the small intestine, whereas in enterography the contrast medium is given orally. The purpose of enteroclysis/enterography is to identify inflammatory diseases of the small intestine, monitor the activity of such diseases and assess their complications, evaluate the small intestine when this part of the gastrointestinal tract is suspected of being affected by a cancerous growth, and find the source of bleeding from the small intestine. A key issue for the correct interpretation of the examination is proper patient preparation: filling the intestinal loops with a negative contrast medium solution and choosing the right examination technique depending on the information contained in the referral. An invariably crucial role is played by the experience of the radiologist interpreting the examination. A full evaluation involves assessing the topogram, the axial images, which are always treated as the reference, and the multiplanar and 3D reconstructions. CT enterography in particular is a safe and well-tolerated diagnostic method that allows the diagnosis and monitoring of inflammatory diseases, neoplasia and vascular malformations of the small intestine.

    High Performance Gigabit Ethernet Switches for DAQ Systems

    Commercially available high-performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems, on the other hand, usually exhibit very specific traffic patterns, e.g. with deterministic arrival times. The industry-accepted loss-less threshold of 99.999% packet delivery may still imply an unacceptably high loss rate for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches that pass this criterion under random traffic can show significantly higher loss rates when subjected to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis-based core switches, in a test-bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet-level simulation of the complete LHCb readout network. In this paper we report on the results of our tests and present the outcome of the simulation using realistic switch models.
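    The buffer-limitation effect described above can be illustrated with a toy packet-level model (a hypothetical sketch, not the paper's simulation): at the same average load, synchronized DAQ-style bursts overflow a shallow output buffer that random traffic of equal intensity rarely fills.

```python
import random

def simulate(arrivals_per_tick, buffer_size, drain_per_tick, ticks):
    """Toy single output queue: returns the fraction of packets dropped."""
    queue = sent = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick()):
            if queue < buffer_size:
                queue += 1
            else:
                dropped += 1        # buffer full: tail drop
        served = min(queue, drain_per_tick)
        queue -= served
        sent += served
    total = sent + dropped + queue
    return dropped / total if total else 0.0

random.seed(1)
N = 24  # sources, as in a 24-port "pizza-box" switch

# Random traffic: each source sends with probability 1/N per tick
# (average load 1 packet/tick, matching the 1 packet/tick drain rate).
rnd = simulate(lambda: sum(random.random() < 1 / N for _ in range(N)),
               buffer_size=16, drain_per_tick=1, ticks=100_000)

# DAQ-like traffic: all N sources fire together every N ticks
# (same average load, but fully synchronized bursts).
tick = [0]
def burst():
    tick[0] += 1
    return N if tick[0] % N == 0 else 0
bst = simulate(burst, buffer_size=16, drain_per_tick=1, ticks=100_000)

print(rnd < bst)  # True: synchronized bursts lose far more packets
```

    With a 16-packet buffer, every synchronized 24-packet burst must drop 8 packets outright, while the random pattern of identical average load overflows the buffer only occasionally, which mirrors why switches meeting the loss criterion under random test traffic can still fail under event-building patterns.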