
    ATLAS Data Challenge 1

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are the validation of the Computing Model, the complete software suite, and the data model, and the verification of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a worldwide distributed activity. The first phase of DC1 was run during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71000 CPU-days were used, producing 30 TB of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~4000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~20 sites.
    Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00
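    As a rough consistency check of the scale quoted above (71000 CPU-days over about 40 calendar days, 30 TB in about 35000 partitions), the short sketch below recomputes the implied average number of CPUs busy in parallel and the average partition size. The figures come from the abstract; the arithmetic is purely illustrative.

```python
# Back-of-envelope check of the DC1 phase-1 numbers quoted in the abstract.
cpu_days = 71_000        # total CPU time consumed
calendar_days = 40       # duration of the production
data_tb = 30             # total output volume in TB
partitions = 35_000      # number of output partitions

avg_cpus_in_parallel = cpu_days / calendar_days    # ~1775 CPUs busy on average
avg_partition_gb = data_tb * 1024 / partitions     # ~0.9 GB per partition

print(f"average CPUs busy in parallel: {avg_cpus_in_parallel:.0f}")
print(f"average partition size:        {avg_partition_gb:.2f} GB")
```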

    Atlas Data-Challenge 1 on NorduGrid

    The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise, starting in July 2002 and continuing through the rest of 2002 and the first part of 2003, using solely the NorduGrid environment. This made it possible to distribute input data over a wide area and to rely on the NorduGrid resource discovery mechanism to find an optimal cluster for job submission. During the whole Data-Challenge 1, more than 2 TB of input data was processed and more than 2.5 TB of output data was produced by more than 4750 Grid jobs.
    Comment: Talk from the 2003 Computing in High Energy Physics and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 7 pages, 3 ps figures
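    To make the data-aware brokering described above concrete, here is a minimal, purely illustrative sketch of ranking clusters by whether they already hold a job's input file and by free CPU count. The cluster records and the selection rule are assumptions for illustration, not the actual NorduGrid broker.

```python
# Toy illustration of data-aware brokering: prefer clusters that already hold
# the job's input file and have free CPUs. This is NOT the NorduGrid broker;
# the records and the ranking rule are invented for illustration only.
clusters = [
    {"name": "cluster-a", "free_cpus": 12, "cached_inputs": {"dc1.0001.input"}},
    {"name": "cluster-b", "free_cpus": 40, "cached_inputs": set()},
    {"name": "cluster-c", "free_cpus": 3,  "cached_inputs": {"dc1.0001.input"}},
]

def pick_cluster(input_file, clusters):
    """Rank clusters: data locality first, then number of free CPUs."""
    candidates = [c for c in clusters if c["free_cpus"] > 0]
    return max(candidates,
               key=lambda c: (input_file in c["cached_inputs"], c["free_cpus"]))

print(pick_cluster("dc1.0001.input", clusters)["name"])   # -> cluster-a
```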

    Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production

    For efficiency of the large production tasks distributed worldwide, it is essential to provide shared production management tools composed of integrable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.), the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation is invoked. To provide local-remote transparency during DC1 production, the VDC database server delivered in a controlled way both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 5 pages, 3 figures, pdf. PSN TUCP01
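    To make the idea of a cookbook of validated transformation recipes concrete, below is a hypothetical sketch of what a catalogue entry and its use might look like. The field names, the templated command, and the parameters are assumptions for illustration, not the actual VDC schema.

```python
# Hypothetical sketch of a Virtual Data Cookbook entry: a validated recipe for
# one transformation step plus the parameters a production job must supply.
# Field names, the templated command, and the values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Recipe:
    step: str                  # e.g. "event_generation", "detector_simulation"
    command_template: str      # templated invocation of the transformation
    validated_params: dict = field(default_factory=dict)

cookbook = {
    "detector_simulation": Recipe(
        step="detector_simulation",
        command_template="simulate_detector --events {n_events} --seed {seed}",
        validated_params={"n_events": 1000},
    ),
}

def render_job(step, **overrides):
    """Merge validated defaults with job-specific overrides into a command line."""
    recipe = cookbook[step]
    params = {**recipe.validated_params, **overrides}
    return recipe.command_template.format(**params)

print(render_job("detector_simulation", seed=42))
# -> simulate_detector --events 1000 --seed 42
```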

    ATLAS upgrades for the next decades

    After the successful LHC operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, which will deliver on the order of five times the nominal LHC instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from a few hundred fb^{-1} to 3000 fb^{-1} by around 2035 for ATLAS and CMS. In parallel, the experiments need to be kept in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, with further upgrades to follow. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. This report summarizes various improvements to the ATLAS detector required to cope with the anticipated evolution of the LHC luminosity during this decade and the next.
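    As a rough sanity check of the 3000 fb^{-1} target quoted above, the sketch below converts an assumed HL-LHC instantaneous luminosity of 5 x 10^34 cm^-2 s^-1 and an assumed 10^7 seconds of effective physics running per year into an integrated luminosity per year. Both inputs are round-number assumptions for illustration.

```python
# Back-of-envelope: how ~5e34 cm^-2 s^-1 relates to the 3000 fb^-1 target.
# The effective running time per year (1e7 s) is an assumed round number.
inst_lumi = 5e34          # cm^-2 s^-1, roughly 5x the nominal LHC luminosity
seconds_per_year = 1e7    # assumed effective physics time per year
fb_inv_per_cm2 = 1e-39    # 1 fb^-1 corresponds to 1e39 cm^-2

per_year = inst_lumi * seconds_per_year * fb_inv_per_cm2   # fb^-1 per year
years_needed = 3000 / per_year

print(f"~{per_year:.0f} fb^-1 per year -> ~{years_needed:.0f} years to reach 3000 fb^-1")
```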

    Atlas of Anchorage Community Indicators

    The Anchorage Community Indicators (ACI) project is designed to make information (extracted from data) accessible so that conversations about the health and well-being of Anchorage may become more completely informed. Policy makers, social commentators, service delivery systems, and scholars often stake out positions based on anecdotal evidence or hunches when, in many instances, solid, empirical evidence could be compiled to support or challenge these opinions. The Atlas of Anchorage Community Indicators makes empirical information about neighborhoods widely accessible to many different audiences. The initial selection of indicators for presentation in the Atlas was inspired by Peter Blau and his interest in measures of heterogeneity (diversity) and inequality, and by the work of the Project on Human Development in Chicago Neighborhoods. In both cases the measures they developed were well conceptualized and validated. The Atlas presents community indicators at the census block group level derived from data captured in the 2000 U.S. Census and the 2005 Anchorage Community Survey. All maps in the Atlas are overlaid with Community Council boundaries to facilitate comparisons across maps.
    Contents: Introduction / COMMUNITY COUNCIL BOUNDARY MAPS / Eagle River Community Councils / North Anchorage Community Councils / South Anchorage Community Councils / Girdwood Community Councils / CENSUS-DERIVED INDICATORS AT BLOCK GROUP LEVEL / 1. Concentrated Affluence / 2. Concentrated Disadvantage / 3. Housing Density / 4. Immigrant Concentration / 5. Index of Concentration at Extremes / 6. Industrial Heterogeneity / 7. Multiform Disadvantage / 8. Occupational Heterogeneity / 9. Population Density / 10. Racial Heterogeneity / 11. Ratio of Adults to Children / 12. Residential Stability / 13. Income Inequality // APPENDIX: ACI Technical Report: Initial Measures Derived from Census
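    Since the Atlas draws on Blau-style heterogeneity measures, here is a minimal sketch of the standard Blau (diversity) index, one minus the sum of squared group shares, computed for a hypothetical block group. The group counts are invented for illustration.

```python
# Blau's heterogeneity (diversity) index: 1 - sum of squared group shares.
# 0 means everyone is in a single group; values rise toward 1 as groups even out.
def blau_index(counts):
    total = sum(counts)
    shares = [c / total for c in counts]
    return 1 - sum(p * p for p in shares)

# Hypothetical block group with four population groups (counts are invented).
print(round(blau_index([420, 180, 90, 60]), 3))   # -> 0.608
```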

    A step towards a computing grid for the LHC experiments : ATLAS data challenge 1

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, the complete software suite, and the data model, and the verification of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that it was not an option to "run the complete production at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world therefore had to be faced. However, the benefits of this are manifold: apart from realising the required computing resources, this exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.

    The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning

    The ATLAS detector at the LHC will require a Trigger system to efficiently select events down to a manageable event storage rate of about 400 Hz. By 2015 the LHC instantaneous luminosity will be increased up to 3 x 10^34 cm^-2 s^-1, which represents an unprecedented challenge for the ATLAS Trigger system. To cope with the higher event rate and efficiently select events that are relevant from a physics point of view, a new element will be included in the Level-1 Trigger scheme after 2015: the Topological Processor (L1Topo). The L1Topo system, currently developed at CERN, will initially consist of an ATCA crate and two L1Topo modules. A high-density opto-electrical converter (AVAGO miniPOD) drives up to 1.6 Tb/s of data from the calorimeter and muon detectors into two high-end FPGAs (Virtex7-690), to be processed in about 200 ns. The design has been optimized to guarantee excellent signal integrity of the high-speed links and low-latency data transmission on the Real Time Data Path (RTDP). The L1Topo receives data in a standalone protocol from the calorimeters and muon detectors, to be processed by several topological algorithms implemented in VHDL. Those algorithms perform geometrical cuts and correlations and calculate complex observables such as the invariant mass. The output of such topological cuts is sent to the Central Trigger Processor. This talk focuses on the high-density design characteristics of L1Topo, which allow several hundred optical links (up to 13 Gb/s each) to be processed using ordinary PCB material. Relevant test results obtained on the L1Topo prototypes to characterize the high-speed links and their latency (eye diagram, bit error rate, margin analysis) and the logic resource utilization of the algorithms are discussed.
    Comment: 5 pages, 6 figures
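    To illustrate the kind of observable the topological algorithms compute, below is a software sketch of the invariant mass of two approximately massless trigger objects from their transverse momentum, pseudorapidity and azimuth. This is plain Python for illustration only; in L1Topo the calculation is implemented in FPGA firmware.

```python
import math

# Invariant mass of two (approximately massless) trigger objects from (pT, eta, phi):
#   m^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))
# Illustrative Python only; the L1Topo implementation is VHDL running in FPGAs.
def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Two hypothetical trigger jets, pT in GeV.
print(f"{invariant_mass(120.0, 0.8, 0.3, 95.0, -1.1, 2.7):.1f} GeV")
```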

    The Physics Analysis Tools project for the ATLAS experiment

    The Large Hadron Collider is expected to start colliding proton beams in 2009. The enormous amount of data produced by the ATLAS experiment (≈1 PB per year) will be used in searches for the Higgs boson and for physics beyond the Standard Model. In order to meet this challenge, a suite of common Physics Analysis Tools (PAT) has been developed as part of the Physics Analysis software project. These tools run within the ATLAS software framework, ATHENA, and cover a wide range of applications. There are tools responsible for event selection based on analysed data and detector quality information, tools responsible for specific physics analysis operations including data quality monitoring and physics validation, and complete analysis toolkits (frameworks) whose goal is to help physicists perform their analyses while hiding the details of the ATHENA framework.
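    As a generic illustration of composing event-selection criteria into a single filter, in the spirit of the tools described above, here is a minimal sketch. It is not the ATHENA or PAT API; all names and the event representation are invented for illustration.

```python
# Generic event-selection sketch: compose independent criteria into one filter.
# NOT the ATHENA/PAT API; names and the event representation are invented.
from typing import Callable, Iterable

Selection = Callable[[dict], bool]

def good_detector_quality(event: dict) -> bool:
    return event.get("detector_quality") == "good"

def min_leptons(n: int) -> Selection:
    return lambda event: len(event.get("leptons", [])) >= n

def select(events: Iterable, cuts: list) -> list:
    """Keep only the events that pass every selection criterion."""
    return [e for e in events if all(cut(e) for cut in cuts)]

events = [
    {"detector_quality": "good", "leptons": ["e", "mu"]},
    {"detector_quality": "bad",  "leptons": ["e", "mu"]},
    {"detector_quality": "good", "leptons": []},
]
print(len(select(events, [good_detector_quality, min_leptons(2)])))   # -> 1
```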
