7 research outputs found

    The NorduGrid architecture and tools

    The NorduGrid project designed a Grid architecture with the primary goal of meeting the requirements of production tasks of the LHC experiments. While it is meant to be a rather generic Grid system, it puts emphasis on batch processing suitable for problems encountered in High Energy Physics. The NorduGrid architecture implementation uses the Globus Toolkit as the foundation for the various components developed by the project. While introducing new services, NorduGrid does not modify the Globus tools, so that the two can eventually co-exist. The NorduGrid topology is decentralized, avoiding a single point of failure. The NorduGrid architecture is thus light-weight, non-invasive and dynamic, while remaining robust and scalable, capable of meeting the most challenging tasks of High Energy Physics.
    Comment: Talk from the 2003 Computing in High Energy Physics and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 9 pages, LaTeX, 4 figures. PSN MOAT00
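    The abstract above centers on batch-style job submission through the NorduGrid tools layered on top of Globus. As a rough illustration only, the following minimal Python sketch shows what a submission might look like, assuming the classic NorduGrid command-line client (ngsub) and an xRSL job description; the executable, file names, storage URL and resource limits are hypothetical placeholders, not part of the original text.

        import subprocess
        import tempfile

        # Hypothetical xRSL job description; the attribute names follow xRSL
        # conventions used by the NorduGrid client tools, but the executable,
        # input file and storage URL are placeholders, not real resources.
        xrsl = """&
         (executable = "run_sim.sh")
         (arguments = "input.dat")
         (inputFiles = ("input.dat" "gsiftp://storage.example.org/dc1/input.dat"))
         (stdout = "sim.out")
         (stderr = "sim.err")
         (jobName = "hep-batch-job")
         (cpuTime = "600")
        """

        with tempfile.NamedTemporaryFile("w", suffix=".xrsl", delete=False) as f:
            f.write(xrsl)
            path = f.name

        # Submit through the command-line client; the broker consults the
        # information system to pick a suitable cluster for the batch job.
        subprocess.run(["ngsub", "-f", path], check=True)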

    Atlas Data-Challenge 1 on NorduGrid

    The first LHC application ever to be executed in a computational Grid environment is the so-called ATLAS Data-Challenge 1, more specifically, the part assigned to the Scandinavian members of the ATLAS Collaboration. Taking advantage of the NorduGrid testbed and tools, physicists from Denmark, Norway and Sweden were able to participate in the overall exercise, starting in July 2002 and continuing through the rest of 2002 and the first part of 2003, using solely the NorduGrid environment. This made it possible to distribute input data over a wide area and to rely on the NorduGrid resource-discovery mechanism to find an optimal cluster for job submission. During the whole Data-Challenge 1, more than 2 TB of input data was processed and more than 2.5 TB of output data was produced by more than 4750 Grid jobs.
    Comment: Talk from the 2003 Computing in High Energy Physics and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 7 pages, 3 PS figures

    Discovery potential for a charged Higgs boson decaying in the chargino-neutralino channel of the ATLAS detector at the LHC

    We have investigated charged Higgs boson production via the gluon-bottom quark mode, gb -> tH+, followed by its decay into a chargino and a neutralino. The calculations are based on masses and couplings given by the Minimal Supersymmetric Standard Model (MSSM) for a specific choice of MSSM parameters. The signature of the signal is characterized by three hard leptons, substantial missing transverse energy from the decays of the neutralino and the chargino, and three hard jets from the hadronic decay of the top quark. The possibility of detecting the signal over the Standard Model (SM) and non-SM backgrounds was studied for a set of tan(beta) and m_A values. The existence of 5-sigma confidence-level regions for H+ discovery at integrated luminosities of 100 fb-1 and 300 fb-1 is demonstrated; these also cover the intermediate range 4 < tan(beta) < 10, where H+ decays to SM particles cannot be used for H+ discovery.

    A step towards a computing grid for the LHC experiments: ATLAS data challenge 1

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002, whose goals are to validate the Computing Model, the complete software suite and the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that running the complete production at CERN was not an option, even had we wanted to: the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world therefore had to be faced. The benefits of this are manifold, however: apart from realising the required computing resources, the exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.

    Grid-enabling Non-computer Resources
