56 research outputs found

    The next-generation ARC middleware

    The Advanced Resource Connector (ARC) is a lightweight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources. ARC aims to provide general-purpose, flexible, collaborative computing environments suitable for a range of uses in both science and business. The server side offers the fundamental job execution management, information, and data capabilities required for a Grid. Users are provided with an easy-to-install client that provides a basic toolbox for job and data management. The KnowARC project developed the next-generation ARC middleware, implemented as Web Services with the aim of standards-compliant interoperability.

    Development of a Plugin for Transporting Jobs Through a Grid

    The Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC) particle accelerator at the European Organization for Nuclear Research (CERN) will produce large amounts of data that will require computing resources all over Europe for analysis. In order to facilitate the distribution of the data, a distributed computing network called a grid was developed. The Helsinki Institute of Physics (HIP) has pledged to actively participate in this project by supplying computing resources. The computational resources mainly consist of computing elements (CE) and storage elements (SE). To make these resources available, a plug-in was programmed to enable the CMS Remote Analysis Builder (CRAB) to use the Advanced Resource Connector (ARC) Grid middleware for submitting jobs to these computational resources. This thesis describes the plug-in as well as the grid components relevant to its implementation.
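    ARC accepts job descriptions written in xRSL (extended Resource Specification Language), so a submission plug-in of the kind described above would typically generate an xRSL document per job. A minimal illustrative sketch follows; the file names, job name, and time limit are hypothetical examples, not values taken from the thesis:

    ```
    &(executable="analyse.sh")
     (arguments="dataset.root")
     (inputFiles=("analyse.sh" "") ("dataset.root" ""))
     (stdout="out.txt")
     (stderr="err.txt")
     (jobName="cms-analysis")
     (cpuTime="120")
    ```

    A description like this can be submitted with the ARC client (`arcsub job.xrsl`), monitored with `arcstat`, and its output retrieved with `arcget`; in classic ARC the equivalent commands were `ngsub`, `ngstat`, and `ngget`.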

    WS-ARC service configuration manual

    The central component of AR

    Meta-brokering solution for establishing Grid Interoperability


    ScotGrid: Providing an Effective Distributed Tier-2 in the LHC Era

    ScotGrid is a distributed Tier-2 centre in the UK with sites in Durham, Edinburgh and Glasgow. ScotGrid has undergone a huge expansion in hardware in anticipation of the LHC and now provides more than 4 MSI2K and 500 TB to the LHC VOs. Scaling up to this level of provision has brought many challenges to the Tier-2, and we show in this paper how we have adopted new methods of organising the centres, from fabric management and monitoring to remote management of sites to management and operational procedures, to meet these challenges. We describe how we have coped with different operational models at the sites: the Glasgow and Durham sites are managed "in house", whereas resources at Edinburgh are managed as a central university resource. This required the adoption of a different fabric management model at Edinburgh and a special engagement with the cluster managers. Challenges arose from the differing job models of local and grid submission, which required special attention to resolve. We show how ScotGrid has successfully provided an infrastructure for ATLAS and LHCb Monte Carlo production. Special attention has been paid to ensuring that user analysis functions efficiently, which has required optimisation of local storage and networking to cope with its demands. Finally, although these Tier-2 resources are pledged to the whole VO, we have established close links with our local physics user communities as the best way to ensure that the Tier-2 functions effectively as part of the LHC grid computing framework. Comment: Preprint for the 17th International Conference on Computing in High Energy and Nuclear Physics, 7 pages, 1 figure.

    Architecture of a virtual environment for grid applications

    The article considers the problems of configuring grid resources to perform applied computational tasks in a grid environment. It proposes a technology for the automated configuration of execution environments for grid jobs using virtualization. The architecture of a virtual environment for grid applications is presented.