9 research outputs found

    DIANE - Distributed analysis environment for GRID-enabled simulation and analysis of physics data

    The Distributed ANalysis Environment (DIANE) is the result of R&D in the CERN IT Division focused on interfacing semi-interactive parallel applications with distributed GRID technology. DIANE provides a master-worker workflow management layer above low-level GRID services and is application- and language-neutral. A component-container architecture and component adapters provide the flexibility necessary to fulfil the diverse requirements of distributed applications, while the Physical Transport Layer ensures interoperability with existing middleware frameworks based on Web Services. Several distributed simulations based on Geant4 were deployed and tested with DIANE in real-life scenarios.
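    The master-worker pattern described above can be sketched in a few lines. This is an illustrative toy, not DIANE's actual API: the class and method names are assumptions, and real simulation work is replaced by squaring integers. The master holds a task queue; idle workers pull tasks until the queue is drained and push results back.

```python
import queue
import threading

class Master:
    """Toy master-worker layer (illustrative names, not DIANE's API)."""

    def __init__(self, tasks):
        self.tasks = queue.Queue()
        for t in tasks:
            self.tasks.put(t)
        self.results = queue.Queue()

    def run_worker(self):
        # A worker repeatedly pulls a task until the queue is drained.
        while True:
            try:
                task = self.tasks.get_nowait()
            except queue.Empty:
                return
            self.results.put(task * task)  # stand-in for real simulation work

    def run(self, n_workers=4):
        workers = [threading.Thread(target=self.run_worker)
                   for _ in range(n_workers)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        out = []
        while not self.results.empty():
            out.append(self.results.get())
        return sorted(out)  # restore a deterministic order for inspection

print(Master(range(10)).run())  # squares of 0..9, sorted
```

    Because workers pull tasks rather than having them pushed, a slow worker simply takes fewer tasks, which is what makes the pattern attractive on heterogeneous Grid resources.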

    Prototyping a file sharing and synchronization service with Owncloud

    We present a summary of the technical evaluation of the Owncloud product as a technology for implementing the CERNBOX service, an open-source alternative to Dropbox for use at CERN. In this paper we highlight the strengths of Owncloud and identify the core issues which need to be addressed for large-scale deployment at CERN.

    Dynamic workload balancing of parallel applications with user-level scheduling on the Grid

    This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both the application and system levels, combining user-level job scheduling with a dynamic workload balancing algorithm that automatically adapts a parallel application to heterogeneous resources, based on the actual resource parameters and the estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of the level of resource heterogeneity is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources.
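    The core idea of adapting a workload to measured resource parameters can be sketched as proportional chunking: each worker receives a share of the work proportional to its observed speed. The function name and the speed figures below are illustrative assumptions, not the paper's actual algorithm.

```python
def balance(total_items, speeds):
    """Return per-worker chunk sizes proportional to measured speeds."""
    total_speed = sum(speeds)
    chunks = [int(total_items * s / total_speed) for s in speeds]
    # Assign any rounding remainder to the fastest worker.
    chunks[speeds.index(max(speeds))] += total_items - sum(chunks)
    return chunks

print(balance(100, [1.0, 2.0, 5.0]))  # → [12, 25, 63]
```

    A dynamic variant would re-measure speeds between iterations and re-run the split, so the distribution tracks changing load on shared Grid resources.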

    Toward a petabyte-scale AFS service at CERN

    AFS is a mature and reliable storage service at CERN, having served for more than 20 years as the provider of Unix home directories and project areas. Recently, the AFS service has grown at unprecedented rates (200% in the past year); this growth was unlocked thanks to innovations in both the hardware and software components of our file servers. This work presents how AFS is used at CERN and how the service offering is evolving with the increasing storage needs of its local and remote user communities. In particular, we describe the usage patterns for home directories, workspaces and project spaces, as well as the daily work required to rebalance data and maintain stability and performance. Finally, we highlight some recent changes and optimisations made to the AFS service, revealing how AFS can operate at all while being subjected to frequent, almost DDoS-like, attacks from its users.

    Biomedical applications on the GRID: efficient management of parallel jobs

    Distributed computing based on the master-worker and PULL interaction model is applicable to a number of applications in high-energy physics, medical physics and bioinformatics. We demonstrate a realistic medical physics use case of a dosimetric system for brachytherapy using distributed Grid resources. We present efficient techniques for running parallel jobs in the case of BLAST, a gene sequencing application, as well as for a Monte Carlo simulation based on Geant4. We present a strategy for improving the runtime performance and robustness of the jobs, as well as for minimizing the development time needed to migrate the applications to a distributed environment.
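    For an embarrassingly parallel application such as BLAST, job preparation typically amounts to splitting the query set into independent chunks, one per parallel job. The chunk size and the sequence labels below are assumptions for illustration, not the setup used in the paper.

```python
def split_queries(queries, chunk_size):
    """Yield successive chunks of a query list, one per parallel job."""
    for i in range(0, len(queries), chunk_size):
        yield queries[i:i + chunk_size]

# Ten hypothetical query sequences split into jobs of at most 4 queries.
jobs = list(split_queries([f"seq{i}" for i in range(10)], 4))
print([len(j) for j in jobs])  # → [4, 4, 2]
```

    Smaller chunks improve load balancing and robustness (a failed job loses less work) at the cost of more per-job submission overhead, which is the trade-off such a strategy must tune.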

    LCG POOL development status and production experience

    The POOL project, as part of the LHC Computing Grid (LCG), is now entering its third year of active development. POOL provides the baseline persistency framework for three LHC experiments and is based on a strict component model, insulating experiment software from a variety of storage technology choices. This paper gives a brief overview of the POOL architecture and its main design principles, and describes the experience gained with integration into LHC experiment frameworks. It also presents recent developments in the area of relational database abstraction and object storage into RDBMS systems.
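    The insulation idea behind a strict component model can be sketched as an abstract storage interface with pluggable backends: application code depends only on the interface, so a file-based backend can be swapped for an RDBMS one without touching experiment software. All class and method names here are illustrative assumptions, not POOL's actual interfaces.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Abstract interface the application codes against."""

    @abstractmethod
    def write(self, key, obj): ...

    @abstractmethod
    def read(self, key): ...

class InMemoryBackend(StorageBackend):
    """Stand-in for a concrete file or RDBMS backend."""

    def __init__(self):
        self._store = {}

    def write(self, key, obj):
        self._store[key] = obj

    def read(self, key):
        return self._store[key]

def persist_event(backend: StorageBackend, event_id, event):
    # Application code sees only the abstract interface, never the backend.
    backend.write(event_id, event)

backend = InMemoryBackend()
persist_event(backend, "evt1", {"energy": 13.0})
print(backend.read("evt1"))  # → {'energy': 13.0}
```

    Swapping `InMemoryBackend` for a relational implementation would leave `persist_event` untouched, which is precisely the insulation the component model is meant to provide.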