1,728 research outputs found

    The AliEn system, status and perspectives

    AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components, uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, Word, 10 figures. PSN MOAT00
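
    As a hedged illustration of the Web Services model mentioned above, the sketch below POSTs a SOAP request to an AliEn-style catalogue service from Python. The endpoint URL, namespace and operation name are assumptions made for the example, not details taken from the paper.

# Hedged sketch: querying a hypothetical AliEn-style catalogue Web service
# over SOAP with plain HTTP. The endpoint, namespace and operation name are
# placeholders for illustration; they are not taken from the paper.
import requests

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <listDirectory xmlns="urn:example-alien-catalogue">
      <path>/alice/simulation/2003/</path>
    </listDirectory>
  </soap:Body>
</soap:Envelope>"""

def query_catalogue(endpoint="https://alien.example.org/catalogue"):
    """POST the SOAP request and return the raw XML response body."""
    response = requests.post(
        endpoint,
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "listDirectory"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text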

    AliEnFS - a Linux File System for the AliEn Grid Services

    Among the services offered by the AliEn (ALICE Environment, http://alien.cern.ch) Grid framework there is a virtual file catalogue that allows transparent access to distributed data-sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user-space file system framework (Open Source, http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual File System Switch) to communicate via a generalised file system interface with the AliEn file system daemon. The AliEn framework is used for authentication, catalogue browsing, file registration and read/write transfer operations. A C++ API implements the generic file system operations. The goal of AliEnFS is to allow users easy interactive access to a worldwide distributed virtual file system using familiar shell commands (e.g. cp, ls, rm ...). The paper discusses general aspects of Grid file systems, the AliEn implementation, and present and future developments for the AliEn Grid File System. Comment: 9 pages, 12 figures
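
    The interactive shell-level access described above can be pictured as a thin mapping from POSIX-style operations onto catalogue lookups and file transfers. The sketch below is only a conceptual illustration under that assumption: the class and method names are invented and do not reflect the actual AliEn C++ API or LUFS internals.

# Hedged sketch of the general idea behind a Grid file system layer: shell-style
# operations (ls, cp) are translated into catalogue lookups and file transfers.
# The toy in-memory catalogue stands in for the real AliEn catalogue; none of
# these classes reproduce the actual AliEn or LUFS APIs.
from dataclasses import dataclass
import shutil

@dataclass
class CatalogueEntry:
    logical_name: str   # path in the virtual file catalogue
    replica_path: str   # physical location of one replica (a local file here)

class InMemoryCatalogue:
    """Toy catalogue, only to make the sketch self-contained."""
    def __init__(self, entries):
        self.entries = {e.logical_name: e for e in entries}

    def list(self, prefix):
        return [e for name, e in self.entries.items() if name.startswith(prefix)]

    def resolve(self, logical_name):
        return self.entries[logical_name]

class VirtualFileSystem:
    def __init__(self, catalogue):
        self.catalogue = catalogue

    def ls(self, path):
        """Equivalent of 'ls': list catalogue entries under a logical directory."""
        return [e.logical_name for e in self.catalogue.list(path)]

    def cp_to_local(self, logical_name, local_path):
        """Equivalent of 'cp': resolve a replica and copy it locally
        (a real implementation would use a Grid transfer protocol)."""
        entry = self.catalogue.resolve(logical_name)
        shutil.copyfile(entry.replica_path, local_path)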

    CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

    In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user-data field of cloud APIs, when running CernVM on the cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user is able to define, store and share CernVM contexts using CernVM Online and then apply them either in a cloud by using CernVM Cloud Gateway or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to use multiple and different clouds (by location or type, private or public). So far, the Cloud Gateway has been integrated with the OpenNebula, CloudStack and EC2 tool interfaces. A user with access to a number of clouds can run CernVM cloud agents that communicate with these clouds using their interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other. Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
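
    As a hedged sketch of cloud-side contextualization, the snippet below passes a context document through the EC2 user-data field when creating an instance. The AMI ID, instance type and the context body are placeholders, and boto3 is used here only because the abstract mentions EC2-style interfaces; it is not the tooling of CernVM Cloud Gateway.

# Hedged sketch: contextualizing a VM by passing a context document in the
# EC2 user-data field. Image ID, instance type and the context body are
# placeholders; this is not the CernVM Cloud Gateway implementation.
import boto3

PLACEHOLDER_CONTEXT = """\
# placeholder context -- in practice this would be a context exported
# from CernVM Online
[example-section]
key=value
"""

def launch_contextualized_vm(image_id="ami-00000000", instance_type="m1.small"):
    ec2 = boto3.client("ec2")                  # region/credentials from the environment
    response = ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        UserData=PLACEHOLDER_CONTEXT,          # the contextualization payload
    )
    return response["Instances"][0]["InstanceId"]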

    Micro-CernVM: Slashing the Cost of Building and Deploying Virtual Machines

    The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot. Comment: Conference paper at the 2013 Computing in High Energy Physics (CHEP) Conference, Amsterdam
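
    To make the deployment model concrete, the sketch below boots a VM from a tiny CD image, mirroring the idea that only a kernel ships in the image while the operating system is delivered on demand over the network. The image path, memory and CPU settings are placeholders, and the plain qemu invocation is a generic stand-in rather than the project's actual tooling.

# Hedged sketch: instantiate a VM from a tiny bootable CD image; the rest of
# the OS is assumed to arrive on demand over the network (e.g. via CernVM-FS).
# Paths and sizes are placeholders, not the project's real tooling.
import subprocess

def boot_minimal_vm(iso_path="micro-image.iso", memory_mb=2048, cpus=2):
    cmd = [
        "qemu-system-x86_64",
        "-m", str(memory_mb),
        "-smp", str(cpus),
        "-cdrom", iso_path,   # tiny image: kernel + init only
        "-boot", "d",         # boot from the CD image
        "-nographic",
    ]
    return subprocess.run(cmd, check=True)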

    AliEn - EDG Interoperability in ALICE

    AliEn (ALICE Environment) is a Grid-like system for large-scale job submission and distributed data management developed and used in the context of ALICE, the CERN LHC heavy-ion experiment. With the aim of exploiting upcoming Grid resources to run AliEn-managed jobs and store the produced data, the problem of AliEn-EDG interoperability was addressed and an interface was designed. One or more EDG (European Data Grid) User Interface machines run the AliEn software suite (Cluster Monitor, Storage Element and Computing Element) and act as interface nodes between the systems. An EDG Resource Broker is seen by the AliEn server as a single Computing Element, while the EDG storage is seen by AliEn as a single, large Storage Element; files produced on EDG sites are registered both in the EDG Replica Catalogue and in the AliEn Data Catalogue, thus ensuring accessibility from both worlds. In fact, both registrations are required: the AliEn one is used for data management, while the EDG one guarantees the integrity of and access to EDG-produced data. A prototype interface has been successfully deployed using the ALICE AliEn Server and the EDG and DataTAG Testbeds. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, PDF, 2 figures. PSN TUCP00
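
    The dual registration described above can be pictured as a single bookkeeping step that records each produced file in both catalogues. The sketch below is purely illustrative: both client classes are hypothetical stand-ins, not the real EDG Replica Catalogue or AliEn Data Catalogue interfaces.

# Hedged sketch: register one produced file in both catalogues so it remains
# visible from both systems. The client classes are hypothetical stand-ins.

class ReplicaCatalogueClient:              # stand-in for the EDG side
    def register(self, logical_name, physical_url):
        print(f"EDG RC:   {logical_name} -> {physical_url}")

class DataCatalogueClient:                 # stand-in for the AliEn side
    def register(self, logical_name, physical_url):
        print(f"AliEn DC: {logical_name} -> {physical_url}")

def register_output(logical_name, physical_url, edg_rc=None, alien_dc=None):
    """Record the file in both catalogues; skipping either one would make the
    file invisible to one of the two worlds."""
    edg_rc = edg_rc or ReplicaCatalogueClient()
    alien_dc = alien_dc or DataCatalogueClient()
    edg_rc.register(logical_name, physical_url)
    alien_dc.register(logical_name, physical_url)

if __name__ == "__main__":
    register_output("/alice/prod/run001/output.root",
                    "gsiftp://se.example.org/data/output.root")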

    ALFA: A new Framework for ALICE and FAIR experiments


    The Dramatic Characteristics of Ruan Dacheng as Seen in Shichao Sizhong (Four Plays of the Stone Nest)

    AliEn (ALICE Environment, http://alien.cern.ch) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets, and a number of collaborating Web services which implement authentication, job execution, file transport, performance monitoring and event logging. In the paper we present the architecture and components of the system.

    Charged Particle Production in Proton-, Deuteron-, Oxygen- and Sulphur-Nucleus Collisions at 200 GeV per Nucleon

    The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with 'temperatures' between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p and p+A to central S+S and S+Ag collisions. Comment: 47 pages, submitted to Z. Phys.
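
    The 'thermal' description of the transverse spectra quoted above usually refers to the standard transverse-mass parameterisation; the explicit form below is an assumption added for the reader's convenience, since the abstract quotes only the fitted temperatures:

% Assumed standard thermal parameterisation of a transverse spectrum;
% the abstract quotes only the fitted temperatures T, not the expression.
\frac{1}{m_T}\,\frac{\mathrm{d}N}{\mathrm{d}m_T} \;\propto\; \exp\!\left(-\frac{m_T}{T}\right),
\qquad m_T = \sqrt{p_T^{2} + m^{2}} .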

    Offloading electromagnetic shower transport to GPUs

    Making general particle transport simulation for high-energy physics (HEP) single-instruction-multiple-thread (SIMT) friendly, in order to take advantage of accelerator hardware, is an important avenue for boosting the throughput of simulation applications. To date, this challenge has not been fully resolved, due to the difficulty of mapping the complexity of Geant4 components and workflows onto the massive parallelism exposed by graphics processing units (GPUs). The AdePT project is one of the R&D initiatives tackling this limitation and exploring GPUs as potential accelerators for offloading part of the CPU simulation workload. Our main target is to implement a complete electromagnetic shower demonstrator working on the GPU. The project is the first to create a full prototype of a realistic electron, positron, and gamma electromagnetic shower simulation on the GPU, implemented either as a standalone application or as an extension of the standard Geant4 CPU workflow. Our prototype currently provides a platform to explore many optimisations and different approaches. We present the most recent results and initial conclusions of our work, using both a standalone GPU performance analysis and a first implementation of a hybrid workflow based on Geant4 on the CPU and AdePT on the GPU. Comment: 8 pages, 4 figures, 20th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2021), to be published in Journal of Physics: Conference Series, editor Andrei Gheata
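
    As a hedged, purely conceptual sketch of the hybrid workflow, the snippet below buffers electromagnetic particles produced on the CPU and hands them to a GPU transport step in large batches, which is what SIMT hardware favours. The names, data layout and batching policy are invented for illustration and do not correspond to the AdePT or Geant4 interfaces.

# Hedged, conceptual sketch of the hybrid CPU/GPU workflow idea only:
# electromagnetic tracks are buffered and flushed to a GPU transport step in
# large batches. Names and policy are illustrative, not the AdePT/Geant4 APIs.
from dataclasses import dataclass

EM_SPECIES = {"e-", "e+", "gamma"}

@dataclass
class Track:
    species: str
    energy_gev: float

def gpu_em_transport(batch):
    """Stand-in for the GPU electromagnetic shower transport."""
    print(f"offloading {len(batch)} EM tracks to the GPU")

def hybrid_step(tracks, em_buffer, flush_size=10000):
    """Route EM tracks to the GPU buffer; everything else stays on the CPU."""
    cpu_tracks = []
    for track in tracks:
        (em_buffer if track.species in EM_SPECIES else cpu_tracks).append(track)
    if len(em_buffer) >= flush_size:   # flush only in large batches
        gpu_em_transport(em_buffer)
        em_buffer.clear()
    return cpu_tracks                  # handled by the regular CPU simulation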
    • 
