86 research outputs found

    3rd EGEE User Forum

    Get PDF
    We have organized this book as a sequence of chapters, each associated with an application or technical theme and introduced by an overview of its contents and a summary of the main conclusions coming from the Forum for that topic. The first chapter gathers all the plenary session keynote addresses; it is followed by a sequence of chapters covering the application-flavoured sessions, and then by chapters with a Computer Science and Grid Technology flavour. The final chapter covers the large number of practical demonstrations and posters exhibited at the Forum. Much of the work presented has a direct link to specific areas of science, and so we have created a Science Index, presented below. In addition, at the end of this book, we provide a complete list of the institutes and countries involved in the User Forum.

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Full text link
    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and are presented along with introductory material.

    Platform as a service integration for scientific computing using DIRAC

    Get PDF
    The demand for computing resources required by researchers grows every day, and this computing capacity coexists with the ever-growing volume of data being generated. Researchers are asking for a High Performance Computing (HPC) service that lets them run their simulations with delocalised resources, so that they can reach as many of them as possible in the most convenient and secure way. At the same time, universities are connected to research centres through networks whose speed and reliability make it possible to run scientific computing jobs on them. The computing capacity available at universities ranges from computer classrooms used for teaching, laboratories, etc., to clusters belonging to research groups. Using grid and cloud technologies, these heterogeneous computational resources could be reused by researchers to run simulations, adding computing capacity to what already exists and delocalising resources across different places around the planet. The objective of this thesis is to adapt the distributed computing framework DIRAC, developed for the LHCb project at CERN, for use by several user communities on top of cloud and big data technologies. The framework would rely on centralised software repositories that provide the software needed so that, through cloud environments, researchers' applications can run anywhere on the planet in a scalable way, exploiting both dedicated and non-dedicated resources; the execution of scientific computations on this platform is then evaluated. The work starts with requirements gathering, followed by a basic integration process. The use of the scientific software is then optimised for cloud environments, adapting it to virtualised infrastructures. For this, a statistical study as close as possible to production conditions is needed in order to define and build suitable infrastructures and thus avoid performance loss on the resources. The final step is to use these virtualisation technologies, adapting the architectures created, to build systems that allow the distributed submission of jobs requiring large amounts of data in the big data domain.
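
    To make the workflow concrete, the following minimal sketch shows how a job could be submitted through the DIRAC Python API, following the pattern of the standard DIRAC tutorials; the executable name, CPU-time estimate and sandbox contents are hypothetical placeholders rather than values taken from the thesis.

        # Minimal DIRAC job submission sketch (Python); assumes a configured DIRAC client.
        from DIRAC.Core.Base import Script
        Script.parseCommandLine()  # initialise the DIRAC client and parse command-line switches

        from DIRAC.Interfaces.API.Dirac import Dirac
        from DIRAC.Interfaces.API.Job import Job

        job = Job()
        job.setName("hpc_simulation")            # hypothetical job name
        job.setExecutable("run_simulation.sh")   # hypothetical user script
        job.setCPUTime(3600)                     # requested CPU time in seconds
        job.setOutputSandbox(["output.log"])     # files returned to the user

        result = Dirac().submitJob(job)
        if result["OK"]:
            print("Submitted job", result["Value"])
        else:
            print("Submission failed:", result["Message"])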

    Machine learning at the energy and intensity frontiers of particle physics

    Get PDF
    Our knowledge of the fundamental particles of nature and their interactions is summarized by the standard model of particle physics. Advancing our understanding in this field has required experiments that operate at ever higher energies and intensities, which produce extremely large and information-rich data samples. The use of machine-learning techniques is revolutionizing how we interpret these data samples, greatly increasing the discovery potential of present and future experiments. Here we summarize the challenges and opportunities that come with the use of machine learning at the frontiers of particle physics.

    Enabling parallel and interactive distributed computing data analysis for the ALICE experiment

    Get PDF
    AliEn (ALICE Environment) is the production environment developed by the ALICE collaboration at CERN. It provides a set of Grid tools enabling the full offline computational workflow of the experiment (simulation, reconstruction and data analysis) in a distributed and heterogeneous computing environment. In addition to analysis on the Grid, ALICE users perform local interactive analysis using ROOT and the Parallel ROOT Facility (PROOF). PROOF enables physicists to analyse medium-sized (200-300 TB) data sets in parallel on a short time scale. The default installation of PROOF is on a static dedicated cluster, typically of 200-300 cores. This well-proven approach is not without limitations, in particular for the analysis of larger datasets or when the installation of a dedicated cluster is not possible. Using a new framework called Proof on Demand (PoD), PROOF can be used directly on Grid-enabled clusters by dynamically assigning interactive nodes on user request. This thesis presents the PoD on AliEn project. The integration of Proof on Demand into the AliEn framework provides private dynamic PROOF clusters as a Grid service. This functionality is transparent to the user, who simply submits interactive jobs to the AliEn system. Among other things, the ROOT framework is used by physicists to carry out Monte Carlo simulation of the detector. The engineers working on the mechanical design of the detector need to collaborate with the physicists; however, the software used by the engineers is not compatible with ROOT. This thesis describes a second result obtained during this PhD project: the implementation of the TGeoCad interface, which allows the conversion of ROOT geometries to the STEP format compatible with CAD systems. The interface provides an important communication and collaboration tool between the physicists and engineers dealing with the simulation and design of the detector geometry.
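
    As an illustration of the interactive analysis workflow described above, the PyROOT sketch below opens a PROOF session through a Proof on Demand endpoint and processes a chain of files in parallel; the dataset path, tree name and selector are hypothetical stand-ins, not the actual ALICE analysis code.

        # Minimal Proof on Demand usage sketch (PyROOT); assumes PoD workers are already started.
        import ROOT

        # Connect to the PoD-provisioned PROOF cluster; "pod://" is the PoD connection string.
        proof = ROOT.TProof.Open("pod://")

        # Build a chain of input files (hypothetical tree name and file path).
        chain = ROOT.TChain("esdTree")
        chain.Add("alien:///alice/data/run0001/AliESDs.root")

        chain.SetProof()                  # route the processing through the PROOF session
        chain.Process("MySelector.C+")    # user-supplied TSelector, compiled with ACLiC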

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Get PDF
    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amount of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    A Globally Distributed System for Job, Data, and Information Handling for High Energy Physics

    Full text link

    Optimisation of LHCb Applications for Multi- and Manycore Job Submission

    Get PDF
    Nowadays, the Worldwide LHC Computing Grid mainly consists of multi- and manycore processors. This thesis investigates how such resources can be used more efficiently, using the LHCb experiment as an example. It analyses how to improve software in terms of memory requirements and concurrency. The research involves the implementation of a moldable job scheduler and of a supervised learning algorithm that helps to better predict LHCb workloads.
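
    A minimal sketch of the workload-prediction idea, assuming scikit-learn and a small table of historical job records; the feature set (events, input size, allocated cores) and the choice of a random-forest regressor are illustrative assumptions, not the model actually used in the thesis.

        # Predict job CPU time from past LHCb-like job records (illustrative data only).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Hypothetical records: [number of events, input size in GB, allocated cores]
        X = np.array([[1000,  2.0, 1],
                      [5000,  9.5, 4],
                      [2000,  4.1, 2],
                      [8000, 15.0, 8],
                      [4000,  7.2, 4]])
        y = np.array([600.0, 2700.0, 1150.0, 4100.0, 2100.0])  # observed CPU seconds

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X, y)

        # Estimate the CPU time of an unseen job so a moldable scheduler can size its slot.
        print(model.predict([[3000, 6.0, 4]]))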

    Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid

    Get PDF
    The demand on computational and storage resources is growing along with the amount of information that needs to be processed and preserved. In order to ease the provisioning of digital services to the growing number of consumers, more and more distributed computing systems and platforms are actively developed and employed. The building blocks of a distributed computing infrastructure are individual computing centres, such as the Worldwide LHC Computing Grid Tier 2 centre GoeGrid. The main motivation of this thesis was the optimization of GoeGrid performance through efficient monitoring. This goal has been achieved by analysing the GoeGrid monitoring information. The data analysis approach is based on an adaptive-network-based fuzzy inference system (ANFIS) and a machine learning algorithm, the linear Support Vector Machine (SVM). The main object of the research was the digital service, since the availability, reliability and serviceability of the computing platform can be measured according to the constant and stable provisioning of its services. Given the widely used concept of service-oriented architecture (SOA) for large computing facilities, knowing the service state in advance, together with quick and accurate detection of service failures, enables proactive management of the computing facility. Proactive management is considered a core component of computing facility management automation concepts such as Autonomic Computing. Thus, timely, anticipatory and accurate identification of the status of the provided services can be considered a contribution to computing facility management automation, which is directly related to the provisioning of stable and reliable computing resources. Based on case studies performed with the GoeGrid monitoring data, it is reasonable to consider these approaches as generalized methods for the accurate and fast identification and prediction of service status. Their simplicity and low consumption of computing resources make the methods suitable for use within an Autonomic Computing component.
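
    A minimal sketch of the SVM part of this approach, assuming scikit-learn and a monitoring table with a few numeric metrics per time window; the metric names and the binary available/degraded labels are assumptions made for illustration and do not reflect the actual GoeGrid feature set.

        # Classify the service state from monitoring metrics with a linear SVM (illustrative data).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # Hypothetical samples: [CPU load, failed jobs per hour, storage latency in ms]
        X = np.array([[0.40,  2,  12],
                      [0.90, 30, 180],
                      [0.50,  4,  20],
                      [0.95, 45, 250],
                      [0.30,  1,  10],
                      [0.85, 25, 160]])
        y = np.array([1, 0, 1, 0, 1, 0])  # 1 = service available, 0 = degraded

        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X, y)

        # Classify the current monitoring window to trigger proactive management actions.
        print(clf.predict([[0.70, 18, 120]]))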