    Designing Computing System Architecture and Models for the HL-LHC era

    This paper describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade. Comment: Submitted to the proceedings of the 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015), Okinawa, Japan.

    Any Data, Any Time, Anywhere: Global Data Access for Science

    Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and keeping careful track of data location. We present an alternative approach: the Any Data, Any Time, Anywhere (AAA) infrastructure. Combining several existing software products, AAA presents a global, unified view of storage systems: a "data federation," a global filesystem for software delivery, and a workflow management system. We present how one HEP experiment, the Compact Muon Solenoid (CMS), is utilizing the AAA infrastructure, along with some simple performance metrics. Comment: 9 pages, 6 figures, submitted to the 2nd IEEE/ACM International Symposium on Big Data Computing (BDC) 2015.
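    The data federation in AAA is built on the XRootD protocol, so an analysis job can open a file by its logical name through a redirector and be routed to whichever site holds a replica. Below is a minimal sketch of such an access from Python with ROOT; the redirector hostname and the /store path are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: reading a file through an XRootD-based data federation.
# The redirector hostname and /store path are illustrative placeholders.
import ROOT

# The federation redirector resolves the logical file name to whichever
# storage site currently hosts a replica, so the job does not need to
# know the physical data location.
url = "root://redirector.example.org//store/data/Run2015/example.root"

f = ROOT.TFile.Open(url)           # XRootD transfer handled by ROOT
if f and not f.IsZombie():
    tree = f.Get("Events")         # CMS EDM files carry an "Events" tree
    print("Opened remotely, entries:", tree.GetEntries())
    f.Close()
```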

    RenderCore -- a new WebGPU-based rendering engine for ROOT-EVE

    ROOT-Eve (REve), the new generation of the ROOT event-display module, uses a web server-client model to guarantee exact data translation from the experiments' data analysis frameworks to users' browsers. Data is then displayed in various views, including high-precision 2D and 3D graphics views, currently driven by the THREE.js rendering engine based on WebGL technology. RenderCore, a research-oriented computer graphics rendering engine, has been integrated into REve to optimize rendering performance and enable the use of state-of-the-art techniques for object highlighting and selection. It also allowed for the implementation of optimized instanced rendering through custom shaders and rendering pipeline modifications. To further the impact of this investment and ensure the long-term viability of REve, RenderCore is being refactored on top of WebGPU, the next-generation GPU interface for browsers that supports compute shaders and storage textures and introduces significant improvements in GPU utilization. This has led to optimized interchange data formats, decreased server-client traffic, and improved offloading of data-visualization algorithms to the GPU. FireworksWeb, the physics-analysis-oriented event display of the CMS experiment, is used to demonstrate the results, focusing on high-granularity calorimeters and targeting high data-volume events of heavy-ion collisions and the High-Luminosity LHC. The next steps and directions are also discussed.

    Moving the California distributed CMS xcache from bare metal into containers using Kubernetes

    The University of California system has excellent networking between all of its campuses as well as a number of other universities in California, including Caltech, most of them connected at 100 Gbps. UCSD and Caltech have thus joined their disk systems into a single logical XCache system, with worker nodes from both sites accessing data from disks at either site. This setup has been in place for a couple of years now and has been shown to work very well. Coherently managing nodes at multiple physical locations has, however, not been trivial, and we have been looking for ways to improve operations. With the Pacific Research Platform (PRP) now providing a Kubernetes resource pool spanning resources in the science DMZs of all the UC campuses, we have recently migrated the XCache services from being hosted bare-metal into containers. This paper presents our experience in both migrating to and operating in the new environment.

    Moving the California distributed CMS XCache from bare metal into containers using Kubernetes

    The University of California system maintains excellent networking between its campuses and a number of other universities in California, including Caltech, most of them connected at 100 Gbps. The UCSD and Caltech Tier-2 centers have joined their disk systems into a single logical caching system, with worker nodes from both sites accessing data from disks at either site. This successful setup has been in place for the last two years. However, coherently managing nodes at multiple physical locations is not trivial and requires an update to the operations model used. The Pacific Research Platform (PRP) provides a Kubernetes resource pool spanning resources in the science demilitarized zones (DMZs) of several campuses in California and worldwide. We show how we migrated the XCache services from bare-metal deployments into containers using the PRP cluster. This paper presents the reasoning behind our hardware decisions and our experience in migrating to and operating in a mixed environment.
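    As a rough illustration of what the containerized deployment looks like, the sketch below declares a single-replica XCache Deployment through the official Kubernetes Python client. The image name, namespace, and cache-disk path are assumptions made for the example and do not reflect the PRP production configuration.

```python
# Hedged sketch: declaring an XCache service as a Kubernetes Deployment with
# the official Python client. Image name, namespace and host paths are
# illustrative assumptions, not the actual PRP configuration.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside a pod

container = client.V1Container(
    name="xcache",
    image="example.org/xcache:latest",                     # placeholder image
    ports=[client.V1ContainerPort(container_port=1094)],   # standard xrootd port
    volume_mounts=[client.V1VolumeMount(name="cache-disk", mount_path="/cache")],
)

pod_spec = client.V1PodSpec(
    containers=[container],
    volumes=[client.V1Volume(
        name="cache-disk",
        host_path=client.V1HostPathVolumeSource(path="/data/xcache"),  # cache disk on the node
    )],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="xcache", namespace="cache"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "xcache"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "xcache"}),
            spec=pod_spec,
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="cache", body=deployment)
```

    Compared with a bare-metal install, the same declaration can be applied on any node of the resource pool, and Kubernetes restarts or reschedules the cache container when a node fails.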

    Software Challenges For HL-LHC Data Analysis

    The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, in order to make the best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, which gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations.

    Parallelized and Vectorized Tracking Using Kalman Filters with CMS Detector Geometry and Events

    The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted to the many-core SIMD processors that are becoming dominant in high-performance systems. This paper summarizes the latest extensions to our software that allow it to run on the realistic CMS-2017 tracker geometry using CMSSW-generated events, including pileup. The reconstructed tracks can be validated against either the CMSSW simulation that generated the hits or the CMSSW reconstruction of the tracks. In general, the code's computational performance has continued to improve while the above capabilities were being added. We demonstrate that the present Kalman filter implementation is able to reconstruct events with physics performance comparable to CMSSW, while providing generally better computational performance. Further plans for advancing the software are discussed.
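    The batched, SIMD-friendly structure of such track fitting can be illustrated with a generic Kalman filter update applied to many track candidates at once. The sketch below is not the actual CMSSW or mkFit implementation; the state layout, measurement model and noise values are arbitrary choices made for the example.

```python
# Generic Kalman-filter update vectorized over N track candidates at once,
# illustrating the data-parallel structure; not the actual mkFit/CMSSW code.
import numpy as np

def batched_kalman_update(x, P, z, H, R):
    """x: (N, d) states, P: (N, d, d) covariances,
    z: (N, m) measurements, H: (m, d), R: (m, m)."""
    y = z - x @ H.T                               # innovation, (N, m)
    S = H @ P @ H.T + R                           # innovation covariance, (N, m, m)
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain, (N, d, m)
    x_new = x + np.einsum("ndm,nm->nd", K, y)     # updated states
    P_new = P - K @ H @ P                         # updated covariances
    return x_new, P_new

# Example: 1000 candidates with a 4-parameter state and a 2D hit measurement.
N, d, m = 1000, 4, 2
x = np.zeros((N, d))
P = np.tile(np.eye(d), (N, 1, 1))
H = np.zeros((m, d)); H[0, 0] = H[1, 1] = 1.0     # measure the first two parameters
R = 0.01 * np.eye(m)
z = np.random.normal(0.0, 0.1, size=(N, m))       # fake hit positions
x, P = batched_kalman_update(x, P, z, H, R)
```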

    Effect of genetic improvement of sheep in Ethiopia: Development of a dynamic stochastic simulation herd model

    A community-based sheep breeding program has been implemented in the highlands of Ethiopia to improve the body weight and reproductive performance of Menz sheep. This study adopts a system dynamics methodology to develop a dynamic stochastic simulation herd model to evaluate the effect of genetic improvement and additional feed sources (forage production) on herd dynamics and profitability. The study also explores the opportunities of a system dynamics approach for designing breeding programs and predicting the annual genetic gain of traits. Ten years of historical monthly rainfall data and four years of monthly temperature data were used. Sheep performance data were available from the herd-book of the community-based sheep breeding program. Additional input data were sourced from questionnaires, observation, literature and expert knowledge. The simulation model consists of three sub-models: vegetation growth and dynamics, herd structure and dynamics, and economic analysis. The time horizon was 240 months (20 years). The first 120 months served as a baseline scenario, in which the fattening of culled breeding rams was practiced. For the second 120 months, genetic selection on body weight was introduced, considering two scenarios: culled-ram fattening and lamb fattening. For the prediction of genetic gain, selection for six-month weight, pre-weaning survival and fertility rate was introduced from the initial stage of the simulation. Results from the model showed a gradual decrease in sheep population size while the body weight of the animals improved. The model keeps heavier animals in smaller flocks to match the herd's nutritional demand with the available resources. The simulation also demonstrates that breeding for heavier body weight was considerably more profitable than the baseline scenario, and that lamb fattening was more profitable than culled-ram fattening, which is the current practice. Smallholder farmers can gain more income by fattening young lambs because of lower production (health, housing and labour) and feed costs compared to fattening culled breeding rams. A reasonable annual genetic gain, rate of inbreeding and profit per ewe per year were predicted when community-based breeding is performed. System dynamics modelling is a valuable tool to describe breeding programs by building a simple, flexible and usage-driven simulation model. Results of the simulations were discussed on site with smallholder farmers, who are willing to reduce flock size in order to have healthy, fast-growing animals. Further development and evaluation of the model are needed, which will lead to its refinement and eventually aid in the evaluation of genetic selection in the Menz sheep population, helping smallholder farmers to increase their income.
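    The stock-and-flow logic of such a system dynamics herd model can be sketched very simply: stocks (flock size, cash) are updated each month by flows (births, deaths, sales) that depend on stochastic rainfall through feed availability. The toy example below follows that structure only; every parameter value is invented for illustration and none is taken from the study.

```python
# Toy monthly stock-and-flow herd simulation in the spirit of a system
# dynamics model. All parameters (rainfall statistics, rates, prices) are
# invented for illustration and are not taken from the study.
import random

random.seed(1)

ewes = 20.0        # breeding females (the main stock)
cash = 0.0
months = 240       # 20-year horizon, as in the paper

for month in range(months):
    # Stochastic rainfall drives forage availability (0..1 feed adequacy).
    rainfall = max(0.0, random.gauss(80.0, 30.0))          # mm per month
    feed_adequacy = min(1.0, rainfall / 100.0)

    # Flows: lambing, mortality and sales depend on feed adequacy.
    births = ewes * 0.06 * feed_adequacy
    deaths = ewes * (0.01 + 0.02 * (1.0 - feed_adequacy))
    sales = births * 0.5                                    # sell half the lambs

    ewes += births - deaths - sales
    cash += sales * 50.0 - ewes * 1.5                       # revenue minus upkeep

print(f"final flock: {ewes:.1f} ewes, cumulative margin: {cash:.1f}")
```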