

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade. Peer reviewed

    Grid accounting for computing and storage resources towards standardization

    In recent years we have seen growing interest, first from the scientific community and then from commercial vendors, in new technologies like Grid and Cloud computing. The first in particular was born to meet the enormous computational demands coming mostly from physics experiments, especially the Large Hadron Collider (LHC) experiments at CERN (Conseil Européen pour la Recherche Nucléaire, the European Laboratory for Particle Physics) in Geneva. Other scientific disciplines also benefiting from these technologies include biology, astronomy, earth sciences, and life sciences. Grid systems allow the sharing of heterogeneous computational and storage resources between geographically distributed institutes, agencies, and universities. For this purpose, technologies have been developed for communication, authentication, and the storing and processing of the required software and scientific data. This gives scientific communities access to computational resources that a single institute could not host for logistical and cost reasons. Grid systems were not the only answer to this growing need for resources. At the same time, we have seen the rise of so-called Cloud computing: a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction. Both computational paradigms, together with the storage resources they use, rely on different authentication and authorization tools, and their utilization requires systems for accounting of the consumed resources. These systems are built on top of the existing infrastructure and collect all the needed data about the users, groups, and resources involved.
    This information is then gathered in central repositories, where it can be analyzed and aggregated. The Open Grid Forum (OGF) is the international body that develops standards for the Grid environment. The Usage Record Working Group (UR-WG), born within OGF, aims at standardizing the Usage Record (UR) structure and publication for different kinds of resources. Up to now the emphasis has been on accounting for computational resources; over time the need emerged to extend these concepts to other aspects, and especially to the definition and implementation of a standard UR for storage accounting. Several extensions to the UR definition are proposed in this thesis, and the proposed developments in this field are described. The Distributed Grid Accounting System (DGAS) has been chosen, among the available tools, as the accounting system for the Italian Grid, and it is also adopted in other countries such as Greece and Germany. Together with HLRmon it offers a complete accounting system, and it is the tool used during the writing of this thesis at INFN-CNAF.
    • In Chapter 1, I focus on the paradigm of distributed computing and introduce the Grid infrastructure, with particular emphasis on the gLite middleware and the EGI-InSPIRE project.
    • In Chapter 2, I discuss Grid accounting systems for computational resources, with particular attention to DGAS.
    • In Chapter 3, the cross-check monitoring system used to verify the correctness of the data gathered at INFN-CNAF's Tier-1 is presented.
    • In Chapter 4, another important aspect of accounting, accounting for storage resources, is introduced, and the definition of a standard UR for storage accounting is presented.
    • In Chapter 5, an implementation of a new storage accounting system that uses the definitions given in Chapter 4 is presented.
    • In Chapter 6, the focus moves to the performance and reliability tests performed on the latest development release of DGAS, which implements ActiveMQ as a standard transport mechanism.
    • Appendix A collects the BASH scripts and SQL code that are part of the cross-check tool described in Chapter 3.
    • Appendix B collects the scripts used in the implementation of the accounting system described in Chapter 5.
    • Appendix C collects the scripts and configurations used for the tests of the ActiveMQ implementation of DGAS described in Chapter 6.
    • Appendix D collects the publications to which I contributed during the thesis work.
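The standard storage Usage Record discussed above is, in essence, a small structured document describing who used which resource and how much. As a rough illustration of the idea, the sketch below builds a minimal storage-accounting record with Python's standard library; the element names, the unit attribute, and the example values are illustrative placeholders, not the OGF-standardized schema.

```python
# Minimal sketch of a storage-accounting usage record, loosely modelled on
# the OGF Usage Record concepts above. Element names and values are
# illustrative, not the standardized schema.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def make_storage_ur(user, group, resource, used_bytes, measure_time):
    """Serialize one storage-usage measurement as a small XML record."""
    ur = ET.Element("UsageRecord")
    ET.SubElement(ur, "UserIdentity").text = user
    ET.SubElement(ur, "Group").text = group
    ET.SubElement(ur, "StorageResource").text = resource
    ET.SubElement(ur, "StorageUsed", unit="B").text = str(used_bytes)
    ET.SubElement(ur, "MeasureTime").text = measure_time.isoformat()
    return ET.tostring(ur, encoding="unicode")

record = make_storage_ur("user01", "atlas", "cnaf-tier1-disk",
                         42 * 1024**3,
                         datetime(2012, 1, 1, tzinfo=timezone.utc))
```

A real accounting system would publish such records to a central repository (for instance over a transport like ActiveMQ, as in the DGAS tests mentioned above) and aggregate them there.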

    Integrating multiple clusters for compute-intensive applications

    Multicluster grids provide one promising solution to satisfying the growing computational demands of compute-intensive applications. However, it is challenging to seamlessly integrate all participating clusters in different domains into a single virtual computational platform. In order to fully utilize the capabilities of multicluster grids, computer scientists need to deal with the issue of joining together participating autonomous systems practically and efficiently to execute grid-enabled applications. Driven by several compute-intensive applications, this thesis develops a multicluster grid management toolkit called Pelecanus to bridge the gap between users' needs and the system's heterogeneity. Application scientists are thus able to conduct very large-scale execution across multiple clusters with transparent QoS assurance. A novel model called DA-TC (Dynamic Assignment with Task Containers) is developed and integrated into Pelecanus. This model uses the concept of a task container, which decouples resource allocation from resource binding. It employs static load balancing for task container distribution and dynamic load balancing for task assignment. In this manner, the slowest resources become useful rather than becoming bottlenecks. A cluster abstraction is implemented, which not only provides cluster information to the DA-TC execution model but can also be used as a standalone toolkit to monitor and evaluate a cluster's functionality and performance. The performance of the proposed DA-TC model is evaluated both theoretically and experimentally. Results demonstrate the importance of reducing queuing time in decreasing the total turnaround time for an application. Experiments were conducted to understand the performance of various aspects of the DA-TC model, and showed that the model can significantly reduce turnaround time and increase resource utilization for the targeted application scenarios.
    Four applications are implemented as case studies to determine the applicability of the DA-TC model. In each case the turnaround time is greatly reduced, which demonstrates that the DA-TC model is efficient for assisting application scientists in conducting their research. In addition, virtual resources were integrated into the DA-TC model for application execution. Experiments show that the execution model proposed in this thesis works seamlessly with multiple hybrid grid/cloud resources to achieve reduced turnaround time.
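As a toy illustration of the DA-TC idea described above, the following Python sketch simulates static container placement with dynamic task binding: containers are distributed evenly across clusters up front, while individual tasks stay in a central queue and are bound only as containers become free, so slow clusters pull fewer tasks instead of stalling the run. The cluster names, speeds, and the round-based timing model are invented for the example.

```python
# Toy simulation of the DA-TC scheduling idea: static container placement,
# dynamic task binding. Speeds and the round-based model are made up.
from collections import deque

def run_da_tc(tasks, cluster_speeds, containers_per_cluster=2):
    """Assign tasks from a central queue to clusters of differing speed."""
    task_queue = deque(tasks)                       # dynamic binding: tasks stay central
    done = {name: [] for name in cluster_speeds}
    # static load balancing: every cluster hosts the same number of containers
    containers = {name: containers_per_cluster for name in cluster_speeds}
    # simulated time steps: a cluster of speed s finishes s tasks per
    # container per step, so faster clusters simply pull more work
    while task_queue:
        for name, speed in cluster_speeds.items():
            for _ in range(speed * containers[name]):
                if not task_queue:
                    break
                done[name].append(task_queue.popleft())
    return done

result = run_da_tc(list(range(30)), {"fast": 4, "slow": 1})
```

With these invented speeds the fast cluster ends up with four times the slow cluster's share, yet the slow cluster still contributes useful work rather than holding back the whole application.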

    Belle II Technical Design Report

    The Belle detector at the KEKB electron-positron collider has collected almost 1 billion Y(4S) events in its decade of operation. Super-KEKB, an upgrade of KEKB, is under construction to increase the luminosity by two orders of magnitude during a three-year shutdown, with an ultimate goal of 8E35 cm^-2 s^-1 luminosity. To exploit the increased luminosity, an upgrade of the Belle detector has been proposed, and a new international collaboration, Belle II, is being formed. The Technical Design Report presents the physics motivation, the basic methods of the accelerator upgrade, and the key improvements of the detector. Comment: Edited by Z. Doležal and S. Uno

    Web-Based Visualization of Very Large Scientific Astronomy Imagery

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high-performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, and public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating-point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are lightweight, platform-independent web applications built on standard HTML5 web technologies, compatible with both touch- and mouse-based devices. We assess the performance of the system and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data. Comment: Published in Astronomy & Computing. IIPImage server available from http://iipimage.sourceforge.net . Visiomatic code and demos available from http://www.visiomatic.org
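A client-server viewer of this kind typically serves each zoom level of an image pyramid as a grid of fixed-size tiles, so the client only fetches the tiles covering its current viewport instead of the whole image. The sketch below computes that tile set; the 256-pixel tile size and the (x, y, w, h) viewport convention are assumptions for the example, not details taken from the paper.

```python
# Sketch of the tile-pyramid access pattern behind a remote image viewer:
# at each zoom level the image is cut into fixed-size tiles, and the client
# requests only the tiles intersecting its viewport.
def tiles_for_viewport(level_width, level_height, viewport, tile=256):
    """Return (col, row) indices of tiles intersecting the viewport at one level."""
    x, y, w, h = viewport
    c0, r0 = x // tile, y // tile
    c1 = min((x + w - 1) // tile, (level_width - 1) // tile)   # clamp to level edge
    r1 = min((y + h - 1) // tile, (level_height - 1) // tile)
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# a 1024x768 viewport in the top-left corner of a 100k x 100k pixel level
needed = tiles_for_viewport(100_000, 100_000, (0, 0, 1024, 768))
```

Only the returned handful of tiles needs to cross the network, which is what keeps terabyte-scale datasets responsive even on mobile clients.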

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for decision making. The definition of real-time depends on the application under study, ranging from response times of a few μs up to several hours for very compute-intensive tasks. At this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, as well as specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications to accelerate ring reconstruction in RICH detectors when seeds from external trackers are not available.
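The trigger algorithm itself is not reproduced here, but the seedless setting can be illustrated with a classic algebraic least-squares circle fit (the Kåsa fit), which estimates a ring's centre and radius directly from the hit coordinates without any external seed. This is a generic textbook method used purely for illustration, not the GAP algorithm.

```python
# Generic algebraic least-squares circle fit (Kasa fit): find D, E, F
# minimizing sum((x^2 + y^2 + D*x + E*y + F)^2) over the hit points.
# Illustrative stand-in, not the GAP trigger algorithm itself.
import math

def fit_circle(points):
    """Return (cx, cy, r) of the best-fit circle through `points`."""
    # accumulate the 3x3 normal equations A @ [D, E, F] = b
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in points:
        z = x * x + y * y
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] -= z * row[i]
    # solve by Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, 3):
            f = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (b[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    # circle parameters: centre (-D/2, -E/2), r^2 = cx^2 + cy^2 - F
    cx, cy = -sol[0] / 2.0, -sol[1] / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - sol[2])

# synthetic hits on a ring of radius 2 centred at (1, 1)
hits = [(1 + 2 * math.cos(t / 10), 1 + 2 * math.sin(t / 10)) for t in range(40)]
cx, cy, r = fit_circle(hits)
```

Because the fit reduces to one small linear solve per ring candidate, methods of this family map naturally onto GPUs, where many candidates can be fitted in parallel.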
