
    The AliEn system, status and perspectives

    AliEn is a production environment that implements several components of the Grid paradigm needed to simulate, reconstruct and analyse HEP data in a distributed way. The system is built around Open Source components, uses the Web Services model and standard network protocols to implement the computing platform that is currently being used to produce and analyse Monte Carlo data at over 30 sites on four continents. The aim of this paper is to present the current AliEn architecture and outline its future developments in the light of emerging standards.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 10 pages, Word, 10 figures. PSN MOAT00

    Mobile Computing in Physics Analysis - An Indicator for eScience

    This paper presents the design and implementation of a Grid-enabled physics analysis environment for handheld and other resource-limited computing devices, as one example of the use of mobile devices in eScience. Handheld devices offer great potential because they provide ubiquitous access to data and round-the-clock connectivity over wireless links. Our solution aims to give users of handheld devices the capability to launch heavy computational tasks on computational and data Grids, monitor their jobs' status during execution, and retrieve results after job completion. Users carry their jobs on their handheld devices in the form of executables (and associated libraries). Users can transparently view the status of their jobs and get back their outputs without having to know where the jobs are being executed. In this way, our system acts as a high-throughput computing environment in which devices ranging from powerful desktop machines to small handhelds can employ the power of the Grid. The results shown in this paper are readily applicable to the wider eScience community.
    Comment: 8 pages, 7 figures. Presented at the 3rd International Conference on Mobile Computing & Ubiquitous Networking (ICMU06), London, October 2006.
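
    The submit / monitor / retrieve cycle described above can be pictured as a small client facade. The sketch below is a hypothetical C++ interface (the names GridSession, submit, status and retrieve are illustrative, not the paper's actual API), with stubbed bodies so it compiles and runs:

        #include <iostream>
        #include <string>
        #include <vector>

        // Hypothetical job states mirroring the lifecycle described above.
        enum class JobState { Queued, Running, Done, Failed };

        // Hypothetical facade for the handheld client; the real system's
        // interface is not specified in the abstract.
        class GridSession {
        public:
            // Upload an executable (plus libraries) carried on the device
            // and schedule it on the Grid; returns an opaque job id.
            std::string submit(const std::string& executable,
                               const std::vector<std::string>& libraries) {
                (void)executable; (void)libraries;
                return "job-001";       // stub: a real system returns a Grid job id
            }
            // Poll the job's state without knowing where it runs.
            JobState status(const std::string& jobId) {
                (void)jobId;
                return JobState::Done;  // stub: would query the Grid over the wireless link
            }
            // Fetch the output files after completion.
            std::vector<std::string> retrieve(const std::string& jobId) {
                (void)jobId;
                return {"stdout.txt"};  // stub: would download the job's outputs
            }
        };

        int main() {
            GridSession grid;
            std::string id = grid.submit("analysis.exe", {"libphysics.so"});
            while (grid.status(id) == JobState::Running) { /* poll transparently */ }
            for (const std::string& f : grid.retrieve(id))
                std::cout << "retrieved " << f << '\n';
        }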

    Policy based roles for distributed systems security

    Distributed systems are increasingly being used in commercial environments, necessitating the development of trustworthy and reliable security mechanisms. There is often no clear informal or formal specification of enterprise authorisation policies, and no tools to translate policy specifications into access control implementation mechanisms such as capabilities or Access Control Lists. It is thus difficult to analyse a policy to detect conflicts or flaws, and difficult to verify that the implementation corresponds to the policy specification. We present in this paper a framework for the specification of management policies. We are concerned with two types of policies: obligations, which specify what activities a manager or agent must or must not perform on a set of target objects, and authorisations, which specify what activities a subject (manager or agent) can or cannot perform on the set of target objects. Management policies are then grouped into roles reflecting the organisation…
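
    A minimal sketch of how the two policy types and their grouping into roles could be represented; the struct and field names below are illustrative, not the paper's notation:

        #include <cstdio>
        #include <string>
        #include <vector>

        // Activities a manager or agent MUST (or must not) perform on targets.
        struct Obligation {
            std::string subject;              // manager or agent the duty binds
            std::string activity;             // e.g. "audit"
            std::vector<std::string> targets; // target objects
            bool negative = false;            // true => "must not"
        };

        // Activities a subject is PERMITTED (or forbidden) to perform.
        struct Authorisation {
            std::string subject;
            std::string activity;
            std::vector<std::string> targets;
            bool negative = false;            // true => "can not"
        };

        // Policies grouped into a role reflecting an organisational position.
        struct Role {
            std::string name;
            std::vector<Obligation> obligations;
            std::vector<Authorisation> authorisations;
        };

        int main() {
            Role r{"network-operator",
                   {{"operator", "audit", {"router-17"}, false}},
                   {{"operator", "reconfigure", {"router-17"}, true}}};
            std::printf("%s: %zu obligations, %zu authorisations\n",
                        r.name.c_str(), r.obligations.size(), r.authorisations.size());
        }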

    A study of System Interface Sets (SIS) for the host, target and integration environments of the Space Station Program (SSP)

    System interface sets (SIS) for large, complex, non-stop, distributed systems are examined. The SIS of the Space Station Program (SSP) was selected as the focus of this study because an appropriate virtual interface specification of the SIS is believed to have the most potential to free the project from four life-cycle tyrannies, each rooted in a dependence on either a proprietary or a particular instance of: operating systems, data management systems, communications systems, and instruction set architectures. The static perspectives of the common Ada programming support environment interface set (CAIS) and the portable common execution environment (PCEE) activities are discussed. Also, the dynamic perspective of the PCEE is addressed.

    The Clarens web services architecture

    Clarens is a uniquely flexible web services infrastructure providing a unified access protocol to a diverse set of functions useful to the HEP community. It uses the standard HTTP protocol combined with application-layer, certificate-based authentication to provide single sign-on for individuals, organizations and hosts, with fine-grained access control to services, files and virtual organization (VO) management. This contribution describes the server functionality, while client applications are described in a subsequent talk.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 6 pages, LaTeX, 4 figures. PSN MONT00
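
    As an illustration of the access pattern (standard HTTP over TLS with an application-layer client certificate), here is a minimal libcurl sketch; the endpoint URL and certificate paths are placeholders, and this is not the Clarens client library itself:

        #include <curl/curl.h>

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL* curl = curl_easy_init();
            if (!curl) return 1;

            // Placeholder service endpoint; a real deployment exposes its own URL.
            curl_easy_setopt(curl, CURLOPT_URL, "https://clarens.example.org/service");

            // Certificate-based authentication: present the user's certificate
            // and private key over standard TLS.
            curl_easy_setopt(curl, CURLOPT_SSLCERT, "/path/to/usercert.pem");
            curl_easy_setopt(curl, CURLOPT_SSLKEY,  "/path/to/userkey.pem");

            // Response body is written to stdout by default.
            CURLcode rc = curl_easy_perform(curl);

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return rc == CURLE_OK ? 0 : 1;
        }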

    ROOT - A C++ Framework for Petabyte Data Storage, Statistical Analysis and Visualization

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored in a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical (columnar) data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. To analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, ROOT offers packages for complex data modeling and fitting, as well as multivariate classification based on machine learning techniques. A central piece of these analysis tools is the set of histogram classes, which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like PostScript and PDF, or in bitmap formats like JPG or GIF. Results can also be stored as ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g., data mining in HEP - by using PROOF, which takes care of optimally distributing the work over the available resources in a transparent way.
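
    A short ROOT macro illustrating the workflow the abstract describes - persist values in a TTree inside a ROOT file, then histogram and fit them; the file, tree and branch names are arbitrary:

        // demo.C - run interpreted with:  root -l demo.C
        #include "TFile.h"
        #include "TH1D.h"
        #include "TRandom3.h"
        #include "TTree.h"

        void demo() {
            // Persist data: a TTree of doubles inside a compressed ROOT file.
            TFile f("demo.root", "RECREATE");
            TTree t("t", "toy data");
            double x = 0;
            t.Branch("x", &x, "x/D");

            // Histogram classes provide the binning; detach this one from the
            // file directory so the stack object is not deleted at Close().
            TH1D h("h", "x distribution;x;entries", 100, -5, 5);
            h.SetDirectory(nullptr);

            TRandom3 rng;
            for (int i = 0; i < 10000; ++i) {
                x = rng.Gaus(0, 1);  // toy Gaussian data
                t.Fill();
                h.Fill(x);
            }
            t.Write();               // machine-independent compressed storage

            h.Fit("gaus");           // one of the built-in fitting facilities
            f.Close();
        }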

    Cosmological Simulations on a Grid of Computers

    The work presented in this paper aims at restricting the input parameter values of the semi-analytical model used in GALICS and MOMAF, so as to determine which parameters influence the results the most, e.g., the star formation, feedback and halo recycling efficiencies. Our approach is empirical: we run a large number of simulations and derive the valid ranges of values. The computation time needed is so large that we have to run on a grid of computers. Hence, we model the GALICS and MOMAF execution times and output file sizes, and run the simulations using a grid middleware: DIET. All the complexity of accessing resources, scheduling simulations and managing data is handled by DIET and hidden behind a web portal accessible to the users.
    Comment: Accepted and published in AIP Conference Proceedings 1241, 2010, pages 816-82
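
    The empirical approach amounts to a parameter sweep dispatched through the middleware. In the sketch below, submit_to_diet is a stand-in for the real DIET calls (which the abstract does not detail), and the parameter grid is invented for illustration:

        #include <cstdio>

        // One input point of the semi-analytical model; the three parameters
        // are those the abstract mentions, with illustrative ranges.
        struct ModelParams {
            double starFormationEff;
            double feedbackEff;
            double haloRecyclingEff;
        };

        // Stand-in for the real middleware submission (DIET); here it only
        // logs the request instead of scheduling a GALICS/MOMAF run.
        void submit_to_diet(const ModelParams& p) {
            std::printf("submit: sf=%.2f fb=%.2f hr=%.2f\n",
                        p.starFormationEff, p.feedbackEff, p.haloRecyclingEff);
        }

        int main() {
            // Sweep a coarse grid; each point becomes one simulation job
            // that the middleware schedules on the grid.
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    for (int k = 0; k < 4; ++k)
                        submit_to_diet({0.1 + 0.3 * i, 0.1 + 0.3 * j, 0.1 + 0.3 * k});
        }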