
    Querying Large Physics Data Sets Over an Information Grid

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of, and computation across, massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing, and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects, such as the role of 'middleware' for Grids, are emerging as areas requiring research. This paper argues the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data in query resolution and the importance of Information Grids, rather than just Computational or Data Grids, for high-energy physics analysis. The paper also identifies software being implemented at CERN to enable the querying of very large collaborating HEP data sets, initially employed for the construction of CMS detectors. Comment: 4 pages, 3 figures
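The metadata-based query resolution the abstract describes can be pictured at a high level: a catalogue maps data-set names to the sites replicating them, so a query touches only sites that actually hold the data. The sketch below is a minimal illustration under assumed names (MetadataCatalogue, Site, resolve_query); it is not the CERN software the paper reports on.

```python
"""Illustrative sketch (not the CERN implementation): resolve a physics
query by consulting a metadata catalogue first, so that only sites
actually holding the relevant data set are contacted."""
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    events: list  # stand-in for a remote HEP data store

    def run_subquery(self, predicate):
        # In a real Information Grid this would be a remote call.
        return [e for e in self.events if predicate(e)]


class MetadataCatalogue:
    """Maps data-set names to the sites that replicate them."""

    def __init__(self):
        self._index = {}

    def register(self, dataset: str, site: Site):
        self._index.setdefault(dataset, []).append(site)

    def locate(self, dataset: str):
        return self._index.get(dataset, [])


def resolve_query(catalogue: MetadataCatalogue, dataset: str, predicate):
    # Meta-data drives query resolution: look up replicas, then query one.
    sites = catalogue.locate(dataset)
    if not sites:
        raise LookupError(f"no site holds data set {dataset!r}")
    return sites[0].run_subquery(predicate)


cern = Site("CERN", [{"run": 1, "pt": 42.0}, {"run": 2, "pt": 7.5}])
catalogue = MetadataCatalogue()
catalogue.register("cms-testbeam", cern)
print(resolve_query(catalogue, "cms-testbeam", lambda e: e["pt"] > 10.0))
```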

    Cosmological Simulations on a Grid of Computers

    The work presented in this paper aims at constraining the input parameter values of the semi-analytical model used in GALICS and MOMAF, so as to determine which parameters influence the results the most, e.g., star formation, feedback and halo recycling efficiencies. Our approach is to proceed empirically: we run many simulations and derive the valid ranges of values. The computation time needed is so large that the runs must be carried out on a grid of computers. Hence, we model the execution time and output file sizes of GALICS and MOMAF, and run the simulations using a grid middleware: DIET. All the complexity of accessing resources, scheduling simulations and managing data is handled by DIET and hidden behind a web portal accessible to the users. Comment: Accepted and published in AIP Conference Proceedings 1241, 2010, pages 816-82
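A rough sketch of the approach described, enumerate parameter combinations, predict each run's cost from a fitted execution-time model, and order runs for dispatch to grid workers, might look as follows. The parameter names, the cost formula and the scheduling heuristic are all illustrative assumptions; GALICS, MOMAF and DIET are not invoked here.

```python
"""Hedged sketch (not GALICS/MOMAF/DIET code): sweep model parameters,
predict each run's cost with a stand-in execution-time model, and order
runs for dispatch, e.g. longest-first across grid workers."""
import itertools

# Hypothetical parameter ranges; the paper constrains, e.g., star
# formation, feedback and halo recycling efficiencies.
GRID = {
    "star_formation_eff": [0.01, 0.05, 0.10],
    "feedback_eff": [0.1, 0.3],
    "halo_recycling_eff": [0.0, 0.5],
}


def predicted_cost(params: dict) -> float:
    # Stand-in for the paper's fitted model of execution time and
    # output size; the formula below is purely illustrative.
    return 1.0 + 10.0 * params["star_formation_eff"] + params["feedback_eff"]


def sweep(grid: dict):
    """Yield every combination of parameter values in the grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))


# Longest-processing-time-first is a common heuristic when independent
# runs must be spread over grid servers by a middleware such as DIET.
runs = sorted(sweep(GRID), key=predicted_cost, reverse=True)
for r in runs[:3]:
    print(f"cost={predicted_cost(r):.2f}  params={r}")
```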

    An Error Handling Framework for the ORBWork Workflow Enactment Service of METEOR

    Workflow Management Systems (WFMSs) can be used to re-engineer, streamline, automate, and track organizational processes involving humans and automated information systems. However, the state of the art in workflow technology suffers from a number of limitations that prevent it from being widely used in large-scale, mission-critical applications. Error handling is one such issue. What makes the task of error handling challenging is the need to deal with errors that appear in various components of a complex distributed application execution environment, including the WFMS components themselves, workflow application tasks of different types, and the heterogeneous computing infrastructure. In this paper, we discuss a top-down approach to dealing with errors in the context of ORBWork, a CORBA-based, fully distributed workflow enactment service for the METEOR2 WFMS. The paper discusses the types of errors that might occur, including those involving the infrastructure of the enactment environment and the system architecture of the workflow enactment service. In the context of the underlying workflow model for METEOR, we then present a three-level error model that provides a unified approach to the specification, detection, and runtime recovery of errors in ORBWork. Implementation issues are also discussed. We expect the model and many of the techniques to be relevant and adaptable to other WFMS implementations.
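One way to picture a three-level error model of the kind the paper proposes is as an exception hierarchy in which every error carries the level at which it was detected and recovery escalates from the most local handler outward. The class names and recovery actions below are hypothetical stand-ins, not ORBWork's actual interfaces.

```python
"""Illustrative three-level error model in the spirit of the paper:
task-level, enactment-level and infrastructure-level errors, with a
unified recovery entry point. Class names are hypothetical."""


class WorkflowError(Exception):
    """Base class; records the level at which the error was detected."""
    level = "unspecified"


class TaskError(WorkflowError):
    level = "task"            # e.g. an application task returned failure


class EnactmentError(WorkflowError):
    level = "enactment"       # e.g. a scheduler or task-manager fault


class InfrastructureError(WorkflowError):
    level = "infrastructure"  # e.g. an ORB, network or host failure


def recover(error: WorkflowError) -> str:
    # Unified runtime recovery: apply the most local strategy for the
    # level at which the error surfaced, escalating when none applies.
    strategies = {
        "task": f"retry the task ({error})",
        "enactment": f"restart the enactment component ({error})",
        "infrastructure": f"fail over to another host ({error})",
    }
    return strategies.get(error.level, f"escalate to an operator ({error})")


print(recover(TaskError("credit-check task returned error code 57")))
```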

    DIET : new developments and recent results

    Among existing grid middleware approaches, one simple, powerful, and flexible approach consists of using servers available in different administrative domains through the classic client-server or Remote Procedure Call (RPC) paradigm. Network Enabled Servers (NES) implement this model, also called GridRPC. Clients submit computation requests to a scheduler whose goal is to find a server available on the grid. The aim of this paper is to give an overview of an NES middleware developed in the GRAAL team called DIET and to describe recent developments. DIET (Distributed Interactive Engineering Toolbox) is a hierarchical set of components used for the development of applications based on computational servers on the grid.
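The two-phase GridRPC interaction the abstract summarises, first ask a scheduling agent for a suitable server, then invoke the service on that server, can be sketched as below. DIET's real API is C/C++ with a hierarchy of Master/Local Agents and Server Daemons (SeDs); the Python names here are stand-ins for illustration.

```python
"""Minimal Python sketch of the GridRPC/NES pattern DIET implements:
a client submits a problem to an agent, the agent locates an available
server, and the client then invokes it. All names are illustrative."""
import random


class Server:
    def __init__(self, name: str, services: set):
        self.name, self.services, self.busy = name, services, False

    def solve(self, service: str, *args):
        # Stand-in for the remote procedure call itself.
        return f"{self.name} solved {service}{args}"


class Agent:
    """Plays the scheduler role: find a server able to run the service."""

    def __init__(self, servers: list):
        self.servers = servers

    def submit(self, service: str) -> Server:
        candidates = [s for s in self.servers
                      if service in s.services and not s.busy]
        if not candidates:
            raise RuntimeError(f"no server available for {service!r}")
        return random.choice(candidates)


agent = Agent([Server("SeD-1", {"dgemm"}), Server("SeD-2", {"dgemm", "fft"})])
server = agent.submit("dgemm")       # phase 1: scheduling
print(server.solve("dgemm", 1024))   # phase 2: invocation
```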

    Forum Session at the First International Conference on Service Oriented Computing (ICSOC03)

    The First International Conference on Service Oriented Computing (ICSOC) was held in Trento, December 15-18, 2003. The focus of the conference, Service Oriented Computing (SOC), is the emerging paradigm for distributed computing and e-business processing that has evolved from object-oriented and component computing to enable the building of agile networks of collaborating business applications distributed within and across organizational boundaries. Of the 181 papers submitted to the ICSOC conference, 10 were selected for the forum session, which took place on December 16, 2003. The papers were chosen based on their technical quality, originality, relevance to SOC, and suitability for a poster presentation or a demonstration. This technical report contains the 10 papers presented during the forum session at the ICSOC conference. In particular, the last two papers in the report were submitted as industrial papers.

    CORBA-based Workflow Architectures: The Object-Oriented Core Application of Bausparkasse Mainz AG

    When initiating workflow projects to support business processes, the question arises whether and why a company should develop an individual workflow system, given the numerous standard systems available on the market. A number of arguments suggest that custom in-house developments are an alternative well worth considering alongside existing standard systems. From the discussion of this aspect arises, among others, the question of whether the in-house development of a CORBA-compliant workflow system is worthwhile. Promising systems that have already been realised build on the standardised, consistently object-oriented architecture of the Object Management Group (OMG). Its 'CORBA' standard (Common Object Request Broker Architecture) offers forward-looking technological advantages (e.g., distribution, platform independence, interoperability, modularity) and exhibits synergies with the workflow concept. Bausparkasse Mainz AG (BKM) decided in favour of the in-house development of a CORBA-compliant workflow system as early as 1996; BKM's new core application 'BKM-Joker' is sketched at the end of this contribution.

    Design Patterns for Description-Driven Systems

    In data modelling, product information has most often been handled separately from process information. The integration of product and process models in a unified data model could provide the means by which information could be shared across an enterprise throughout the system lifecycle, from design through to production. Recently, attempts have been made to integrate these two separate views of systems by identifying common data models. This paper relates description-driven systems to multi-layer architectures and reveals where existing design patterns facilitate the integration of product and process models, where patterns are missing, and where existing patterns require enrichment for this integration. It reports on the construction of a so-called description-driven system which integrates Product Data Management (PDM) and Workflow Management (WfM) data models through a common meta-model. Comment: 14 pages, 13 figures. Presented at the 3rd Enterprise Distributed Object Computing (EDOC'99) conference, Mannheim, Germany, September 1999.
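The item/description split at the heart of a description-driven system resembles the type-object pattern: instance-level items are validated against meta-level descriptions, so product and process kinds can share one mechanism. The classes below are a hypothetical sketch, not the paper's actual PDM/WfM meta-model.

```python
"""Sketch of the description-driven idea (multi-layer, item/description
split): domain items hold values, while separate description objects
capture their structure. All class names here are illustrative."""
from dataclasses import dataclass, field


@dataclass
class Description:
    """Meta-level object: describes a kind of item (type-object style)."""
    name: str
    attributes: list


@dataclass
class Item:
    """Instance-level object, validated against its Description."""
    described_by: Description
    values: dict = field(default_factory=dict)

    def set(self, attr: str, value):
        if attr not in self.described_by.attributes:
            raise AttributeError(
                f"{self.described_by.name} has no attribute {attr!r}")
        self.values[attr] = value


# Product and process descriptions share the same meta-level mechanism,
# echoing the paper's integration of PDM and WfM views.
part_desc = Description("DetectorPart", ["material", "length_mm"])
step_desc = Description("AssemblyStep", ["operator", "duration_min"])

part = Item(part_desc)
part.set("material", "scintillator")
print(part.values)
```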
    • 

    corecore