634 research outputs found

    Performance Evaluation of CORBA Concurrency Control Service Using Stochastic Petri Nets

    Get PDF
    The interest in performance evaluation of middleware systems is increasing. Measurement techniques are still predominant among those used to carry out performance evaluation. However, performance models are currently being defined due to their flexibility, precision and facilities to carry out capacity planning activities. This paper presents stochastic Petri net models for performance evaluation of the CORBA Concurrency Control Service (CCS), which mediates concurrent access to objects. In order to validate the proposed models, CCS performance results obtained using those models are then compared against ones obtained through actual measurements.
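
    As an illustration of the kind of quantity such a stochastic Petri net model predicts, the sketch below runs a minimal discrete-event simulation of clients contending for a single exclusive lock, with exponentially distributed inter-arrival and lock-holding times. It is a hypothetical, simplified analogue of lock contention under the CCS, not the models from the paper; the rates, function names and request count are all assumptions.

        # Minimal sketch (assumption): M/M/1-style simulation of requests queueing
        # for one exclusive lock, the kind of mean response time an SPN model of
        # the CORBA Concurrency Control Service would be used to predict.
        import random

        def simulate_lock_contention(arrival_rate=5.0, service_rate=8.0,
                                     n_requests=100_000, seed=42):
            rng = random.Random(seed)
            clock = 0.0           # arrival time of the current lock request
            lock_free_at = 0.0    # time at which the lock is released
            total_response = 0.0
            for _ in range(n_requests):
                clock += rng.expovariate(arrival_rate)   # next request arrives
                start = max(clock, lock_free_at)         # wait while the lock is held
                hold = rng.expovariate(service_rate)     # time this request holds the lock
                lock_free_at = start + hold
                total_response += lock_free_at - clock   # waiting time + holding time
            return total_response / n_requests

        if __name__ == "__main__":
            print("simulated mean response time:", round(simulate_lock_contention(), 4))
            print("analytic M/M/1 response time:", round(1 / (8.0 - 5.0), 4))

    Comparing the simulated mean against the closed-form M/M/1 value mirrors, in miniature, the paper's validation step of checking model predictions against measurements.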

    Reviewing SWAP

    Get PDF

    C.R.I.S.T.A.L. Concurrent Repository & Information System for Tracking Assembly and production Lifecycles: A data capture and production management tool for the assembly and construction of the CMS ECAL detector

    Get PDF
    The CMS experiment will comprise several very large high resolution detectors for physics. Each detector may be constructed of well over a million parts and will be produced and assembled during the next decade by specialised centres distributed world-wide. Each constituent part of each detector must be accurately measured and tested locally prior to its ultimate assembly and integration in the experimental area at CERN. The CRISTAL project (Concurrent Repository and Information System for Tracking Assembly and production Lifecycles) [1] aims to monitor and control the quality of the production and assembly process to aid in optimising the performance of the physics detectors and to reject unacceptable constituent parts as early as possible in the construction lifecycle. During assembly CRISTAL will capture all the information required for subsequent detector calibration. Distributed instances of Object databases linked via CORBA [2] and with WWW/Java-based query processing are the main technology aspects of CRISTAL.
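
    As a toy illustration of the per-part data capture and early rejection described above, the sketch below models a detector part with locally recorded measurements and a simple acceptance check. The class names, field names and tolerance values are hypothetical and are not taken from the CRISTAL schema.

        # Hypothetical sketch of per-part quality capture in the spirit of CRISTAL;
        # not the actual CRISTAL data model or its CORBA/object-database back end.
        from dataclasses import dataclass, field

        @dataclass
        class Measurement:
            name: str        # e.g. "crystal_length_mm" (assumed name)
            value: float
            lower: float     # acceptance bounds set by the production centre
            upper: float

            def passed(self) -> bool:
                return self.lower <= self.value <= self.upper

        @dataclass
        class DetectorPart:
            part_id: str
            centre: str                              # centre that measured the part
            measurements: list = field(default_factory=list)

            def record(self, m: Measurement) -> None:
                self.measurements.append(m)

            def acceptable(self) -> bool:
                # Reject as early as possible: any out-of-tolerance measurement fails the part.
                return all(m.passed() for m in self.measurements)

        part = DetectorPart("ECAL-0001", "CERN")
        part.record(Measurement("crystal_length_mm", 230.2, 229.8, 230.6))
        part.record(Measurement("light_yield_pe_per_mev", 9.1, 8.0, 12.0))
        print(part.part_id, "accepted" if part.acceptable() else "rejected")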

    Third Workshop on Modelling of Objects, Components, and Agents

    Get PDF
    This booklet contains the proceedings of the Third International Workshop on Modelling of Objects, Components, and Agents (MOCA'04), October 11-13, 2004. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark and the "Theoretical Foundations of Computer Science" group at the University of Hamburg. The home page of the workshop is: http://www.daimi.au.dk/CPnets/workshop0

    Modeling and Generating Tailored Distribution Middleware for Embedded Real-Time Systems

    Get PDF
    Distributed real-time embedded (DRE) systems are becoming increasingly complex. They have to meet more and more stringent requirements, both functional and non-functional. Because of this, DRE systems development makes use of formal methods for verification and, in some cases, for the generation of proven code. The distribution aspects are typically handled by a middleware, which must meet the system constraints. In this article, we describe our approach to modelling and generating middleware-based distributed systems for DRE applications. Our methodology is a three-step approach. First, we model the high-level inter-component interactions using connectors. We then use the Architecture Analysis and Design Language (AADL) as a pre-implementation description language to capture all the non-functional aspects of the system. Finally, we generate the actual application code and the appropriate middleware from the AADL description. To demonstrate the feasibility of our approach, we created an application generator, Gaia. It is part of the Ocarina AADL tool suite and generates application source code for use with the PolyORB middleware.
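
    To make the three-step flow concrete, the toy sketch below turns a hand-written, in-memory description of components and connectors into per-component stub code. It is only a hypothetical analogue of what Gaia/Ocarina does from a real AADL model; the model structure, property names and generated API are invented for illustration.

        # Toy sketch (assumption): emitting distribution stubs from a simple
        # component/connector description, standing in for AADL-driven generation.
        MODEL = {
            "components": {
                "sensor":  {"period_ms": 10},
                "monitor": {"period_ms": 50},
            },
            "connectors": [
                {"from": "sensor", "to": "monitor", "message": "reading"},
            ],
        }

        def generate_stub(name: str, props: dict, connectors: list) -> str:
            lines = [f"# generated stub for component '{name}'",
                     f"PERIOD_MS = {props['period_ms']}"]
            for c in connectors:
                if c["from"] == name:
                    lines.append(f"def send_{c['message']}(payload): ...  # marshal and send to '{c['to']}'")
                if c["to"] == name:
                    lines.append(f"def on_{c['message']}(payload): ...    # upcall made by the middleware")
            return "\n".join(lines)

        for comp, props in MODEL["components"].items():
            print(generate_stub(comp, props, MODEL["connectors"]))
            print()

    In the real tool chain the input is an AADL description rather than a Python dictionary, and the generated code targets the PolyORB middleware instead of printed stubs.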

    A language and toolkit for the specification, execution and monitoring of dependable distributed applications

    Get PDF
    PhD thesis. This thesis addresses the problem of specifying the composition of distributed applications out of existing applications, possibly legacy ones. With the automation of business processes on the increase, more and more applications of this kind are being constructed. The resulting applications can be quite complex, are usually long-lived and are executed in a heterogeneous environment. In a distributed environment, long-lived activities need support for fault tolerance and dynamic reconfiguration. Indeed, it is likely that the environment where they are run will change (nodes may fail, services may be moved elsewhere or withdrawn) during their execution, and the specification will have to be modified. There is also a need for modularity, scalability and openness. However, most of the existing systems only consider part of these requirements. A new area of research, called workflow management, has been trying to address these issues. This work first looks at what needs to be addressed to support the specification and execution of these new applications in a heterogeneous, distributed environment. A co-ordination language (scripting language) is developed that fulfils the requirements of specifying the composition and inter-dependencies of distributed applications with the properties of dynamic reconfiguration, fault tolerance, modularity, scalability and openness. The architecture of the overall workflow system and its implementation are then presented. The system has been implemented as a set of CORBA services, and the execution environment is built using a transactional workflow management system. Next, the thesis describes the design of a toolkit to specify, execute and monitor distributed applications. The design of the co-ordination language and the toolkit represents the main contribution of the thesis. Funding: UK Engineering and Physical Sciences Research Council, CaberNet, Northern Telecom (Nortel).
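
    For illustration, the sketch below is a minimal task-composition runner with declared inter-dependencies and a simple per-task retry policy. It is a hypothetical stand-in for the kind of composition the co-ordination language expresses, not the thesis's actual notation, its CORBA services or its transactional workflow engine.

        # Hypothetical sketch: composing existing applications as tasks with
        # dependencies and retries; not the thesis's scripting language or runtime.
        class Task:
            def __init__(self, name, action, depends_on=(), retries=1):
                self.name, self.action = name, action
                self.depends_on, self.retries = list(depends_on), retries

        def run_workflow(tasks):
            done, results = set(), {}
            pending = {t.name: t for t in tasks}
            while pending:
                ready = [t for t in pending.values() if all(d in done for d in t.depends_on)]
                if not ready:
                    raise RuntimeError("cyclic or unsatisfiable dependencies")
                for t in ready:
                    for attempt in range(t.retries + 1):
                        try:
                            results[t.name] = t.action()   # invoke the wrapped application
                            break
                        except Exception:
                            if attempt == t.retries:       # give up after the last retry
                                raise
                    done.add(t.name)
                    del pending[t.name]
            return results

        print(run_workflow([
            Task("fetch_order",   lambda: {"id": 7}),
            Task("check_stock",   lambda: True,  depends_on=["fetch_order"]),
            Task("bill_customer", lambda: "ok",  depends_on=["fetch_order", "check_stock"], retries=2),
        ]))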

    Issues about the Adoption of Formal Methods for Dependable Composition of Web Services

    Full text link
    Web Services provide interoperable mechanisms for describing, locating and invoking services over the Internet; composition further enables complex services to be built out of simpler ones for complex B2B applications. While current studies on these topics are mostly focused, from the technical viewpoint, on standards and protocols, this paper investigates the adoption of formal methods, especially for composition. We logically classify and analyze three different (but interconnected) kinds of important issues towards this goal, namely foundations, verification and extensions. The aim of this work is to identify the proper questions on the adoption of formal methods for dependable composition of Web Services, not necessarily to find the optimal answers. Nevertheless, we still try to propose some tentative answers based on our proposal for a composition calculus, which we hope can stimulate a proper discussion.
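
    As a toy illustration of what a composition calculus provides, the sketch below defines sequential and parallel composition operators over services modelled as plain functions. The operator names and the example services are invented for illustration and are unrelated to the calculus actually proposed in the paper.

        # Toy sketch: Web Services as functions, composed sequentially and in parallel.
        # Operator and service names are assumptions, not the paper's calculus.
        from concurrent.futures import ThreadPoolExecutor

        def seq(*services):
            """Sequential composition: pipe each service's output into the next."""
            def composed(x):
                for s in services:
                    x = s(x)
                return x
            return composed

        def par(*services):
            """Parallel composition: invoke every service on the same input, collect results."""
            def composed(x):
                with ThreadPoolExecutor(max_workers=len(services)) as pool:
                    return [f.result() for f in [pool.submit(s, x) for s in services]]
            return composed

        quote_a  = lambda order: {"supplier": "A", "price": 10.0 * order["qty"]}
        quote_b  = lambda order: {"supplier": "B", "price": 9.5 * order["qty"]}
        cheapest = lambda quotes: min(quotes, key=lambda q: q["price"])

        purchase = seq(par(quote_a, quote_b), cheapest)
        print(purchase({"qty": 3}))   # -> {'supplier': 'B', 'price': 28.5}

    A formal calculus goes further than such ad-hoc combinators: it gives the composition a semantics against which properties can be verified, which is the point of the issues the paper classifies.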