
    Rate-Based Transition Systems for Stochastic Process Calculi

    A variant of Rate Transition Systems (RTS), proposed by Klin and Sassone, is introduced and used as the basic model for defining the stochastic behaviour of processes. The transition relation used in our variant associates to each process, for each action, the set of possible futures paired with a measure indicating their rates. We show how RTS can be used to provide the operational semantics of stochastic extensions of classical formalisms, namely CSP and CCS. We also show that our semantics for stochastic CCS guarantees associativity of parallel composition. Similarly, in contrast with the original definition by Priami, we argue that a semantics for stochastic π-calculus can be provided that guarantees associativity of parallel composition.
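    As a rough illustration of the transition relation described in this abstract (a map assigning to each process and action a rate-weighted set of successors), the following Python sketch models an RTS as a dictionary from (process, action) pairs to rate functions. The toy process names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a toy process syntax: an RTS as a map from
# (process, action) pairs to rate functions (successor -> rate).
from collections import defaultdict

class RTS:
    def __init__(self):
        # trans[(process, action)][successor] = accumulated rate
        self.trans = defaultdict(lambda: defaultdict(float))

    def add(self, process, action, successor, rate):
        # Rates to the same successor accumulate, a common convention in
        # rate-based semantics (illustrative, not the paper's exact rules).
        self.trans[(process, action)][successor] += rate

    def next_states(self, process, action):
        # The set of possible futures paired with their rates.
        return dict(self.trans[(process, action)])

# Example: process P can perform action 'a' and evolve to two futures.
rts = RTS()
rts.add("P", "a", "P1", 2.0)
rts.add("P", "a", "P2", 0.5)
print(rts.next_states("P", "a"))  # {'P1': 2.0, 'P2': 0.5}
```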

    USLV: Unspanned Stochastic Local Volatility Model

    We propose a new framework for modeling stochastic local volatility, with potential applications to modeling derivatives on interest rates, commodities, credit, equity, FX, etc., as well as hybrid derivatives. Our model extends the linearity-generating unspanned volatility term structure model of Carr et al. (2011) by adding a local volatility layer to it. We outline efficient numerical schemes for pricing derivatives in this framework for a particular four-factor specification (two "curve" factors plus two "volatility" factors). We show that the dynamics of such a system can be approximated by a Markov chain on a two-dimensional space (Z_t, Y_t), where the coordinates Z_t and Y_t are given by direct (Kronecker) products of values of pairs of curve and volatility factors, respectively. The resulting Markov chain dynamics on such a partly "folded" state space enables fast pricing by standard backward induction. Using a nonparametric specification of the Markov chain generator, one can accurately match arbitrary sets of vanilla option quotes with different strikes and maturities. Furthermore, we consider an alternative formulation of the model in terms of an implied time change process. The latter is specified nonparametrically, again enabling accurate calibration to arbitrary sets of vanilla option quotes.
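    As a hedged sketch of the backward induction step mentioned above, the following Python fragment prices a claim on a toy finite Markov chain; the transition matrix, payoff and discount factor are illustrative stand-ins, not the paper's four-factor specification.

```python
# A minimal sketch of pricing by backward induction on a finite Markov
# chain; P, the payoff and the discount factor are illustrative stand-ins.
import numpy as np

def backward_induction(P, terminal_payoff, n_steps, discount=1.0):
    """European-style value: V_T = payoff; V_t = discount * P @ V_{t+1}."""
    V = terminal_payoff.astype(float).copy()
    for _ in range(n_steps):
        V = discount * (P @ V)
    return V

# Toy two-state chain standing in for a flattened (Z, Y) grid.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
payoff = np.array([0.0, 1.0])  # terminal payoff per state
print(backward_induction(P, payoff, n_steps=12, discount=0.99))
```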

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements. Requirements verification is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance, and performance requirement verification is known as Software Performance Evaluation. This thesis looks at the performance evaluation of software systems, a hugely valuable task, especially in the early stages of software project development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies utilise the architectural models familiar in software engineering, transforming them into performance models that can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies, designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model transformation algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study of a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, drawn from different sectors of the software development industry. The study results showed acceptance of the methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, we also define frameworks that deploy performance requirements validation in the context of software engineering. The agile development paradigm arose from changes in the overall environment of the IT and business worlds; its techniques are based on iterative development, in which requirements, designs and developed programmes evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed for software projects in which performance can be affected by changes in requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
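    To give a flavour of the analytic output such a design-to-performance-model transformation targets, the following minimal Python sketch evaluates a single M/M/1 station, the simplest building block of the queueing networks (EQNs) mentioned above. The parameters are illustrative, and the sketch is not part of the UML-JMT tool.

```python
# A minimal sketch, not part of UML-JMT: a software resource modelled as
# an M/M/1 station, with classic formulas giving expected performance.
def mm1_metrics(arrival_rate, service_rate):
    """Utilisation, mean jobs in system and mean response time for M/M/1."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable station: arrivals must not exceed capacity")
    rho = arrival_rate / service_rate           # utilisation
    jobs = rho / (1.0 - rho)                    # mean number in system
    resp = 1.0 / (service_rate - arrival_rate)  # mean response time
    return {"utilisation": rho, "mean_jobs": jobs, "response_time": resp}

# Example: a web tier receiving 40 req/s with capacity for 50 req/s.
print(mm1_metrics(arrival_rate=40.0, service_rate=50.0))
```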

    Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World

    This is the first deliverable of WP5, which covers Conceptual Models for Assessment & Assurance of Dependability, Security and Privacy in the Eternal CONNECTed World. As described in the project DOW, in this document we cover the following topics:
    • Metrics definition
    • Identification of limitations of current V&V approaches and exploration of extensions/refinements/new developments
    • Identification of security, privacy and trust models
    The focus of WP5 is on dependability concerning the aspects peculiar to the project, i.e., the threats deriving from the on-the-fly synthesis of CONNECTors. We explore appropriate means for assessing/guaranteeing that the CONNECTed System yields acceptable levels for non-functional properties, such as reliability (e.g., the CONNECTor will ensure continued communication without interruption), security and privacy (e.g., the transactions do not disclose confidential data), and trust (e.g., Networked Systems are put in communication only with parties they trust). After defining a conceptual framework for metrics definition, we present the approaches to dependability in CONNECT, which cover: i) Model-based V&V, ii) Security enforcement and iii) Trust management. The approaches are centered around monitoring, to allow for on-line analysis. Monitoring is performed alongside the functionalities of the CONNECTed System and is used to detect conditions that are deemed relevant by its clients (i.e., the other CONNECT Enablers). A unified lifecycle encompassing dependability analysis, security enforcement and trust management is outlined, spanning discovery time, synthesis time and execution time.
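    As a loose illustration, assuming a simple event-based design, the following Python sketch shows the monitoring style outlined above: clients register conditions of interest and the monitor checks them on-line against events from the running system. All names are hypothetical and do not reflect the actual CONNECT Enabler APIs.

```python
# A minimal sketch, assuming hypothetical names: a run-time monitor that
# checks client-registered conditions against events from a running system.
from typing import Callable, Dict, List

class Monitor:
    def __init__(self):
        self.conditions: Dict[str, Callable[[dict], bool]] = {}
        self.alerts: List[str] = []

    def register(self, name: str, predicate: Callable[[dict], bool]) -> None:
        # A client (e.g. another Enabler) declares a condition of interest.
        self.conditions[name] = predicate

    def observe(self, event: dict) -> None:
        # Called on-line for each event; records which conditions fired.
        for name, predicate in self.conditions.items():
            if predicate(event):
                self.alerts.append(f"{name}: {event}")

# Example: flag any interaction that exceeds a latency budget.
m = Monitor()
m.register("latency_violation", lambda e: e.get("latency_ms", 0) > 200)
m.observe({"latency_ms": 350, "connector": "C1"})
print(m.alerts)
```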