
    Ten virtues of structured graphs

    This paper extends the invited talk by the first author about the virtues of structured graphs. The motivation behind the talk and this paper stems from our experience with the development of ADR, a formal approach to the design of style-conformant, reconfigurable software systems. ADR is based on hierarchical graphs with interfaces and was conceived in an attempt to reconcile software architectures and process calculi by means of graphical methods. We have tried to write an ADR-agnostic paper in which we point out some drawbacks of flat, unstructured graphs for the design and analysis of software systems, and we argue that hierarchical, structured graphs can alleviate such drawbacks.
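
    ADR's formal definitions are not reproduced in the abstract; purely as a rough, hypothetical illustration of the underlying idea (not ADR's actual formalism), a hierarchical graph can be modelled as a graph whose edges may be refined into subgraphs exposed through interface nodes, with flattening fusing each subgraph's interface with the refined edge's attachment nodes:

        # Rough sketch of a hierarchical graph with interfaces (hypothetical,
        # not ADR's actual formalism). An edge may be refined into a subgraph;
        # flattening fuses the subgraph's interface nodes with the edge's
        # attachment nodes and inlines the remaining inner nodes.
        from dataclasses import dataclass, field

        @dataclass
        class Graph:
            nodes: set = field(default_factory=set)
            edges: list = field(default_factory=list)    # (label, [attached nodes])
            refine: dict = field(default_factory=dict)   # edge index -> (Graph, [interface nodes])

            def flatten(self) -> "Graph":
                flat = Graph(set(self.nodes))
                for i, (label, attach) in enumerate(self.edges):
                    if i not in self.refine:
                        flat.edges.append((label, attach))
                        continue
                    sub, interface = self.refine[i]
                    fuse = dict(zip(interface, attach))  # inner interface -> outer node
                    rename = lambda n: fuse.get(n, f"{label}/{n}")
                    inner = sub.flatten()
                    flat.nodes |= {rename(n) for n in inner.nodes}
                    flat.edges.extend((l, [rename(n) for n in ns]) for l, ns in inner.edges)
                return flat

        # A "client-server" edge refined into a subgraph with interface nodes [p, q].
        sub = Graph(nodes={"p", "m", "q"}, edges=[("req", ["p", "m"]), ("res", ["m", "q"])])
        top = Graph(nodes={"a", "b"}, edges=[("cs", ["a", "b"])], refine={0: (sub, ["p", "q"])})
        print(top.flatten())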

    A characterisation of reliability tools for Software Defined Networks

    Software Defined Networking (SDN) is a new paradigm in networking that introduces great flexibility, allowing parts of the network to be configured dynamically through centralised programming. SDN has been successfully applied in fixed networks and is now being applied to wireless and mobile networks, giving rise to Software Defined Mobile/Wireless Networks (SDWNs). SDN can also be combined with Network Function Virtualization (NFV), producing a software network in which specific hardware is replaced by general-purpose computing equipment running SDN and NFV software solutions. Such a highly programmable and flexible network introduces many challenges from the point of view of reliability (or robustness), and operators need to ensure the same level of confidence as in previous, less flexible deployments. This paper provides a first study of the current tools used to analyse the reliability of SDNs before deployment and/or during the exploitation of the network. Most of these tools offer some kind of automatic verification, supported by algorithms based on formal methods, but they do not differentiate between fixed and mobile/wireless networks. In the paper we provide a number of classifications of the tools to make this selection easier for potential users, and we also identify promising research areas where more effort needs to be made.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    ERIGrid Holistic Test Description for Validating Cyber-Physical Energy Systems

    Smart energy solutions aim to modify and optimise the operation of existing energy infrastructure. Such cyber-physical technology must be mature before deployment to the actual infrastructure, and competitive solutions will have to comply with standards that are still under development. Achieving this technology readiness and harmonisation requires reproducible experiments and appropriately realistic testing environments. Such testbeds for multi-domain cyber-physical experiments are complex in and of themselves. This work addresses a method for the scoping and design of experiments in which both the testbed and the solution require detailed expertise. This empirical work first revisited present test description approaches, developed a new description method for cyber-physical energy systems testing, and matured it by means of user involvement. The new Holistic Test Description (HTD) method facilitates the conception, deconstruction and reproduction of complex experimental designs in the domain of cyber-physical energy systems. This work develops the background and motivation, offers a guideline for and examples of the proposed approach, and summarises experience from three years of its application.

    This work received funding from the European Community's Horizon 2020 Programme (H2020/2014–2020) under the project "ERIGrid" (Grant Agreement No. 654113).

    A Domain Specific Approach to High Performance Heterogeneous Computing

    Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result; and second, how to exploit knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain-specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics, or domain metrics, may be formulated in advance and populated at run-time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program, which can be solved using heuristics, machine learning or Mixed Integer Linear Programming (MILP) frameworks. These claims are illustrated using the example domain of derivatives pricing in computational finance, with the domain metrics of workload latency (or makespan) and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running on a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both makespan and accuracy are generally within 10% of the run-time performance. When these models are used as inputs to the machine learning and MILP-based workload allocation approaches, latency improvements of up to 24 and 270 times, respectively, over the heuristic approach are seen.

    Comment: 14 pages, preprint draft, minor revision
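
    The metric models and the exact MILP formulation are not given in the abstract; as a toy illustration of the overall idea (hypothetical latency figures and platform names), the sketch below uses predicted per-task, per-platform latencies to search for the allocation that minimises makespan, a job that at realistic scale would be delegated to a MILP solver or a learned policy:

        # Toy sketch of metric-model-driven work allocation (hypothetical numbers,
        # not the paper's models): predict each task's latency on each platform,
        # then pick the assignment that minimises the makespan. A real system
        # would solve this as a MILP or with a learned policy, not brute force.
        from itertools import product

        platforms = ["cpu", "gpu", "fpga"]
        # Hypothetical metric models: predicted latency (ms) per task and platform.
        predicted_ms = {
            ("bs_batch_1", "cpu"): 40, ("bs_batch_1", "gpu"): 6,  ("bs_batch_1", "fpga"): 9,
            ("bs_batch_2", "cpu"): 35, ("bs_batch_2", "gpu"): 5,  ("bs_batch_2", "fpga"): 8,
            ("heston_1",   "cpu"): 90, ("heston_1",   "gpu"): 20, ("heston_1",   "fpga"): 12,
        }
        tasks = sorted({t for t, _ in predicted_ms})

        def makespan(assignment):
            # Makespan = heaviest per-platform load under the predicted latencies.
            load = {p: 0 for p in platforms}
            for task, plat in zip(tasks, assignment):
                load[plat] += predicted_ms[(task, plat)]
            return max(load.values())

        best = min(product(platforms, repeat=len(tasks)), key=makespan)
        print(dict(zip(tasks, best)), makespan(best))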

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements. It is a process that continues throughout the life cycle of the software system. When the software crisis hit in the late 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance; performance requirement verification is known as software performance evaluation. This thesis looks at the performance evaluation of software systems, a hugely valuable task, especially in the early stages of a software project's development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies utilise the architectural models known in the software engineering field by transforming them into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies, designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state-marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study of a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying levels of background and experience, drawn from different sectors of the software development industry. The study results showed acceptance of the methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, we have to define frameworks that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds; agile techniques are based on iterative development, where requirements, designs and developed programs evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
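
    As background on the kind of analytical model the UML-EQN transformation targets, the following minimal sketch computes the textbook metrics of a single M/M/1 queueing station, the simplest building block of a queueing network (the thesis itself works with extended queueing networks analysed via JMT):

        # Minimal sketch of the analytical metrics a single M/M/1 queueing
        # station yields; the thesis targets extended queueing networks, for
        # which JMT provides the corresponding solvers.
        def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
            rho = arrival_rate / service_rate       # utilisation, must be < 1
            if rho >= 1:
                raise ValueError("unstable station: arrival rate >= service rate")
            return {
                "utilisation": rho,
                "mean_jobs_in_system": rho / (1 - rho),                   # N = rho / (1 - rho)
                "mean_response_time": 1 / (service_rate - arrival_rate),  # R = 1 / (mu - lambda)
            }

        # Example: requests at 8/s served at 10/s -> 80% utilisation, R = 0.5 s.
        print(mm1_metrics(8.0, 10.0))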

    On Properties of Policy-Based Specifications

    The advent of large-scale, complex computing systems has dramatically increased the difficulty of securing access to systems' resources. To ensure confidentiality and integrity, the exploitation of access control mechanisms has thus become a crucial issue in the design of modern computing systems. Among the access control approaches proposed in recent decades, the policy-based one makes it possible to capture, by resorting to the concept of attribute, all of a system's security-relevant information, while remaining sufficiently flexible and expressive to represent the other approaches. In this paper, we move a step further towards understanding the effectiveness of policy-based specifications by studying how they permit the enforcement of traditional security properties. To support system designers in developing and maintaining policy-based specifications, we also formalise some relevant properties regarding the structure of policies. By means of a case study from the banking domain, we present real instances of such properties and outline an approach towards their automated verification.

    Comment: In Proceedings WWV 2015, arXiv:1508.0338
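
    The paper's policy language is not reproduced in the abstract; purely as an illustration of attribute-based, policy-based access control in general (hypothetical rule format, not the paper's formalism), a minimal evaluator with a deny-overrides combining strategy might look like this:

        # Rough illustration of attribute-based access control (hypothetical rule
        # format, not the paper's formalism): each rule has an effect and a set of
        # attribute constraints; rules are combined with a deny-overrides strategy.
        def matches(rule, request):
            return all(request.get(attr) == value for attr, value in rule["target"].items())

        def evaluate(policy, request):
            decisions = [r["effect"] for r in policy if matches(r, request)]
            if "deny" in decisions:     # deny-overrides combining
                return "deny"
            return "permit" if "permit" in decisions else "not-applicable"

        policy = [
            {"effect": "permit", "target": {"role": "teller", "resource": "account"}},
            {"effect": "deny",   "target": {"role": "teller", "action": "close"}},
        ]
        print(evaluate(policy, {"role": "teller", "resource": "account", "action": "read"}))   # permit
        print(evaluate(policy, {"role": "teller", "resource": "account", "action": "close"}))  # deny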

    Unlocking Blocked Communicating Processes

    We study the problem of disentangling locked processes via code refactoring. We identify and characterise a class of processes that is not lock-free; we then formalise an algorithm that statically detects potential locks and propose refactoring procedures that disentangle the detected locks. Our development is cast within the simple setting of a finite, linear CCS variant; although this setting suffices to illustrate the main concepts, we also discuss how our work extends to other language extensions.

    Comment: In Proceedings WWV 2015, arXiv:1508.0338
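
    The detection algorithm itself is not given in the abstract; as a toy illustration of the kind of lock it targets (a hypothetical encoding, not the paper's calculus), the following sketch treats processes as finite sequences of send/receive actions and reports a lock when residual actions remain but no complementary pair can synchronise:

        # Toy illustration of a communication lock (hypothetical encoding, not the
        # paper's calculus): processes are finite sequences of actions, ("send", c)
        # or ("recv", c); a step fires when two head actions complement each other
        # on the same channel. If no step fires and work remains, we report a lock.
        def is_locked(processes):
            procs = [list(p) for p in processes]
            while any(procs):
                fired = False
                for i, p in enumerate(procs):
                    for j, q in enumerate(procs):
                        if i == j or not p or not q:
                            continue
                        (pk, pc), (qk, qc) = p[0], q[0]
                        if pc == qc and {pk, qk} == {"send", "recv"}:
                            p.pop(0); q.pop(0)   # synchronous handshake on channel pc
                            fired = True
                            break
                    if fired:
                        break
                if not fired:
                    return True                  # residual actions but no matching pair
            return False

        # P = a!.b? and Q = b!.a? : each waits for the other's later action -> locked.
        print(is_locked([[("send", "a"), ("recv", "b")], [("send", "b"), ("recv", "a")]]))  # True
        # Refactored: Q = a?.b! handshakes on a, then on b -> not locked.
        print(is_locked([[("send", "a"), ("recv", "b")], [("recv", "a"), ("send", "b")]]))  # False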

    Modelling, reduction and analysis of Markov automata (extended version)

    Markov automata (MA) constitute an expressive continuous-time compositional modelling formalism. They appear as semantic backbones for engineering frameworks including dynamic fault trees, Generalised Stochastic Petri Nets, and AADL. Their expressive power has thus far precluded them from effective analysis by probabilistic (and statistical) model checkers, stochastic game solvers, and analysis tools for Petri net-like formalisms. This paper presents the foundations and underlying algorithms for efficient MA modelling, reduction using static analysis, and, most importantly, quantitative analysis. We also discuss the implementation pragmatics of supporting tools and present several case studies demonstrating the feasibility and usability of MA in practice.
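
    The analysis algorithms are beyond the scope of the abstract; as a taste of the simplest quantitative question asked of the purely Markovian fragment of such models, the sketch below (a hypothetical three-state example) computes the expected time to absorption in a continuous-time Markov chain via the fixed point of E[s] = 1/exit_rate(s) + sum over t of P(s, t) * E[t]:

        # Sketch: expected time to absorption in a small continuous-time Markov
        # chain (the purely Markovian fragment of a Markov automaton), using a
        # hypothetical three-state model and simple fixed-point iteration.
        rates = {  # state -> {successor: transition rate}
            "init": {"work": 2.0, "fail": 0.5},
            "work": {"done": 1.0, "fail": 0.25},
            "fail": {"done": 0.1},
            "done": {},                        # absorbing
        }

        def expected_time_to_absorption(rates, iters=10_000):
            E = {s: 0.0 for s in rates}
            for _ in range(iters):             # iterate to a fixed point
                for s, succ in rates.items():
                    if not succ:
                        continue               # absorbing state: E = 0
                    exit_rate = sum(succ.values())
                    E[s] = 1.0 / exit_rate + sum(r / exit_rate * E[t] for t, r in succ.items())
            return E

        print(expected_time_to_absorption(rates))  # E["init"], E["work"], E["fail"], E["done"]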