
    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating how effectively a specific project activity is being applied and how it is likely to affect system performance objectives. A leading indicator may be an individual measure, or a collection of measures and associated analysis, that is predictive of future systems engineering performance. Systems engineering performance itself can be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators draw on trend information to support predictive analysis. By analyzing trends, the outcomes of certain activities can be forecast, and trends are examined for insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, to take preventive or corrective action proactively during the program. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations among indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), the US Department of Defense Systems Engineering Research Center (SERC), and the National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
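
    The guide itself is prose and prescribes no code; purely as an illustrative sketch of the trend-based idea described above, the Java fragment below fits a least-squares line to the history of a conventional measure and flags when the projection crosses a threshold. The measure name, the data, and the threshold are hypothetical, not taken from the guide.

        // Illustrative sketch only: turns a conventional status measure (its history)
        // into a simple leading indicator by extrapolating its trend.
        public final class TrendIndicator {

            /** Fits a least-squares line to the history and projects it `ahead` periods out. */
            static double forecast(double[] history, int ahead) {
                int n = history.length;
                double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
                for (int i = 0; i < n; i++) {
                    sumX += i;
                    sumY += history[i];
                    sumXY += i * history[i];
                    sumXX += (double) i * i;
                }
                double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
                double intercept = (sumY - slope * sumX) / n;
                return intercept + slope * (n - 1 + ahead);
            }

            public static void main(String[] args) {
                // Hypothetical history: percentage of requirements changed per reporting period.
                double[] requirementsVolatility = {4.0, 5.5, 6.1, 7.8, 9.2};
                double projected = forecast(requirementsVolatility, 3);   // three periods ahead
                double threshold = 12.0;                                  // hypothetical trigger level
                System.out.printf("Projected volatility in 3 periods: %.1f%%%n", projected);
                if (projected > threshold) {
                    System.out.println("Leading indicator trips: intervene before rework accrues.");
                }
            }
        }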

    Systems Engineering Leading Indicators Guide, Version 1.0

    The Systems Engineering Leading Indicators Guide reflects the initial subset of possible indicators that were considered the highest priority for evaluating effectiveness before the fact. A leading indicator is a measure for evaluating the effectiveness of how a specific activity is applied on a program, in a manner that provides information about impacts that are likely to affect the system performance objectives. A leading indicator may be an individual measure, or a collection of measures, that is predictive of future system performance before that performance is realized. Leading indicators aid leadership in delivering value to customers and end users, while assisting in taking interventions and actions to avoid rework and wasted effort. Initiated as a result of the June 2004 Air Force/LAI Workshop on Systems Engineering for Robustness, the guide supports systems engineering revitalization. Over several years, a group of industry, government, and academic stakeholders worked to define and validate a set of thirteen indicators for evaluating the effectiveness of systems engineering on a program. Released as Version 1.0 in June 2007, the leading indicators provide predictive information for making informed decisions and, where necessary, taking preventive or corrective action during the program in a proactive manner. While the leading indicators appear similar to existing measures and often use the same base information, the difference lies in how the information is gathered, evaluated, interpreted, and used to provide a forward-looking perspective.

    Towards an information driven software development life cycle

    Although software engineering has matured greatly over the years, a large number of ICT projects continue to fail. Studies continue to identify non-technical issues such as poor communication, shifting requirements and poor executive involvement as the main causes of these failures. This paper identifies such well-known causes and asks why currently available software development life cycles fall short of dealing with them. Drawing on results from a research exercise carried out by the authors, a link is made between the quality of information used throughout the development life cycle and the quality of the resultant product. Consequently, it is proposed that organisations knowingly or unknowingly maintain a knowledge context, and that the quality of this knowledge context has a direct impact on product quality. Furthermore, it is proposed that a software development life cycle be developed in which participants do not focus explicitly on the traditional phases of software development; rather, a conscious decision is made to focus on the information that is created, manipulated and utilised throughout the lifetime of a product. If a link can be established between the quality of the knowledge context and the quality of a finished product, then it is sound to argue that nurturing a high-quality knowledge context will naturally yield a high-quality product.
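
    The paper argues at the level of process rather than code; as a loose illustration only (none of these names appear in the paper), the sketch below models the proposed knowledge context as a collection of information artefacts, each carrying a quality score, so that the quality of the context can be tracked alongside the product being built.

        // Illustrative sketch only: a toy model of a "knowledge context" whose overall
        // quality is bounded by its weakest information artefact.
        import java.util.List;

        record InformationArtefact(String name, String phase, double quality) {} // quality in [0, 1]

        final class KnowledgeContext {
            private final List<InformationArtefact> artefacts;

            KnowledgeContext(List<InformationArtefact> artefacts) {
                this.artefacts = artefacts;
            }

            /** A weak requirement weakens everything built on it, so the minimum bounds the context. */
            double overallQuality() {
                return artefacts.stream().mapToDouble(InformationArtefact::quality).min().orElse(0.0);
            }

            public static void main(String[] args) {
                KnowledgeContext context = new KnowledgeContext(List.of(
                        new InformationArtefact("stakeholder needs", "requirements", 0.6),
                        new InformationArtefact("interface spec", "design", 0.9),
                        new InformationArtefact("acceptance criteria", "testing", 0.8)));
                System.out.printf("Knowledge-context quality: %.2f%n", context.overallQuality());
            }
        }

    Whether the minimum, an average, or something richer is the right aggregation is exactly the kind of question an information-driven life cycle would make explicit.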

    LARVA - safer monitoring of real-time Java programs (tool paper)

    The use of runtime verification, as a lightweight approach to guaranteeing properties of systems, has increasingly been employed on real-life software. In this paper, we present the tool LARVA, for the runtime verification of properties of Java programs, including real-time properties. Properties can be expressed in a number of notations, including timed automata enriched with stopwatches, Lustre, and a subset of the duration calculus. The tool has been successfully used on a number of case studies, including an industrial system handling financial transactions. LARVA also performs analysis of real-time properties to calculate, where possible, an upper bound on the memory and temporal overheads induced by monitoring. Moreover, through property analysis, LARVA assesses the impact that slowing down the system through monitoring has on the satisfaction of the properties.
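
    LARVA properties are written in its own notations (for example, timed automata with stopwatches) rather than in plain Java; the hand-written monitor below is only a sketch of the kind of real-time property such a tool checks, with hypothetical events and a hypothetical 30-second bound.

        // Illustrative sketch only, not LARVA's specification language: a monitor for the
        // hypothetical property "every transaction completes within 30 seconds of starting".
        final class TransactionMonitor {
            private enum State { IDLE, PENDING, VIOLATION }

            private State state = State.IDLE;
            private long startedAtMillis;            // plays the role of a stopwatch

            /** Event observed when a financial transaction is initiated. */
            synchronized void onStart() {
                state = State.PENDING;
                startedAtMillis = System.currentTimeMillis();
            }

            /** Event observed when the transaction completes. */
            synchronized void onComplete() {
                if (state != State.PENDING) {
                    return;                          // ignore completions we never saw start
                }
                if (System.currentTimeMillis() - startedAtMillis > 30_000) {
                    state = State.VIOLATION;         // deadline missed: property violated
                    System.err.println("Monitor: transaction exceeded its 30s deadline");
                } else {
                    state = State.IDLE;              // property satisfied for this transaction
                }
            }
        }

    A tool such as LARVA generates this kind of monitor, together with the instrumentation that feeds it events, from the property specification instead of requiring it to be written by hand.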

    Improving runtime overheads for detectEr

    We design monitor optimisations for detectEr, a runtime-verification tool synthesising systems of concurrent monitors from correctness properties for Erlang programs. We implement these optimisations as part of the existing tool and show that they yield considerably lower runtime overheads when compared to the unoptimised monitor synthesis.
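
    The abstract does not detail the optimisations, and detectEr itself targets Erlang; the Java fragment below is only a generic illustration of one common way to lower monitoring overhead, namely discarding instrumented events that the monitor cannot react to in its current state before doing any further work. The states and event names are invented.

        // Illustrative sketch only, unrelated to detectEr's actual implementation:
        // events irrelevant to the monitor's current state are dropped cheaply.
        import java.util.Set;

        final class FilteringMonitor {
            private String state = "q0";

            /** Events on which the monitor can actually move from its current state. */
            private Set<String> relevantEvents() {
                return switch (state) {
                    case "q0" -> Set.of("open");
                    case "q1" -> Set.of("write", "close");
                    default   -> Set.of();
                };
            }

            /** Called by the instrumentation on every observed event. */
            void observe(String event) {
                if (!relevantEvents().contains(event)) {
                    return;                           // early exit: no transition to compute
                }
                state = switch (state + "/" + event) {
                    case "q0/open"  -> "q1";
                    case "q1/close" -> "q0";
                    default         -> state;         // e.g. "write" loops in q1
                };
            }
        }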

    PolyLarva: technology agnostic runtime verification

    With numerous specialised technologies available to industry, it has become increasingly frequent for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible technology agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study with C and Java components.

    Extensible technology agnostic runtime verification

    With numerous specialised technologies available to industry, it has become increasingly frequent for computer systems to be composed of heterogeneous components built over, and using, different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible technology agnostic runtime verification and we present an extension of polyLarva, a runtime-verification tool able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study of a component-based artefact using different technologies, namely C and Java.
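
    polyLarva's actual interface is not shown in the abstract; as a rough illustration of the technology-agnostic idea (all names invented), the sketch below has components written in any language forward their events in a neutral representation to a single monitor that reasons only over that representation, so a C component and a Java component are handled identically.

        // Illustrative sketch only, not polyLarva's API: a central monitor consuming
        // technology-neutral events reported by heterogeneous components.
        import java.util.Map;
        import java.util.function.Consumer;

        record Event(String component, String name, Map<String, String> payload) {}

        final class CentralMonitor implements Consumer<Event> {
            private long openSessions = 0;

            @Override
            public void accept(Event e) {
                // The monitor never sees C- or Java-specific detail, only neutral events.
                switch (e.name()) {
                    case "login"  -> openSessions++;
                    case "logout" -> openSessions--;
                }
                if (openSessions < 0) {
                    System.err.println("Violation: logout without a matching login from " + e.component());
                }
            }
        }

    Adding support for a new technology then amounts to writing a small adapter that maps that technology's native calls onto such neutral events, without touching the monitor.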