
    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating how effectively a specific project activity is being applied, in a manner that signals likely impacts on system performance objectives. A leading indicator may be an individual measure, or a collection of measures and associated analysis, that is predictive of future systems engineering performance; systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions that avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators draw on trend information to support predictive analysis. By analyzing trends, the outcomes of certain activities can be forecast, and trends can be examined for insight into both the entity being measured and potential impacts on other entities. This gives leaders the data they need to make informed decisions and, where necessary, to take preventive or corrective action proactively during the program. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations among indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), the US Department of Defense Systems Engineering Research Center (SERC), and the National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
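    To make the trend-analysis idea concrete, the sketch below fits a linear trend to a single hypothetical indicator, a monthly requirements-volatility measure, and extrapolates it against an action threshold. The data, threshold, and forecast horizon are illustrative assumptions, not values from the guide.

```python
import numpy as np

# Hypothetical monthly requirements-volatility measurements
# (fraction of requirements changed per month); illustrative only.
months = np.arange(12)
volatility = np.array([0.05, 0.06, 0.06, 0.07, 0.08, 0.09,
                       0.10, 0.10, 0.11, 0.12, 0.13, 0.14])

# Fit a linear trend to the history and extrapolate three months ahead.
slope, intercept = np.polyfit(months, volatility, 1)
horizon = months[-1] + 3
forecast = slope * horizon + intercept

# Compare the forecast against a program-defined action threshold
# (the 0.15 value is an assumption for this example).
threshold = 0.15
print(f"Forecast volatility at month {horizon}: {forecast:.3f}")
if forecast > threshold:
    print("Leading indicator breached: intervene before the impact is realized.")
```

    The same pattern, collecting the base measure, fitting a trend, and comparing the extrapolation to a threshold, generalizes to the other indicators described in the guide.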

    Systems Engineering Leading Indicators Guide, Version 1.0

    The Systems Engineering Leading Indicators guide set reflects the initial subset of possible indicators that were considered the highest priority for evaluating effectiveness before the fact. A leading indicator is a measure for evaluating the effectiveness of how a specific activity is applied on a program, in a manner that provides information about impacts likely to affect the system performance objectives. A leading indicator may be an individual measure, or a collection of measures, that is predictive of future system performance before that performance is realized. Leading indicators aid leadership in delivering value to customers and end users, while helping to identify interventions and actions that avoid rework and wasted effort. Initiated as a result of the June 2004 Air Force/LAI Workshop on Systems Engineering for Robustness, the guide supports systems engineering revitalization. Over several years, a group of industry, government, and academic stakeholders worked to define and validate a set of thirteen indicators for evaluating the effectiveness of systems engineering on a program. Released as Version 1.0 in June 2007, the leading indicators provide predictive information for making informed decisions and, where necessary, taking preventive or corrective action during the program in a proactive manner. While the leading indicators appear similar to existing measures and often use the same base information, the difference lies in how the information is gathered, evaluated, interpreted, and used to provide a forward-looking perspective.

    A GPU-based survey for millisecond radio transients using ARTEMIS

    Astrophysical radio transients are excellent probes of extreme physical processes originating from compact sources within our Galaxy and beyond. Radio frequency signals emitted from these objects provide a means to study the intervening medium through which they travel. Next-generation radio telescopes are designed to explore the vast unexplored parameter space of high time resolution astronomy, but require High Performance Computing (HPC) solutions to process the enormous volumes of data that they produce. We have developed a combined software/hardware solution (code-named ARTEMIS) for real-time searches for millisecond radio transients, which uses GPU technology to remove interstellar dispersion and detect millisecond radio bursts from astronomical sources in real time. Here we present an introduction to ARTEMIS. We give a brief overview of the software pipeline, then focus specifically on the intricacies of performing incoherent de-dispersion. We present results from two brute-force algorithms. The first is a GPU-based algorithm designed to exploit the L1 cache of the NVIDIA Fermi GPU. Our second algorithm is CPU-based and exploits the new AVX units in Intel Sandy Bridge CPUs. Comment: 4 pages, 7 figures. To appear in the proceedings of ADASS XXI, ed. P. Ballester and D. Egret, ASP Conf. Ser.
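    For readers new to the technique, incoherent de-dispersion delays each frequency channel by the dispersion sweep and sums across channels so that a pulse smeared by the interstellar medium lines up again. The NumPy sketch below shows the brute-force idea for a single trial dispersion measure; the ARTEMIS GPU and AVX implementations are heavily optimized versions of the same loop, and the names and numbers here are illustrative.

```python
import numpy as np

KDM = 4.149e3  # dispersion constant, s MHz^2 pc^-1 cm^3

def dedisperse(data, freqs_mhz, dm, tsamp):
    """Brute-force incoherent de-dispersion for one trial DM.

    data      : 2-D array (nchan, nsamp) of detected power
    freqs_mhz : channel centre frequencies in MHz
    dm        : trial dispersion measure in pc cm^-3
    tsamp     : sampling interval in seconds
    """
    f_ref = freqs_mhz.max()
    # Delay of each channel relative to the highest frequency.
    delays = KDM * dm * (freqs_mhz ** -2 - f_ref ** -2)  # seconds
    shifts = np.round(delays / tsamp).astype(int)        # samples
    nout = data.shape[1] - shifts.max()
    series = np.zeros(nout)
    for ch, s in enumerate(shifts):
        series += data[ch, s:s + nout]  # shift channel, then sum
    return series

# Example: 128 noise channels over a LOFAR-like band, 64 us sampling.
rng = np.random.default_rng(1)
data = rng.standard_normal((128, 4096))
freqs = np.linspace(140.0, 160.0, 128)  # MHz
print(dedisperse(data, freqs, dm=3.0, tsamp=64e-6).shape)
```

    A real search repeats this for many trial DM values, which is why the memory access pattern, and hence the L1 cache behaviour mentioned above, dominates performance.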

    Application of ensemble techniques in predicting object-oriented software maintainability

    While the prior object-oriented software maintainability literature acknowledges the role of machine learning techniques as valuable predictors of potential change, the most suitable technique, one that achieves consistently high accuracy, remains undetermined. With the objective of obtaining more consistent results, an ensemble technique is investigated to advance the performance of the individual models and increase their accuracy in predicting the software maintainability of object-oriented systems. This paper describes the research plan for predicting object-oriented software maintainability using ensemble techniques. First, we present a brief overview of the main research background and its different components. Second, we explain the research methodology. Third, we present the expected results. Finally, we conclude with a summary of the current status.
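    Since the paper is a research plan, no specific ensemble is prescribed. As an illustration of the general idea, the sketch below uses scikit-learn to combine three base learners into a voting ensemble that predicts a change-proneness proxy from object-oriented metrics; the data, features, and model choices are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Hypothetical dataset: rows are classes, columns are object-oriented
# metrics (e.g. a CK-style suite), y is a change-proneness proxy.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = X @ rng.random(6) + 0.1 * rng.standard_normal(200)

# Voting ensemble: average the predictions of three base learners.
ensemble = VotingRegressor([
    ("lr", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                         random_state=0)),
])

# Cross-validated mean absolute error of the combined model.
scores = cross_val_score(ensemble, X, y, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"Ensemble MAE: {-scores.mean():.4f}")
```

    The hope, consistent with the paper's premise, is that averaging heterogeneous learners smooths out the inconsistency seen across individual models.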

    Formic acid synthesis using CO₂ as raw material: Techno-economic and environmental evaluation and market potential

    The future of carbon dioxide utilisation (CDU) processes depends on (i) the future demand for products synthesised with CO₂, (ii) the availability of captured, anthropogenic CO₂, (iii) the overall CO₂ not emitted because of the use of the CDU process, and (iv) the economics of the plant. The present work analyses these factors through different technological, economic, and environmental key performance indicators for the production of formic acid from CO₂, along with its potential use and penetration in the European context. Formic acid is a well-known chemical that has potential as a hydrogen carrier and as a fuel for fuel cells. This work uses process flow modelling, with simulations developed in CHEMCAD, to obtain the energy and mass balances and the purchased equipment cost of the formic acid plant. Through a financial analysis, with the net present value as the selected metric, the prices per tonne of formic acid and of CO₂ are varied to make the CDU project financially feasible. According to our research, the process saves CO₂ emissions when compared to its corresponding conventional process, under specific conditions. The success or effectiveness of the CDU process will also depend on other technologies and developments, such as the availability of renewable electricity and steam.
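    As an illustration of the financial screening described above, the sketch below computes the net present value of a hypothetical CDU plant and sweeps the formic acid selling price to see where the project becomes feasible. All figures are invented placeholders, not values from the paper.

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value of annual cash flows (year 0 first)."""
    years = np.arange(len(cashflows))
    return float(np.sum(np.asarray(cashflows) / (1.0 + rate) ** years))

# Invented placeholder figures for a formic acid CDU plant.
capex = 50e6           # total capital investment, EUR
annual_output = 100e3  # tonnes of formic acid per year
opex = 30e6            # annual operating cost (CO2, H2, utilities), EUR
lifetime = 20          # plant lifetime, years
rate = 0.08            # discount rate

# Sweep the formic acid selling price to find where NPV turns positive.
for fa_price in (350, 450, 550, 650):  # EUR per tonne
    flows = [-capex] + [annual_output * fa_price - opex] * lifetime
    print(f"{fa_price} EUR/t -> NPV = {npv(rate, flows) / 1e6:8.1f} MEUR")
```

    Sweeping the CO₂ price works the same way, with the feedstock price entering the annual operating cost instead of the revenue term.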

    Consortium for R&D&I Collaboration in Cloud Computing, Big Data and Emerging Topics (CCC-BD&ET): Integrative Project: “Digital Transformation in the Incorporation of Resilience as a Key Performance Indicator for Social Benefits (KPIS)”

    The Consortium for R&D&I in Cloud Computing, Big Data & Emerging Topics (CCC-BD&ET) is an initiative to foster and formalize the existing collaboration among research groups from several universities on topics related to cloud computing, massive data analysis, and emerging topics such as 4.0 technologies, among others. These subjects, and their integration, have become increasingly important because of their application in high-impact domains such as smart cities, the Internet of Things, e-health systems, and blockchain-based systems. The members of the consortium, mostly from Argentina, Chile, and Spain, have accumulated varied experiences of joint work over the years, experiences that were consolidated through the organization of the Cloud Computing-Big Data & Emerging Topics Conference (JCC-BD&ET) held at the Universidad Nacional de La Plata (Argentina). The constitution of this consortium reaffirms and formalizes these lines of collaboration, proposing academic cooperation actions related to the training of human resources, the formulation and execution of joint projects, and engagement with companies and organizations in the IT industry, among others. This paper presents the consortium's progress in defining an integrative project centred on resilience for digital transformation.