Collected Software Engineering Papers, Volume 10
This document is a collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) from Oct. 1991 - Nov. 1992. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. Additional information about the SEL and its research efforts may be obtained from the sources listed in the bibliography at the end of this document. For the convenience of this presentation, the 11 papers contained here are grouped into 5 major sections: (1) the Software Engineering Laboratory; (2) software tools studies; (3) software models studies; (4) software measurement studies; and (5) Ada technology studies
Output Measurement Metrics in an Object-Oriented Computer Aided Software Engineering (CASE) Environment: Critique, Evaluation and Proposal
Output measurement metrics for the software development process need to be re-examined to determine their performance in the new, radically changed CASE development environment. This paper critiques and empirically evaluates several approaches to measuring the outputs of the CASE process. The primary metric evaluated is the function points method developed by Albrecht. A second metric tested is a short-form variation of function points that is easier and quicker to calculate. We also propose a new output metric called object points, and a related short form, which are specialized for output measurement in object-oriented CASE environments that include a central object repository. These metrics are proposed as more intuitive, lower-cost approaches to measuring CASE outputs. Our preliminary results show that these metrics have the potential to yield estimates as accurate as, if not better than, function points-based measures.
Information Systems Working Papers Series
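As background to the comparison above, the unadjusted function point count in Albrecht's method is a weighted sum of counted system components. The sketch below uses the standard IFPUG complexity weights; the names `WEIGHTS` and `unadjusted_fp` are illustrative, and the paper's exact counting rules (and its short-form variant) may differ.

```python
# Sketch of an Albrecht-style unadjusted function point (UFP) count.
# Weight table: component type -> weights for (simple, average, complex).
WEIGHTS = {
    "external_input":     (3, 4, 6),
    "external_output":    (4, 5, 7),
    "external_inquiry":   (3, 4, 6),
    "internal_file":      (7, 10, 15),
    "external_interface": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: {component_type: (n_simple, n_average, n_complex)}"""
    total = 0
    for ctype, ns in counts.items():
        total += sum(n * w for n, w in zip(ns, WEIGHTS[ctype]))
    return total

# Example: a small system with a few components of each kind.
counts = {
    "external_input":  (2, 1, 0),   # 2*3 + 1*4 = 10
    "external_output": (1, 0, 1),   # 1*4 + 1*7 = 11
    "internal_file":   (0, 1, 0),   # 1*10     = 10
}
print(unadjusted_fp(counts))  # 31
```

The object points metric proposed in the paper replaces these function-type counts with counts of objects in the central repository, but follows the same weighted-sum shape.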
Proceedings of the Fifteenth Annual Software Engineering Workshop
The Software Engineering Laboratory (SEL) is an organization sponsored by GSFC and created for the purpose of investigating the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effect of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. Fifteen papers were presented at the Fifteenth Annual Software Engineering Workshop in five sessions: (1) SEL at age fifteen; (2) process improvement; (3) measurement; (4) reuse; and (5) process assessment. The sessions were followed by two panel discussions: (1) experiences in implementing an effective measurement program; and (2) software engineering in the 1980's. A summary of the presentations and panel discussions is given
Software Engineering Laboratory Series: Collected Software Engineering Papers
The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document
Software Engineering Laboratory Series: Proceedings of the Twenty-Second Annual Software Engineering Workshop
The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document
Software Engineering Laboratory Series: Proceedings of the Twentieth Annual Software Engineering Workshop
The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of application software. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document
Joint University Program for Air Transportation Research, 1990-1991
The goals of this program are consistent with the interests of both NASA and the FAA in furthering the safety and efficiency of the National Airspace System. Research carried out at the Massachusetts Institute of Technology (MIT), Ohio University, and Princeton University are covered. Topics studied include passive infrared ice detection for helicopters, the cockpit display of hazardous windshear information, fault detection and isolation for multisensor navigation systems, neural networks for aircraft system identification, and intelligent failure tolerant control
Guidelines for the verification and validation of expert system software and conventional software: Rationale and description of V&V guideline packages and procedures. Volume 5
This report is the fifth volume in a series of reports describing the results of the Expert System Verification and Validation (V&V) project, which is jointly funded by the U.S. Nuclear Regulatory Commission and the Electric Power Research Institute toward the objective of formulating guidelines for the V&V of expert systems for use in nuclear power applications. This report provides the rationale for and description of those guidelines. The guidelines themselves are presented in Volume 7, "User's Manual." Three factors determine what V&V is needed: (1) the stage of the development life cycle (requirements, design, or implementation); (2) whether the overall system or a specialized component needs to be tested (knowledge base component, inference engine or other highly reusable element, or a component involving conventional software); and (3) the stringency of V&V that is needed (as judged from an assessment of the system's complexity and its required integrity, which together define three Classes). A V&V Guideline package is provided for each combination of these three variables. The package specifies the V&V methods recommended and the order in which they should be administered, the assurances each method provides, the qualifications the V&V team needs to employ each method, the degree to which the methods should be applied, the performance measures that should be taken, and the decision criteria for accepting, conditionally accepting, or rejecting an evaluated system. In addition to the Guideline packages, highly detailed step-by-step procedures are provided for 11 of the more important methods, to ensure that they can be implemented correctly. The Guidelines can apply to conventional procedural software systems as well as to all kinds of AI systems
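The three-variable structure described in this abstract amounts to a lookup from (life-cycle stage, component type, integrity Class) to a recommended, ordered set of V&V methods. The sketch below illustrates only that structure; the stage, component, and method names are hypothetical placeholders, not the report's actual Guideline packages (which appear in Volume 7).

```python
# Hypothetical illustration of the three-variable Guideline-package lookup.
# All method names below are placeholders, not the report's recommendations.
STAGES = {"requirements", "design", "implementation"}
COMPONENTS = {"knowledge_base", "inference_engine", "conventional"}
CLASSES = {1, 2, 3}  # stringency of V&V; Class 1 taken here as most stringent (assumed)

def guideline_package(stage, component, integrity_class):
    """Return an ordered list of (illustrative) V&V methods for one combination."""
    if stage not in STAGES or component not in COMPONENTS or integrity_class not in CLASSES:
        raise ValueError("unknown combination of V&V variables")
    # Methods accumulate in the order they should be administered.
    methods = (["requirements tracing"] if stage == "requirements"
               else ["structured walkthrough"])
    if component == "knowledge_base":
        methods.append("knowledge-base consistency checking")
    if integrity_class == 1:
        methods.append("independent functional testing")
    return methods

print(guideline_package("design", "knowledge_base", 1))
```

A real implementation of the Guidelines would also attach the assurances, team qualifications, performance measures, and acceptance criteria that the report specifies for each package.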
Evaluation of the construction and use of self-testing classes
Advisor: Eliane Martins. Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: This thesis describes a methodology for building and using self-testing classes.
The methodology aims to improve the testability of Object-Oriented (OO) systems through the adaptation of concepts consolidated in hardware, such as Design for Testability (DFT) and self-testing, which means adding special structures to components so that tests can be generated and evaluated internally. In addition to the self-testing concept, our work makes use of the existing inheritance hierarchy to allow tests to be reused as well. In this direction, the Hierarchical Incremental Testing (HIT) approach, proposed in [Har92], allows a test sequence for a parent class to be reused, whenever possible, when testing one of its subclasses. The advantages and disadvantages of the methodology are shown in the thesis in two ways: (1) a presentation of the steps that should be carried out in building and using a self-testing class, based on classes from the Microsoft Foundation Classes (MFC) library; and (2) the use of the methodology in testing a set of classes from a real application. This thesis also presents an empirical evaluation to determine whether the tests generated in this work have good potential for finding faults. The results have shown that the generated tests have good potential for fault detection. Our results do not yet offer definitive evidence on the effectiveness of a set of generated test cases; however, they show that the adopted test strategy can be useful in testing the classes of OO systems. Master's degree in Computer Science
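The two ideas in this abstract, a class that carries its own tests and HIT-style reuse of a parent class's tests by its subclasses, can be sketched as follows. This is an assumed design for illustration only, not the thesis's actual implementation; the class names `SelfTesting`, `Stack`, and `BoundedStack` are hypothetical.

```python
# Sketch of a self-testing class hierarchy: each class carries test_* methods,
# and a subclass re-runs its parent's tests against itself (the HIT reuse idea),
# adding tests only for new or overridden behavior.
class SelfTesting:
    @classmethod
    def self_test(cls):
        """Run every inherited and locally defined test_* method on cls."""
        results = {}
        for name in dir(cls):
            if name.startswith("test_"):
                instance = cls()          # fresh instance per test
                try:
                    getattr(instance, name)()
                    results[name] = True
                except AssertionError:
                    results[name] = False
        return results

class Stack(SelfTesting):
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def test_push_pop(self):
        self.push(1)
        assert self.pop() == 1

class BoundedStack(Stack):
    LIMIT = 2
    def push(self, x):
        if len(self.items) >= self.LIMIT:
            raise OverflowError("stack full")
        super().push(x)
    # test_push_pop is inherited and re-run against BoundedStack (HIT reuse);
    # only the new bound behavior needs a new test.
    def test_bound(self):
        self.push(1)
        self.push(2)
        try:
            self.push(3)
            assert False, "expected OverflowError"
        except OverflowError:
            pass

print(BoundedStack.self_test())  # runs the inherited test and the new one
```

The hardware analogy is that `self_test` plays the role of built-in self-test circuitry: the test stimuli and the pass/fail evaluation both live inside the component under test.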