2,782 research outputs found

    The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the FDD environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of effort across development phases, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products in the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.
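
    As an illustration of the kind of reliability comparison mentioned above (lower failure rates relative to typical SEL activity), the following minimal Python sketch computes failure density per KLOC for a set of projects. The project names and figures are hypothetical placeholders, not data from the SEL study.

        # Minimal sketch: comparing failure density (failures per KLOC) across projects.
        # Project names and numbers are hypothetical, not SEL measurements.

        def failures_per_kloc(failures: int, lines_of_code: int) -> float:
            """Failure density normalized to thousands of lines of code."""
            return failures / (lines_of_code / 1000.0)

        projects = {
            "cleanroom_pilot": {"failures": 12, "loc": 30_000},       # hypothetical
            "typical_sel_project": {"failures": 45, "loc": 32_000},   # hypothetical
        }

        for name, data in projects.items():
            rate = failures_per_kloc(data["failures"], data["loc"])
            print(f"{name}: {rate:.2f} failures/KLOC")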

    COBOL systems migration to SOA: Assessing antipatterns and complexity

    SOA and Web Services allow users to easily expose business functions to build larger distributed systems. However, legacy systems - mostly written in COBOL - are left aside unless a migration approach is applied. The main approaches are direct and indirect migration. The former wraps COBOL programs with a thin layer of a Web Service oriented language/platform. The latter requires reengineering COBOL functions to a modern language/platform. In our previous work, we presented an intermediate approach based on direct migration in which the developed Web Services are later refactored to improve the quality of their interfaces. The refactorings mainly capture good practices inherent to indirect migration. For this, antipatterns for WSDL documents (common bad practices) are detected to prevent issues related to WSDL understanding and discoverability. In this paper, we assess the antipatterns of Web Services' WSDL documents generated under the three migration approaches. In addition, the generated Web Services' interfaces are measured for complexity with respect to both comprehension and interoperability. We apply a metric suite (by Baski & Misra) to measure the complexity of service interfaces, i.e., WSDL documents. Migrations of two real COBOL systems under the three approaches were assessed for antipattern occurrences and for the complexity of the generated SOA frontiers - a total of 431 WSDL documents.
    Affiliation: Mateos Diaz, Cristian Maximiliano. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
    Affiliation: Zunino Suarez, Alejandro Octavio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
    Affiliation: Flores, Andrés Pablo. Universidad Nacional del Comahue. Facultad de Informática. Departamento Ingeniería de Sistemas; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Norte; Argentina
    Affiliation: Misra, Sanjay. Atilim University; Turkey. Covenant University; Nigeria
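
    To make the kind of analysis concrete, the following is a minimal Python sketch - not the authors' tooling - that scans a WSDL 1.1 document for a crude "ambiguous operation name" antipattern and reports simple size counts. The naming rule and the counts are illustrative stand-ins for the full WSDL antipattern catalogue and the Baski & Misra complexity metrics referred to above.

        # Minimal sketch (illustrative only): flag generically named operations in a
        # WSDL 1.1 document and report crude interface-size counts. The naming rule
        # below is an assumption, not one of the catalogued antipatterns.
        import xml.etree.ElementTree as ET

        WSDL_NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
        GENERIC_PREFIXES = ("process", "do", "handle", "execute")  # illustrative rule

        def inspect_wsdl(path: str) -> dict:
            root = ET.parse(path).getroot()
            operations = [
                op.get("name", "")
                for op in root.findall(".//wsdl:portType/wsdl:operation", WSDL_NS)
            ]
            parts = root.findall(".//wsdl:message/wsdl:part", WSDL_NS)
            ambiguous = [
                name for name in operations
                if name.lower().startswith(GENERIC_PREFIXES)
            ]
            return {
                "operations": len(operations),    # crude interface-size measure
                "message_parts": len(parts),      # crude data-weight measure
                "ambiguous_names": ambiguous,     # operations flagged by the rule
            }

        if __name__ == "__main__":
            print(inspect_wsdl("service.wsdl"))  # hypothetical input file

    A real assessment would apply the complete antipattern detection rules and the data-weight and structural metrics of the suite rather than these placeholder counts.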

    Structured Programming - Retrospect and Prospect


    A methodology for producing reliable software, volume 1

    An investigation into the areas having an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of this investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

    Expert systems and AI-based decision support in auditing: Progress and perspectives


    Obsolescence in IT Work: Causes, Consequences and Counter-Measures


    Complex adaptive systems based data integration : theory and applications

    Data Definition Languages (DDLs) have been created and used to represent data in programming languages and in database dictionaries. This representation includes descriptions in the form of data fields and relations in the form of a hierarchy, with the common exception of relational databases, where relations are flat. Network computing created an environment that enables relatively easy and inexpensive exchange of data. What followed was the creation of new DDLs claiming better support for automatic data integration. It is uncertain from the literature whether any real progress has been made toward achieving an ideal state or limit condition of automatic data integration. This research asserts that difficulties in accomplishing integration are indicative of socio-cultural systems in general and are caused by some measurable attributes common in DDLs. This research's main contributions are: (1) a theory of data integration requirements to fully support automatic data integration from autonomous heterogeneous data sources; (2) the identification of measurable related abstract attributes (Variety, Tension, and Entropy); and (3) the development of tools to measure them. The research uses a multi-theoretic lens to define and articulate these attributes and their measurements. The proposed theory is founded on the Law of Requisite Variety, Information Theory, Complex Adaptive Systems (CAS) theory, Sowa's Meaning Preservation framework, and Zipf distributions of words and meanings. Using the theory, the attributes, and their measures, this research proposes a framework for objectively evaluating the suitability of any data definition language with respect to degrees of automatic data integration. This research uses thirteen data structures constructed with various DDLs from the 1960s to date. No DDL examined (and therefore no DDL similar to those examined) is designed to satisfy the Law of Requisite Variety. No DDL examined is designed to support CAS evolutionary processes that could result in fully automated integration of heterogeneous data sources. There is no significant difference in measures of Variety, Tension, and Entropy among the DDLs investigated in this research. A direction to overcome the common limitations discovered in this research is suggested and tested by proposing GlossoMote, a theoretical, mathematically sound description language that satisfies the data integration theory requirements. GlossoMote is not merely a new syntax; it is a drastic departure from existing DDL constructs. The feasibility of the approach is demonstrated with a small-scale experiment and evaluated using the proposed assessment framework and other means. The promising results warrant additional research to evaluate the commercial potential of GlossoMote's approach.
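
    Of the three attributes, Entropy is the most directly illustrated in code. The following minimal Python sketch approximates it as the Shannon entropy of identifier-token frequencies in a schema text; the tokenization rule and the sample SQL DDL are illustrative assumptions, not the dissertation's actual measurement tools.

        # Minimal sketch: Shannon entropy of token frequencies in a data definition,
        # offered only as a rough proxy for the "Entropy" attribute described above.
        # The tokenizer and the sample schema are illustrative assumptions.
        import math
        import re
        from collections import Counter

        def token_entropy(schema_text: str) -> float:
            """Shannon entropy (bits per token) of the schema's token distribution."""
            tokens = re.findall(r"[A-Za-z_]\w*", schema_text)
            counts = Counter(tokens)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        sample_ddl = """
        CREATE TABLE customer (
            customer_id INTEGER PRIMARY KEY,
            name        VARCHAR(80),
            region      VARCHAR(40)
        );
        """
        print(f"Entropy: {token_entropy(sample_ddl):.2f} bits/token")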

    Achieving quality improvement through understanding and evaluating information systems development methodologies

    Since the 1970s, literally hundreds of different methods and tools have appeared, each claiming to ease the life of the developer and the user by achieving improved productivity without compromising the quality of the software artefact. These methodologies range from integrated collections of procedures to single techniques, notations, 4GLs, and tools supporting the process at the various stages of the systems lifecycle [1, 2, 3, 4]. This paper discusses how an organisation wishing to improve its development practices embarks on the time-consuming and expensive process of evaluating methods and tools. The underlying complexity and the application domain will themselves be decisive in the choice of methodology. The improvement process starts with the understanding phase. Here, we need to identify the important features of a methodology, such as usability, portability, adaptability and functionality, and the nature of the problem(s) the methodology will apply to. We draw the distinction between problem, methodology (procedures, techniques) and tools and discuss their interrelationships [4, 7]. The evaluation phase starts with the specification of acceptance criteria and involves the study of the features identified during the understanding phase against these criteria. Evaluations can be qualitative and quantitative.
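
    As a sketch of the quantitative side of such an evaluation, the following Python fragment scores candidate methodologies against weighted acceptance criteria. The criteria, weights, and scores are hypothetical placeholders rather than values taken from the paper.

        # Minimal sketch: weighted scoring of candidate methodologies against
        # acceptance criteria. All criteria, weights, and scores are hypothetical.

        weights = {
            "usability": 0.3,
            "portability": 0.2,
            "adaptability": 0.2,
            "functionality": 0.3,
        }

        candidates = {
            "methodology_A": {"usability": 4, "portability": 3, "adaptability": 5, "functionality": 4},
            "methodology_B": {"usability": 5, "portability": 4, "adaptability": 2, "functionality": 3},
        }

        def weighted_score(scores: dict, criteria_weights: dict) -> float:
            """Sum of criterion scores (e.g. on a 1-5 scale) times criterion weights."""
            return sum(scores[c] * w for c, w in criteria_weights.items())

        for name, scores in candidates.items():
            print(f"{name}: {weighted_score(scores, weights):.2f}")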