
    The knowledge-based software assistant

    The paper discusses where the Knowledge-Based Software Assistant (KBSA) stands now, four years after the initial report, and describes what the Rome Air Development Center expects at the end of the first contract iteration. It also characterizes what the second and third contract iterations will look like.

    The Knowledge-Based Software Assistant: Beyond CASE

    This paper outlines the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but they take different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process, with development occurring at the specification level.
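
    Purely as an illustration of the contrast drawn above, the sketch below (in Python, with invented names such as Repository and refine) contrasts CASE-style integration of tools through a shared data store with KBSA-style incremental transformation of a specification; it is not drawn from any actual CASE or KBSA product.

# Illustrative only: a toy contrast between the two integration styles the
# abstract describes. The names ("Repository", "refine") are invented for
# this sketch and are not taken from any actual CASE or KBSA product.

class Repository:
    """CASE-style central data store that independent tools read and write."""
    def __init__(self):
        self._artifacts = {}          # phase name -> produced artifact

    def put(self, phase, artifact):
        self._artifacts[phase] = artifact

    def get(self, phase):
        return self._artifacts[phase]


def refine(spec, transformation):
    """KBSA-style step: evolve the specification itself rather than hand a
    finished product to the next phase's tool."""
    return transformation(spec)


if __name__ == "__main__":
    # CASE: sequential products flowing through a shared repository.
    repo = Repository()
    repo.put("requirements", "informal requirements document")
    repo.put("design", "design derived from " + repo.get("requirements"))

    # KBSA: incremental transformations applied at the specification level.
    spec = {"behaviour": "initial operational specification"}
    spec = refine(spec, lambda s: {**s, "behaviour": s["behaviour"] + ", elaborated"})
    print(repo.get("design"))
    print(spec)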

    Tools for producing formal specifications: a view of current architectures and future directions

    During the last decade, one important contribution towards requirements engineering has been the advent of formal specification languages. They offer a well-defined notation that can improve consistency and avoid ambiguity in specifications. However, the process of obtaining formal specifications that are consistent with the requirements is itself a difficult activity. Hence various researchers are developing systems that aid the transition from informal to formal specifications. The kinds of problems tackled and the contributions made by these proposed systems are very diverse. This paper brings these studies together to provide a vision for future architectures that aim to aid the transition from informal to formal specifications. The new architecture, which is based on the strengths of existing studies, tackles a number of key issues in requirements engineering such as identifying ambiguities, incompleteness, and reusability. The paper concludes with a discussion of the research problems that need to be addressed in order to realise the proposed architecture.
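
    As a minimal sketch of the informal-to-formal transition discussed above, the fragment below pairs an ambiguous informal requirement with one possible formal reading; the 200 ms bound and the predicate are assumptions introduced only for illustration and are not tied to any particular specification language or tool surveyed in the paper.

# A minimal sketch (not tied to any particular specification language) of the
# informal-to-formal transition. The informal sentence is ambiguous about what
# "quickly" means; the formal predicate is not.

INFORMAL = "The system shall respond to a request quickly."

MAX_RESPONSE_MS = 200   # assumed figure, introduced here only for illustration

def formal_response_requirement(response_times_ms):
    """Formal counterpart: every observed response time is at most 200 ms."""
    return all(t <= MAX_RESPONSE_MS for t in response_times_ms)

if __name__ == "__main__":
    observed = [120, 180, 95]
    print(INFORMAL)
    print("Requirement satisfied:", formal_response_requirement(observed))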

    Artificial intelligence approaches to software engineering

    Artificial intelligence approaches to software engineering are examined. The software development life cycle is a sequence of not-so-well-defined phases. Improved techniques for developing systems have been formulated over the past 15 years, but pressure to reduce costs continues while software development technology seems to be standing still. The primary objective of the knowledge-based approach to software development presented in this paper is to avoid problem areas that lead to schedule slippages, cost overruns, or software products that fall short of their desired goals. Identifying and resolving software problems early, often in the phase in which they first occur, has been shown to contribute significantly to reducing risks in software development. Software development is not a mechanical process but a basic human activity; it requires clear thinking, work, and rework to be successful. The artificial intelligence approaches to software engineering presented here support the software development life cycle by changing current practices and methods: these should be replaced by better techniques that improve the process of software development and the quality of the resulting products. The software development process can be structured into well-defined steps whose interfaces are standardized, supported, and checked by automated procedures that provide error detection, produce documentation, and ultimately support the design of complex programs.
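
    The closing idea, well-defined steps whose interfaces are checked automatically, can be illustrated with a toy pipeline; the step names and the check below are invented for this sketch and do not describe any specific technique from the paper.

# Hedged illustration of development steps with automatically checked
# interfaces. All names and fields are invented for this sketch.

def check_interface(artifact, required_fields):
    """Automated error detection at a step boundary."""
    missing = [f for f in required_fields if f not in artifact]
    if missing:
        raise ValueError(f"interface check failed, missing: {missing}")
    return artifact

def requirements_step():
    return {"requirements": ["R1: log every transaction"]}

def design_step(artifact):
    check_interface(artifact, ["requirements"])   # standardized hand-off check
    artifact["design"] = ["component: transaction logger"]
    return artifact

if __name__ == "__main__":
    product = design_step(requirements_step())
    print(product)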

    Supporting collaboration within the eScience community

    Collaboration is a core activity at the heart of large-scale co-operative scientific experimentation. In order to support the emergence of Grid-based scientific collaboration, new models of e-Science working methods are needed. Scientific collaboration involves production and manipulation of various artefacts. Based on work done in the software engineering field, this paper proposes models and tools which will support the representation and production of such artefacts. It is necessary to provide facilities to classify, organise, acquire, process, share, and reuse artefacts generated during collaborative working. The concept of a "design space" will be used to organise scientific design and the composition of experiments, and methods such as self-organising maps will be used to support the reuse of existing artefacts. It is proposed that this work can be carried out and evaluated in the UK e-Science community, using an "industry as laboratory" approach to the research, building on the knowledge, expertise, and experience of those directly involved in e-Science.
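
    To make the mention of self-organising maps concrete, the following is a small NumPy sketch of clustering artefact feature vectors on a map so that similar existing artefacts can be suggested for reuse; the feature encoding, map size, and training schedule are assumptions made for this example, not details taken from the paper.

# A minimal self-organising map sketch in NumPy. The toy artefacts and their
# three made-up features are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

artefacts = {
    "workflow_A": np.array([0.9, 0.1, 0.2]),
    "workflow_B": np.array([0.8, 0.2, 0.1]),
    "dataset_X":  np.array([0.1, 0.9, 0.8]),
}

grid_w, grid_h, dim = 4, 4, 3
weights = rng.random((grid_w, grid_h, dim))

def best_matching_unit(vec):
    """Grid cell whose weight vector is closest to the input."""
    dists = np.linalg.norm(weights - vec, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Very small training loop: pull the winning unit and its neighbours toward
# each sample, with the learning rate and radius decaying over time.
for step in range(200):
    vec = list(artefacts.values())[step % len(artefacts)]
    bmu = np.array(best_matching_unit(vec))
    lr = 0.5 * (1 - step / 200)
    radius = 2.0 * (1 - step / 200) + 0.5
    for i in range(grid_w):
        for j in range(grid_h):
            d = np.linalg.norm(np.array([i, j]) - bmu)
            if d <= radius:
                weights[i, j] += lr * np.exp(-d**2 / (2 * radius**2)) * (vec - weights[i, j])

# A new artefact lands on the map; artefacts sharing its unit are reuse candidates.
new_artefact = np.array([0.85, 0.15, 0.15])
unit = best_matching_unit(new_artefact)
similar = [name for name, v in artefacts.items() if best_matching_unit(v) == unit]
print("Reuse candidates:", similar)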

    ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.
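
    The idea of testing a specification against a running simulation can be sketched generically as checking a formalized invariant over simulated states; the invariant, state model, and simulation below are invented for illustration and do not reflect ARIES's actual representation or machinery.

# A hedged sketch of checking a formalized requirement against a running
# simulation. Everything here (the queue invariant, the random walk) is a
# toy stand-in, not ARIES's actual mechanism.
import random

def invariant(state):
    """Formalized requirement: the queue never exceeds its declared capacity."""
    return state["queue_length"] <= state["capacity"]

def simulate(steps=100, capacity=10, seed=42):
    """Toy simulation of the system to be built, yielding one state per step."""
    random.seed(seed)
    queue = 0
    for _ in range(steps):
        queue = max(0, min(capacity, queue + random.choice([-1, 0, 1])))
        yield {"queue_length": queue, "capacity": capacity}

if __name__ == "__main__":
    violations = [s for s in simulate() if not invariant(s)]
    print("Specification violated in", len(violations), "of 100 simulated states")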

    Web-based support for managing large collections of software artefacts

    There has been a long history of CASE tool development, with an underlying software repository at the heart of most systems. Usually such tools, even the more recently web-based systems, are focused on supporting individual projects within an enterprise or across a number of distributed sites. Little support for maintaining large heterogeneous collections of software artefacts across a number of projects has been developed. Within the GENESIS project, this has been a key consideration in the development of the Open Source Component Artefact Repository (OSCAR). Its most recent extensions explicitly address the provision of cross-project global views of large software collections as well as historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR is widely adopted, and various steps to facilitate this are described.
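
    The two kinds of view mentioned above, a cross-project global view and a historical view of an individual artefact, can be illustrated with a toy in-memory store; this is an assumption-laden sketch, not OSCAR's actual data model or API.

# Illustrative only: a toy artefact store with the two views the abstract
# mentions. Class and field names are invented for this sketch.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Revision:
    version: int
    author: str
    summary: str

class ArtefactStore:
    def __init__(self):
        # (project, artefact name) -> list of revisions, oldest first
        self._history = defaultdict(list)

    def commit(self, project, name, author, summary):
        revs = self._history[(project, name)]
        revs.append(Revision(len(revs) + 1, author, summary))

    def history(self, project, name):
        """Historical view of a single artefact."""
        return self._history[(project, name)]

    def global_view(self):
        """Cross-project view: latest revision of every artefact."""
        return {key: revs[-1] for key, revs in self._history.items()}

if __name__ == "__main__":
    store = ArtefactStore()
    store.commit("genesis", "requirements.doc", "alice", "initial draft")
    store.commit("genesis", "requirements.doc", "bob", "added use cases")
    store.commit("other_project", "design.uml", "carol", "first cut")
    print(store.history("genesis", "requirements.doc"))
    print(store.global_view())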

    Testing validation tools on CLIPS-based expert systems

    The Expert Systems Validation Associate (EVA) is a validation system which was developed at the Lockheed Software Technology Center and Artificial Intelligence Center between 1986 and 1990. EVA is an integrated set of generic tools to validate any knowledge-based system written in any expert system shell such as C Language Integrated Production System (CLIPS), ART, OPS5, KEE, and others. Many validation tools have been built in the EVA system. In this paper, we describe the testing results of applying the EVA validation tools to the Manned Maneuvering Unit (MMU) Fault Diagnosis, Isolation, and Reconfiguration (FDIR) expert system, written in CLIPS, obtained from the NASA Johnson Space Center.
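
    One kind of generic check such a validation toolset might perform is flagging rule pairs with identical conditions but conflicting conclusions; the toy rule representation and the conflict test below are invented for this sketch and are neither CLIPS syntax nor EVA's actual algorithms.

# A minimal sketch of one generic knowledge-base check: rule pairs whose
# antecedents are identical but whose consequents differ. The rules and the
# representation are invented examples, not CLIPS syntax or EVA output.
from itertools import combinations

rules = [
    {"name": "R1", "if": frozenset({"thruster_temp_high"}), "then": "shutdown_thruster"},
    {"name": "R2", "if": frozenset({"thruster_temp_high"}), "then": "keep_thruster_on"},
    {"name": "R3", "if": frozenset({"fuel_low"}),           "then": "switch_to_reserve"},
]

def conflicting_pairs(rule_set):
    """Same antecedent, different consequent: a candidate inconsistency."""
    return [
        (a["name"], b["name"])
        for a, b in combinations(rule_set, 2)
        if a["if"] == b["if"] and a["then"] != b["then"]
    ]

if __name__ == "__main__":
    print("Potential conflicts:", conflicting_pairs(rules))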

    An analysis of the requirements traceability problem

    In this paper, we investigate and discuss the underlying nature of the requirements traceability problem. Our work is based on empirical studies, involving over 100 practitioners, and an evaluation of current support. We introduce the distinction between pre-requirements specification (pre-RS) traceability and post-requirements specification (post-RS) traceability, to demonstrate why an all-encompassing solution to the problem is unlikely, and to provide a framework through which to understand its multifaceted nature. We report how the majority of the problems attributed to poor requirements traceability are due to inadequate pre-RS traceability and show the fundamental need for improvements here. In the remainder of the paper, we present an analysis of the main barriers confronting such improvements in practice, identify relevant areas in which advances have been (or can be) made, and make recommendations for research.
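
    The pre-RS/post-RS distinction can be pictured as two sets of links attached to each requirement, backwards to its sources and forwards to design and test elements; the fields and example links below are assumptions for illustration, not a model proposed by the paper.

# A hedged sketch of traceability links. The link names, fields, and example
# entries are invented for this illustration.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    ident: str
    text: str
    pre_rs_links: list = field(default_factory=list)   # e.g. stakeholders, meetings, standards
    post_rs_links: list = field(default_factory=list)  # e.g. design elements, test cases

req = Requirement(
    ident="R7",
    text="The operator shall be able to abort a transfer at any time.",
    pre_rs_links=["interview:operator_session_3", "standard:IEC-61508"],
    post_rs_links=["design:AbortController", "test:TC-041"],
)

# Poor pre-RS traceability is easy to detect once the links are explicit.
if not req.pre_rs_links:
    print(f"{req.ident}: no record of where this requirement came from")
else:
    print(f"{req.ident} traces back to: {', '.join(req.pre_rs_links)}")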