25 research outputs found

    Use Cases in Object-Oriented Software Development


    Design and integrity of deterministic system architectures.

    Architectures represented by system construction 'building block' components and their interrelationships provide the structural form. This thesis addresses the processes, procedures and methods that support system design synthesis, and specifically the determination of the integrity of candidate architectural structures. Particular emphasis is given to the structural representation of system architectures, their consistency and their functional quantification. It is a design imperative that a hierarchically decomposed structure maintains compatibility and consistency between the functional and realisation solutions. Complex systems are normally simplified by hierarchical decomposition, so that lower-level components are precisely defined and simpler than higher-level components. To enable such systems to be reconstructed from their components, the hierarchical construction must provide vertical intra-relationship consistency, horizontal interrelationship consistency, and inter-component functional consistency. Firstly, a modified process design model is proposed that incorporates the generic structural representation of system architectures. Secondly, a system architecture design knowledge domain is proposed that enables viewpoint evaluations to be aggregated into a coherent set of domains that are both necessary and sufficient to determine the integrity of system architectures. Thirdly, four methods of structural analysis are proposed to assure the integrity of the architecture. The first enables the structural compatibility between the 'building blocks' that provide the emergent functional properties and the implementation solution properties to be determined. The second enables the compatibility of the functional causality structure and the implementation causality structure to be determined. The third method provides a graphical representation of architectural structures. The fourth method uses this graphical form of structural representation to provide a technique for the quantitative estimation of the performance of emergent properties for large-scale or complex architectural structures. These methods have been combined into a procedure of formal design: a design process that, if rigorously executed, meets the requirements for reconstructability.
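    The vertical consistency requirement lends itself to a small illustration. The following is a minimal sketch, not the thesis's formal method: the BuildingBlock structure and the rule that a parent's functions must be covered by its children's functions are simplifying assumptions made for the example.

```python
# Minimal sketch (not the thesis's formal method): 'building block'
# components in a hierarchy, with a vertical-consistency check that
# every function a parent claims to provide is realised by at least
# one of its children.
from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    name: str
    functions: set[str]                     # emergent functional properties
    children: list["BuildingBlock"] = field(default_factory=list)

def vertically_consistent(block: BuildingBlock) -> bool:
    """A decomposed block is consistent if its children's combined
    functions cover the functions attributed to the block itself."""
    if not block.children:
        return True                         # leaf component: nothing to check
    provided = set().union(*(c.functions for c in block.children))
    return block.functions <= provided and all(
        vertically_consistent(c) for c in block.children)

system = BuildingBlock("flight-control", {"stabilise", "navigate"}, [
    BuildingBlock("autopilot", {"stabilise"}),
    BuildingBlock("nav-unit", {"navigate"}),
])
print(vertically_consistent(system))        # True
```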

    Measurement for the management of software maintenance

    This thesis addresses the problem of bringing maintenance, in a commercial environment, under management control, and of raising the profile of maintenance in the corporate picture, bringing it on a par with other components of the business. This management control helps to reduce the costs, and also the time scales, inherent in maintenance activity. The objective is achieved by showing how the measurement of the products and processes involved in maintenance activity, at a team level, increases the visibility of the tasks being tackled. This increase in visibility provides the ability to impose control on the products and processes, and provides the basis for prediction and estimation of future states of a project and of the future requirements of the team. This is the foundation of good management. Measurement also provides an increase in visibility for the company's higher management, forming a basis for communication within the corporate strategy, allowing maintenance to be seen as it is and furnished with the resources it requires. A method for the introduction of a measurement strategy and collection system is presented, supported by the examination of a database of maintenance information collected by a British Telecom research team during a commercial software maintenance exercise. A prototype system for the collection of software change information is also presented, demonstrating the application of the method, along with the results of its development and the implications for both software maintenance management and the technical tasks of implementing change.
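    As a rough illustration of team-level measurement, the sketch below shows the kind of change records such a collection system might gather and two simple visibility measures derived from them. The record fields and figures are invented for the example; they are not those of the British Telecom database or the prototype described here.

```python
# Sketch of team-level change records for maintenance measurement
# (field names and values are illustrative assumptions).
from dataclasses import dataclass
from statistics import mean

@dataclass
class ChangeRecord:
    change_id: str
    component: str
    effort_hours: float      # effort spent implementing the change
    loc_changed: int         # size of the change

records = [
    ChangeRecord("CR-1", "billing", 6.0, 120),
    ChangeRecord("CR-2", "billing", 2.5, 30),
    ChangeRecord("CR-3", "routing", 9.0, 200),
]

# Simple visibility measures: average effort per change, and effort
# per component, as a basis for estimating future resource needs.
print(mean(r.effort_hours for r in records))
by_component = {}
for r in records:
    by_component[r.component] = by_component.get(r.component, 0.0) + r.effort_hours
print(by_component)          # {'billing': 8.5, 'routing': 9.0}
```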

    Quality modelling and metrics of Web-based information systems

    In recent years, the World Wide Web has become a major platform for software applications. Web-based information systems are involved in many areas of everyday life, such as education, entertainment, business, manufacturing and communication. As Web-based systems are usually distributed, multimedia, interactive and cooperative, and their production processes usually follow ad-hoc approaches, the quality of Web-based systems has become a major concern. Existing quality models and metrics do not fully satisfy the needs of quality management of Web-based systems. This study has applied and adapted software quality engineering methods and principles to address the following issues: a quality modelling method for the derivation of quality models of Web-based information systems; and the development, implementation and validation of quality metrics for key quality attributes of Web-based information systems, namely navigability and timeliness. The quality modelling method proposed in this study has the following strengths. It is more objective and rigorous than existing approaches. Quality analysis can be conducted at an early stage of the system life cycle, based on the design. It is easy to use and can provide insight into the improvement of system designs. Results of case studies demonstrated that the quality modelling method is applicable and practical; practitioners can use it to develop their own quality models. This study is amongst the first comprehensive attempts to develop quality measurement for Web-based information systems. First, it identified the relationship between website structural complexity and navigability. Quality metrics of navigability were defined, investigated and implemented, and empirical studies were conducted to evaluate them. Second, the study investigated website timeliness and attempted to find direct and indirect measures for this quality attribute; empirical studies for validating these metrics were also conducted. The study also suggests four areas of future research that may be fruitful.
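    As an illustration of how structural complexity bears on navigability, the sketch below computes one plausible structural measure, the average shortest click path between pages over a site's hyperlink graph. This is an assumed metric chosen for demonstration; it is not necessarily one of the navigability metrics defined in the thesis.

```python
# Hedged sketch: average shortest click path over a site's hyperlink
# graph, computed by breadth-first search (lower = easier navigation).
from collections import deque

site = {                                    # page -> pages it links to
    "home":     ["products", "about"],
    "products": ["item1", "item2", "home"],
    "about":    ["home"],
    "item1":    ["products"],
    "item2":    ["products"],
}

def shortest_paths(graph, start):
    """BFS distances (in clicks) from start to every reachable page."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

# Average over all ordered reachable pairs of distinct pages.
pairs = [d for p in site for q, d in shortest_paths(site, p).items() if q != p]
print(sum(pairs) / len(pairs))
```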

    Identifying reusable functions in code using specification driven techniques

    The work described in this thesis addresses the field of software reuse. Software reuse is widely considered a way to increase productivity and to improve the quality and reliability of new software systems. Identifying, extracting and reengineering software components which implement abstractions within existing systems is a promising, cost-effective way to create reusable assets. Such a process is referred to as reuse reengineering. A reference paradigm has been defined within the RE(^2) project which decomposes a reuse reengineering process into five sequential phases. In particular, the first phase of the reference paradigm, called the Candidature phase, is concerned with the analysis of source code for the identification of software components that implement abstractions and are therefore candidates for reuse. Different candidature criteria exist for the identification of reuse-candidate software components. They can be classified into structural methods (based on structural properties of the software) and specification driven methods (which search for software components implementing a given specification). In this thesis a new specification driven candidature criterion for the identification and extraction of code fragments implementing functional abstractions is presented. The method is driven by a formal specification of the function to be isolated (given in terms of a precondition and a postcondition) and is based on the theoretical frameworks of program slicing and symbolic execution. Symbolic execution and theorem proving techniques are used to map the specification of the functional abstraction onto a slicing criterion. Once the slicing criterion has been identified, the slice is isolated using algorithms based on dependence graphs. The method has been specialised for programs written in the C language. Both symbolic execution and program slicing are performed by exploiting the Combined C Graph (CCG), a fine-grained dependence-based program representation that can be used for several software maintenance tasks.
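    The slice-isolation step can be illustrated in miniature. The sketch below computes a backward slice over a toy dependence graph by transitive closure from the slicing criterion; the real method operates on the much richer Combined C Graph, and the statement names here are invented.

```python
# Minimal backward-slicing sketch: the slice for a criterion is the
# set of statements it transitively depends on in the dependence graph.
deps = {                     # statement -> statements it depends on
    "s5": ["s3", "s4"],
    "s4": ["s2"],
    "s3": ["s1"],
    "s2": [],
    "s1": [],
    "s6": ["s2"],            # s6 is not in the slice for criterion s5
}

def backward_slice(deps, criterion):
    slice_, worklist = set(), [criterion]
    while worklist:
        stmt = worklist.pop()
        if stmt not in slice_:
            slice_.add(stmt)
            worklist.extend(deps.get(stmt, []))
    return slice_

print(sorted(backward_slice(deps, "s5")))    # ['s1', 's2', 's3', 's4', 's5']
```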

    Program Flow Graph Decomposition

    The purpose of this thesis involved the implementation, validation, complexity analysis and comparison of two graph decomposition approaches: Forman's algorithm for prime decomposition of a program flow graph, and Cunningham's approach for decomposing a program digraph into graph-oriented components. To validate the two implementations, each was tested with six inputs. Comparison of the two approaches was based on the following dimensions: time and space complexities, composability, repeated decomposition, and uniqueness. Forman's algorithm appears to have four advantages over Cunningham's algorithm: 1. the algorithm overhead (i.e., the time and space complexities) is lower in Forman's algorithm; 2. Forman's algorithm yields a unique set of decomposed units, whereas Cunningham's does not; 3. in Forman's algorithm, reconstructing the original graph from the decomposed prime graphs yields the original graph that was decomposed, whereas in Cunningham's algorithm the reconstruction of the original graph from the decomposed parts does not always yield the graph that was decomposed; 4. Forman's approach can be used to decompose a graph until it is irreducible (all its parts are primes), whereas Cunningham's algorithm decomposes the graph only once, even if it is still decomposable. Thus, Forman's approach can be recommended as a program flow graph decomposition algorithm. Implementation of the decomposition techniques could help in better software comprehension and could be used in the development of software reusability tools.
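    Neither algorithm is reproduced here, but both hinge on recognising subgraphs that can stand alone as decomposition units. The sketch below tests one necessary property of such a unit, that a candidate node set forms a single-entry, single-exit region of the flow graph; the graph and the property chosen are illustrative assumptions, not Forman's or Cunningham's actual criteria.

```python
# Illustrative helper (not Forman's or Cunningham's algorithm): test
# whether a candidate set of nodes forms a single-entry, single-exit
# region of a flow graph, a necessary property of a stand-alone unit.
edges = [("a", "b"), ("b", "c"), ("c", "b"), ("c", "d"), ("a", "d")]

def is_single_entry_exit(edges, region):
    # entry nodes: targets inside the region reached from outside it
    entries = {v for u, v in edges if v in region and u not in region}
    # exit targets: nodes outside the region reached from inside it
    exits = {v for u, v in edges if u in region and v not in region}
    return len(entries) == 1 and len(exits) == 1

print(is_single_entry_exit(edges, {"b", "c"}))       # True: entered at b, exits to d
print(is_single_entry_exit(edges, {"b", "c", "d"}))  # False: entered at both b and d
```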

    Design by Contract to Improve Software Vigilance


    Extraction of objects from legacy systems: an example using COBOL legacy systems

    In the last few years, interest in legacy information systems has increased because of the escalating resources spent on their maintenance. On the other hand, extracting the knowledge embedded in business rules is becoming a crucial issue for modern business: sometimes, because of inadequate documentation, this knowledge is stored only in the code. A way to improve the use and maintainability of legacy systems in the present environment is to migrate them to a new hardware/software platform, reusing as much of the embedded knowledge as possible during the process. This migration promotes the population of a repository of reusable software components, for reuse in the development of a new system in that application domain or in later maintenance processes. The current trend in the migration of a legacy information system is to exploit the potential of object-oriented technology as a natural extension of earlier structured programming techniques. This is done by decomposing the program into several agent-like modules communicating via message passing, and providing the system with some key object-oriented features. The key step is "object isolation", i.e. the isolation of groups of routines and related data items as candidates to implement an abstraction in the application domain. The main idea of the object isolation method presented here is to extract information from the data flow and to cluster the procedures on the basis of their data accesses. The method examines "how" a procedure accesses the data, in order to distinguish several types of access and to permit a better understanding of the functionality of the candidate objects. These candidate modules support the population of a repository of reusable software components that might be used as a basis for the process of evolution leading to a new object-oriented system reusing the extracted objects.
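    The clustering idea can be sketched briefly. Below, procedures that access a common data item are grouped into candidate objects; this is a deliberately naive illustration that ignores "how" the data is accessed, which the method described above uses to refine the candidates. Procedure and data names are invented.

```python
# Naive sketch of object isolation: group procedures into candidate
# objects when they access the same data items (access *type* ignored).
accesses = {                         # procedure -> data items it touches
    "OPEN-ACCT":  {"ACCOUNT-REC"},
    "CLOSE-ACCT": {"ACCOUNT-REC"},
    "PRINT-RPT":  {"REPORT-LINE"},
    "FMT-LINE":   {"REPORT-LINE"},
}

clusters = []                        # each cluster: (procedures, data items)
for proc, data in accesses.items():
    for procs, items in clusters:
        if items & data:             # shares a data item with this cluster
            procs.add(proc)
            items |= data
            break
    else:
        clusters.append(({proc}, set(data)))

for procs, items in clusters:        # two candidate objects emerge
    print(sorted(procs), sorted(items))
```

    A fuller implementation would merge clusters transitively (for example with union-find), since a procedure can bridge two previously separate clusters.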

    Applying metrics to rule-based systems

    Since the introduction of software measurement theory in the early seventies, it has been accepted that in order to control software it must first be measured. Unambiguous and reproducible measurements are considered the most useful in controlling software productivity, costs and quality, and diverse sets of measurements are required to cover all aspects of software. A set of measures for the rule-based language RULER is proposed, using a process which helps identify components within software that are not currently measurable and which encourages the maximum re-use of existing software measures. The initial set of measures proposed is based on a set of basic primitive counts. These measurements can then be performed with the aid of a specially built prototype static analyser, R-DAT. Analysis of the results obtained helps to establish tentative acceptable ranges for these measures. It is important to ensure that measurement is performed for all newly emerging development methods, both procedural and non-procedural. As software engineering continues to generate more diverse methods of system development, it is important to continually update our methods of measurement and control. This thesis demonstrates the practicality of defining and implementing new measures for rule-based systems.
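    The notion of basic primitive counts can be illustrated with a toy example. RULER syntax is not given in this abstract, so the sketch below uses an invented IF/THEN rule form, counts conditions and actions per rule, and derives a simple aggregate measure from the counts.

```python
# Hedged sketch of 'primitive counts' for a rule-based program.
# The IF/THEN rule syntax here is made up; it is not RULER syntax.
rules = [
    "IF temp > 100 AND pressure > 5 THEN open_valve",
    "IF temp < 20 THEN close_valve AND start_heater",
]

def primitive_counts(rule):
    """Return (number of conditions, number of actions) for one rule."""
    cond, _, act = rule.partition(" THEN ")
    conditions = cond.removeprefix("IF ").split(" AND ")
    actions = act.split(" AND ")
    return len(conditions), len(actions)

counts = [primitive_counts(r) for r in rules]
print(counts)                                   # [(2, 1), (1, 2)]
# Example derived measure: average conditions per rule.
print(sum(c for c, _ in counts) / len(counts))  # 1.5
```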