
    Transitioning Applications to Semantic Web Services: An Automated Formal Approach

    Semantic Web Services have been recognized as a promising technology with huge commercial potential, and they attract significant attention from both industry and the research community. Despite high expectations, the industrial take-up of Semantic Web Service technologies has been slower than anticipated. One of the main reasons is that many systems were developed without considering the potential of the web for integrating services and sharing resources. Without a systematic methodology and proper tool support, the migration from legacy systems to Semantic Web Service-based systems can be a very tedious and expensive process, which carries a definite risk of failure. There is an urgent need for strategies that allow the migration of legacy systems to Semantic Web Services platforms, and for tools that support such strategies. In this paper we propose a methodology for transitioning these applications to Semantic Web Services by taking advantage of rigorous mathematical methods. Our methodology allows users to migrate their applications to a Semantic Web Services platform automatically or semi-automatically.

    Extraction of objects from legacy systems: an example using COBOL legacy systems

    In the last few years, interest in legacy information systems has increased because of the escalating resources spent on their maintenance. At the same time, extracting the knowledge embodied in business rules is becoming a crucial issue for modern business: sometimes, because of inadequate documentation, this knowledge is stored only in the code. A way to improve the use and maintainability of these systems in the present environment is to migrate them to a new hardware/software platform, reusing as much of the embedded experience as possible during the process. This migration process populates a repository of reusable software components that can be reused in the development of new systems in the application domain or in later maintenance. The current trend in migrating a legacy information system is to exploit the potential of object-oriented technology as a natural extension of earlier structured programming techniques. This is done by decomposing the program into several agent-like modules communicating via message passing, and by providing the system with key object-oriented features. The key step is "object isolation", i.e. the isolation of groups of routines and related data items as candidates to implement an abstraction in the application domain. The main idea of the object isolation method presented here is to extract information from the data flow and to cluster the procedures on the basis of their data accesses. The method examines "how" a procedure accesses the data in order to distinguish several types of access and to permit a better understanding of the functionality of the candidate objects. These candidate modules support the population of a repository of reusable software components that can serve as a basis for the evolution process leading to a new object-oriented system that reuses the extracted objects.
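
    To make the clustering idea concrete, the following Python sketch groups procedures around the data items they write; the procedure names, the access table, and the write-only grouping rule are illustrative assumptions, not the paper's actual representation.

```python
from collections import defaultdict

# procedure -> {(data item, access kind)} as harvested by static analysis
# (hypothetical example data)
accesses = {
    "VALIDATE-CUST": {("CUSTOMER-REC", "READ")},
    "UPDATE-CUST":   {("CUSTOMER-REC", "WRITE")},
    "PRINT-INVOICE": {("INVOICE-REC", "READ")},
    "POST-INVOICE":  {("INVOICE-REC", "WRITE"), ("CUSTOMER-REC", "READ")},
}

def candidate_objects(accesses):
    """Attach a procedure to a data item's cluster when it writes that
    item; read-only accesses are treated as weaker evidence and ignored
    in this simplified version."""
    clusters = defaultdict(set)
    for proc, pairs in accesses.items():
        for item, kind in pairs:
            if kind == "WRITE":
                clusters[item].add(proc)
    return dict(clusters)

print(candidate_objects(accesses))
# {'CUSTOMER-REC': {'UPDATE-CUST'}, 'INVOICE-REC': {'POST-INVOICE'}}
```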

    An empirical study into COBOL type inferencing

    In a typical COBOL program, the data division accounts for 50% of the lines of code. Automatic type inference can help in understanding the large collections of variable declarations contained therein by showing how variables are related based on their actual usage. The most problematic aspect of type inference is pollution, the phenomenon that types become too large and contain variables that intuitively should not belong to the same type. The aim of this paper is to provide empirical evidence for the hypothesis that the use of subtyping is an effective way of dealing with pollution. The main results include a tool set for carrying out type inference experiments, a suite of metrics characterizing type inference outcomes, and the experimental observation that only one instance of pollution occurs in the case study conducted.
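
    The following toy illustration (not the paper's tool set) shows why subtyping limits pollution: a comparison forces two variables into one type, while an assignment only records a subtype edge instead of merging the two types. The variable names and the COBOL statements in the comments are invented.

```python
class TypeInference:
    def __init__(self):
        self.parent = {}       # union-find structure over variables
        self.subtypes = set()  # (subtype root, supertype root) edges

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def compare(self, a, b):
        # e.g. IF A = B: a comparison puts both variables in one type
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def assign(self, dst, src):
        # e.g. MOVE SRC TO DST: record SRC <: DST instead of merging,
        # which is what keeps incidentally related variables apart
        self.subtypes.add((self.find(src), self.find(dst)))

ti = TypeInference()
ti.compare("ACCOUNT-NO", "ACCT-KEY")  # same type
ti.assign("ACCT-KEY", "TEMP-FIELD")   # TEMP-FIELD is only a subtype
print(ti.find("ACCOUNT-NO") == ti.find("ACCT-KEY"))  # True
print(ti.find("TEMP-FIELD") == ti.find("ACCT-KEY"))  # False: no pollution
```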

    Types and concept analysis for legacy systems

    We combine type inference and concept analysis in order to gain insight into legacy software systems. Type inference for Cobol yields the types of variables and program parameters; these types are then used to perform mathematical concept analysis on the legacy system. We have developed ConceptRefinery, a tool for interactively manipulating concepts. We show how this tool facilitates experiments with concept analysis and lets reengineers employ their knowledge of the legacy system to refine the results of concept analysis.
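
    A small sketch may help to see the combination: taking programs as objects and inferred types as attributes, a formal concept is a maximal group of objects sharing a maximal set of attributes. The context below is invented, and the naive enumeration stands in for ConceptRefinery's actual machinery.

```python
from itertools import combinations

# program -> set of inferred types it uses (hypothetical context)
context = {
    "PAYROLL": {"AccountNo", "Money"},
    "BILLING": {"AccountNo", "Money", "Date"},
    "ARCHIVE": {"AccountNo", "Date"},
}

def concepts(context):
    """Enumerate formal concepts naively: every intersection of object
    attribute sets is a closed intent; its extent is every object whose
    attributes include that intent."""
    objs = list(context)
    intents = set()
    for r in range(1, len(objs) + 1):
        for group in combinations(objs, r):
            intents.add(frozenset.intersection(
                *(frozenset(context[o]) for o in group)))
    return [(sorted(o for o in objs if intent <= context[o]), sorted(intent))
            for intent in intents]

for extent, intent in sorted(concepts(context)):
    print(extent, intent)
# ['ARCHIVE', 'BILLING'] ['AccountNo', 'Date']
# ['ARCHIVE', 'BILLING', 'PAYROLL'] ['AccountNo']
# ['BILLING'] ['AccountNo', 'Date', 'Money']
# ['BILLING', 'PAYROLL'] ['AccountNo', 'Money']
```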

    Acquiring data designs from existing data-intensive programs

    The problem area addressed in this thesis is the extraction of a data design from existing data-intensive program code. The purpose is to help a software maintainer understand a software system more easily, because a view of the system at a high level of abstraction can be obtained. Acquiring a data design from existing data-intensive program code is an important part of reverse engineering in software maintenance, and a large proportion of the software systems currently needing maintenance is data-intensive. The research results in this thesis can be used directly in a reverse engineering tool. A method has been developed for acquiring data designs from existing data-intensive programs, COBOL programs in particular, with program transformation as the main tool. Abstraction techniques and the method of crossing levels of abstraction are also studied. A prototype system has been implemented based on the method developed; this involved implementing a number of program transformations for data abstraction, thus contributing to the production of a tool. Several case studies are presented, including one using a real program with 7,000 lines of source code. The experimental results show that the Entity-Relationship Attribute Diagrams derived by the prototype can represent the data designs of the original data-intensive programs. The original contribution of the thesis is an approach that can identify and extract data relationships from existing code by combining analysis of data with analysis of code, and it is believed to provide better capabilities than other work in the field. The method indicates that acquiring a data design from existing data-intensive program code by program transformation with human assistance is an effective method in software maintenance. Future work, including extending the method to build an industrial-strength tool, is suggested at the end of the thesis.
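
    As a rough illustration of recovering entity-relationship information from data declarations, the sketch below treats fields following a hypothetical <entity>-ID naming convention as keys and reads a cross-record key reference as a relationship; the records and the matching rule are assumptions for illustration, not the thesis's transformation rules.

```python
# record name -> ordered field list, with the key field first
# (hypothetical layouts extracted from a data division)
records = {
    "CUSTOMER": ["CUST-ID", "CUST-NAME", "CUST-ADDR"],
    "ORDER":    ["ORDER-ID", "CUST-ID", "ORDER-DATE"],
    "ITEM":     ["ITEM-ID", "ORDER-ID", "QTY"],
}

def relationships(records):
    """A record mentioning another record's key field is read as an
    entity-relationship edge between the two entities."""
    keys = {fields[0]: rec for rec, fields in records.items()}
    rels = []
    for rec, fields in records.items():
        for field in fields[1:]:
            if field in keys and keys[field] != rec:
                rels.append((rec, "references", keys[field]))
    return rels

print(relationships(records))
# [('ORDER', 'references', 'CUSTOMER'), ('ITEM', 'references', 'ORDER')]
```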

    Deriving tolerant grammars from a base-line grammar

    A grammar-based approach to tool development in re- and reverse engineering promises precise structure awareness, but it is problematic in two respects. Firstly, obtaining a grammar for a relevant language, or a cocktail of languages, is a considerable up-front investment; existing work on grammar recovery addresses this concern to some extent. Secondly, it is often not feasible to insist on a precise grammar, e.g., when different dialects need to be covered; this calls for tolerant grammars. In this paper, we provide a well-engineered approach to the derivation of tolerant grammars, based on previous work on error recovery, fuzzy parsing, and island grammars. The technology of this paper has been used in a complex Cobol restructuring project on several million lines of code in different Cobol dialects. Our approach is founded on an approximation relation between a tolerant grammar and a base-line grammar which serves as a point of reference. Thereby, we avoid false positives and false negatives when parsing constructs of interest in a tolerant mode. Our approach accomplishes the effective derivation of a tolerant grammar from the syntactical structure that is relevant for a certain re- or reverse engineering tool. To this end, the productions for the constructs of interest are reused from the base-line grammar, together with further productions that are needed for completion.
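
    The tolerant style can be sketched as island parsing: only the construct of interest is recognised precisely, while everything else, including unknown dialect constructs, is skipped as water. Choosing COPY statements as the construct of interest and recognising them with a regular expression are illustrative simplifications, not the paper's derivation from a base-line grammar.

```python
import re

def copy_statements(source):
    """Recognise `COPY <name>.` islands while tolerating surrounding
    constructs for which no productions are available."""
    island = re.compile(r"\bCOPY\s+([A-Z0-9-]+)\s*\.", re.IGNORECASE)
    return island.findall(source)

code = """
    IDENTIFICATION DIVISION.
    SOME-VENDOR-EXTENSION FOO BAR.
    COPY CUSTFILE.
    PROCEDURE DIVISION.
    COPY BILLING-COPYBOOK.
"""
print(copy_statements(code))  # ['CUSTFILE', 'BILLING-COPYBOOK']
```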

    The Amsterdam Toolkit for Language Archaeology

    GRK, the Grammar Recovery Kit, illustrates options for automation and corresponding tool support in the context of developing quality language references that readily cater for the derivation of parsers. GRK provides the proof of concept for two notions: (i) semi-automatic grammar recovery; (ii) language-reference re-engineering. GRK's support for semi-automatic grammar recovery means that GRK can be used to obtain a relatively correct and complete, as well as implementable, grammar from a language reference. GRK's support for language-reference re-engineering means that GRK can be used to update the original language reference so that it reflects the completed and corrected grammar knowledge. As of today, GRK is particularly fit for Cobol archaeology, more specifically for IBM's VS Cobol II. That is, GRK offers a fully mechanised process, where IBM's reference is used as input, and the output is a transformed language reference whose grammar portions are correct and complete. (The recovery required several hundred simple transformation steps in order to deliver a grammar fit for parser derivation.) As a byproduct, GRK also generates a slow, Prolog-based parser. Via export to GRK's sibling, GDK (the Grammar Deployment Kit), a reasonably fast, btyacc-based parser can be generated as well. Both parsers accept all of the VS Cobol II code at our disposal (several million lines of code).
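
    The flavour of a scripted recovery step can be sketched as below; the production and the missing alternative are invented for illustration and are not taken from IBM's VS Cobol II reference.

```python
# grammar as nonterminal -> list of alternatives (lists of symbols)
grammar = {
    "add-statement": [["ADD", "identifier", "TO", "identifier"]],
}

def add_alternative(grammar, nonterminal, alternative, reason):
    """One recorded recovery step: extend a production with an
    alternative the language reference omitted. Logging each step is
    what later allows the reference itself to be re-engineered."""
    grammar[nonterminal].append(alternative)
    print(f"[recovery] {nonterminal}: added {alternative} ({reason})")

add_alternative(
    grammar, "add-statement",
    ["ADD", "identifier", "identifier", "TO", "identifier"],
    "reference omitted the multi-operand form",
)
print(grammar["add-statement"])
```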

    A method for re-modularising legacy code

    This thesis proposes a method for the re-modularisation of legacy COBOL. Legacy code often performs a number of functions that, if split, would improve software maintainability. For instance, program comprehension would benefit from a reduction in the size of the code modules. The method aims to identify potential reuse candidates among the re-modularised functions and to ensure clear interfaces are present between the new modules. Furthermore, functionality is often replicated across applications, so the re-modularisation process can also seek to reduce this replication and hence the overall amount of a company's code requiring maintenance. A ten-step method is devised which assembles a number of new and existing techniques into an approach suitable for use by staff without significant reengineering experience. Three main approaches are used throughout the method: the analysis of the PERFORM structure, the analysis of the data, and the use of graphical representations. Both top-down and bottom-up strategies for program comprehension are incorporated within the method, as are automatable and user-controlled processes for reuse-candidate selection. Three industrial case studies are used to demonstrate and evaluate the method. The case studies range in size to give an indication of the scalability of the method, and they are used to evaluate the method on a step-by-step basis; both strong points and deficiencies are identified, as well as potential solutions to the deficiencies. A review is also presented to assess the three main approaches of the method: the analysis of the PERFORM and data structures, and the use of graphical representations. The review uses the process of software evolution for its evaluation, drawing on successive versions of COBOL software: the method is retrospectively applied to the earliest version, and the known changes identified from the following versions are used to evaluate the re-modularisations. Within the evaluation chapters, a new link within the dominance tree is proposed, as is an approach for dealing with multiple dominance trees. The results show that each approach provides an important contribution to the method, as well as giving a useful insight (in the form of graphical representations) into the process of software evolution.
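
    To illustrate the dominance analysis that the method's PERFORM-structure step builds on, the sketch below derives a PERFORM graph and computes dominators iteratively; the paragraph names and edges are invented, and the thesis's actual dominance-tree construction may differ in detail. A paragraph whose only dominators are the entry point and one other paragraph is a natural candidate to move into that paragraph's module.

```python
# PERFORM graph: paragraph -> paragraphs it PERFORMs (hypothetical)
perform = {
    "MAIN":           ["READ-INPUT", "PROCESS", "WRITE-REPORT"],
    "PROCESS":        ["VALIDATE", "COMPUTE-TOTALS"],
    "COMPUTE-TOTALS": ["VALIDATE"],
    "VALIDATE":       [],
    "READ-INPUT":     [],
    "WRITE-REPORT":   [],
}

def dominators(graph, entry):
    """Iterative data-flow computation of dominator sets: paragraph d
    dominates n if every PERFORM chain from the entry to n passes d."""
    nodes = set(graph)
    preds = {n: {p for p in graph if n in graph[p]} for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

for para, doms in sorted(dominators(perform, "MAIN").items()):
    print(para, "is dominated by", sorted(doms - {para}))
# e.g. VALIDATE is dominated by ['MAIN', 'PROCESS']
```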