199 research outputs found

    An automated wrapper-based approach to the design of dependable software

    The design of dependable software systems invariably comprises two main activities: (i) the design of dependability mechanisms, and (ii) the location of dependability mechanisms. It has been shown that these activities are intrinsically difficult. In this paper we propose an automated wrapper-based methodology to circumvent the problems associated with the design and location of dependability mechanisms. To achieve this, we replicate important variables so that they can be used as part of standard, efficient dependability mechanisms. These well-understood mechanisms are then deployed in all relevant locations. To validate the proposed methodology we apply it to three complex software systems, evaluating the dependability enhancement and execution overhead in each case. The results demonstrate that the system failure rate of a wrapped software system can be several orders of magnitude lower than that of an unwrapped equivalent.
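
    A minimal sketch of the kind of mechanism this abstract describes, in Python. This is not the authors' implementation; the class name ReplicatedVariable, the triple-redundancy choice and the majority-vote recovery are assumptions made purely for illustration.

        # Sketch: a wrapper replicates an important variable and checks the
        # replicas before every read (illustrative only).
        class ReplicatedVariable:
            def __init__(self, value, copies=3):
                self._replicas = [value] * copies    # keep several copies of the value

            def write(self, value):
                self._replicas = [value] * len(self._replicas)

            def read(self):
                # Disagreement between replicas signals corruption; recover by
                # majority vote, a standard, well-understood dependability mechanism.
                if len(set(self._replicas)) > 1:
                    value = max(set(self._replicas), key=self._replicas.count)
                    self.write(value)                # repair the corrupted replica(s)
                    return value
                return self._replicas[0]

        counter = ReplicatedVariable(0)
        counter.write(42)
        assert counter.read() == 42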

    Software Product Line Engineering via Software Transplantation

    For companies producing related products, a Software Product Line (SPL) is a software reuse method that improves time-to-market and software quality and achieves substantial cost reductions. These benefits do not come for free. It often takes years to re-architect and re-engineer a codebase to support SPL and, once adopted, it must be maintained. Current SPL practice relies on a collection of tools, tailored for different re-engineering phases, whose output developers must coordinate and integrate. We present Foundry, a general automated approach that leverages software transplantation to speed conversion to, and maintenance of, SPL. Foundry facilitates feature extraction and migration. It can efficiently and repeatedly transplant a sequence of features implemented across multiple files. We used Foundry to create two valid product lines that integrate features from three real-world systems in an automated way. Moreover, we conducted an experiment comparing Foundry's feature migration with manual effort. We show that Foundry migrated features across codebases, on average, 4.8 times faster than a group of SPL experts took to accomplish the task manually.
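
    To make the notion of a product line concrete, the sketch below assembles product variants from a shared set of features in Python. The feature names and the build_product helper are hypothetical; they only illustrate the kind of variation points an SPL (and a tool such as Foundry) manages, not how Foundry itself works.

        # Hypothetical sketch: a product line as a shared base plus optional features.
        FEATURES = {
            "export_pdf": lambda product: product["capabilities"].append("export to PDF"),
            "cloud_sync": lambda product: product["capabilities"].append("cloud sync"),
        }

        def build_product(name, selected_features):
            """Assemble one product variant from the shared feature set."""
            product = {"name": name, "capabilities": ["core editing"]}
            for feature in selected_features:
                FEATURES[feature](product)           # apply each selected feature
            return product

        basic = build_product("Editor Basic", [])
        pro = build_product("Editor Pro", ["export_pdf", "cloud_sync"])
        print(basic, pro, sep="\n")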

    A Model for Transforming Legacy Systems in a Client/Server Environment Based on the Unified Modeling Language (UML)

    In this dissertation the researcher developed a methodology for the migration of computer programs from a legacy architecture to a client/server architecture. System migrations have failed frequently, and even so-called successful migrations may have serious usability problems. Additional difficulties include missing documentation for the existing program(s), the unavailability of the people who developed the existing system for consultation, and, frequently, important operational and economic issues that must be considered. The client/server environment is quite different from the source environment: the operating system and implementation languages have changed, and system requirements may have been greatly expanded, frequently including the Internet. User interface equipment and techniques are more comprehensive, system response times may be more demanding, significant software system components may be purchased instead of developed in-house, and other elements of the operating environment may be either entirely new or greatly revised. The methodology for developing systems has also evolved significantly. In order to take advantage of client/server equipment, new concepts will need to be embodied in the migrated program, such as the use of middleware, object technology that permits the development of higher-quality software, and the separation of functionality into server-side and client-side procedures. This dissertation identifies the factors that most critically affect the success or failure of a migration; understanding these factors makes it possible to lessen or eliminate the potential for failure. In addition, this dissertation provides a model for the conversion of legacy systems to more reliable and scalable client/server systems. For this dissertation, the researcher gathered published material relating to the migration of computer systems from one hardware/software platform to another. Some of the material discussed the conversion process itself; other material described successes, failures, and general techniques and approaches to migration; still other material discussed nontechnical aspects, including the creation of migration teams and user training. From this material, the most pertinent factors were identified, and from them a plan for success was developed. That plan for success is this dissertation.
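
    The separation of functionality into server-side and client-side procedures mentioned above can be pictured with the minimal Python sketch below; the get_balance service, the sample data and the thin client proxy are invented for illustration and stand in for whatever middleware a real migrated system would use.

        # Illustrative split of a legacy routine into a server-side procedure
        # and a thin client-side proxy (names and data are invented).
        ACCOUNTS = {"1001": 250.0, "1002": 75.5}     # server-side data store

        def get_balance(account_id):
            """Server-side procedure: owns the data and the business rule."""
            return ACCOUNTS.get(account_id, 0.0)

        class ClientProxy:
            """Client-side stub: in a real migration this call would travel
            through middleware (e.g. an RPC or HTTP layer) to the server."""
            def get_balance(self, account_id):
                return get_balance(account_id)       # local call stands in for the remote one

        ui = ClientProxy()
        print(ui.get_balance("1001"))                # 250.0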

    Business rules based legacy system evolution towards service-oriented architecture.

    Enterprises can be empowered to live up to the potential of becoming dynamic, agile and real-time. Service orientation is emerging from the amalgamation of a number of key business, technology and cultural developments. Three essential trends in particular are coming together to create a new, revolutionary breed of enterprise, the service-oriented enterprise (SOE): (1) the continuous performance management of the enterprise; (2) the emergence of business process management; and (3) advances in standards-based service-oriented infrastructures. This thesis focuses on this emerging three-layered architecture, which builds on a service-oriented architecture framework, adds a process layer that brings technology and business together, and tops it with a corporate performance layer that continually monitors and improves the performance indicators of global enterprises. This architecture provides a novel framework for the business context in which to apply the important technical idea of service orientation, and it moves service orientation from being an interesting tool for engineers to a vehicle for business managers to fundamentally improve their businesses.
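
    A minimal sketch of how the three layers described above could stack, in Python; the class names, the single order service and the single indicator tracked are assumptions chosen only to show the layering, not the thesis's actual framework.

        # Illustrative three-layer structure: services, processes, performance.
        class OrderService:                           # service layer (SOA infrastructure)
            def place_order(self, item):
                return {"item": item, "status": "placed"}

        class OrderProcess:                           # process layer (business process management)
            def __init__(self, service):
                self.service = service
                self.completed = 0

            def run(self, item):
                order = self.service.place_order(item)
                self.completed += 1
                return order

        class PerformanceMonitor:                     # corporate performance layer
            def report(self, process):
                return f"orders completed: {process.completed}"

        process = OrderProcess(OrderService())
        process.run("book")
        print(PerformanceMonitor().report(process))   # orders completed: 1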

    Applying model-driven engineering in small software enterprises

    Model-Driven Engineering (MDE) is increasingly gaining acceptance in the software engineering community; however, its adoption by industry is far from successful, and the number of companies applying MDE is still very limited. Although several case studies and reports have been published on MDE adoption in large companies, experience reports on small enterprises are still rare, despite the fact that they represent a large part of the software company ecosystem. In this paper we report on our practical experience in two technology-transfer projects with two small companies. In order to determine the degree of success of these projects, we present some factors that have to be taken into account in technology-transfer projects. We then assess both projects by analyzing these factors and applying some metrics that give hints about the potential productivity gains MDE could bring, and we comment on some lessons learned. These experiences suggest that MDE has the potential to make small companies more competitive, because it enables them to build powerful automation tools at modest cost. We also present the approach followed to train these companies in MDE, and we contribute the teaching material so that it can be used or adapted by other projects of this nature.
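
    As a flavour of the kind of low-cost automation the report attributes to MDE, the sketch below generates code from a tiny model in Python; the model format, the generate_class helper and the generated Customer class are invented and are far simpler than what the companies in the study built.

        # Hypothetical model-to-code generation: a small "model" of an entity is
        # turned into a Python class definition automatically.
        model = {"entity": "Customer", "fields": ["name", "email"]}

        def generate_class(model):
            lines = [f"class {model['entity']}:"]
            args = ", ".join(model["fields"])
            lines.append(f"    def __init__(self, {args}):")
            for field in model["fields"]:
                lines.append(f"        self.{field} = {field}")
            return "\n".join(lines)

        source = generate_class(model)
        print(source)                                 # the generated class definition
        exec(source)                                  # bring the generated class into scope
        print(Customer("Ada", "ada@example.org").name)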

    Extraction of objects from legacy systems: an example using COBOL legacy systems

    In the last few years, interest in legacy information systems has increased because of the escalating resources spent on their maintenance. At the same time, the importance of extracting knowledge from business rules is becoming a crucial issue for modern business: sometimes, because of inadequate documentation, this knowledge is essentially stored only in the code. A way to improve the use and maintainability of such systems in the present environment is to migrate them to a new hardware/software platform, reusing as much of the embedded experience as possible during the process. This migration process promotes the population of a repository of reusable software components for reuse in the development of a new system in the same application domain or in later maintenance processes. The current trend in the migration of a legacy information system is to exploit the potential of object-oriented technology as a natural extension of earlier structured programming techniques. This is done by decomposing the program into several agent-like modules communicating via message passing, and by providing the system with some key object-oriented features. The key step is "object isolation", i.e. the isolation of groups of routines and related data items as candidates to implement an abstraction in the application domain. The main idea of the object isolation method presented here is to extract information from the data flow in order to cluster the procedures on the basis of their data accesses. The method examines how a procedure accesses the data in order to distinguish several types of access and to permit a better understanding of the functionality of the candidate objects. These candidate modules support the population of a repository of reusable software components that can serve as a basis for the evolution process leading to a new object-oriented system that reuses the extracted objects.
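
    The clustering step described above can be pictured with the small Python sketch below; the procedure names, the data items and the simple shared-data grouping rule are assumptions for illustration and are much cruder than the access-type analysis the abstract describes.

        # Illustrative object isolation: group procedures that access the same
        # data items into candidate objects (names and accesses are invented).
        accesses = {
            "READ-CUSTOMER":   {"CUSTOMER-REC"},
            "UPDATE-CUSTOMER": {"CUSTOMER-REC"},
            "PRINT-INVOICE":   {"INVOICE-REC"},
            "POST-INVOICE":    {"INVOICE-REC", "LEDGER-REC"},
        }

        def isolate_candidates(accesses):
            """Cluster procedures whose accessed data items overlap."""
            clusters = []
            for proc, data in accesses.items():
                for cluster in clusters:
                    if cluster["data"] & data:        # shared data item => same candidate object
                        cluster["procs"].append(proc)
                        cluster["data"] |= data
                        break
                else:
                    clusters.append({"procs": [proc], "data": set(data)})
            return clusters

        for candidate in isolate_candidates(accesses):
            print(sorted(candidate["procs"]), "->", sorted(candidate["data"]))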

    Augmenting applications with hypermedia functionality and meta-information

    The Dynamic Hypermedia Engine (DHE) enhances analytical applications by adding relationships, semantics and other metadata to the application's output and user interface. DHE also provides additional hypermedia navigation, structuring and annotation functionality. These features allow application developers and users to add guided tours, personal links and sharable annotations, among other features, into applications. DHE runs as middleware between the application user interface and its business logic and processes, in an n-tier architecture, supporting the extra functionality without altering the original systems, by means of application wrappers. DHE automatically generates links at run-time for each element that has relationships and metadata. Such elements are identified beforehand using a Relation Navigation Analysis. On top of these links, DHE also constructs more sophisticated navigation techniques not often found on the Web. The metadata, links, navigation and annotation features supplement the application's primary functionality. This research identifies element types, or classes, in the application displays. A mapping rule encodes each relationship found between two elements of interest at the class level. When the user selects a particular element, DHE instantiates the commands included in the rules with the actual instance selected and sends them to the appropriate destination system, which then dynamically generates the resulting virtual (i.e. not previously stored) page. DHE executes concurrently with these applications, providing automated link generation and other hypermedia functionality. DHE uses the eXtensible Markup Language (XML) and related World Wide Web Consortium (W3C) XML recommendations, such as XLink, XML Schema, and RDF, to encode the semantic information required for the operation of the extra hypermedia features and for the transmission of messages between the engine modules and applications. DHE is the only approach we know of that provides automated linking and metadata services in a generic manner, based on the application semantics, without altering the applications. DHE also works with non-Web systems. The results of this work could be extended to other research areas, such as link ranking and filtering, automatic link generation as the result of a search query, metadata collection and support, virtual document management, hypermedia functionality on the Web, adaptive and collaborative hypermedia, web engineering, and the Semantic Web.
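
    The class-level mapping-rule idea can be sketched as follows in Python; the rule format, the element class "author" and the URL templates are invented placeholders, not DHE's actual XML/XLink encoding.

        # Hypothetical sketch of class-level mapping rules and run-time link generation.
        mapping_rules = [
            {"source_class": "author", "relationship": "wrote",
             "target": "https://catalog.example.org/works?author={value}"},
            {"source_class": "author", "relationship": "biography",
             "target": "https://people.example.org/{value}"},
        ]

        def generate_links(element_class, value):
            """Instantiate every rule for the selected element with its actual value."""
            return [
                {"relationship": rule["relationship"],
                 "href": rule["target"].format(value=value)}
                for rule in mapping_rules
                if rule["source_class"] == element_class
            ]

        # When the user selects the element "Knuth" (class "author"), links are
        # generated dynamically rather than stored in advance.
        for link in generate_links("author", "Knuth"):
            print(link["relationship"], "->", link["href"])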