
    Legacy Software Restructuring: Analyzing a Concrete Case

    Software re-modularization is a long-standing preoccupation of reverse engineering research. The advantages of a well-structured or well-modularized system are well known. Yet after so much time and effort, the field seems unable to come up with solutions that make a clear difference in practice. Recently, some researchers have started to question whether some basic assumptions of the field were overrated. The main one is the high-cohesion/low-coupling dogma, which is evaluated with metrics of unknown relevance. In this paper, we study a real restructuring case (on the Eclipse platform) to better understand whether (some) existing metrics would have helped the software engineers in the task. Results show that the cohesion and coupling metrics used in the experiment did not behave as expected and would probably not have helped the maintainers reach their goal. We also measured another possible restructuring objective: decreasing the number of cyclic dependencies between modules. Again, the results did not meet expectations.
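    As an illustration of the metrics in question, the following minimal sketch (Python, over a hypothetical dependency graph and modularization; not the paper's tooling) computes the classic counts behind the high-cohesion/low-coupling dogma: dependencies that stay inside a module versus dependencies that cross module boundaries.

    ```python
    # Illustrative sketch: classic cohesion/coupling counts over a
    # hypothetical dependency graph and a candidate modularization.

    # deps[a] = set of entities that a depends on (invented data)
    deps = {
        "Parser": {"Lexer", "Ast"},
        "Lexer": {"Token"},
        "Ast": {"Token"},
        "Ui": {"Parser"},
        "Token": set(),
    }

    # A candidate modularization: module name -> set of entities
    modules = {
        "frontend": {"Parser", "Lexer", "Token", "Ast"},
        "gui": {"Ui"},
    }

    def module_of(entity):
        return next(m for m, ents in modules.items() if entity in ents)

    intra = inter = 0
    for src, targets in deps.items():
        for dst in targets:
            if module_of(src) == module_of(dst):
                intra += 1
            else:
                inter += 1

    # "High cohesion, low coupling": maximize intra-module dependencies,
    # minimize inter-module ones.
    print(f"cohesion (intra-module deps): {intra}")
    print(f"coupling (inter-module deps): {inter}")
    ```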

    Optimizing decomposition of software architecture for local recovery

    The increasing size and complexity of software systems have led to an increased number of potential failures, making it harder to ensure software reliability. Since it is usually hard to prevent all failures, fault tolerance techniques have become more important. An essential element of fault tolerance is recovery from failures. Local recovery is an effective approach whereby only the erroneous parts of the system are recovered while the other parts remain available. To achieve local recovery, the architecture needs to be decomposed into separate units that can be recovered in isolation. Usually, there are many alternative ways to decompose the system into recoverable units, and each of these decomposition alternatives performs differently with respect to availability and performance metrics. We propose a systematic approach dedicated to optimizing the decomposition of software architecture for local recovery. The approach provides systematic guidelines to depict the design space of possible decomposition alternatives, to reduce the design space with respect to domain and stakeholder constraints, and to balance the feasible alternatives with respect to availability and performance. The approach is supported by an integrated set of tools and illustrated on the open-source MPlayer software.
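    The following minimal sketch (Python; the component names, the constraint, and the scoring are invented for illustration, not taken from the paper) shows the shape of the approach: enumerate the design space of decompositions into recoverable units, prune it with a domain constraint, and rank the feasible alternatives by a toy availability-versus-overhead trade-off.

    ```python
    # Minimal sketch of the idea (assumed data and scoring, not the paper's
    # tool): enumerate the decomposition design space, prune it with a
    # constraint, and rank what remains.

    def partitions(items):
        """Yield every partition of `items` into non-empty recoverable units."""
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for smaller in partitions(rest):
            for i, unit in enumerate(smaller):          # join an existing unit
                yield smaller[:i] + [[first] + unit] + smaller[i + 1:]
            yield [[first]] + smaller                   # or start a new unit

    components = ["Gui", "Demuxer", "Decoder", "Output"]

    # Hypothetical domain constraint: Decoder and Output share state and
    # must be recovered together.
    feasible = [p for p in partitions(components)
                if any({"Decoder", "Output"} <= set(u) for u in p)]

    def score(p):
        # Toy model: more units -> finer-grained recovery (higher
        # availability), but every extra unit adds isolation overhead.
        availability_gain = len(p)
        overhead = 0.5 * (len(p) - 1)
        return availability_gain - overhead

    for p in sorted(feasible, key=score, reverse=True):
        print(round(score(p), 2), p)
    ```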

    Supporting Software Evolution in Organizations (Aide à l'Évolution Logicielle dans les Organisations)

    Software systems are now so intrinsically part of our lives that we no longer see them. They run our phones, our cars, our leisure, our banks, our shops, our cities … This places a significant burden on the software industry: all these systems need to be updated, corrected, and enhanced as users and consumers express new needs. As a result, most software engineering activity may be classified as software maintenance, "the totality of activities required to provide cost-effective support to a software system".

    In an ecosystem where the processing power of computers, and many other relevant metrics such as disk capacity or network bandwidth, doubles every 18 months ("Moore's Law"), technologies evolve at a fast pace. In this ecosystem, software maintenance suffers from the drawback of having to address the past (past languages, existing systems, old technologies). It is often ill-perceived and treated as a punishment. Because of this, solutions and tools for software maintenance have long lagged far behind those for new software development. For example, the antique approach of manually inserting traces into the source code to understand the execution path is still very much in use.

    All my research activity has focused on helping people do software maintenance in better conditions or more efficiently. A holistic approach to the problem must consider the software that has to be maintained, the people doing it, and the organization in which and for which it is done. I therefore studied different facets of the problem, presented in three parts in this document. Software: the source code is the centerpiece of the maintenance activity. Whatever the task (e.g., enhancement or bug correction), it typically comes down to understanding the current source code and finding out what to change and/or add to make it behave as expected. I studied how to monitor the evolution of the source code, how to prevent its decay, and how to remedy bad situations. People: one of the fundamental assets of people dealing with maintenance is the knowledge they have of computer science (programming techniques), of the application domain, and of the software itself. It is telling that 40% to 60% of software maintenance time is spent reading the code to understand what it does, how it does it, and how it can be changed. Organization: organizations may have a strong impact on the way activities such as software maintenance are performed by their individual members. The support offered within the organization, the constraints it imposes, and the cultural environment all affect how easy or difficult the tasks can be, and therefore how well or badly they can be done. I studied some of the software maintenance processes that organizations use.

    In this document, the various research topics I addressed are organized in a logical way that does not always respect the chronological order of events. I wished to highlight not only the results of the research, through the publications that attest to them, but also the collaborations that made them possible, with students or fellow researchers. For each result presented here, I tried to keep the discussion of the prior state of the art and of the result itself as brief as possible: first because more details can easily be found in the referenced publications, but also because some of this research is quite old and has sometimes passed into the realm of "common sense".

    Search-Based Information Systems Migration: Case Studies on Refactoring Model Transformations

    Information systems are built to last for decades; the reality, however, suggests otherwise. Companies are often pushed to modernize their systems to reduce costs, meet new policies, improve security, or remain competitive. Model-driven engineering (MDE) approaches have been used in several successful projects to migrate systems. MDE raises the level of abstraction for complex systems by relying on models as first-class entities. These models are maintained and transformed using model transformations (MT), which are expressed as transformation rules that map models from source to target metamodels. The migration process may take years for large information systems, so many changes are introduced to the transformations to reflect new business requirements, fix bugs, or meet updated metamodels. The quality of MT should therefore be continually checked and improved during evolution to avoid future technical debt. Most MT programs are written as one large module due to the lack of refactoring/modularization and regression-testing tool support. In object-oriented systems, composition and modularization are used to tackle maintainability and testability, and refactoring is used to improve the non-functional attributes of the software, making it easier and faster for developers to work with the code. We therefore proposed an intelligent computational search approach to automatically modularize MT. Furthermore, we took inspiration from a well-defined quality assessment model for object-oriented design to propose a quality assessment model specifically for MT. The results showed a 45% improvement in developers' speed at detecting or fixing bugs, and developers made 40% fewer errors when performing a task with the optimized version. Since refactoring operations change the transformation, it is important to apply regression testing to check their correctness and robustness. We thus proposed a multi-objective test case selection technique to find the best trade-off between coverage and computational cost. Results showed a drastic speed-up of the testing process while maintaining good testing performance. A survey with practitioners highlighted the need for such a maintenance and evolution framework to improve the quality and efficiency of the existing migration process.

    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/149153/1/Bader Alkhazi Final Dissertation.pdf
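    To make the test-selection objective concrete, here is a hedged sketch (Python, with invented test data; the dissertation's technique is search-based, whereas the exhaustive enumeration shown here only works for tiny suites): keep the selections that are Pareto-optimal with respect to rule coverage and execution cost.

    ```python
    # Illustrative sketch: Pareto-optimal test selections trading rule
    # coverage against execution cost. All names and numbers are invented.

    from itertools import combinations

    # Hypothetical tests: (covered transformation rules, runtime cost)
    tests = {
        "t1": ({"r1", "r2"}, 3.0),
        "t2": ({"r2", "r3"}, 1.0),
        "t3": ({"r4"}, 2.0),
        "t4": ({"r1", "r3", "r4"}, 5.0),
    }

    def evaluate(subset):
        covered = set().union(*(tests[t][0] for t in subset)) if subset else set()
        cost = sum(tests[t][1] for t in subset)
        return len(covered), cost

    candidates = [frozenset(c) for r in range(len(tests) + 1)
                  for c in combinations(tests, r)]

    def dominates(a, b):
        """True if selection a is at least as good as b on both objectives
        and strictly better on one."""
        (cov_a, cost_a), (cov_b, cost_b) = evaluate(a), evaluate(b)
        return (cov_a >= cov_b and cost_a <= cost_b
                and (cov_a > cov_b or cost_a < cost_b))

    pareto = [s for s in candidates
              if not any(dominates(o, s) for o in candidates)]

    for s in sorted(pareto, key=lambda s: evaluate(s)[1]):
        cov, cost = evaluate(s)
        print(f"{sorted(s)}: coverage={cov} rules, cost={cost}")
    ```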

    Coherent Dependence Cluster

    This thesis introduces coherent dependence clusters and shows their relevance in areas of software engineering such as program comprehension and maintenance. All statements in a coherent dependence cluster depend upon the same set of statements and affect the same set of statements; a coherent cluster's statements have 'coherent' shared backward and forward dependence. We introduce an approximation to efficiently locate coherent clusters and show that its precision significantly improves over previous approximations. Our empirical study also finds that, despite their tight coherence constraints, coherent dependence clusters are found in abundance in production code. Studying patterns of clustering in several open-source and industrial programs reveals that most contain multiple significant coherent clusters. A series of case studies reveals that large clusters map to logical functionality and program structure. Cluster visualisation also reveals subtle deficiencies of program structure and identifies potential candidates for refactoring efforts. Supplementary studies of inter-cluster dependence are presented, in which identification of coherent clusters helps derive hierarchical system decompositions for reverse engineering purposes. Furthermore, studies of program faults find no link between the existence of coherent clusters and software bugs; rather, a longitudinal study of several systems finds that coherent clusters represent the core architecture of programs during system evolution. Due to the inherent conservativeness of static analysis, it is possible for unreachable code and code implementing cross-cutting concerns such as error handling and debugging to link clusters together. This thesis studies their effect on dependence clusters by using coverage information to remove unexecuted and rarely executed code. Empirical evaluation reveals that this code reduction yields smaller slices and clusters.
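    The defining property lends itself to a direct, if naive, computation. The sketch below (Python, over a toy dependence graph; not the thesis's slicer, which works on full program slices) groups statements by their (backward closure, forward closure) signature, so statements sharing both closures form a coherent cluster.

    ```python
    # Illustrative sketch: coherent clusters as groups of statements with
    # identical backward and forward dependence closures. Toy data.

    from collections import defaultdict

    # depends_on[s] = statements s directly depends on (invented program)
    depends_on = {
        "s1": set(), "s2": set(),
        "s3": {"s1", "s2"}, "s4": {"s1", "s2"},
        "s5": {"s3", "s4"},
    }

    def closure(start, edges):
        """Transitive closure of `start` over `edges` (excluding `start`)."""
        seen, work = set(), [start]
        while work:
            for nxt in edges.get(work.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    work.append(nxt)
        return frozenset(seen)

    # Invert the edges to walk forward dependence.
    depended_by = defaultdict(set)
    for s, targets in depends_on.items():
        for t in targets:
            depended_by[t].add(s)

    clusters = defaultdict(list)
    for s in depends_on:
        signature = (closure(s, depends_on), closure(s, depended_by))
        clusters[signature].append(s)

    for stmts in clusters.values():
        if len(stmts) > 1:
            print("coherent cluster:", sorted(stmts))
    ```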

    Reengineering of Legacy Systems to Distributed Environments.

    The object-oriented paradigm and client/server and distributed technologies have become widely used in the last decade, and there is increasing interest in migrating and reengineering legacy systems to these new hardware technologies and software development paradigms. Software engineers who wish to reengineer such legacy systems face challenges such as lack of documentation and programs that are difficult to comprehend. Middleware technologies such as CORBA and DCOM make the development of new distributed systems, as well as the migration of legacy systems to distributed platforms, more feasible. Distribution of a system consists of two parts: (1) subsystem decomposition and (2) allocation of the subsystems to different sites. In this research, we define a reengineering environment that assists with the migration of legacy systems to distributed environments. We define a reengineering methodology that uses reverse engineering, software metrics, clustering, and data mining to migrate legacy systems to object-based distributed environments. The reengineering environment includes the methodology and an integrated set of tools that support its implementation. The methodology consists of multiple phases. First, we use reverse engineering techniques for program comprehension and design recovery. We then decompose the system into a hierarchy of subsystems by defining relationships between the entities of the underlying paradigm of the legacy system; the decomposition is driven by data mining, software metrics, and clustering techniques. Next, if the underlying paradigm of the legacy system is not object-based, we perform object-based adaptations on the subsystems. We then create components by wrapping objects and defining interfaces. Finally, we allocate components to different sites by specifying the requirements of the system and the characteristics of the network as an integer-programming model that minimizes remote communication. We use middleware technologies for the implementation of the distributed object system.
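    As a sketch of the final allocation step (Python, with invented components, traffic figures, and a capacity constraint; exhaustive search stands in here for the paper's integer-programming model, which scales far better), the goal is to pick the assignment of components to sites that minimizes communication crossing site boundaries.

    ```python
    # Illustrative sketch: allocate components to sites to minimize remote
    # communication, subject to a site-capacity constraint. Toy data;
    # brute force in place of an ILP solver.

    from itertools import product

    components = ["Billing", "Accounts", "Reports", "Ui"]
    sites = ["siteA", "siteB"]

    # traffic[(a, b)] = messages per second between components (invented)
    traffic = {
        ("Ui", "Reports"): 40,
        ("Reports", "Accounts"): 25,
        ("Billing", "Accounts"): 30,
        ("Ui", "Billing"): 5,
    }

    # Hypothetical capacity constraint: at most 3 components per site.
    def feasible(assign):
        return all(sum(1 for s in assign.values() if s == site) <= 3
                   for site in sites)

    def remote_cost(assign):
        # Only traffic between components on different sites is "remote".
        return sum(rate for (a, b), rate in traffic.items()
                   if assign[a] != assign[b])

    best = min((dict(zip(components, choice))
                for choice in product(sites, repeat=len(components))),
               key=lambda a: remote_cost(a) if feasible(a) else float("inf"))

    print(best, "remote traffic:", remote_cost(best))
    ```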