
    Log Event Filtering Using Clustering Techniques

    Large software systems are composed of various run-time components, partner applications, and processes. When such systems operate, they are monitored so that audits can be performed once a failure occurs or when maintenance operations take place. However, log files are usually sizeable and require filtering and reduction to be processed efficiently. Furthermore, there is no apparent correspondence between logged events and the particular use cases the system may be performing. In this thesis, we develop a framework based on heuristic clustering algorithms to achieve log filtering, log reduction, and log interpretation. More specifically, we define the concept of the Event Dependency Graph, and we present event filtering and use case identification techniques based on event clustering. The clustering process groups together all events that relate to a collection of initial significant events associated with a use case. We refer to these significant events as beacon events. Beacon events can be identified automatically or semi-automatically by examining log event types or event names against the event types or names in the corresponding specification of the use case being considered (e.g. events in sequence diagrams). Furthermore, the user can select other or additional initial clustering conditions based on his or her domain knowledge of the system. The clustering technique can be used in two ways. The first is to reduce or slice large logs with respect to a particular use case, so that operators can better focus their attention on the specific events that relate to specific operations. The second is to determine active use cases: operators select particular seed events of interest and then examine the resulting reduced logs against events or event types stemming from different alternative known use cases, in order to identify the best match and consequently gain insight into which of these alternative use cases may be running at any given time. The approach has shown very promising results in identifying the executing use case among various alternatives in various runs of the Session Initiation Protocol.
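
    The clustering step lends itself to a compact illustration. Below is a minimal sketch, not the thesis's actual implementation: beacon-seeded clustering as a reachability traversal over an event dependency graph. The graph structure, event names, and SIP-flavoured example are assumptions for illustration.

```java
import java.util.*;

/** Sketch: cluster log events reachable from beacon events in a dependency graph. */
public class BeaconClustering {
    // Adjacency: event -> events it triggers or depends on (assumed representation).
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    /** Collect every event transitively reachable from the beacon events (BFS). */
    public Set<String> cluster(Set<String> beacons) {
        Set<String> cluster = new HashSet<>(beacons);
        Deque<String> queue = new ArrayDeque<>(beacons);
        while (!queue.isEmpty()) {
            String event = queue.poll();
            for (String next : edges.getOrDefault(event, Set.of())) {
                if (cluster.add(next)) queue.add(next);
            }
        }
        return cluster; // the reduced log slice for the use case seeded by the beacons
    }

    public static void main(String[] args) {
        BeaconClustering g = new BeaconClustering();
        g.addDependency("INVITE", "TRYING");   // hypothetical SIP-style events
        g.addDependency("TRYING", "RINGING");
        g.addDependency("REGISTER", "OK-200");
        // Contains INVITE, TRYING, RINGING (set order unspecified); REGISTER is filtered out.
        System.out.println(g.cluster(Set.of("INVITE")));
    }
}
```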

    Identification of behavioral and creational design patterns through dynamic analysis

    Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.

    Regression test selection for distributed Java RMI programs by means of formal concept analysis

    Software maintenance is the process of modifying an existing system to ensure that it meets current and future requirements. As a result, performing regression testing becomes an essential but time-consuming aspect of any maintenance activity. Regression testing is initiated after a programmer has made changes to a program that may have inadvertently introduced errors. It is a quality control approach that ensures the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity. In the literature, various test selection techniques have been proposed to reduce the effort associated with re-executing the required test cases. However, the majority of these approaches focus only on sequential programs and provide no, or only very limited, support for distributed programs or database-driven applications. This thesis presents a lightweight methodology that applies Formal Concept Analysis to support regression test selection, in combination with execution trace collection and external data sharing analysis, for distributed Java RMI programs. Two Eclipse plug-ins were developed to automate the regression test selection process and to evaluate our methodology.
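
    The underlying selection criterion is easy to sketch. Formal Concept Analysis organizes the test-to-coverage relation into a concept lattice; the minimal sketch below shows only the core idea that the lattice refines, namely retaining any test whose execution trace touches a modified method. All test and method names are hypothetical.

```java
import java.util.*;
import java.util.stream.Collectors;

/** Sketch: select regression tests whose recorded traces touch modified methods. */
public class TestSelection {
    public static Set<String> select(Map<String, Set<String>> coverage, // test -> covered methods
                                     Set<String> changedMethods) {
        return coverage.entrySet().stream()
                .filter(e -> e.getValue().stream().anyMatch(changedMethods::contains))
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
                "testLookup", Set.of("Registry.lookup", "Naming.resolve"),
                "testBind",   Set.of("Registry.bind"),
                "testUnbind", Set.of("Registry.unbind", "Registry.lookup"));
        Set<String> changed = Set.of("Registry.lookup");
        System.out.println(select(coverage, changed)); // testLookup and testUnbind
    }
}
```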

    Object-Centric Reflection: Unifying Reflection and Bringing It Back to Objects

    Reflective applications are able to query and manipulate the structure and behavior of a running system. This is essential for highly dynamic software that needs to interact with objects whose structure and behavior are not known when the application is written. Software analysis tools, like debuggers, are a typical example. Oddly, although reflection essentially concerns run-time entities, reflective applications tend to focus on static abstractions, like classes and methods, rather than objects. This is a phenomenon we call the object paradox, which makes developers less effective by drawing their attention away from run-time objects. To counteract this phenomenon, we propose a purely object-centric approach to reflection. Traditional reflective mechanisms provide object-specific capabilities as just another feature; object-centric reflection turns this around and makes object-specific capabilities the central reflection mechanism. This change in the reflection architecture allows a unification of various reflection mechanisms and a solution to the object paradox. We introduce Bifröst, an object-centric reflective system based on first-class meta-objects. Through a series of practical examples we demonstrate how object-centric reflection mitigates the object paradox by avoiding the need to reflect on static abstractions. We survey existing approaches to reflection to establish key requirements in the domain, and we show that an object-centric approach simplifies the meta-level and allows a unification of the reflection field. We demonstrate how development itself is enhanced by this new approach: talents are dynamically composable units of reuse, and object-centric debugging prevents the object paradox when debugging. We also demonstrate how software analysis benefits from object-centric reflection with Chameleon, a framework for building object-centric analysis tools, and MetaSpy, a domain-specific profiler.
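
    Bifröst is built on first-class meta-objects in a Smalltalk setting, but the core idea, attaching reflective behavior to one object rather than to its class, can be approximated in Java with a dynamic proxy. The sketch below is an analogy under that assumption, not Bifröst's API; the withTracing helper is hypothetical.

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

/** Analogy: give ONE object a reflective capability, leaving its class untouched. */
public class ObjectCentric {
    /** Wrap a single instance so calls through the interface are intercepted (hypothetical helper). */
    static Object withTracing(Object target, Class<?> iface) {
        return Proxy.newProxyInstance(ObjectCentric.class.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> {
                    System.out.println("-> " + method.getName()); // object-specific meta-behavior
                    return method.invoke(target, args);
                });
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        List<String> plain = new ArrayList<>();
        List<String> traced = (List<String>) withTracing(plain, List.class);
        traced.add("hello");  // prints "-> add"; other List instances are unaffected
        plain.add("silent");  // no interception: the capability belongs to one object
    }
}
```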

    Trace Abstraction Framework and Techniques

    Understanding the behavioural aspects of software systems can help in a variety of software engineering tasks such as debugging, feature enhancement, performance analysis, and security. Software behaviour is typically represented in the form of execution traces. Traces, however, have historically been difficult to analyze due to their overwhelming size. Trace analysis techniques, in particular trace abstraction and simplification techniques, have emerged to overcome the challenges of working with large traces. Existing trace analysis tools rely on some sort of visualization to help software engineers make sense of trace content, but many of these techniques have been studied and found to be limited in many ways. In this thesis, we present a novel approach for trace analysis inspired by the way the human brain and perception system operate. The idea is to mimic the psychological processes that have been developed over the years to explain how our perception system deals with huge volumes of visual data, and to show how similar mechanisms can be applied to the abstraction and simplification of large traces. As part of this framework, we present a novel trace analysis technique that automatically divides the content of a large trace, generated from an execution of the target system, into meaningful segments that correspond to the system's main execution phases, such as initializing variables or performing a specific computation. We also propose a trace sampling technique that not only reduces the size of a trace but also yields a sampled trace that is representative of the original by ensuring that the desired characteristics of an execution are distributed similarly in both the sampled and the original trace. Our approach is based on stratified sampling and uses the concept of execution phases as strata. Finally, we propose an approach to automatically identify the most relevant trace components of each execution phase. This approach also enables an efficient representation of the flow of phases by detecting redundant phases using a cosine similarity metric. The techniques presented in this thesis have been validated by applying them to a variety of target systems, and the obtained results demonstrate their effectiveness and usefulness.
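
    Two of these ideas, phase-based stratified sampling and cosine similarity between phases, are simple enough to sketch. The following is an illustrative sketch, not the thesis's implementation; the phase names and the call-frequency representation of a phase are assumptions.

```java
import java.util.*;

/** Sketch: stratified trace sampling with execution phases as strata,
 *  plus cosine similarity to flag redundant phases. */
public class TraceSampling {
    /** Keep roughly `rate` of each phase so the sample mirrors the phase mix. */
    static List<String> stratifiedSample(Map<String, List<String>> phases, double rate, long seed) {
        Random rnd = new Random(seed);
        List<String> sample = new ArrayList<>();
        for (List<String> events : phases.values()) {
            List<String> copy = new ArrayList<>(events);
            Collections.shuffle(copy, rnd);
            sample.addAll(copy.subList(0, Math.max(1, (int) (events.size() * rate))));
        }
        return sample;
    }

    /** Cosine similarity between two phases represented as call-frequency vectors. */
    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        Set<String> keys = new HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        for (String k : keys) {
            int x = a.getOrDefault(k, 0), y = b.getOrDefault(k, 0);
            dot += (double) x * y; na += (double) x * x; nb += (double) y * y;
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        Map<String, Integer> init = Map.of("alloc", 9, "read", 1);
        Map<String, Integer> initAgain = Map.of("alloc", 8, "read", 2);
        System.out.printf("similarity = %.2f%n", cosine(init, initAgain)); // ~0.99: likely redundant
        Map<String, List<String>> phases = Map.of(
                "init", List.of("alloc", "alloc", "read"),
                "compute", List.of("mul", "add", "add", "mul"));
        System.out.println(stratifiedSample(phases, 0.5, 42));
    }
}
```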

    Augmenting IDEs with Runtime Information for Software Maintenance

    Object-oriented language features such as inheritance, abstract types, late binding, or polymorphism lead to distributed and scattered code, rendering a software system hard to understand and maintain. The integrated development environment (IDE), the primary tool used by developers to maintain software systems, usually operates purely on static source code and does not reveal dynamic relationships between distributed source artifacts, which makes it difficult for developers to understand and navigate software systems. Another shortcoming of today's IDEs is the large amount of information with which they typically overwhelm developers. Large software systems encompass several thousand source artifacts such as classes and methods. These static artifacts are presented by IDEs in views such as trees or source editors. To gain an understanding of a system, developers have to open many such views, which leads to a workspace cluttered with different windows or tabs. Navigating through the code or maintaining a working context is thus difficult for developers working on large software systems. In this dissertation we address the question of how to augment IDEs with dynamic information to better navigate scattered code while not overwhelming developers with even more information in the IDE views. We claim that by first reducing the amount of information developers have to deal with, we can then embed dynamic information in the familiar source perspectives of IDEs so that developers can better comprehend and navigate large software spaces. We propose means to reduce or mitigate the information overload by highlighting relevant source elements, by explicitly representing the working context, and by automatically housekeeping the workspace in the IDE. We then improve navigation of scattered code by explicitly representing dynamic collaboration and software features in the static source perspectives of IDEs. We validate our claim by conducting empirical experiments with developers and by analyzing recorded development sessions.
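
    As a concrete illustration of representing dynamic collaboration in an IDE, the sketch below records caller-to-callee edges observed at run time, so that a source perspective could show which methods actually collaborate through late-bound calls. The recorder, its names, and the instrumentation hook are hypothetical; the dissertation's actual tooling is not shown here.

```java
import java.util.*;

/** Sketch: a runtime collaboration recorder an IDE plug-in could query to
 *  reveal dynamic relationships that static views miss. */
public class CollaborationRecorder {
    private static final Map<String, Set<String>> callees = new HashMap<>();

    /** Invoked from instrumented call sites (the instrumentation itself is assumed). */
    public static void record(String caller, String callee) {
        callees.computeIfAbsent(caller, k -> new LinkedHashSet<>()).add(callee);
    }

    /** What a "navigate to dynamic collaborators" IDE action would display. */
    public static Set<String> collaboratorsOf(String method) {
        return callees.getOrDefault(method, Set.of());
    }

    public static void main(String[] args) {
        // Late-bound calls observed during a run of the instrumented system:
        record("Shape.draw", "Circle.render");
        record("Shape.draw", "Square.render");
        System.out.println(collaboratorsOf("Shape.draw")); // [Circle.render, Square.render]
    }
}
```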

    Towards an Automatic Approach for Extracting the Business Rules of an Application

    Companies face enormous costs to maintain their software applications. Over the years, the code of these applications has accumulated significant corporate knowledge (business rules and design decisions). After many years of operation and evolution of this code, however, that knowledge becomes difficult to recover. Developers must therefore devote much of their time to analyzing it, an activity known as "software comprehension". Since this activity has been estimated to consume between 50% and 90% of a developer's work, simplifying the software comprehension process can have a significant impact on reducing development and maintenance costs. One solution to the software comprehension problem is reverse engineering: the process of analyzing an application's source code to (1) identify the application's components and the relationships between them and (2) create a high-level representation of the application. Several reverse engineering approaches have been proposed; however, the abstract representation of the source code extracted by most of these approaches mixes the application's business logic with its architecture (or infrastructure). In this thesis, we present a new approach for analyzing the source code of an object-oriented application in order to extract an abstract model describing only the business rules of that application. This model takes the form of a UML class diagram showing the application's business classes and the relationships between them. The approach has been validated on several systems (written in Java) of various sizes. It gives good results for systems with a sound architecture and a good programming style; otherwise, the results are less convincing.
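
    A minimal sketch of the extraction idea: separate business classes from infrastructure ones, here with a naive package-name heuristic that is purely an assumption for illustration (the thesis's actual analysis is more sophisticated), and emit the business slice as a UML class diagram, rendered as PlantUML text.

```java
import java.util.List;

/** Sketch: keep only business classes and emit their associations as PlantUML. */
public class BusinessRuleExtractor {
    // Naive heuristic: package fragments assumed to denote infrastructure, not business logic.
    private static final List<String> INFRA = List.of("util.", "persistence.", "ui.", "io.");

    static boolean isBusiness(String qualifiedName) {
        return INFRA.stream().noneMatch(qualifiedName::contains);
    }

    /** associations: {fromClass, toClass} pairs discovered by source analysis. */
    static String toPlantUml(List<String[]> associations) {
        StringBuilder sb = new StringBuilder("@startuml\n");
        for (String[] a : associations)
            if (isBusiness(a[0]) && isBusiness(a[1]))
                sb.append(a[0]).append(" --> ").append(a[1]).append('\n');
        return sb.append("@enduml").toString();
    }

    public static void main(String[] args) {
        List<String[]> assoc = List.of(
                new String[]{"billing.Invoice", "billing.Customer"},
                new String[]{"billing.Invoice", "persistence.InvoiceDao"}); // filtered out
        System.out.println(toPlantUml(assoc));
        // Prints a diagram containing only: billing.Invoice --> billing.Customer
    }
}
```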