
    Incremental and Modular Context-sensitive Analysis

    Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse as much as possible previous analysis results. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.
    Comment: 56 pages, 27 figures. To be published in Theory and Practice of Logic Programming. v3 corresponds to the extended version of the ICLP 2018 Technical Communication. v4 is the revised version submitted to Theory and Practice of Logic Programming. v5 (this one) is the final author version to be published in TPLP.
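
    The reuse described above can be pictured as a module-level worklist fixpoint that memoizes per-module summaries and re-propagates only the summaries that actually change after an edit. The following Python sketch is purely illustrative: the module objects, the analyze_module() callback, and the summary representation are assumptions made for exposition, not the paper's actual algorithm or data structures.

    from collections import deque

    def incremental_modular_analysis(modules, changed, analyze_module, summaries):
        """modules: dict name -> module object (each knows which modules it imports)
        changed: set of module names edited since the previous analysis run
        analyze_module(mod, summaries): re-analyzes one module against the current
            summaries of its imports and returns its new exported summary
        summaries: dict name -> memoized exported summary from the previous run
        """
        # Reverse dependency edges: who must be revisited if a summary changes.
        dependents = {name: set() for name in modules}
        for name, mod in modules.items():
            for imported in mod.imports:
                dependents.setdefault(imported, set()).add(name)

        worklist = deque(changed)          # start only from the edited modules
        while worklist:
            name = worklist.popleft()
            new_summary = analyze_module(modules[name], summaries)
            if summaries.get(name) != new_summary:
                summaries[name] = new_summary
                # Re-queue only modules whose imported summary actually changed,
                # so unaffected parts of the program keep their memoized results.
                for dep in dependents.get(name, ()):
                    if dep not in worklist:
                        worklist.append(dep)
        return summaries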

    Homeostasis: Design and Implementation of a Self-Stabilizing Compiler

    Mainstream compilers perform a multitude of analyses and optimizations on the given input program. Each analysis (such as points-to analysis) may generate a program-abstraction (such as a points-to graph). Each optimization is typically composed of multiple alternating phases of inspection of such program-abstractions and transformations of the program. Upon transformation of a program, the program-abstractions generated by various analyses may become inconsistent with the modified program. Consequently, the correctness of the downstream inspection (and consequent transformation) phases cannot be ensured until the relevant program-abstractions are stabilized; that is, the program-abstractions are either invalidated or made consistent with the modified program. In general, existing compiler frameworks do not perform automated stabilization of the program-abstractions and instead leave it to the compiler pass writers to deal with the complex task of identifying the relevant program-abstractions to be stabilized, the points where the stabilization is to be performed, and the exact procedure of stabilization. In this paper, we address these challenges by providing the design and implementation of a novel compiler-design framework called Homeostasis. Homeostasis automatically captures all the program changes performed by each transformation phase, and later triggers the required stabilization using the captured information, if needed. We also provide a formal description of Homeostasis and a correctness proof thereof. To assess the feasibility of using Homeostasis in compilers of parallel programs, we have implemented our proposed idea in IMOP, a compiler framework for OpenMP C programs. We present an evaluation which demonstrates that Homeostasis is efficient and easy to use.
    Comment: Accepted for publication in ACM TOPLAS. 58 pages, 36 figures. Patent granted in India. US Patent published. For associated code, see https://github.com/anonymousoopsla21/homeostasi
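
    The "capture changes, stabilize on demand" idea can be approximated with a change log that records every IR node touched by a transformation, plus a stabilization pass that invalidates or rebuilds the abstractions whose inputs intersect that log. The Python sketch below is a loose approximation; the ChangeLog and Abstraction classes and the recompute callback are invented for illustration and do not correspond to the Homeostasis or IMOP APIs.

    class ChangeLog:
        """Records every IR node a transformation phase adds, removes, or rewrites."""
        def __init__(self):
            self.touched = set()

        def record(self, node):
            self.touched.add(node)

    class Abstraction:
        """A memoized program-abstraction, e.g. a points-to graph."""
        def __init__(self, name, depends_on, recompute):
            self.name = name
            self.depends_on = depends_on    # IR nodes this result was derived from
            self.recompute = recompute      # callback that rebuilds it from the program
            self.value = None
            self.valid = False

    def stabilize(abstractions, log, program):
        """Invoked lazily, just before an inspection phase reads an abstraction."""
        for abstraction in abstractions:
            if abstraction.valid and abstraction.depends_on & log.touched:
                abstraction.valid = False          # invalidated by a recorded change
        log.touched.clear()
        for abstraction in abstractions:
            if not abstraction.valid:
                abstraction.value = abstraction.recompute(program)
                abstraction.valid = True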

    Applications of agent architectures to decision support in distributed simulation and training systems

    This work develops the approach and presents the results of a new model for applying intelligent agents to complex distributed interactive simulation for command and control. In the framework of tactical command, control, communications, computers and intelligence (C4I), software agents provide a novel approach for efficient decision support and distributed interactive mission training. An agent-based architecture for decision support is designed, implemented, and applied in a distributed interactive simulation to significantly enhance command and control training during simulated exercises. The architecture is based on monitoring, evaluation, and advice agents, which cooperate to provide alternatives to the decision-maker in a time- and resource-constrained environment. The architecture is implemented and tested within the context of an AWACS Weapons Director trainer tool. The foundation of the work required a wide range of preliminary research topics to be covered, including real-time systems, resource allocation, agent-based computing, decision support systems, and distributed interactive simulations. The major contribution of our work is the construction of a multi-agent architecture and its application to an operational decision support system for command and control interactive simulation. The architectural design for the multi-agent system was drafted in the first stage of the work. In the next stage, rules of engagement and objective and cost functions were determined for the AWACS (Air Force command and control) decision support domain. Finally, the multi-agent architecture was implemented and evaluated inside a distributed interactive simulation test-bed for AWACS WDs. The evaluation process combined individual and team use of the decision support system to improve the performance results of WD trainees. The decision support system is designed and implemented as a distributed architecture for performance-oriented management of software agents. The approach provides new agent interaction protocols and utilizes agent performance monitoring and remote synchronization mechanisms. This multi-agent architecture enables direct and indirect agent communication as well as dynamic hierarchical agent coordination. Inter-agent communications use predefined interfaces, protocols, and open channels with specified ontology and semantics. Services can be requested, and responses with results received, over these communication modes. Both traditional (functional) parameters and non-functional requirements (e.g., QoS, deadlines) are captured in service requests.
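
    At a very high level, the monitoring, evaluation, and advice roles can be read as a pipeline of cooperating agents. The toy Python sketch below only illustrates that division of labor; the message fields, candidate scores, deadline parameter, and agent interfaces are invented for illustration and are not the thesis' actual protocols or ontology.

    class MonitoringAgent:
        def observe(self, event):
            # Watch the simulation stream and report situations of interest.
            return {"kind": "observation", "event": event}

    class EvaluationAgent:
        def evaluate(self, observation, deadline_ms):
            # Rank candidate courses of action under time/resource constraints.
            candidates = [("assign_interceptor_A", 0.8), ("assign_interceptor_B", 0.6)]
            return {"kind": "evaluation", "ranked": candidates, "deadline_ms": deadline_ms}

    class AdviceAgent:
        def advise(self, evaluation):
            # Offer the top-ranked alternative to the human Weapons Director.
            best = max(evaluation["ranked"], key=lambda c: c[1])
            return "Suggested action: %s (score %.2f)" % best

    def decision_support_cycle(event):
        observation = MonitoringAgent().observe(event)
        evaluation = EvaluationAgent().evaluate(observation, deadline_ms=200)
        return AdviceAgent().advise(evaluation)

    print(decision_support_cycle({"track": "hostile-42"}))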

    Change-Based Approaches for Static Taint Analyses

    Software developed in recent years has often suffered from vulnerabilities that were exploited by malicious actors. Some of these attacks have been costly for companies and individuals because of the data that was stolen. There is therefore a real need to uncover these flaws in the code before they can be exploited. One technique for detecting possible vulnerabilities before the software is even released is to use a static analysis tool. Taint analysis in particular, which simulates how critical data propagates through the program, makes it possible to find unprotected access points in the code. However, these analyses can take hours depending on the tool used and the volume of code to analyze. Besides being long, they are often repeated over and over on the same code as it is developed. Each run is an exhaustive computation from scratch, even though the changes to the code are usually tiny compared with the total size of the software. Intuitively, one would expect that if the changes to the code are minor, the changes in the results should be minor as well. In this thesis, we investigate techniques that take advantage of the incremental nature of software development to speed up the computation of taint. Our work proposes a novel technique that updates the taint according to the code changes between two versions. The technique is granular to the line of code. With our improvements, we greatly reduce the computation time needed on the projects we analyzed.
    ABSTRACT: In the past few years, many security problems have been discovered in all kinds of software. For some of these vulnerabilities, ill-intentioned people exploited them and successfully stole information about people and companies. The monetary cost of these vulnerabilities is real, and for this reason, many developers are trying to find these vulnerabilities before hackers do. One method to find vulnerabilities in an application before it is published is to use a static analysis tool. Taint analysis is one static analysis technique that is closely related to security. This approach simulates how data is propagated inside the application with the goal of finding locations that could either leak sensitive information or damage the integrity of the system. These analyses can take hours, depending on the tool used and the size of the codebase being analyzed. On top of taking considerable time, these analyses tend to be repeated over and over during development. The computation of taint is exhaustive and usually redone from scratch on every execution, even if the codebase has stayed nearly the same since the last analysis. This is why our work will mainly focus on how we can take advantage of the incremental nature of software development to accelerate the computation of taint. We propose new techniques to update the taint information from the changes in the source code between two versions of a given software. Our approaches are granular to the lines of code and succeed at greatly reducing the time required to find potential vulnerabilities in the projects that we analyzed.
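
    The line-granular update can be thought of as retracting the taint facts that depend on edited lines and re-propagating only from a small frontier around the edit. The Python sketch below illustrates that idea under strong simplifying assumptions; the (line, variable) fact encoding, the flows map, and the propagate() callback are hypothetical and are not the thesis' implementation.

    def update_taint(old_taint, flows, changed_lines, sources, propagate):
        """old_taint: set of (line, var) facts from the previous version's analysis
        flows: dict (line, var) -> set of (line, var) facts it flows to (new version)
        changed_lines: set of line numbers added, removed, or modified by the diff
        sources: (line, var) taint sources present in the new version
        propagate(frontier, flows): ordinary forward propagation from a frontier
        """
        # 1. Retract facts established on edited lines. (A complete solution must
        #    also retract facts reachable only through flows the edit removed;
        #    that bookkeeping is omitted here for brevity.)
        surviving = {fact for fact in old_taint if fact[0] not in changed_lines}

        # 2. Seed a frontier with sources on edited lines and with surviving facts
        #    that flow into the edited region, then propagate only from there.
        frontier = {s for s in sources if s[0] in changed_lines}
        frontier |= {succ for fact in surviving
                     for succ in flows.get(fact, ())
                     if succ[0] in changed_lines}
        return surviving | propagate(frontier, flows)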

    An Instantaneous Framework For Concurrency Bug Detection

    Concurrency bug detection is important to guarantee the correct behavior of multithreaded programs. However, existing static techniques are expensive and prone to false positives, and dynamic analyses cannot expose all potential bugs. This thesis presents an ultra-efficient concurrency analysis framework, D4, that detects concurrency bugs (e.g., data races and deadlocks) “instantly” in the programming phase. As developers add, modify, and remove statements, the changes are sent to D4 to detect concurrency bugs on-the-fly, which in turn provides the developer with immediate feedback on new bugs. D4 includes a novel system design and two novel parallel incremental algorithms that embrace both change and parallelization for fundamental static analyses of concurrent programs. Both algorithms react to program changes by memoizing the analysis results and only recomputing the impact of a change, in parallel, without any redundant computation. Our evaluation on an extensive collection of large real-world applications shows that D4 efficiently pinpoints concurrency bugs within 10ms on average after a code change, several orders of magnitude faster than both exhaustive analysis and state-of-the-art incremental techniques.
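
    One way to picture such a reaction to a code change is to memoize per-statement analysis results, let the change invalidate a small set of impacted statements, and re-check only those, in parallel. The Python sketch below shows that shape only; the accesses map and the may_happen_in_parallel() and impacted_by() callbacks are hypothetical stand-ins, not D4's actual incremental pointer and happens-before analyses.

    from concurrent.futures import ThreadPoolExecutor

    def on_statement_change(change, accesses, may_happen_in_parallel, impacted_by):
        """change: the statement just added, modified, or removed in the editor
        accesses: memoized map statement -> set of (variable, is_write) accesses
        may_happen_in_parallel(a, b): memoized MHP query between two statements
        impacted_by(change): statements whose memoized results the change invalidates
        """
        dirty = list(impacted_by(change))      # usually a small slice of the program

        def races_with(stmt):
            found = []
            for other, other_accs in accesses.items():
                if other is stmt:
                    continue
                for var, is_write in accesses.get(stmt, ()):
                    for other_var, other_is_write in other_accs:
                        if (var == other_var and (is_write or other_is_write)
                                and may_happen_in_parallel(stmt, other)):
                            found.append((stmt, other, var))
            return found

        # Re-check only the impacted statements, in parallel.
        with ThreadPoolExecutor() as pool:
            results = pool.map(races_with, dirty)
        return [race for stmt_races in results for race in stmt_races]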