22 research outputs found

    Codeklonerkennung mit Dominatorinformationen (Code Clone Detection with Dominator Information)

    If an existing function in a software project is copied and reused (in a slightly modified form), the result is a code clone. If the original function contained an error or a vulnerability, that error or vulnerability is now present in several places in the software project. This is one of the reasons why research is being done on powerful and scalable clone detection techniques. In this thesis, a new clone detection method is presented that detects code clones using paths and path sets derived from the dominator trees of the functions. A dominator tree is derived from a function's control flow graph and contains no cycles. The dominator-tree-based method has been implemented in the StoneDetector tool and can detect code clones in Java source code as well as in Java bytecode. It achieves recall and precision results as good as or better than those of previously published code clone detection methods. The evaluation was performed using the BigCloneBench. Scalability measurements showed that even source code with several hundred million lines can be searched in reasonable time. In order to evaluate the bytecode-based StoneDetector variant, the BigCloneBench files had to be compiled. For this purpose, the Stubber tool was developed, which can compile Java source code files without the required libraries. Finally, it could be shown that the register code generated from the Java bytecode achieves recall and precision values similar to those of the source-code-based variant. Since some machine learning studies claim to achieve very good recall and precision values for all clone types, a machine learning method was trained on dominator trees. It could be shown that the results published by those studies are not reproducible on unseen data.
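    The dominator-tree representation at the heart of the approach is easy to illustrate. The following Java sketch is not the StoneDetector implementation; the toy CFG, the node numbering, and the use of the classic iterative data-flow algorithm are assumptions made for this example. It computes each node's dominator set, derives the dominator tree, and enumerates its root-to-leaf paths, i.e., the kind of path set the method derives per function.

import java.util.*;

// Minimal sketch: dominator tree of a CFG via the classic iterative
// data-flow algorithm, then its root-to-leaf paths. Node 0 is assumed
// to be the unique entry node and all nodes are assumed reachable.
public class DominatorPaths {

    // dom(n) = {n} union (intersection of dom(p) over all predecessors p)
    static int[] immediateDominators(List<List<Integer>> preds, int n) {
        BitSet[] dom = new BitSet[n];
        for (int i = 0; i < n; i++) {
            dom[i] = new BitSet(n);
            if (i == 0) dom[i].set(0);   // entry is dominated only by itself
            else dom[i].set(0, n);       // start from "all nodes", then shrink
        }
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 1; i < n; i++) {
                BitSet next = new BitSet(n);
                next.set(0, n);
                for (int p : preds.get(i)) next.and(dom[p]);
                next.set(i);
                if (!next.equals(dom[i])) { dom[i] = next; changed = true; }
            }
        }
        // The immediate dominator of i is its closest strict dominator,
        // i.e., the strict dominator with the largest dominator set.
        int[] idom = new int[n];
        idom[0] = -1;
        for (int i = 1; i < n; i++) {
            int best = 0;
            for (int d = dom[i].nextSetBit(0); d >= 0; d = dom[i].nextSetBit(d + 1))
                if (d != i && dom[d].cardinality() > dom[best].cardinality()) best = d;
            idom[i] = best;
        }
        return idom;
    }

    // Enumerate all root-to-leaf paths of the dominator tree.
    static List<List<Integer>> rootToLeafPaths(int[] idom) {
        int n = idom.length;
        List<List<Integer>> children = new ArrayList<>();
        for (int i = 0; i < n; i++) children.add(new ArrayList<>());
        for (int i = 1; i < n; i++) children.get(idom[i]).add(i);
        List<List<Integer>> paths = new ArrayList<>();
        collect(0, children, new ArrayDeque<>(), paths);
        return paths;
    }

    static void collect(int node, List<List<Integer>> children,
                        Deque<Integer> path, List<List<Integer>> out) {
        path.addLast(node);
        if (children.get(node).isEmpty()) out.add(new ArrayList<>(path));
        else for (int c : children.get(node)) collect(c, children, path, out);
        path.removeLast();
    }

    public static void main(String[] args) {
        // CFG of an if-else with a join: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
        List<List<Integer>> preds = List.of(
            List.of(), List.of(0), List.of(0), List.of(1, 2));
        int[] idom = immediateDominators(preds, 4);
        System.out.println(rootToLeafPaths(idom)); // prints [[0, 1], [0, 2], [0, 3]]
    }
}

    Note how the join node 3 becomes a direct child of the entry in the dominator tree, since neither branch alone dominates it; the resulting tree is acyclic even though the CFG of a loop would not be, which is what makes path extraction straightforward.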

    Improving WCET Evaluation using Linear Relation Analysis

    The precision of a worst-case execution time (WCET) evaluation tool on a given program depends heavily on how well the tool detects and discards semantically infeasible executions of the program. In this paper, we propose to use the classical abstract-interpretation-based method of linear relation analysis to discover and exploit relations between execution paths. For this purpose, we add auxiliary variables (counters) to the program to trace its execution paths. The results are easily incorporated into the classical workflow of a WCET evaluator when the evaluator is based on the popular implicit path enumeration technique (IPET). We use existing tools (a WCET evaluator and a linear relation analyzer) to build and experiment with a prototype implementation of this idea. This work is supported by the French research foundation (ANR) as part of the W-SEPT project (ANR-12-INSE-0001).
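    The counter idea is simple to sketch. The toy Java fragment below is a hand-written illustration, not the authors' tool chain; the method names, branches, and the stated invariant are assumptions made for this example. Auxiliary counters on two mutually exclusive branches let a linear relation analyzer infer an invariant such as c1 + c2 <= n, which then becomes one extra linear constraint on the edge-frequency variables of the IPET integer linear program, ruling out the pessimistic assumption that both expensive branches execute on every iteration.

// Toy illustration of counter instrumentation for WCET analysis.
public class CounterInstrumentation {
    static int work(int[] a) {
        int c1 = 0, c2 = 0;              // auxiliary counters (instrumentation only)
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > 0) {              // branch B1
                c1++;
                sum += expensive(a[i]);
            } else if (a[i] < 0) {       // branch B2, mutually exclusive with B1
                c2++;
                sum -= expensive(-a[i]);
            }
        }
        // Invariant discoverable by linear relation analysis:
        //   c1 + c2 <= a.length
        // hence, in the ILP of the IPET formulation:
        //   x_B1 + x_B2 <= n   (n = loop bound)
        assert c1 + c2 <= a.length;
        return sum;
    }
    static int expensive(int v) { return v * v; }
    public static void main(String[] args) {
        System.out.println(work(new int[]{3, -1, 0, 5}));
    }
}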

    Renewables 2018 Global Status Report


    WICC 2016 : XVIII Workshop de Investigadores en Ciencias de la Computación

    Proceedings of the XVIII Workshop de Investigadores en Ciencias de la Computación (WICC 2016), held at the Universidad Nacional de Entre Ríos on 14 and 15 April 2016. Red de Universidades con Carreras en Informática (RedUNCI).

    Renewables 2015 Global Status Report

    The REN21 Renewables Global Status Report (GSR) provides an annual look at the tremendous advances in renewable energy markets, policy frameworks and industries globally. Each report uses formal and informal data to provide the most up-to-date information available. Reliable, timely and regularly updated data on renewable energy are essential: they are used to establish baselines for decision makers, to demonstrate the increasing role that renewables play in the energy sector, and to illustrate that the renewable energy transition is a reality. This year's GSR marks 10 years of REN21 reporting. Over the past decade the GSR has expanded in scope and depth through its thematic and regional coverage and the refinement of data collection. The GSR is the product of systematic data collection resulting in thousands of data points, the use of hundreds of documents, and personal communication with experts from around the world. It benefits from a multi-stakeholder community of over 500 experts.

    Program Analysis as Model Checking


    Coherent Dependence Cluster

    This thesis introduces coherent dependence clusters and shows their relevance in areas of software engineering such as program comprehension and maintenance. All statements in a coherent dependence cluster depend upon the same set of statements and affect the same set of statements; a coherent cluster's statements have 'coherent' shared backward and forward dependence. We introduce an approximation to efficiently locate coherent clusters and show that its precision significantly improves over previous approximations. Our empirical study also finds that, despite their tight coherence constraints, coherent dependence clusters are to be found in abundance in production code. Studying patterns of clustering in several open-source and industrial programs reveals that most contain multiple significant coherent clusters. A series of case studies reveals that large clusters map to logical functionality and program structure. Cluster visualisation also reveals subtle deficiencies of program structure and identifies potential candidates for refactoring efforts. Supplementary studies of inter-cluster dependence are presented, where identification of coherent clusters can help in deriving hierarchical system decompositions for reverse engineering purposes. Furthermore, studies of program faults find no link between the existence of coherent clusters and software bugs. Rather, a longitudinal study of several systems finds that coherent clusters represent the core architecture of programs during system evolution. Due to the inherent conservativeness of static analysis, it is possible for unreachable code and code implementing cross-cutting concerns such as error handling and debugging to link clusters together. This thesis studies their effect on dependence clusters by using coverage information to remove unexecuted and rarely executed code. Empirical evaluation reveals that this code reduction yields smaller slices and clusters.
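    The cluster definition itself translates directly into code. The Java sketch below illustrates the definition only, not the thesis's efficient approximation; the precomputed slice maps are assumed inputs (in practice they would come from a program dependence graph). Statements are grouped by the pair (backward slice, forward slice): two statements fall into the same coherent cluster exactly when both pairs are equal.

import java.util.*;

// Minimal sketch of the coherent-cluster definition: group statements
// by the pair (backward slice, forward slice). Slices are assumed to
// be precomputed; statement IDs are plain integers for illustration.
public class CoherentClusters {

    static Collection<List<Integer>> coherentClusters(
            Map<Integer, Set<Integer>> backward,   // stmt -> backward slice
            Map<Integer, Set<Integer>> forward) {  // stmt -> forward slice
        Map<List<Set<Integer>>, List<Integer>> groups = new HashMap<>();
        for (Integer stmt : backward.keySet()) {
            // Two statements share a cluster iff both slice sets match.
            List<Set<Integer>> key = List.of(backward.get(stmt), forward.get(stmt));
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(stmt);
        }
        return groups.values();
    }

    public static void main(String[] args) {
        // Statements 1 and 2 share both slices -> one coherent cluster.
        Map<Integer, Set<Integer>> bwd = Map.of(
            1, Set.of(0), 2, Set.of(0), 3, Set.of(0, 1));
        Map<Integer, Set<Integer>> fwd = Map.of(
            1, Set.of(3), 2, Set.of(3), 3, Set.of());
        System.out.println(coherentClusters(bwd, fwd)); // e.g. [[1, 2], [3]]
    }
}

    Note that this naive grouping requires a full backward and forward slice for every statement, which is exactly the cost that motivates the thesis's more efficient approximation.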

    Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis

    Model-based performance prediction systematically deals with the evaluation of software performance, for example to avoid bottlenecks, to estimate execution environment sizing, or to identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. This book describes a new integrated reverse engineering approach for the reconstruction of parameterised software performance models (software component architecture and behaviour).

    Architectural design, 1954-1972.

    This thesis examines the architectural magazine's contribution to the writing of modern architectural history, using the magazine Architectural Design (AD) as a case study. There are four main narratives to this research, one "grand" and three "micro". The overarching grand narrative (or meta-narrative) is the proposal to replace the existing art-historical formulation of architectural history with a more holistic understanding of history based on power struggles in the field of architecture. This strategy is derived from an application of Pierre Bourdieu's theoretical framework to the field of architectural cultural production. The position of the architectural magazine as an institution in the construction of the architectural profession, and the ever-changing definition of architecture, is one underlying micro-narrative. The introduction discusses the role that the architectural magazine played in the emergence of the modern architectural profession, alongside other institutions, specifically the academy and professional bodies. The central, and largest, micro-narrative is a critical history of the magazine Architectural Design from 1954 to 1972. Brief biographies of its editors and a background to the magazine from its inception in 1930 up to 1953 precede this by way of contextualisation. This history of AD discusses the content and context of the magazine and traces its shift from a professional architectural magazine to an autonomous "little" magazine, focussing on several key structural themes that underpin the magazine. Throughout, the role that AD played in the promotion of the post-war neo-avant-garde, in particular the New Brutalists and Archigram, is documented, and the relationships between the small circle of people privileged to produce and contribute to the magazine, and AD's rivalry with the Architectural Review, are highlighted. The final micro-narrative is a reading of post-war modern architectural history from 1954 to 1972 through the pages of AD, tracing the rise and demise of modern architecture in terms of three defining shifts from the period evident in the magazine: "high to low"; "building to architecture"; and "hard to soft". This period also coincides exactly with the life of the Pruitt-Igoe housing blocks in St. Louis, whose demolition, according to Jencks, represented the death of modern architecture. A growing post-modern sensibility in architecture is manifest in the magazine through an increasing resistance to modernist thinking. This study consciously applies post-modern methodologies to a period of modern architecture in an attempt to disturb modernist mythologies that have ossified into history.