    Activity Report 2012. Project-Team RMOD. Analyses and Language Constructs for Object-Oriented Application Evolution

    Project-Team RMoD 2013 Activity Report

    Activity Report 2013. Project-Team RMOD. Analyses and Language Constructs for Object-Oriented Application Evolution

    RTLabOS Dissemination Activities: RTLabOS D4.2

    The Integration of Verification and Testing into Compilers

    This thesis presents a compiler architecture that enables the compiler to check the correctness of the source code as part of the compilation process. It allows these correctness checks to be performed with different methods, in particular formal testing and formal proof. A fully automated check is very expensive and often impossible, so it is not feasible to derive correctness automatically from the specification and the implementation. We therefore extend the programming language with (correctness) justifications that the user must insert into the source code ("literate justification"). Depending on the chosen method, the user must work out the justification in more or less detail. The introduction of justifications changes the compiler's task from deriving a correctness proof by itself to checking a correctness proof provided by the user. The checking of justifications can be integrated into the compiler or delegated to an external tool. An external tool allows the integration of existing tools, but the development of specialized tools is also possible. The example development of a specialized tactical theorem prover shows that developing a specialized tool is not necessarily more expensive than adapting an existing one. To execute tests during compilation, an interpreter must be available; however, executing untested code poses security risks, and we discuss several ways of dealing with this problem. Semantically, the correctness of a compilation unit corresponds to the consistency of an algebraic specification. We study two proof methods: construction of a model, and establishment of a correctness-preserving relation. The proof obligations follow from the chosen proof method; additional proof obligations are introduced to ensure the correctness of composite (modular) programs. The architecture described in this thesis has been implemented as a prototype: the Opal system was extended with language elements for specifications and (correctness) justifications, and the thesis presents some short examples. The Opal/J prototype has been part of the Opal distribution since version 2.3e.
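
    To make the "literate justification" idea concrete, here is a minimal illustrative sketch. It is written in Java rather than Opal/J, and the @Justification annotation and its fields are hypothetical, invented for this example only; it shows the division of labour described above, where the user supplies the justification in the source and a checker run during compilation merely verifies it.

```java
import java.lang.annotation.*;

// Hypothetical annotation standing in for a "literate justification":
// the user states how a method's correctness is established, and a
// checker hooked into compilation (e.g. an annotation processor)
// verifies the justification instead of deriving a proof itself.
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.METHOD)
@interface Justification {
    enum Method { FORMAL_TEST, FORMAL_PROOF }
    Method method();
    String detail(); // test cases or proof script, depending on the method
}

public class JustifiedAbs {
    // Specification (informal): abs(x) == x for x >= 0, else abs(x) == -x.
    // The justification method here is a formal test; a compile-time
    // checker would run the listed cases via an interpreter.
    @Justification(method = Justification.Method.FORMAL_TEST,
                   detail = "abs(-1)==1; abs(0)==0; abs(3)==3")
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // prints 5
    }
}
```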

    Deep Static Modeling of invokedynamic

    Java 7 introduced programmable dynamic linking in the form of the invokedynamic framework. Static analysis of code containing programmable dynamic linking has often been cited as a significant source of unsoundness in the analysis of Java programs. For example, Java lambdas, introduced in Java 8, are a very popular feature that nevertheless resists static analysis, since it mixes invokedynamic with dynamic code generation. These techniques invalidate static analysis assumptions: programmable linking breaks reasoning about method resolution, while dynamically generated code is, by definition, not available statically. In this paper, we show that a static analysis can predictively model uses of invokedynamic while also cooperating with extra rules to handle the runtime code generation of lambdas. Our approach plugs into an existing static analysis and helps eliminate all unsoundness in the handling of lambdas (including associated features such as method references) and generic invokedynamic uses. We evaluate our technique on a benchmark suite of our own and on third-party benchmarks, uncovering, with high efficiency, all code previously unreachable due to unsoundness.
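
    As background, the small Java program below (our own, for illustration) shows the pattern such an analysis must model: each lambda and method reference is compiled to an invokedynamic call site, and the class implementing the functional interface is generated only at run time by java.lang.invoke.LambdaMetafactory, so the call targets are invisible to an analysis that inspects the compiled bytecode alone.

```java
import java.util.List;
import java.util.function.Function;

public class LambdaCallSites {
    public static void main(String[] args) {
        // Both of these compile to invokedynamic instructions; the
        // implementing classes are spun at run time by LambdaMetafactory,
        // so no concrete target class appears in the bytecode here.
        Function<String, Integer> length = String::length; // method reference
        Function<Integer, Integer> inc = x -> x + 1;       // lambda

        List.of("a", "bb", "ccc").stream()
            .map(length)
            .map(inc)
            .forEach(System.out::println); // prints 2, 3, 4
    }
}
```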

    Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics

    Data from high-energy physics (HEP) experiments are collected with significant financial and human effort and are mostly unique. An inter-experimental study group on HEP data preservation and long-term analysis was convened as a panel of the International Committee for Future Accelerators (ICFA). The group was formed by large collider-based experiments and investigated the technical and organisational aspects of HEP data preservation. An intermediate report was released in November 2009 addressing the general issues of data preservation in HEP. This paper includes and extends the intermediate report. It provides an analysis of the research case for data preservation and a detailed description of the various projects at experiment, laboratory and international levels. In addition, the paper provides a concrete proposal for an international organisation in charge of the data management and policies in high-energy physics.

    Strengths and limitations of the draft classification of public health interventions in the World Health Organization’s International Classification of Health Interventions: A developmental appraisal

    Statistical classifications provide a basis for collecting and analysing data, for building knowledge and communicating information. A classification of public health interventions is being developed as part of the World Health Organization’s International Classification of Health Interventions (ICHI). This is a pioneering development, as there have been no previous efforts to produce a standard classification of public health interventions. A comprehensive developmental appraisal of the draft classification of public health interventions was undertaken to gain an understanding of its strengths and limitations, and to identify what should be done to improve its utility. The classification was used to code three data sets of public health interventions, to identify problems encountered and to assess inter-coder reliability. Views of potential users were elicited through key-informant interviews. An analytical structure was developed, comprising a set of criteria concerning the desired features of a statistical classification and a model representing the main elements that make up a statistical classification. ICHI was found to have some utility for representing data on public health interventions. Limitations identified included coverage gaps, overlap between categories, lack of clarity concerning how the classification axes are operationalised for public health interventions, and difficulty splitting complex interventions into their constituent components for coding. This study makes a significant and timely contribution to the development of the draft classification, by providing specific proposals for improvements to ICHI, explicating some fundamental conceptual issues that should be addressed, and indicating a path forward for the further development and use of ICHI in the field of public health. The analytical structure developed through the conduct of this research represents a novel methodological contribution to the field of classification development.
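
    The abstract does not name the statistic used to assess inter-coder reliability; purely as background, a common chance-corrected measure of agreement between two coders is Cohen's kappa:

```latex
% Cohen's kappa: agreement between two coders, corrected for chance.
% p_o = observed proportion of items coded identically;
% p_e = proportion of agreement expected by chance alone.
\kappa = \frac{p_o - p_e}{1 - p_e}
```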

    Project-Team RMoD (Analyses and Language Constructs for Object-Oriented Application Evolution) 2011 Activity Report

    This is the yearly report of the RMOD team (http://rmod.lille.inria.fr/) and a good way to understand what we are doing.