
    InDubio: a combinator library to disambiguate ambiguous grammars

    First Online: 29 September 2020
    Inferring an abstract model from source code is one of the main tasks of most software quality analysis methods. Such an abstract model is called an Abstract Syntax Tree, and the inference task is called parsing. A parser is usually generated from a grammar specification of a (programming) language and converts source code of that language into said abstract tree representation. Several techniques then traverse this tree to assess the quality of the code (for example, by computing source code metrics) or build new data structures (e.g., flow graphs) to perform further analysis (such as detecting code clones or dead code). Parsing is a well-established technique. Many modern languages, however, are inherently ambiguous and can only be fully handled by ambiguous grammars. In this setting, disambiguation rules, which are usually included as part of the grammar specification of the ambiguous language, need to be defined. This approach has a severe limitation: disambiguation rules are not first-class citizens. Parser generators offer a small set of rules that cannot be extended or changed, so grammar writers can neither manipulate them nor define the new rules that their language requires. In this paper we present a tool, named InDubio, that consists of an extensible combinator library of disambiguation filters together with a generalized parser generator for ambiguous grammars. InDubio defines a set of basic disambiguation rules as abstract syntax tree filters that can be combined into more powerful rules. Moreover, the filters are independent of the parser generator and parsing technology, so they can easily be extended and manipulated. This paper presents InDubio in detail, together with our first experimental results.
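
    To illustrate the general idea of disambiguation rules as composable tree filters, the sketch below gives a minimal, hypothetical version in Haskell. The types and names (Tree, Filter, reject, keepMinimalBy) are illustrative assumptions for this example, not InDubio's actual API.

```haskell
-- Minimal sketch: disambiguation rules as composable filters over candidate parse trees.
module Main where

import Data.List (sortOn)

-- An ambiguous parse yields several candidate syntax trees.
data Tree = Node String [Tree] deriving (Eq, Show)

-- A disambiguation filter prunes the set of candidate trees.
type Filter = [Tree] -> [Tree]

-- Basic filter: drop trees containing a given (disallowed) production.
reject :: String -> Filter
reject prod = filter (not . contains prod)
  where
    contains p (Node n ts) = n == p || any (contains p) ts

-- Basic filter: keep only the trees minimizing some score.
keepMinimalBy :: (Tree -> Int) -> Filter
keepMinimalBy score ts =
  case sortOn score ts of
    []      -> []
    (t : _) -> filter (\u -> score u == score t) ts

depth :: Tree -> Int
depth (Node _ []) = 1
depth (Node _ ts) = 1 + maximum (map depth ts)

-- Filters combine into more powerful rules by ordinary function composition.
disambiguate :: Filter
disambiguate = keepMinimalBy depth . reject "DanglingElse"

main :: IO ()
main = print (disambiguate [Node "DanglingElse" [], Node "If" [Node "Expr" []]])
```

    The point of the sketch is that, when filters are plain functions over candidate trees, grammar writers can define and compose new rules without touching the parser generator.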

    Reusable Components of Semantic Specifications

    Semantic specifications of programming languages typically have poor modularity. This hinders reuse of parts of the semantics of one language when specifying a different language, even when the two languages have many constructs in common, and evolution of a language may require major reformulation of its semantics. Such drawbacks have discouraged language developers from using formal semantics to document their designs. In the PLanCompS project, we have developed a component-based approach to semantics. Here, we explain its modularity aspects and present an illustrative case study: a component-based semantics for Caml Light. We have tested the correctness of the semantics by running programs on an interpreter generated from the semantics and comparing the output with that produced by the standard implementation of the language. Our approach provides good modularity, facilitates reuse, and should support co-evolution of languages and their formal semantics. It could be particularly useful in connection with domain-specific languages and language-driven software development.
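
    As a rough illustration of the component-based idea (not the PLanCompS notation), the Haskell sketch below gives each construct a small, language-independent evaluation rule and assembles a toy language from those pieces; all names here are invented for the example.

```haskell
-- Minimal sketch: reusable per-construct semantic rules assembled into one language.
module Main where

import qualified Data.Map as M

type Env = M.Map String Int

-- Reusable components: each rule is independent of any particular language's AST.
evalIf :: Int -> a -> a -> a
evalIf c t e = if c /= 0 then t else e

evalLet :: String -> Int -> Env -> Env
evalLet = M.insert

evalVar :: String -> Env -> Int
evalVar = M.findWithDefault 0

-- A tiny language assembled from the components above.
data Expr = Lit Int | Var String | Let String Expr Expr | If Expr Expr Expr

eval :: Env -> Expr -> Int
eval _   (Lit n)     = n
eval env (Var x)     = evalVar x env
eval env (Let x e b) = eval (evalLet x (eval env e) env) b
eval env (If c t e)  = evalIf (eval env c) (eval env t) (eval env e)

main :: IO ()
main = print (eval M.empty (Let "x" (Lit 3) (If (Var "x") (Lit 1) (Lit 0))))
```

    A second language with a conditional could reuse evalIf unchanged; in this toy setting, modularity comes from keeping each rule independent of the syntax of any particular language.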

    Reducing the Cost of Grammar-Based Testing Using Pattern Coverage

    In grammar-based testing, context-free grammars may be used to generate relevant test inputs for language processors, or meta programs, such as programming language compilers, refactoring tools, and implementations of software quality metrics. This technique can be used to test these meta programs, but the number of sentences, and corresponding syntax trees, that needs to be generated to obtain reasonable coverage of the input language is exponential. Pattern matching is a programming language feature that is often used when writing meta programs. Pattern matching helps because it automates the frequently occurring task of detecting shapes in, and extracting information from, syntax trees. However, meta programs which contain many patterns are difficult to test using only randomly generated sentences from grammar rules: statistically, it is uncommon to directly generate sentences which accidentally match the patterns in the code. To solve this problem, in this paper we extract information from the patterns in the code of meta programs to guide the sentence generation process. We introduce a new coverage criterion, called Pattern Coverage, which provides a test strategy that reduces the number of necessary test cases while still covering the relevant parts of the meta program. An initial experimental evaluation is presented and the result is compared with traditional grammar-based testing.
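
    As a hedged sketch of what a pattern-coverage check could look like (the names Pat, matches, and coverage are illustrative assumptions, not the paper's definitions), the code below treats extracted patterns as tree shapes with wildcards and measures which of them a generated test set exercises.

```haskell
-- Minimal sketch: measuring how many extracted patterns a set of generated trees matches.
module Main where

data Tree = Node String [Tree] deriving (Eq, Show)

-- A pattern is a tree shape where Wild matches any subtree.
data Pat = PNode String [Pat] | Wild deriving (Eq, Show)

-- Does a pattern match anywhere inside a generated syntax tree?
matches :: Pat -> Tree -> Bool
matches p t = matchHere p t || any (matches p) (children t)
  where children (Node _ ts) = ts

matchHere :: Pat -> Tree -> Bool
matchHere Wild _ = True
matchHere (PNode n ps) (Node m ts) =
  n == m && length ps == length ts && and (zipWith matchHere ps ts)

-- Pattern coverage of a test set: the fraction of extracted patterns exercised.
coverage :: [Pat] -> [Tree] -> Double
coverage pats trees =
  fromIntegral (length (filter (\p -> any (matches p) trees) pats))
    / fromIntegral (max 1 (length pats))

main :: IO ()
main =
  let pats  = [PNode "Add" [PNode "Lit" [], Wild]]
      tests = [Node "Add" [Node "Lit" [], Node "Var" []]]
  in print (coverage pats tests)
```

    A generator guided by such a criterion can stop once coverage reaches 1.0, instead of producing ever more random sentences in the hope of accidentally hitting the patterns.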

    Introduction - The LDTA tool challenge

    Compilers are one of the cornerstones of Computer Science and, in particular, of Software Development. Compiler research has a long tradition and is very mature. Nevertheless, there is hardly any standardization with respect to formalisms and tools for developing compilers, and comparing formalisms and tools for describing compilers is not a simple task. In 2011 the Language Descriptions, Tools and Applications (LDTA) community created a challenge in which different formalisms and tools were used to construct a compiler for the Oberon-0 language. This special issue presents the tool challenge, the Oberon-0 language, various solutions to the challenge, and some conclusions. The aim of the challenge was to develop the same compiler using different formalisms in order to learn about these approaches in a concrete setting.

    Design of a proof repository architecture

    In this paper, we introduce a proof repository architecture for building a library of proofs for first-order theorems constructed by several theorem provers. The architecture is not fixed as such, but is configured by the user. It consists of three types of components, which allow us to connect theorem provers, store proofs, and manage the connections between them. These components allow for many setups, such as a local database of theorems, an interconnected series of databases across systems, interconnecting many theorem provers, using a theorem prover in a client-server architecture, Software as a Service, etc.
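
    One possible way to picture the three component kinds is sketched below in Haskell; the record fields and the store-then-prove policy are assumptions made for illustration, not the interfaces defined in the paper.

```haskell
-- Minimal sketch: prover connectors, proof stores, and a manager wiring them together.
module Main where

type Theorem = String
type Proof   = String

-- Component kind 1: a connection to an external theorem prover.
data ProverConnector = ProverConnector
  { proverName :: String
  , prove      :: Theorem -> IO (Maybe Proof)
  }

-- Component kind 2: a place where proofs are stored (local database, remote service, ...).
data ProofStore = ProofStore
  { lookupProof :: Theorem -> IO (Maybe Proof)
  , storeProof  :: Theorem -> Proof -> IO ()
  }

-- Component kind 3: manages the connections between provers and stores.
newtype Manager = Manager { solve :: Theorem -> IO (Maybe Proof) }

-- Example policy: consult the store first, otherwise try provers in order and cache the result.
mkManager :: ProofStore -> [ProverConnector] -> Manager
mkManager store provers = Manager $ \thm -> do
  cached <- lookupProof store thm
  case cached of
    Just p  -> return (Just p)
    Nothing -> tryProvers provers thm
  where
    tryProvers [] _ = return Nothing
    tryProvers (pc : rest) t = do
      r <- prove pc t
      case r of
        Just p  -> storeProof store t p >> return (Just p)
        Nothing -> tryProvers rest t

main :: IO ()
main = do
  let dummyProver = ProverConnector "dummy" (\t -> return (Just ("proof of " ++ t)))
      emptyStore  = ProofStore (\_ -> return Nothing) (\_ _ -> return ())
  r <- solve (mkManager emptyStore [dummyProver]) "p -> p"
  print r
```

    Swapping in a different store or a different list of connectors changes the setup without changing the manager, which is the configurability the abstract describes.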

    ATerms for manipulation and exchange of structured data : it's all about sharing

    Some data types are so simple that they tend to be reimplemented over and over again. This is certainly true for terms: tree-like data structures that can represent prefix formulae, syntax trees, intermediate code, and more. We first describe the motivation for introducing Annotated Terms (ATerms): unifying several term formats, optimizing storage requirements by introducing maximal subterm sharing, and providing a language-neutral exchange format. Next, we present a brief overview of the ATerm technology itself and of its wide range of applications. A discussion of competing technologies and the future of ATerms concludes the paper.
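
    The core trick, maximal subterm sharing, can be sketched as hash-consing: a term node is only constructed if an identical one does not already exist. The snippet below is an illustrative toy with invented names; the real ATerm library is a C implementation with a global term table.

```haskell
-- Minimal sketch: maximal subterm sharing by reusing previously built (symbol, children) nodes.
module Main where

import qualified Data.Map as M
import Data.IORef

-- A term is a function symbol applied to already-shared subterms, identified by index.
data Term = Term { termId :: Int, fun :: String, args :: [Int] } deriving Show

type Table = (M.Map (String, [Int]) Int, [Term])

-- Build a term, reusing an existing node when the same combination was built before,
-- so equal subterms are stored exactly once.
share :: IORef Table -> String -> [Int] -> IO Int
share ref f as = do
  (idx, terms) <- readIORef ref
  case M.lookup (f, as) idx of
    Just i  -> return i                       -- already present: share it
    Nothing -> do
      let i = length terms
      writeIORef ref (M.insert (f, as) i idx, terms ++ [Term i f as])
      return i

main :: IO ()
main = do
  ref <- newIORef (M.empty, [])
  x1 <- share ref "x" []
  a  <- share ref "plus" [x1, x1]   -- plus(x, x): the subterm x is stored once
  b  <- share ref "plus" [x1, x1]   -- identical term: same node, so equality is an index comparison
  print (a == b)
```

    Sharing keeps memory use low for large syntax trees and makes term equality a constant-time identity check, which is what makes ATerms attractive as an exchange format.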

    Foreword to Special issue

    This volume contains the proceedings of the First Workshop on Language Descriptions, Tools and Applications (LDTA'01). The workshop was held in Genova, Italy on April 7, 2001, as a satellite event of ETAPS'2001.

    This is the first edition of the Workshop on Language Descriptions, Tools and Applications. It is a combination of the Workshop on Attribute Grammars and their Applications and the former ASF+SDF workshops. Both workshop series had as their theme the description of (programming) languages and the development and/or generation of tools for these languages based on formal descriptions. With this new workshop we hope to achieve a cross-fertilization of research in this area by bringing together researchers from various schools (attribute grammars, algebraic approaches, action semantics, operational semantics, and denotational semantics) working on language descriptions. The workshop also aims at bringing together people from the theoretical side as well as people who develop tools and work on applications.

    The LDTA'2001 program consists of 9 papers, selected from 18 submissions, one invited talk by Paul Klint on "Collaborative Development of Interactive Language Processing Tools", and 4 tool demonstrations. The selected papers cover a broad range of themes: object-oriented tree traversal and attribute grammars, action semantics, rewriting engines, document transformation, and the use of XML and Java in combination with attribute grammars. The tool demonstrations include presentations of the ASF+SDF Meta-Environment, SmartTools, Stratego, and an action semantics environment.

    The papers in this volume were reviewed by the program committee consisting of Isabelle Attali (INRIA Sophia Antipolis), Mark van den Brand (CWI Amsterdam), Görel Hedin (Lund University), Jan Heering (CWI Amsterdam), Pierre-Etienne Moreau (LORIA Nancy), Marjan Mernik (University of Maribor), Peter D. Mosses (BRICS Aarhus), Didier Parigot (INRIA Sophia Antipolis), Günter Riedewald (University of Rostock), Eelco Visser (Utrecht University), and David Watt (University of Glasgow).

    This volume will be published as volume 44-2 in the series Electronic Notes in Theoretical Computer Science (ENTCS). This series is published electronically through the facilities of Elsevier Science B.V. and under its auspices. The volumes in the ENTCS series can be accessed at http://www.elsevier.nl/locate/entcs.

    We would like to thank the program committee members for their help in evaluating the papers and making a scientifically interesting selection. Furthermore, we would like to thank the ETAPS organizing committee for taking care of the local organization of our workshop. We thank Elsevier for publishing these proceedings in the Electronic Notes in Theoretical Computer Science (ENTCS). Finally, we thank Professor Michael Mislove for providing and adapting the style files for ENTCS.

    Mark van den Brand and Didier Parigot, June 2001

    Metrics design for safety assessment

    Context: In the safety domain, safety assessment is used to show that safety-critical systems meet the required safety objectives. This process is also referred to as safety assurance and certification. During this procedure, safety standards are used as development guidelines to keep the risk at an acceptable level, and safety-critical systems can be assessed according to those safety standards. Objective: Due to the manual work involved, safety assessment processes are costly, time consuming, and hard to estimate. The goal of this paper is to design metrics for safety assessment; such metrics can, for instance, identify costly steps in the safety assessment process. We propose a methodology to design metrics for safety assessment from different perspectives. For the demonstration and validation of our method, we focus on safety assessment in the automotive domain (ISO 26262). Method: Metrics can be identified by answering three questions. Three different sources of information have been identified for obtaining metrics: industrial interests, safety standards, and available data. For each of these sources, appropriate methods have been proposed and used for obtaining the relevant metrics. These methods include GQM-based surveys, a PSM-based procedure, and brainstorming. For the validation, the ISO 26262 standard has been studied to obtain safety-standard-related metrics. Results: A case study in the context of the European project OPENCOSS was carried out to demonstrate the method. In total, 76 metrics were obtained, and they were validated by means of a survey amongst 24 experts from 13 project partners. Conclusion: Metrics for safety assessment can be derived from three sources, and different methods for designing metrics have to be used for each source. The validation shows that most of the metrics are useful for industry.

    Applying architecture preservation core for product line stretching

    The product line engineering approach is receiving broad interest as a way to decrease development cost and time to market and to increase product quality, as software becomes more and more important for companies in all markets. Although a significant amount of research has been done on methods for introducing product line engineering in an organization, these methods are limited when a product line stretches over time. When stretching a product line, the evolution of the product line and its products may require fundamental changes to the software architecture and consequently result in discontinuous evolution. In this paper, we discuss the issues of software architecture with respect to discontinuous evolution and present an economic model, based on the architecture preservation core concept, to inform the product line stretching decision.