A Unified Format for Language Documents
We have analyzed a substantial number of language documentation
artifacts, including language standards, language specifications,
language reference manuals, and internal documents of
standardization bodies. We have reverse-engineered their intended
internal structure and compared the results. The Language Document
Format (LDF) was developed specifically to support the
documentation domain. We have also integrated LDF into an
engineering discipline for language documents, including tool
support for rendering language documents, extracting
grammars and samples, and migrating existing documents to LDF. The
definition of LDF, tool support for LDF, and LDF applications are
freely available through SourceForge.
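The document model and extraction tooling described above can be pictured with a small sketch: a language document as a tree of sections whose parts are typed (prose, grammar fragments, samples), from which grammars and samples can be collected. All names and structures below are illustrative assumptions, not the actual LDF schema.

```python
# Hypothetical sketch of an LDF-like document model. The class names and
# part kinds ("text", "grammar", "sample") are assumptions for
# illustration, not taken from the LDF definition.
from dataclasses import dataclass, field

@dataclass
class Part:
    kind: str      # e.g. "text", "grammar", "sample"
    content: str

@dataclass
class Section:
    title: str
    parts: list = field(default_factory=list)

@dataclass
class Document:
    title: str
    sections: list = field(default_factory=list)

def extract(doc, kind):
    """Collect the content of all parts of a given kind, in document order."""
    return [p.content
            for s in doc.sections
            for p in s.parts
            if p.kind == kind]

doc = Document("Toy Language Manual", [
    Section("Expressions", [
        Part("text", "An expression is a sum of terms."),
        Part("grammar", "expr ::= term {'+' term}"),
        Part("sample", "1 + 2 + 3"),
    ]),
])

print(extract(doc, "grammar"))
print(extract(doc, "sample"))
```

Keeping grammar fragments and samples as typed parts of one document tree is what makes operations like "extract the full grammar" or "check all samples against the grammar" simple traversals rather than ad hoc text scraping.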
3rd international software language engineering conference (SLE) : pre-proceedings, October 12-13, 2010, Eindhoven, the Netherlands
We are pleased to present the proceedings of the Third International Conference on Software Language Engineering (SLE 2010). The conference will be held in Eindhoven, the Netherlands during October 12-13, 2010 and will be co-located with The Ninth International Conference on Generative Programming and Component Engineering (GPCE'10), and The Workshop on Feature-Oriented Software Development (FOSD).

An important goal of SLE is to integrate the different sub-communities of the software-language-engineering community to foster cross-fertilization and strengthen research overall. The Doctoral Symposium at SLE 2010 contributes towards these goals by providing a forum for both early and late-stage PhD students to present their research and get detailed feedback and advice from other researchers.

The SLE conference series is devoted to a wide range of topics related to artificial languages in software engineering. SLE is an international research forum that brings together researchers and practitioners from both industry and academia to expand the frontiers of software language engineering. SLE's foremost mission is to encourage and organize communication between communities that have traditionally looked at software languages from different, more specialized, and yet complementary perspectives. SLE emphasizes the fundamental notion of languages as opposed to any realization in specific technical spaces. In this context, the term "software language" comprises all sorts of artificial languages used in software development including general-purpose programming languages, domain-specific languages, modeling and meta-modeling languages, data models, and ontologies. Software language engineering is the application of a systematic, disciplined, quantifiable approach to the development, use, and maintenance of these languages.
The SLE conference is concerned with all phases of the lifecycle of software languages; these include the design, implementation, documentation, testing, deployment, evolution, recovery, and retirement of languages. Of special interest are tools, techniques, methods, and formalisms that support these activities. In particular, tools are often based on, or automatically generated from, a formal description of the language. Hence, the treatment of language descriptions as software artifacts, akin to programs, is of particular interest - while noting the special status of language descriptions, and the tailored engineering principles and methods for modularization, refactoring, refinement, composition, versioning, co-evolution, and analysis that can be applied to them.

The response to the call for papers for SLE 2010 was very enthusiastic. We received 79 full submissions from 108 initial abstract submissions. From these submissions, the Program Committee (PC) selected 25 papers: 17 full papers, five short papers, and two tool demonstration papers, resulting in an acceptance rate of 32%. To ensure the quality of the accepted papers, each submitted paper was reviewed by at least three PC members. Each paper was discussed in detail during the electronic PC meeting. A summary of this discussion was prepared by members of the PC and provided to the authors along with the reviews.
MediaWiki Grammar Recovery
The paper describes in detail the recovery effort for one of the official
MediaWiki grammars. Over two hundred grammar transformation steps are reported
and annotated, leading to the delivery of a level 2 grammar, semi-automatically
extracted from a community-created semi-formal text that uses at least five
different syntactic notations, several non-enforced naming conventions,
multiple misspellings, idiosyncrasies of obsolete parsing technology, and other
problems commonly encountered in grammars that were not engineered properly.
Having a quality grammar will allow it to be tested and validated further, without
alienating the community with a separately developed grammar.

Comment: 47 pages
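Grammar transformation steps of the kind counted above are typically small, mechanical, and individually traceable. A minimal sketch of one such step, renaming a nonterminal consistently throughout a grammar text (the grammar fragment and names here are invented for illustration, not taken from the MediaWiki grammar):

```python
import re

def rename_nonterminal(grammar, old, new):
    """One transformation step: rename a nonterminal everywhere in the
    grammar text, matching whole words only so that e.g. renaming 'Expr'
    does not corrupt a longer name containing it."""
    return re.sub(rf"\b{re.escape(old)}\b", new, grammar)

# An invented fragment in one of the many ad hoc notations such
# recovery efforts have to cope with.
raw = """Expr : Term | Expr "+" Term ;
Term : NUMBER ;"""

fixed = rename_nonterminal(raw, "Expr", "expression")
print(fixed)
```

Applying a recovery as an explicit, ordered sequence of such steps (rather than hand-editing the result) is what makes the effort reportable and repeatable when the source text changes.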
Proceedings of the First International Workshop on Bidirectional Transformations (BX 2012)
Language Evolution, Metasyntactically
Abstract: Existing syntactic definitions employ many different notations (usually dialects of EBNF) with slight deviations among them, which prevents efficient automated processing. When changes to such a notation are required, either due to maintenance activities such as correction or evolution, or because a grammar collection is written in a different notation than the one required by the grammarware toolkit, we speak of metalanguage evolution: a special language evolution scenario in which the language itself does not necessarily evolve, but the notation in which it is written does. Notational changes need to be propagated to different levels: to parsers that used to work with the old notation, to grammars of those notations that served as explanatory material, and finally to the existing grammarbase. The solution proposed in this paper relies on composing a notation specification and expressing notation changes as transformations of that specification. These transformation steps are coupled to changes in the notation grammar (i.e., the grammar for grammars) and to changes in other grammars written in the original notation. This paper explains the general setup of such an infrastructure, with links to a prototypical implementation of the solution.
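The core idea, treating the notation itself as a small specification that can be transformed, can be sketched as follows. The `Notation` fields and the rule representation are assumptions made for this illustration, not the paper's actual metamodel.

```python
# A minimal sketch of notation-parametrized grammar rendering: the EBNF
# dialect is itself a tiny specification, so a "metalanguage evolution"
# step becomes a change to that specification rather than a hand edit of
# every grammar. Field names are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Notation:
    defining: str    # symbol between a nonterminal and its definition
    separator: str   # symbol between alternatives
    terminator: str  # symbol ending a production

def render(rules, n):
    """Serialize rules {lhs: [alternative, ...]} under a given notation."""
    sep = f" {n.separator} "
    return "\n".join(
        f"{lhs} {n.defining} {sep.join(alts)} {n.terminator}"
        for lhs, alts in rules.items())

rules = {"expr": ["term", "expr '+' term"]}

iso = Notation(defining="=", separator="|", terminator=";")
bnf = replace(iso, defining="::=")   # one notational change, as data

print(render(rules, iso))  # expr = term | expr '+' term ;
print(render(rules, bnf))  # expr ::= term | expr '+' term ;
```

Because the change is expressed on the notation specification, the same transformation can in principle be coupled to the grammar for grammars and replayed over every grammar written in the old notation, which is the propagation problem the abstract describes.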