
    Open Programming Language Interpreters

    Context: This paper presents the concept of open programming language interpreters and the implementation of a framework-level metaobject protocol (MOP) to support them. Inquiry: We address the problem of dynamic interpreter adaptation: tailoring the interpreter's behavior to the task to be solved and introducing new features to fulfill unforeseen requirements. Many languages provide a MOP that supports reflection to some degree. However, MOPs are typically language-specific, their reflective functionality is often restricted, and adaptation logic is often mixed with application logic, which makes the source code harder to understand and maintain. Our system overcomes these limitations. Approach: We designed and implemented a system to support open programming language interpreters. The prototype implementation is integrated into the Neverlang framework. The system exposes the structure, behavior and runtime state of any Neverlang-based interpreter, together with the ability to modify them. Knowledge: Our system provides complete control over an interpreter's structure, behavior and runtime state. The approach is applicable to every Neverlang-based interpreter, and adaptation code can potentially be reused across different language implementations. Grounding: With a prototype implementation in hand, we focused on a feasibility evaluation. The paper shows that our approach addresses problems commonly found in the research literature, and a demonstrative video and examples illustrate it on dynamic software adaptation, aspect-oriented programming, debugging and context-aware interpreters. Importance: To our knowledge, this paper presents the first reflective approach targeting a general framework for language development. Our system provides full reflective support, for free, to any Neverlang-based interpreter, and we are not aware of any prior application of open implementations to programming language interpreters in the sense defined in this paper. Rather than substituting other approaches, we believe our system can be used as a complementary technique in situations where other approaches present serious limitations.
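
    To make the idea of an open interpreter concrete, the following Python sketch shows an interpreter whose evaluation rules are first-class and can be wrapped or replaced at run time, keeping adaptation code separate from the base language logic. The class and method names are illustrative assumptions, not the actual Neverlang MOP.

        # Hypothetical, minimal sketch of an "open" interpreter: its evaluation
        # rules are exposed as first-class objects that adaptation code can
        # replace or wrap reflectively, without touching the base definitions.

        class OpenInterpreter:
            def __init__(self):
                self.rules = {}  # node type -> evaluation function (exposed structure/behavior)

            def define(self, node_type, rule):
                self.rules[node_type] = rule

            def around(self, node_type, wrapper):
                """Reflectively wrap an existing rule, e.g. for tracing or debugging."""
                inner = self.rules[node_type]
                self.rules[node_type] = lambda interp, node: wrapper(inner, interp, node)

            def eval(self, node):
                return self.rules[node[0]](self, node)

        interp = OpenInterpreter()
        interp.define("num", lambda i, n: n[1])
        interp.define("add", lambda i, n: i.eval(n[1]) + i.eval(n[2]))

        # Adaptation installed at run time, separate from the base rules.
        def trace_add(inner, interp, node):
            result = inner(interp, node)
            print("add ->", result)
            return result

        interp.around("add", trace_add)
        print(interp.eval(("add", ("num", 2), ("num", 3))))  # prints "add -> 5", then 5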

    Application of shape grammar theory to underground rail station design and passenger evacuation

    This paper outlines the development of a computer design environment that generates station ‘reference’ plans for analysis by designers at the project feasibility stage. The program uses the theoretical concept of shape grammar, based upon principles of recognition and replacement of a particular shape, to enable the generation of station layouts. The novel shape grammar rules produce multiple plans of accurately sized infrastructure faster than traditional means. A finite set of station infrastructure elements and a finite set of connection possibilities between them, directed by regulations and the logical processes of station usage, allows increasingly complex composite shapes to be produced automatically, some of which are credible station layouts at ‘reference’ block plan level. The proposed method of generating shape grammar plans is aligned to London Underground standards, in particular the Station Planning Standards and Guidelines 5th edition (SPSG5 2007) and the BS 7974 fire safety engineering process. Quantitative testing is carried out with existing evacuation modelling software. The prototype system, named SGEvac, has both the scope and the potential to be adapted to any other country’s design legislation.
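
    The recognise-and-replace cycle at the heart of a shape grammar can be illustrated with the following Python sketch. Shapes are reduced to labels and the two rules are invented for illustration; SGEvac's actual rules carry accurately sized geometry aligned to SPSG5, which this toy example omits.

        # Simplified recognise-and-replace cycle of a shape grammar.
        # Labels and rules are illustrative assumptions only.

        import itertools

        RULES = {
            # label to recognise -> possible replacements (one application = one choice)
            "STATION": [["ENTRANCE", "CIRCULATION", "PLATFORM"]],
            "CIRCULATION": [["TICKET_HALL", "ESCALATOR"], ["TICKET_HALL", "STAIR", "STAIR"]],
        }

        def expand(layout):
            """Apply the first applicable rule in every possible way."""
            for i, shape in enumerate(layout):
                if shape in RULES:
                    for replacement in RULES[shape]:
                        yield layout[:i] + replacement + layout[i + 1:]
                    return
            yield layout  # no rule applies: a candidate 'reference' block plan

        def generate(seed, depth=3):
            layouts = [seed]
            for _ in range(depth):
                layouts = list(itertools.chain.from_iterable(expand(l) for l in layouts))
            return layouts

        for plan in generate(["STATION"]):
            print(plan)  # each printed list is one automatically produced layout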

    Model-driven engineering approach to design and implementation of robot control system

    In this paper we apply a model-driven engineering approach to designing domain-specific solutions for robot control system development. We present a case study of the complete process, including identification of the domain meta-model, definition of a graphical notation and source code generation for the subsumption architecture -- a well-known example of robot control architecture. Our goal is to show that both the definition of the robot-control architecture and its supporting tools fit well into the typical workflow of model-driven engineering development. Comment: Presented at DSLRob 2011 (arXiv:cs/1212.3308).
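
    The following Python sketch illustrates the kind of controller a generator for the subsumption architecture would emit: behaviour layers are tried in priority order and the first layer that fires suppresses the ones below it. The layer names and sensor threshold are assumptions made for illustration, not output of the paper's tool chain.

        # Hand-written illustration of a subsumption-architecture controller:
        # layers in priority order, higher layers subsuming lower ones.

        def avoid_obstacle(sensors):
            if sensors["front_distance"] < 0.3:      # metres; threshold is an assumption
                return {"left": -0.5, "right": 0.5}  # turn away from the obstacle
            return None                              # layer does not fire

        def wander(sensors):
            return {"left": 0.4, "right": 0.4}       # default behaviour: drive forward

        LAYERS = [avoid_obstacle, wander]            # highest priority first

        def control_step(sensors):
            for layer in LAYERS:
                command = layer(sensors)
                if command is not None:              # first firing layer subsumes the rest
                    return command

        print(control_step({"front_distance": 1.0}))  # {'left': 0.4, 'right': 0.4}
        print(control_step({"front_distance": 0.1}))  # {'left': -0.5, 'right': 0.5}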

    An automated model based approach to mobile UI specification and development

    One of the problems of current software development lies in properly addressing code portability across the increasing number of platforms. Building abstract models is an efficient and correct way to achieve this. Model-Driven Software Engineering (MDSE) is a development methodology in which models are key throughout the project lifecycle, from requirements gathering, through modelling, to the development stage, as well as in testing. Pervasive computing demands the use of several technical specifications, such as wireless connections, advanced electronics, and the Internet, and it stresses the need to adjust the user interface layer to each one of the platforms. Using a model-driven approach, it is possible to reuse software solutions across different targets, since models are not affected by device diversity and its evolution. This paper reports on a tool which is highly parameterizable and designed to support Model-2-Model and Model-2-Code transformations. Instead of relying on a predefined technology, the tool was built to be scalable and extensible to many different targets, and it also addresses the generation of the user interface layer. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of project UID/EEA/50014/2013.
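
    A toy Python sketch of a model-to-code transformation is given below: a single platform-independent UI model is rendered to two different targets. The model schema and the two generators are assumptions made for illustration; the actual tool is template-driven and extensible to further targets.

        # Toy model-to-code transformation: one abstract UI model, two targets.
        # The schema and output formats are illustrative assumptions.

        UI_MODEL = {
            "screen": "Login",
            "fields": [
                {"id": "username", "label": "User name"},
                {"id": "password", "label": "Password"},
            ],
        }

        def to_html(model):
            rows = [f'<label>{w["label"]}</label><input id="{w["id"]}"/>'
                    for w in model["fields"]]
            return f"<form><h1>{model['screen']}</h1>{''.join(rows)}</form>"

        def to_android_xml(model):
            rows = [f'<EditText android:id="@+id/{w["id"]}" android:hint="{w["label"]}"/>'
                    for w in model["fields"]]
            return f"<LinearLayout>{''.join(rows)}</LinearLayout>"

        # The same platform-independent model drives both target renderings.
        print(to_html(UI_MODEL))
        print(to_android_xml(UI_MODEL))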

    Grammar Sharing Techniques for Rule-Based Multilingual NLP Systems

    Proceedings of the 16th Nordic Conference of Computational Linguistics NODALIDA-2007. Editors: Joakim Nivre, Heiki-Jaan Kaalep, Kadri Muischnek and Mare Koit. University of Tartu, Tartu, 2007. ISBN 978-9985-4-0513-0 (online) ISBN 978-9985-4-0514-7 (CD-ROM) pp. 253-260

    Model-Driven Development of Complex and Data-Intensive Integration Processes

    Due to the changing scope of data management from centrally stored data towards distributed and heterogeneous systems, integration takes place on different levels. The lack of standards for information integration as well as application integration has resulted in a large number of different integration models and proprietary solutions. With the aim of a high degree of portability and a reduction of development effort, model-driven development, following the Model-Driven Architecture (MDA), is advantageous in this context as well. Hence, in the GCIP project (Generation of Complex Integration Processes), we focus on the model-driven generation and optimization of integration tasks using a process-based approach. In this paper, we contribute detailed generation aspects and finally discuss open issues and further challenges.
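
    As a rough illustration of model-driven optimization of an integration process, the Python sketch below represents the process as a platform-independent list of operators and applies one rewrite rule before code generation. The operators and the rule are illustrative assumptions, not the GCIP rule set.

        # Integration process as a platform-independent operator list, rewritten
        # before code generation. Operators and the rule are assumptions.

        process = [
            ("EXTRACT", "orders_db"),
            ("JOIN", "customers_db"),
            ("FILTER", "order_date >= '2020-01-01'"),  # touches only the pre-join input
            ("LOAD", "warehouse"),
        ]

        def push_filter_before_join(steps):
            """Rewrite rule: evaluate selective filters before expensive joins."""
            steps = list(steps)
            for i in range(len(steps) - 1):
                if steps[i][0] == "JOIN" and steps[i + 1][0] == "FILTER":
                    steps[i], steps[i + 1] = steps[i + 1], steps[i]
            return steps

        for step in push_filter_before_join(process):
            print(step)  # FILTER now precedes JOIN in the optimized plan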

    Complexity of Lexical Descriptions and its Relevance to Partial Parsing

    In this dissertation, we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. Our thesis is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. However, increasing the complexity of descriptions makes the number of different descriptions for each lexical item much larger and hence increases the local ambiguity for a parser. This local ambiguity can be resolved by using supertag co-occurrence statistics collected from parsed corpora. We have explored these ideas in the context of the Lexicalized Tree-Adjoining Grammar (LTAG) framework, wherein supertag disambiguation provides a representation that is an almost parse. We have used the disambiguated supertag sequence in conjunction with a lightweight dependency analyzer to compute noun groups, verb groups, dependency linkages and even partial parses. We have shown that a trigram-based supertagger achieves an accuracy of 92.1% on Wall Street Journal (WSJ) texts. Furthermore, we have shown that the lightweight dependency analysis on the output of the supertagger identifies 83% of the dependency links accurately. We have exploited the representation of supertags with Explanation-Based Learning to improve parsing efficiency. In this approach, parsing in limited domains can be modeled as a Finite-State Transduction. We have implemented such a system for the ATIS domain which improves parsing efficiency by a factor of 15. We have used the supertagger in a variety of applications to provide lexical descriptions at an appropriate granularity. In an information retrieval application, we show that the supertag-based system performs at higher levels of precision compared to a system based on part-of-speech tags. In an information extraction task, supertags are used in specifying extraction patterns. For language modeling applications, we view supertags as syntactically motivated class labels in a class-based language model. The distinction between recursive and non-recursive supertags is exploited in a sentence simplification application.
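
    The following Python sketch illustrates statistical supertag disambiguation in miniature, reduced to a bigram Viterbi pass for brevity. The supertag inventory and probabilities are toy values; a real supertagger estimates trigram statistics from parsed corpora such as the WSJ.

        # Toy supertag disambiguation: pick the best tag sequence under lexical and
        # transition log-probabilities (bigram Viterbi; values are invented).

        import math

        CANDIDATES = {   # word -> possible supertags with lexical log-probabilities
            "the":   {"DET": 0.0},
            "price": {"NP_head": math.log(0.7), "NOUN_mod": math.log(0.3)},
            "rose":  {"VP_intrans": math.log(0.6), "NOUN_mod": math.log(0.4)},
        }
        TRANSITION = {   # (previous tag, tag) -> log-probability (toy, smoothed below)
            ("DET", "NP_head"): math.log(0.8), ("DET", "NOUN_mod"): math.log(0.2),
            ("NP_head", "VP_intrans"): math.log(0.7), ("NP_head", "NOUN_mod"): math.log(0.3),
            ("NOUN_mod", "VP_intrans"): math.log(0.5), ("NOUN_mod", "NOUN_mod"): math.log(0.5),
        }

        def disambiguate(words):
            best = {tag: (lp, [tag]) for tag, lp in CANDIDATES[words[0]].items()}
            for word in words[1:]:
                nxt = {}
                for tag, lex_lp in CANDIDATES[word].items():
                    score, path = max(
                        (prev_score + TRANSITION.get((prev, tag), math.log(1e-6)) + lex_lp,
                         prev_path + [tag])
                        for prev, (prev_score, prev_path) in best.items())
                    nxt[tag] = (score, path)
                best = nxt
            return max(best.values())[1]

        print(disambiguate(["the", "price", "rose"]))  # ['DET', 'NP_head', 'VP_intrans']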