11 research outputs found

    Converting Ontologies into DSLs

    This paper presents a project whose main objective is to explore the ontology-based development of Domain-Specific Languages (DSLs), more precisely, of their underlying grammars. After reviewing the basic concepts characterizing ontologies and Domain-Specific Languages, we introduce a tool, Onto2Gra, that exploits the knowledge described by an ontology to automatically generate a grammar for a DSL in which one can talk about the domain described by that ontology. This approach offers a rigorous method to create, in a safe and effective way, a grammar for a new specialized language restricted to a concrete domain. The usual process of creating a grammar from scratch is, like every creative activity, difficult, slow, and error-prone; from a Grammar Engineering point of view, this proposal is therefore of the utmost importance. After the grammar-generation phase, the grammar engineer can manipulate the grammar to add syntactic sugar that improves the final language quality, or even to add semantic actions. The Onto2Gra project is composed of three engines. The main one is OWL2DSL, the component that converts an OWL ontology into an attribute grammar. The two additional modules are Onto2OWL, which converts ontologies written in OntoDL (a lightweight DSL for describing ontologies) into standard OWL, and DDesc2OWL, which converts domain instances written in the DSL generated by OWL2DSL back into the initial OWL ontology.
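The ontology-to-grammar mapping described above can be sketched minimally: each ontology class becomes a production whose right-hand side lists its attributes and outgoing relations. The toy ontology and rule shapes below are illustrative assumptions, not the actual Onto2Gra/OWL2DSL implementation.

```python
# Sketch: derive BNF-style grammar productions from a toy ontology of
# classes, attributes, and relations. Names are illustrative only.

ontology = {
    "classes": {
        "Library": ["name"],
        "Book":    ["title", "year"],
    },
    "relations": [
        # (domain class, relation name, range class)
        ("Library", "holds", "Book"),
    ],
}

def ontology_to_grammar(onto):
    rules = []
    for cls, attrs in onto["classes"].items():
        # one production per class: its attributes plus outgoing relations
        rels = [r for (d, r, _rng) in onto["relations"] if d == cls]
        rhs = " ".join(attrs + rels)
        rules.append(f"{cls} ::= '{cls.lower()}' '{{' {rhs} '}}'")
    for (_d, r, rng) in onto["relations"]:
        # a relation production accepts one or more instances of its range
        rules.append(f"{r} ::= '{r}' {rng}+")
    return "\n".join(rules)

print(ontology_to_grammar(ontology))
```

A sentence of the generated language would then describe a concrete library and its books, which is the "discoursing about the domain" the abstract refers to.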

    Converting ontologies into DSLs

    This paper presents a project whose main objective is to explore the ontology-based development of Domain-Specific Languages (DSLs), more precisely, of their underlying grammars. After reviewing the basic concepts characterizing ontologies and Domain-Specific Languages, we introduce a tool, OWL2Gra, that exploits the knowledge described by an ontology to automatically generate a grammar for a DSL in which one can talk about the domain described by that ontology. This approach offers a rigorous method to create, in a safe and effective way, a grammar for a new specialized language restricted to a concrete domain. The usual process of creating a grammar from scratch is, like every creative activity, difficult, slow, and error-prone; from a Grammar Engineering point of view, this proposal is therefore of the utmost importance. After the grammar-generation phase, the grammar engineer can manipulate the grammar to add syntactic sugar that improves the final language quality, or even to add semantic actions. The OWL2Gra project is composed of three engines. The main one is OWL2DSL, the component that converts an OWL ontology into an attribute grammar. The two additional modules are Onto2OWL and DDesc2OWL. The former, Onto2OWL, converts ontologies written in OntoDL (a lightweight DSL for describing ontologies) into standard OWL XML that can be loaded into the well-known Protégé system for future editing; the latter, DDesc2OWL, converts domain instances written in the DSL generated by OWL2DSL back into the initial OWL ontology. DDesc2OWL plays an important role because it populates the original ontology with concept and relation instances extracted from concrete sentences of the new language, which allows for faster ontology population.

    Constructive-Synthesizing Modelling of Ontological Document Management Support for the Railway Train Speed Restrictions

    Purpose. During the development of railway ontologies, it is necessary to take into account both the data of information systems and their regulatory support in order to check their consistency. To do this, data integration is performed. The purpose of this work is to formalize methods for integrating heterogeneous sources of information and for ontology formation. Methodology. Constructive-synthesizing modelling of ontology formation and its resources was developed. Findings. The formation of ontologies has been formalized, which expands the possibilities for automating the integration and coordination of data using ontologies. In the future, it is planned to extend the structural system for forming ontologies from textual sources of railway regulatory documentation and information systems. Originality. The authors laid the foundations for using constructive-synthesizing modelling in the railway-transport ontological domain to form the structure and data of railway train speed-restriction warning tables (database and CSV format), their transformation into a common tabular format, vocabulary, rules, and ontology individuals, as well as ontology population. Ontology-learning methods have been developed to integrate data from heterogeneous sources. Practical value. The developed methods make it possible to integrate heterogeneous, railway-domain-specific data sources (the structure of the table of railway train management rules, and the form and application for issuing a warning). This allows an ontology, both schema and individuals, to be formed from its data sources (database and CSV formats). Integration and consistency of information-system data and regulatory documentation is one aspect of increasing the level of train traffic safety.
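The CSV-to-individuals step described above can be sketched as a small transformation from table rows to subject-predicate-object triples. The column names, the `ex:` prefix, and the individual-naming scheme are illustrative assumptions, not the paper's actual vocabulary.

```python
# Sketch: turn rows of a speed-restriction table (CSV format) into
# ontology individuals expressed as subject-predicate-object triples.
import csv
import io

RAW = """section,km_from,km_to,speed_kmh
Kyiv-Fastiv,12.0,14.5,40
Fastiv-Kozyatyn,3.2,4.0,25
"""

def rows_to_triples(raw):
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(raw))):
        ind = f"ex:Restriction_{i}"
        # each row becomes one typed individual...
        triples.append((ind, "rdf:type", "ex:SpeedRestriction"))
        # ...with one data property per column
        for col, val in row.items():
            triples.append((ind, f"ex:{col}", val))
    return triples

triples = rows_to_triples(RAW)
print(len(triples))  # 2 individuals x (1 type + 4 attributes) = 10
```

In a full pipeline these triples would be checked against the ontology schema derived from the regulatory documentation, which is where the consistency checking mentioned above comes in.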

    An Ontological Framework for Opportunistic Composition of IoT Systems

    As the number of connected devices rapidly increases, largely thanks to the uptake of IoT technologies, there is significant stimulus to enable opportunistic interactions between different systems that encounter each other at run time. However, this is complicated by the diversity of IoT technologies and by implementation details that are not known in advance. To achieve such unplanned interactions, we use the concept of a holon to represent a system's services and requirements at a high level. A holon is a self-describing system that appears as a whole when viewed from above whilst potentially comprising multiple sub-systems when viewed from below. In order to realise this world view and facilitate opportunistic system interactions, we propose the idea of using ontologies to define and program a holon. Ontologies offer the ability to classify the concepts of a domain and to use this formalised knowledge to infer new knowledge through reasoning. In this paper, we design a holon ontology and associated code-generation tools. We also explore a case study of how programming holons using this approach can help an IoT system self-describe and reason about other systems it encounters. As such, developers can write system-composition logic at a high level without any preconceived notions about low-level implementation details. © 2020 IEEE
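The holon idea above can be illustrated with a minimal sketch: a system advertises what it provides and requires, two holons compose when one satisfies the other's requirements, and the composite again appears as a single whole from above. The class and the matching rule are assumptions for illustration, not the paper's ontology or tooling.

```python
# Sketch of a self-describing holon: provided/required capabilities,
# a simple matching rule, and composition into a higher-level holon.
from dataclasses import dataclass, field

@dataclass
class Holon:
    name: str
    provides: set = field(default_factory=set)
    requires: set = field(default_factory=set)
    parts: list = field(default_factory=list)  # sub-holons, hidden from above

    def can_serve(self, other):
        # a holon can serve another if it provides something the other requires
        return bool(self.provides & other.requires)

sensor = Holon("TempSensor", provides={"temperature"})
hvac = Holon("HVAC", requires={"temperature"}, provides={"climate-control"})

# composing them yields a new holon that appears as a whole from above;
# requirements met internally by a part no longer leak out
room = Holon("RoomClimate",
             provides=sensor.provides | hvac.provides,
             requires=hvac.requires - sensor.provides,
             parts=[sensor, hvac])
print(room.requires)  # set() — the temperature requirement is satisfied internally
```

In the paper's setting the capability sets would be ontology concepts, so the matching step could use reasoning (e.g. subsumption) rather than plain set intersection.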

    Describing Digital IT Consulting Services: The DITCOS Ontology Proposal and its Evaluation

    The digital transformation of the consulting sector has recently gained momentum due to the Covid-19 pandemic. In particular, the areas of financial and insurance services are receiving strong attention from digitization researchers. The field of IT consulting itself, however, has so far evaded the attention of scientists. Moreover, despite the heavy use of digital technologies such as online conferencing and digital collaboration, the actual consulting process itself has hardly changed. This indicates the weakness of IT consulting as a field in establishing true digital business models and consulting-service delivery processes. The present paper makes a twofold contribution to the digitization of the IT consulting domain. First, it introduces the DITCOS-O ontology for the semantic description of digital IT consulting services. Second, the DITCOS-DN description notation is derived from DITCOS-O as a new approach to the ontology-based definition of domain-specific languages. DITCOS-DN is then used to describe different real-world services. The result is an analysis, carried out with the help of IT consulting practitioners, of the coverage of real-world services and the comprehensibility of their digitally described service-model representations.

    From an Ontology for Programming to a Type-Safe Template Language

    The demand to develop more applications in a faster way has been increasing over the years. Even non-experienced developers are entering the market thanks to low-code platforms such as OutSystems. The main goal of the GOLEM project is the development of the next generation of low-code software development, aiming to automate programming and make the OutSystems platform easier to use. This work is integrated into the GOLEM project and focuses on 1) designing an ontology that will be used to capture concepts from a user dialogue; 2) formalizing a template language; and 3) producing a reference implementation for OSTRICH. The ontology concepts must be representative enough to allow the generation of an application. A domain-specific language (DSL) produced in the scope of the GOLEM project will analyse the captured concepts, generating a set of operations that incrementally build and modify the target application. Because some of those application components are common patterns, they can be pre-assembled into templates to be reused later. OSTRICH, a type-safe template language for the OutSystems platform, allows for the definition and instantiation of type-safe templates while ensuring a clear separation between compile-time and runtime computations. We formalize this two-stage language, defining its syntax, type system, and operational semantics. We also produce a reference implementation and introduce new features: parametric polymorphism and a simplified form of type dependency. These features enable instantiating the ten most commonly used OutSystems templates, which account for more than half of all template instantiations in the platform. These templates ease and speed up the development process, reducing the knowledge required to build OutSystems applications.
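The type-safe template instantiation described above can be sketched in miniature: a template declares the kinds of its parameters, and instantiation is rejected at expansion (compile) time when an argument does not fit. The template table, parameter kinds, and output shape below are illustrative assumptions, not OSTRICH's actual syntax or type system.

```python
# Sketch: templates declare typed parameters; instantiation checks the
# arguments before expanding, so ill-typed uses fail at "compile time".

TEMPLATES = {
    "ListScreen": {
        "params": {"entity": "Type", "title": "Text"},
        "body": "screen {title} {{ table of {entity} }}",
    },
}

KNOWN_TYPES = {"Customer", "Order"}  # the application's entity types

def instantiate(name, **args):
    tpl = TEMPLATES[name]
    # type-check the instantiation before any expansion happens
    for param, kind in tpl["params"].items():
        if param not in args:
            raise TypeError(f"missing template argument: {param}")
        if kind == "Type" and args[param] not in KNOWN_TYPES:
            raise TypeError(f"{args[param]!r} is not a known type")
    # expansion only runs for well-typed instantiations
    return tpl["body"].format(**args)

print(instantiate("ListScreen", entity="Customer", title="Customers"))
# instantiate("ListScreen", entity="Invoice", title="x") raises TypeError
```

The separation the abstract mentions corresponds here to the check-then-expand split: the type check belongs to the first (compile-time) stage, and only the expanded body participates in the second (runtime) stage.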

    Formal description and automatic generation of learning spaces based on ontologies

    Doctoral thesis in Informatics. A good Learning Space (LS) should convey pertinent information to visitors at the most adequate time and location to favour their knowledge acquisition. This statement justifies the relevance of virtual Learning Spaces. Considering the consolidation of the Internet and the improvement of interaction, searching, and learning mechanisms, this work proposes a generic architecture, called CaVa, to create virtual Learning Spaces built upon cultural-institution documents. More precisely, the proposal is to automatically generate ontology-based virtual learning environments from document repositories. Thus, to impart relevant learning materials to the virtual LS, this proposal is based on using ontologies to represent the fundamental concepts and semantic relations in a user- and machine-understandable format. These concepts, together with the data (extracted from the real documents) stored in a digital repository, are displayed in a web-based LS that enables visitors to use the available features and tools to learn about a specific domain. According to the approach discussed here, each desired virtual LS must be specified rigorously through a Domain-Specific Language (DSL), called CaVaDSL, designed and implemented in this work. Furthermore, a set of processors (generators) was developed. These generators receive a CaVaDSL specification as input and transform it into several web scripts to be recognized and rendered by a web browser, producing the final virtual LS. Aiming to validate the proposed architecture, three real case studies were used: (1) emigration documents belonging to Fafe's Archive; (2) the prosopographical repository of the Fasti Ecclesiae Portugaliae project; and (3) the collection of life stories of the Museum of the Person. These real scenarios are relevant as they promote the digital preservation and dissemination of Cultural Heritage, contributing to human welfare.

    Design Approach to Unified Service API Modeling for Semantic Interoperability of Cross-enterprise Vehicle Applications

    This work was partially supported by the Ministry of Education, Youth and Sports of the Czech Republic under university-specific research project SGS-2019-018, "Processing of heterogeneous data and its specialized applications".

    Framework for Composition of Domain Specific Languages and the Effect of Composition on Re-use of Translation Rules

    DSLs are programming languages that have been designed to solve problems in a specific domain. They provide constructs that are high-level and domain-specific, making it easier to implement solutions in the given domain. They frequently also limit the language to the domain, avoiding general-purpose constructs. One of the main reasons for using a DSL is to reduce the amount of work required to implement new programs. For the use of DSLs to be feasible, the cost of developing a new DSL for a domain has to be less than the total cost saved by having the DSL. Thus, reducing the cost of developing new DSLs means that introducing DSLs becomes feasible in more situations. One way of reducing costs is to use composition techniques, where new languages are created from existing ones. This includes defining new language constructs in terms of existing ones, combining the constructs of one or more existing languages, and redefining existing constructs. We present a framework for composing languages at the abstract level and discuss to what degree one can ensure that languages produced by composition are valid. In particular, we look at how translation rules for translating from a composed language to a GPL are affected by the composition; that is, to what degree a language composed from other languages can reuse the translation rules of the languages it is composed from. We use a patience game suite as a case study to show how our composition techniques can be used, and to demonstrate the shortcomings of the techniques. We also show how a tool for composing languages can be created using DSLs produced by composition. The implementations are all in Java.
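The translation-rule reuse question above can be sketched with rule tables: each small language maps its constructs to GPL snippets, and a composed language merges the tables, so translations of inherited constructs come for free while redefined constructs replace their rules. The construct names and output strings are illustrative assumptions, not the framework's actual rules (and the sketch is in Python, whereas the paper's implementations are in Java).

```python
# Sketch: per-language translation-rule tables, merged by composition.
# A rule maps a DSL construct to a target-GPL code snippet.

cards_lang = {
    "deal":    lambda n: f"deck.deal({n})",
    "shuffle": lambda: "random.shuffle(deck)",
}
score_lang = {
    "score": lambda player: f"scores[{player!r}] += 1",
}

def compose(*langs, redefine=None):
    rules = {}
    for lang in langs:          # later languages may override earlier ones
        rules.update(lang)
    rules.update(redefine or {})  # explicit redefinitions win last
    return rules

patience = compose(cards_lang, score_lang,
                   redefine={"deal": lambda n: f"deck.deal_face_up({n})"})

print(patience["shuffle"]())  # reused unchanged from cards_lang
print(patience["deal"](3))    # redefined in the composed language
```

The interesting cases in the paper are exactly the ones this sketch glosses over: when a redefined construct is referenced by the translation rules of other constructs, those rules may no longer be reusable as-is.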