324 research outputs found

    An approach to build JSON-based Domain Specific Languages solutions for web applications

    Because of their level of abstraction, Domain-Specific Languages (DSLs) enable building applications that ease software implementation. In the context of web applications, many technologies and programming languages for server-side applications provide fast, robust, and flexible solutions, whereas those for client-side applications are limited and mostly restricted to the direct use of JavaScript, HTML5, CSS3, JSON and XML. This article presents a novel approach to creating DSL-based web applications using a JSON grammar (JSON-DSL) for both the server and the client side. The approach includes an evaluation engine, a programming model and an integrated web development environment that supports them. The evaluation engine executes the elements created with the programming model. The programming model, for its part, allows the definition and specification of JSON-DSLs, the implementation of JavaScript components, the use of JavaScript templates provided by the engine, the use of link connectors to heterogeneous information sources, and integration with other widgets, web components and JavaScript frameworks. To validate the strength and capacity of our approach, we have developed four case studies that use the integrated web development environment to apply the programming model and check the results within the evaluation engine.
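    The core idea — a JSON document that specifies a client-side widget plus a connector to a server-side source, interpreted by an evaluation engine — can be sketched as follows. The `spec` layout, the `evaluate` function and the stub connector are hypothetical illustrations, not the paper's actual programming model.

```python
import json

# A hypothetical JSON-DSL specification: a declarative description of a
# client-side widget bound to a server-side data source.
spec_text = """
{
  "widget": "table",
  "title": "Orders",
  "source": {"connector": "rest", "url": "/api/orders"},
  "columns": ["id", "customer", "total"]
}
"""

def evaluate(spec, fetch):
    """Toy evaluation engine: interpret the JSON spec and render the
    described widget as an HTML fragment."""
    rows = fetch(spec["source"])  # resolve the link connector
    header = "".join(f"<th>{c}</th>" for c in spec["columns"])
    body = "".join(
        "<tr>" + "".join(f"<td>{row[c]}</td>" for c in spec["columns"]) + "</tr>"
        for row in rows
    )
    return f"<table><caption>{spec['title']}</caption><tr>{header}</tr>{body}</table>"

# A stub connector standing in for a real heterogeneous information source.
def fake_fetch(source):
    assert source["connector"] == "rest"
    return [{"id": 1, "customer": "Ada", "total": 9.5}]

html = evaluate(json.loads(spec_text), fake_fetch)
```

    In a real engine the same specification would drive both sides: the server resolves the connector, while the client renders the widget.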

    Projectional Editors for JSON-Based DSLs

    Augmenting text-based programming with rich structured interactions has been explored in many ways. Among these, projectional editors offer an enticing combination of structure editing and domain-specific program visualization. Yet such tools are typically bespoke and expensive to produce, leaving them inaccessible to many DSL and application designers. We describe a relatively inexpensive way to build rich projectional editors for a large class of DSLs -- namely, those defined using JSON. Given any such JSON-based DSL, we derive a projectional editor through (i) a language-agnostic mapping from JSON Schemas to structure-editor GUIs and (ii) an API for application designers to implement custom views for the domain-specific types described in a schema. We implement these ideas in a prototype, Prong, which we illustrate with several examples including the Vega and Vega-Lite data visualization DSLs.
    Comment: To appear at VL/HCC 202
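    Step (i), the language-agnostic mapping from a JSON Schema to structure-editor widgets, with step (ii)'s custom-view override, can be sketched roughly as below. The widget names and the `derive_editor` function are invented for illustration; Prong's real API differs.

```python
# Map JSON Schema type keywords to hypothetical structure-editor widgets.
WIDGETS = {"string": "text-input", "number": "number-input",
           "boolean": "checkbox", "array": "list-editor", "object": "form"}

def derive_editor(schema, custom_views=None):
    """Recursively derive a GUI description from a JSON Schema node.
    `custom_views` lets an application designer override the widget for a
    named domain-specific type (cf. Prong's custom-view API)."""
    custom_views = custom_views or {}
    name = schema.get("$id") or schema.get("title")
    if name in custom_views:
        return {"widget": custom_views[name]}
    t = schema.get("type", "object")
    node = {"widget": WIDGETS[t]}
    if t == "object":
        node["fields"] = {k: derive_editor(v, custom_views)
                          for k, v in schema.get("properties", {}).items()}
    elif t == "array":
        node["item"] = derive_editor(schema.get("items", {}), custom_views)
    return node

# A fragment loosely reminiscent of a Vega-Lite spec's schema.
schema = {"type": "object", "properties": {
    "mark": {"title": "Mark", "type": "string"},
    "width": {"type": "number"}}}

gui = derive_editor(schema, custom_views={"Mark": "mark-picker"})
```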

    Deployment and Operation of Complex Software in Heterogeneous Execution Environments

    This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in automating deployment and operation steps, these activities still require specific know-how and skills that average teams often lack. The SODALITE framework tackles this problem by offering modelling and smart editing features that allow those we call Application Ops Experts to work without knowing the low-level details of the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.
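    The kind of deployment model SODALITE abstracts over, together with a model-quality check, can be sketched as a small dict plus a validator. The node layout and the check below are simplified illustrations, not SODALITE's actual (TOSCA-style) blueprint format.

```python
# A simplified deployment model: application nodes, the heterogeneous
# hosts they should land on, and their dependencies.
model = {
    "nodes": {
        "db": {"type": "Container", "host": "cloud-vm"},
        "solver": {"type": "Container", "host": "hpc-cluster", "needs": ["db"]},
    },
    "hosts": {"cloud-vm": "cloud", "hpc-cluster": "hpc"},
}

def validate(model):
    """Toy model-quality check: every referenced host and dependency must
    exist. Returns a list of error strings (empty means valid)."""
    errors = []
    for name, node in model["nodes"].items():
        if node["host"] not in model["hosts"]:
            errors.append(f"{name}: unknown host {node['host']}")
        for dep in node.get("needs", []):
            if dep not in model["nodes"]:
                errors.append(f"{name}: unknown dependency {dep}")
    return errors

errors = validate(model)
```

    A real framework would, after such checks, generate executable infrastructural code from the validated model rather than merely report errors.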

    Goal-based composition of scalable hybrid analytics for heterogeneous architectures

    Crafting scalable analytics in order to extract actionable business intelligence is a challenging endeavour, requiring multiple layers of expertise and experience. Often, this expertise is irreconcilably split between an organisation’s engineers and subject matter domain experts. Previous approaches to this problem have relied on technically adept users with tool-specific training. Such an approach has a number of challenges: Expertise — There are few data-analytic subject domain experts with in-depth technical knowledge of compute architectures; Performance — Analysts do not generally make full use of the performance and scalability capabilities of the underlying architectures; Heterogeneity — Calculating the most performant and scalable mix of real-time (on-line) and batch (off-line) analytics in a problem domain is difficult; Tools — Supporting frameworks will often direct several tasks, including composition, planning, code generation, validation, performance tuning and analysis, but do not typically provide end-to-end solutions embedding all of these activities. In this paper, we present a novel semi-automated approach to the composition, planning, code generation and performance tuning of scalable hybrid analytics, using a semantically rich type system which requires little programming expertise from the user. This approach is the first of its kind to permit domain experts with little or no technical expertise to assemble complex and scalable analytics, for hybrid on- and off-line analytic environments, with no additional requirement for low-level engineering support. This paper describes (i) an abstract model of analytic assembly and execution, (ii) goal-based planning and (iii) code generation for hybrid on- and off-line analytics. An implementation, through a system which we call Mendeleev, is used to (iv) demonstrate the applicability of this technique through a series of case studies, where a single interface is used to create analytics that can be run simultaneously over on- and off-line environments. Finally, we (v) analyse the performance of the planner, and (vi) show that the performance of Mendeleev’s generated code is comparable with that of hand-written analytics.
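    Goal-based planning over a typed component library can be illustrated with a toy backward-chaining planner: each component declares the type it consumes and produces, and the planner chains them from a goal back to a source. The component names and type tags are invented, not Mendeleev's actual catalogue.

```python
# Each analytic component declares (name, input type, output type).
# A source component consumes nothing (None).
COMPONENTS = [
    ("ingest_tweets", None, "raw_text"),
    ("tokenize", "raw_text", "tokens"),
    ("sentiment", "tokens", "sentiment_scores"),
    ("aggregate", "sentiment_scores", "report"),
]

def plan(goal_type):
    """Backward-chain from the goal type to a source, returning the
    ordered pipeline of component names (or None if unreachable)."""
    for name, needs, makes in COMPONENTS:
        if makes != goal_type:
            continue
        if needs is None:
            return [name]
        prefix = plan(needs)
        if prefix is not None:
            return prefix + [name]
    return None

pipeline = plan("report")
```

    A semantically rich type system would refine this sketch with subtyping and cost models, so the planner can choose the most performant on-/off-line mix rather than the first match.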

    Formal description and automatic generation of learning spaces based on ontologies

    Doctoral Thesis in Informatics. A good Learning Space (LS) should convey pertinent information to visitors at the most adequate time and location to favor their knowledge acquisition. This statement justifies the relevance of virtual Learning Spaces. Considering the consolidation of the Internet and the improvement of interaction, searching, and learning mechanisms, this work proposes a generic architecture, called CaVa, to create Virtual Learning Spaces built upon cultural institution documents. More precisely, the proposal is to automatically generate ontology-based virtual learning environments from document repositories. Thus, to impart relevant learning materials to the virtual LS, this proposal is based on using ontologies to represent the fundamental concepts and semantic relations in a user- and machine-understandable format. These concepts, together with the data (extracted from the real documents) stored in a digital repository, are displayed in a web-based LS that enables visitors to use the available features and tools to learn about a specific domain. According to the approach discussed here, each desired virtual LS must be specified rigorously through a Domain-Specific Language (DSL), called CaVaDSL, designed and implemented in this work. Furthermore, a set of processors (generators) was developed. These generators take a CaVaDSL specification as input and transform it into several web scripts to be recognized and rendered by a web browser, producing the final virtual LS. Aiming at validating the proposed architecture, three real case studies – (1) Emigration Documents belonging to Fafe’s Archive; (2) The prosopographical repository of the Fasti Ecclesiae Portugaliae project; and (3) Collection of life stories of the Museum of the Person – were used.
    These real scenarios are particularly relevant, as they promote the digital preservation and dissemination of Cultural Heritage, contributing to human welfare.
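    The generator pipeline — a learning-space specification transformed into web scripts — can be sketched as below. The specification fields and the HTML produced are illustrative guesses, not the actual CaVaDSL syntax.

```python
# A hypothetical learning-space specification in the spirit of CaVaDSL.
spec = {
    "name": "Museum of the Person",
    "menu": ["Life Stories", "Photographs"],
    "exhibitions": [
        {"title": "Emigration", "query": "SELECT ?doc WHERE { ?doc a :Record }"}
    ],
}

def generate(spec):
    """Toy generator: turn the specification into an HTML page skeleton.
    A real generator would also emit the scripts that run the ontology
    queries against the digital repository."""
    nav = "".join(f"<li>{item}</li>" for item in spec["menu"])
    rooms = "".join(f"<section><h2>{e['title']}</h2>"
                    f"<!-- query: {e['query']} --></section>"
                    for e in spec["exhibitions"])
    return f"<h1>{spec['name']}</h1><ul>{nav}</ul>{rooms}"

page = generate(spec)
```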

    COMPUTING FEEDBACK FOR CITIZENS’ PROPOSALS IN PARTICIPATIVE URBAN PLANNING

    We present an approach to providing computed feedback on citizens’ proposals, based on open data and expert knowledge, in urban planning and public participation using Domain-Specific Languages (DSLs). We outline the process of engineering such a DSL with the different stakeholders involved, and provide an architecture capable of executing the language and uploading new scripts at runtime. A real-world example from the city of Hamburg is used to show the principles and serves as input for development. A prototype has been implemented and evaluated at various events involving citizens and city representatives. We conclude that DSLs can be successfully applied to enable a new way to access data in a more convenient and understandable form, abstracting from technical details and focusing on domain aspects.
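    A DSL rule of the kind described — checking a citizen's proposal against open data and returning expert-authored feedback — might look like this. The rule format, zone data and function names are invented for illustration.

```python
# Open data: hypothetical protected zones in a city, as bounding boxes.
PROTECTED_ZONES = {"flood_plain": (0, 0, 50, 50),
                   "heritage_site": (80, 80, 90, 90)}

# An expert-written rule, uploadable at runtime as a small script.
rule = {"if_inside": "flood_plain",
        "feedback": "The proposed site lies in a flood plain; construction is restricted."}

def check_proposal(x, y, rule):
    """Evaluate one DSL rule against a proposal location; return feedback
    text for the citizen, or None when the rule does not apply."""
    x1, y1, x2, y2 = PROTECTED_ZONES[rule["if_inside"]]
    if x1 <= x <= x2 and y1 <= y <= y2:
        return rule["feedback"]
    return None

msg = check_proposal(10, 20, rule)   # inside the flood plain
ok = check_proposal(70, 70, rule)    # outside all zones
```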

    Insights from an OTTR-centric Ontology Engineering Methodology

    OTTR is a language for representing ontology modeling patterns, which enables building ontologies or knowledge bases by instantiating templates. Thereby, particularities of the ontological representation language are hidden from the domain experts, and ontology engineers can, to some extent, separate the process of deciding what information to model from that of deciding how to model the information, e.g., which design patterns to use. Certain decisions can thus be postponed for the benefit of focusing on one of these processes. To date, the literature describes only a few ontology engineering works in which ontology templates are applied. In this paper, we outline our methodology and report findings from our ontology engineering activities in the domain of Material Science. In these activities, OTTR templates play a key role. Our ontology engineering process is bottom-up, as we begin modeling activities from existing data that is then, via templates, fed into a knowledge graph, and it is top-down, as we first focus on which data to model and postpone the decision of how to model the data. We find, among other things, that OTTR templates are especially useful as a means of communication with domain experts. Furthermore, we find that because OTTR templates encapsulate modeling decisions, the engineering process becomes flexible, meaning that design decisions can be changed at little cost.
    Comment: Paper accepted at the 14th Workshop on Ontology Design and Patterns (WOP 2023)
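    Template instantiation in the OTTR style — a parameterized pattern expanded into concrete triples — can be sketched as follows. This is a plain Python stand-in, not actual stOTTR syntax, and the materials pattern is a simplified guess.

```python
# A template encapsulates one modeling decision: how a "material has a
# measured property" statement expands into triples. Parameters: ?m ?p ?v.
TEMPLATE = [
    ("?m", "rdf:type", "ex:Material"),
    ("?m", "ex:hasMeasurement", "_:meas"),
    ("_:meas", "ex:property", "?p"),
    ("_:meas", "ex:value", "?v"),
]

def instantiate(template, bindings, fresh):
    """Expand one template instance into concrete triples, renaming blank
    nodes per instance so repeated instantiations do not clash."""
    out = []
    for s, p, o in template:
        triple = tuple(
            bindings.get(t, t.replace("_:", f"_:b{fresh}_") if t.startswith("_:") else t)
            for t in (s, p, o))
        out.append(triple)
    return out

triples = instantiate(TEMPLATE,
                      {"?m": "ex:Steel42", "?p": "ex:Hardness", "?v": "250"},
                      fresh=1)
```

    Because the pattern lives in one place, changing the modeling decision (e.g., reifying the measurement differently) means editing the template, not every instance — the flexibility the paper reports.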

    GarmentCode: Programming Parametric Sewing Patterns

    Garment modeling is an essential task of the global apparel industry and a core part of digital human modeling. Realistic representation of garments with valid sewing patterns is key to their accurate digital simulation and eventual fabrication. However, few computational tools support bridging the gap between high-level construction goals and low-level editing of pattern geometry, e.g., combining or switching garment elements, semantic editing, or design exploration that maintains the validity of a sewing pattern. We suggest the first DSL for garment modeling -- GarmentCode -- that applies principles of object-oriented programming to garment construction and allows designing sewing patterns in a hierarchical, component-oriented manner. The programming-based paradigm naturally provides unique advantages of component abstraction, algorithmic manipulation, and free-form design parametrization. We additionally support the construction process by automating typical low-level tasks like placing a dart at a desired location. In our prototype garment configurator, users can manipulate meaningful design parameters and body measurements, while the construction of pattern geometry is handled by garment programs implemented with GarmentCode. Our configurator enables the free exploration of rich design spaces and the creation of garments using interchangeable, parameterized components. We showcase our approach by producing a variety of garment designs and retargeting them to different body shapes using our configurator.
    Comment: Supplementary video: https://youtu.be/16Yyr2G9_6E
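    The component-oriented, parametric style can be illustrated with a toy class: a component exposes meaningful design parameters and generates its panel geometry on demand. The class names and the panel representation are invented, not GarmentCode's actual API.

```python
from dataclasses import dataclass

@dataclass
class Panel:
    """A 2D pattern panel as a closed polygon of (x, y) corners, in cm."""
    name: str
    corners: list

@dataclass
class Skirt:
    """A hypothetical parametric component: a simple two-panel skirt.
    Changing a parameter regenerates a valid pattern."""
    waist: float = 70.0
    length: float = 50.0
    flare: float = 1.2   # hem width as a multiple of waist width

    def panels(self):
        half_waist = self.waist / 2          # each panel covers half the body
        hem = half_waist * self.flare
        def trapezoid(name):
            return Panel(name, [(-half_waist / 2, 0), (half_waist / 2, 0),
                                (hem / 2, -self.length), (-hem / 2, -self.length)])
        return [trapezoid("front"), trapezoid("back")]

pattern = Skirt(waist=80, length=60).panels()
```

    Retargeting to a different body then amounts to re-running `panels()` with new measurements, which is the essence of the configurator workflow described above.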

    An approach to the integration of technical spaces based on mappings and model-driven engineering

    In order to automate the development of integration adapters in industrial settings, a model-driven approach to adapter specification is devised. In this approach, a domain-specific modeling language is created to allow the specification of mappings between integrated technical spaces. Also proposed is a mapping automation engine that comprises reuse and alignment algorithms. Based on the mapping specifications, executable adapters are automatically generated and executed. Results of the approach's evaluations indicate that it is possible to use a model-driven approach to successfully integrate technical spaces and to increase automation by reusing domain-specific mappings from previously created adapters.
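    The central idea — a declarative mapping between two technical spaces compiled into an executable adapter — can be sketched like this. The mapping format and names are invented for illustration, not the thesis's actual modeling language.

```python
# A declarative mapping specification between two technical spaces:
# source field name, target field name, and an optional conversion.
MAPPING = [
    ("CustomerName", "name", None),
    ("OrderTotal", "total", float),
]

def generate_adapter(mapping):
    """'Compile' the mapping specification into an executable adapter
    (here a closure; a real engine would emit adapter code)."""
    def adapter(source_record):
        target = {}
        for src, dst, convert in mapping:
            value = source_record[src]
            target[dst] = convert(value) if convert else value
        return target
    return adapter

adapter = generate_adapter(MAPPING)
result = adapter({"CustomerName": "Ada", "OrderTotal": "9.50"})
```

    Reuse of previously created mappings, as evaluated in the thesis, would correspond to aligning and merging `MAPPING`-like specifications rather than rewriting adapters by hand.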