A Vision for Flexible GLSP-based Web Modeling Tools
In the past decade, the modeling community has produced many feature-rich
modeling editors and tool prototypes not only for modeling standards but
particularly also for many domain-specific languages. More recently, however,
web-based modeling tools have become increasingly popular in industry for
visualizing and editing models adhering to such languages. This
new generation of modeling tools is built with web technologies and offers much
more flexibility when it comes to their user experience, accessibility, reuse,
and deployment options. One of the technologies behind this new generation of
tools is the Graphical Language Server Platform (GLSP), an open-source
client-server framework hosted under the Eclipse Foundation, which allows tool
providers to build modern diagram editors for modeling tools that run in the
browser or can easily be integrated into IDEs such as Eclipse, VS Code, or
Eclipse Theia. In this paper, we describe our vision of more flexible modeling
tools, based on our experiences from developing several GLSP-based modeling
tools. With it, we aim to spark a new line of research and innovation in the
modeling community around modeling tool development practices, to explore the
opportunities, advantages, and limitations of web-based modeling tools, and to
bridge the gap between scientific tool prototypes and industrial tools used in
practice.
Comment: 8 pages, 5 figures
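GLSP itself is implemented in TypeScript and Java; the following Python sketch only illustrates the general client-server action pattern that such diagram editors build on, where the browser client exchanges action messages with a server that owns the model. All action kinds and names here are hypothetical stand-ins, not GLSP's actual protocol.

```python
# Schematic sketch of the action-message pattern behind client-server
# diagram frameworks such as GLSP. All action kinds and field names are
# hypothetical illustrations, not GLSP's real protocol.

class DiagramServer:
    """Holds the diagram model; the browser client only sends actions."""

    def __init__(self):
        self.model = {"id": "root", "children": []}
        # Dispatch table: action kind -> handler.
        self.handlers = {
            "requestModel": self._request_model,
            "createNode": self._create_node,
        }

    def handle(self, action):
        """Entry point for actions arriving from the web client."""
        return self.handlers[action["kind"]](action)

    def _request_model(self, action):
        # Client asks for the current diagram state to render.
        return {"kind": "setModel", "model": self.model}

    def _create_node(self, action):
        # Client performed an edit gesture; the server updates the model
        # and answers with the new state for re-rendering.
        self.model["children"].append(
            {"id": action["nodeId"], "type": action["nodeType"]})
        return {"kind": "setModel", "model": self.model}


server = DiagramServer()
server.handle({"kind": "createNode", "nodeId": "t1", "nodeType": "task"})
reply = server.handle({"kind": "requestModel"})
```

Because the client never mutates the model directly, the same server can back a plain browser page or an editor embedded in an IDE, which is one source of the flexibility the paper describes.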
Towards a Universal Variability Language: Master's Thesis
While feature diagrams have become the de facto standard to graphically describe variability models in Software Product Line Engineering (SPLE), none of the many textual notations has gained widespread adoption. However, a common textual language would be beneficial for better collaboration and exchange between tools. The main goal of this thesis is to propose a language for this purpose, along with fundamental tool support. The language should meet the needs and preferences of the community, so it can attain acceptance and adoption without becoming yet another variability language. Its guiding principles are simplicity, familiarity, and flexibility. These enable the language to be easy to learn and to integrate into different tools, while still being expressive enough to represent existing and future models.

We incorporate general design principles for Domain-Specific Languages (DSLs), discuss usage scenarios collected by the community, analyze existing languages, and gather feedback directly through questionnaires submitted to the community. In the initial questionnaire, the community was in disagreement on whether to use nesting or references to represent the hierarchy. Thus, we presented two proposals to be compared side by side. Of those, the community clearly prefers the one using nesting, as determined by a second questionnaire. We call that proposal the Universal Variability Language (UVL).

The community awards good ratings to this language, deems it suitable for teaching and learning, and estimates that it can represent most existing models. Evaluations reconsidering the requirements show that it enables the relevant scenarios and can support the editing of large-scale real-world feature models, such as the Linux kernel. We provide a small default library that can be used in Java, containing a parser and a printer for the language. We integrated it into the variability tool FeatureIDE, demonstrating its utility in quickly adding support for the proposed language.

Overall, we conclude that UVL is well suited as the base language level of a universal textual variability language. Along with the acquired insights into the requirements for such a language, it can serve as the basis for the SPLE community to commit to a common language. As exchange and collaboration would be simplified, higher-quality research could be conducted and better tools developed, serving the whole community.
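A central design finding of the thesis is the community's preference for nesting over cross-references to express the feature hierarchy. As a rough illustration of why nesting keeps tooling simple, the sketch below parses an indentation-nested feature tree with a small stack; the concrete syntax is a simplified stand-in, not the actual UVL grammar.

```python
def parse_feature_tree(text, indent=4):
    """Parse an indentation-nested feature hierarchy (a simplified,
    UVL-like stand-in syntax) into (name, children) tuples."""
    root = ("<root>", [])
    stack = [(-1, root)]  # (indentation depth, node)
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // indent
        node = (line.strip(), [])
        # Pop back to the nearest shallower ancestor: nesting alone
        # determines the hierarchy, no explicit references needed.
        while stack and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((depth, node))
    return root[1]


model = parse_feature_tree("""\
Car
    Engine
        Electric
        Gasoline
    Sunroof
""")
```

With a reference-based hierarchy, the same routine would need a second pass to resolve names, which is one practical argument for the nested proposal.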
Ontology-driven document enrichment: principles, tools and applications
In this paper, we present an approach to document enrichment, which consists of developing and integrating formal knowledge models with archives of documents, to provide intelligent knowledge retrieval and (possibly) additional knowledge-intensive services, beyond what is currently available using “standard” information retrieval and search facilities. Our approach is ontology-driven, in the sense that the construction of the knowledge model is carried out in a top-down fashion, by populating a given ontology, rather than in a bottom-up fashion, by annotating a particular document. We give an overview of the approach and examine the various types of issues (e.g. modelling, organizational and user interface issues) which need to be tackled to effectively deploy our approach in the workplace. In addition, we discuss a number of technologies we have developed to support ontology-driven document enrichment and illustrate our ideas in the domains of electronic news publishing, scholarly discourse and medical guidelines.
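The top-down, ontology-driven idea can be sketched minimally: documents are attached to instances of a predefined ontology, and retrieval queries the knowledge model rather than the raw text. The class, slot, and file names below are invented for illustration; this is a sketch of the principle, not the authors' actual tooling.

```python
# Minimal sketch of ontology-driven enrichment: retrieval goes through
# instances of a predefined ontology instead of raw document text.
# Class, slot, and file names are invented for illustration.

ontology = {"Event": ["organizer", "topic"]}  # class -> allowed slots

instances = []  # the knowledge model, populated top-down


def add_instance(cls, slots, documents):
    """Populate the ontology with an instance linked to source documents."""
    assert cls in ontology and set(slots) <= set(ontology[cls])
    instances.append({"class": cls, "slots": slots, "documents": documents})


def retrieve(cls, **query):
    """Return documents by slot-level matching on the knowledge model,
    something keyword search over raw text cannot guarantee."""
    return [doc
            for inst in instances
            if inst["class"] == cls
            and all(inst["slots"].get(k) == v for k, v in query.items())
            for doc in inst["documents"]]


add_instance("Event", {"organizer": "ACME", "topic": "ontologies"},
             ["news-01.html"])
add_instance("Event", {"organizer": "ACME", "topic": "planning"},
             ["news-02.html"])
```

A query like retrieve("Event", topic="ontologies") then returns exactly the documents attached to matching instances, regardless of the wording inside them.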
Bacatá: Notebooks for DSLs, Almost for Free
Context: Computational notebooks are a contemporary style of literate
programming, in which users can communicate and transfer knowledge by
interleaving executable code, output, and prose in a single rich document. A
Domain-Specific Language (DSL) is an artificial software language tailored for
a particular application domain. Usually, DSL users are domain experts who may
not have a software engineering background and, as a consequence, might not be
familiar with Integrated Development Environments (IDEs). Thus, developing
tools that offer different interfaces for interacting with a DSL is relevant.
Inquiry: However, the resources available to DSL designers are limited. We
would like to leverage tools used to interact with general-purpose languages in
the context of DSLs; computational notebooks are an example of such tools. Our
main question, then, is: what is an efficient and effective method of designing
and implementing notebook interfaces for DSLs? By addressing this question, we
might be able to speed up the development of DSL tools and ease the interaction
between end users and DSLs.
Approach: In this paper, we present Bacatá, a mechanism for generating
notebook interfaces for DSLs in a language-parametric fashion. We designed this
mechanism so that language engineers can reuse as many language components
(e.g., language processors, type checkers, code generators) as possible.
Knowledge: Our results show that notebook interfaces can be generated with
Bacatá with little manual configuration. A few considerations and caveats, tied
to language design aspects, must be addressed by language engineers. With
Bacatá, creating a notebook for a DSL becomes a matter of writing the code that
wires existing language components in the Rascal language workbench with the
Jupyter platform.
Grounding: We evaluate Bacatá by generating functional computational
notebook interfaces for three different non-trivial DSLs, namely: a small
subset of Halide (a DSL for digital image processing), SweeterJS (an extended
version of JavaScript), and QL (a DSL for questionnaires). To show that
generating notebook implementations is preferable to writing them manually, we
measured and compared the number of Source Lines of Code (SLOC) that we reused
from existing implementations of those languages.
Importance: The adoption of notebooks by novice programmers and end users has
made them very popular in several domains such as exploratory programming, data
science, data journalism, and machine learning. Why are they popular? In (data)
science, it is essential to make results reproducible as well as
understandable. However, notebooks are typically only available for
General-Purpose Languages (GPLs). This paper opens up the notebook metaphor for
DSLs to improve the end-user experience when interacting with code and to
increase DSL adoption.
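As a language-neutral illustration of the wiring idea (in Python, with hypothetical names; Bacatá itself connects Rascal components to Jupyter), a notebook kernel for a DSL is essentially a read-eval loop that delegates to the language's reused parser and evaluator while keeping state across cells:

```python
# Language-neutral sketch of what a generated DSL notebook kernel does:
# wrap the DSL's existing components in a cell-execution loop, the way
# Bacata wires Rascal language components into Jupyter. All names are
# hypothetical illustrations, not Bacata's actual API.

def make_kernel(parse, evaluate):
    """Reuse existing language components (parser, evaluator) as a kernel."""
    state = {}

    def execute_cell(source):
        # Each notebook cell is parsed and evaluated with the reused
        # components; state persists across cells, as in a notebook.
        return evaluate(parse(source), state)

    return execute_cell


# A toy assignment "DSL" standing in for a real language's components.
def parse(src):
    name, _, expr = src.partition("=")
    return name.strip(), expr.strip()


def evaluate(ast, state):
    name, expr = ast
    # Evaluate the right-hand side against earlier cell results only.
    state[name] = eval(expr, {}, dict(state))
    return state[name]


cell = make_kernel(parse, evaluate)
cell("x = 2 + 3")           # first cell defines x
result = cell("y = x * 4")  # a later cell reuses notebook state
```

The point of the sketch is that only parse and evaluate are language-specific; the surrounding kernel machinery is the reusable, language-parametric part.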
Open Personalization: Involving Third Parties in Improving the User Experience of Websites
Traditional software development captures user needs during requirements
analysis. The Web makes this endeavour even harder due to the difficulty of
determining who these users are. In an attempt to tackle the heterogeneity of
the user base, Web Personalization techniques are proposed to guide the users’
experience. In addition, Open Innovation allows organisations to look beyond
their internal resources to develop new products or improve existing
processes.
This thesis sits in between, introducing Open Personalization as a means to
incorporate actors other than webmasters in the personalization of web
applications. The aim is to provide the technological basis for a trustworthy
environment in which webmasters and companion actors can collaborate, i.e.
"an architecture of participation". Such an architecture depends heavily on
these actors’ profiles. This work tackles three profiles (i.e. software
partners, hobby programmers and end users), and proposes three "architectures
of participation", one tuned for each profile. Each architecture rests on
different technologies: a .NET annotation library based on Inversion of
Control for software partners, a Modding Interface in JavaScript for hobby
programmers, and finally, a domain-specific language for end users.
Proof-of-concept implementations are available for all three cases, while a
quantitative evaluation is conducted for the domain-specific language.
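The thesis's actual artifacts are a .NET library and a JavaScript interface; the Python sketch below (all names hypothetical) only illustrates the Inversion-of-Control idea behind the annotation library: the webmaster annotates extension points, partners register personalizations, and the framework, not the partner code, decides when they run.

```python
# Language-neutral sketch (hypothetical names) of the Inversion-of-Control
# idea behind the annotation library: webmasters mark extension points,
# and the framework invokes partner-supplied personalizations.

registry = {}


def extension_point(name):
    """Annotation marking a spot where partners may personalize output."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            # The framework, not the partner, decides when and in what
            # order registered personalizations run (inversion of control).
            for personalize in registry.get(name, []):
                result = personalize(result)
            return result
        return wrapper
    return decorator


def contribute(name, personalize):
    """A partner registers a personalization for an extension point."""
    registry.setdefault(name, []).append(personalize)


@extension_point("homepage.greeting")
def greeting(user):
    return f"Welcome, {user}"


contribute("homepage.greeting", lambda text: text + "!")
message = greeting("Ana")
```

Because partners only register callbacks against named extension points, the webmaster keeps control over where third-party code can affect the site, which is the "trustworthy environment" the thesis aims for.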