79,113 research outputs found

    Example-based Validation of Domain-Specific Visual Languages

    Full text link
    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in SLE 2015: Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering, http://dx.doi.org/10.1145/2814251.2814256
    The definition of Domain-Specific Languages (DSLs) is a recurrent activity in Model-Driven Engineering. However, their construction is often an ad hoc process, partly due to the lack of tools enabling a proper engineering of DSLs and encouraging domain experts to play an active role. The focus of this paper is on the validation of meta-models for visual DSLs. For this purpose, we propose a language and tool support for describing properties that instances of meta-models should (or should not) meet. Our system then uses a model finder to produce example models, enriched with a graphical concrete syntax, that confirm or refute the assumptions of the meta-model developer. Our language complements metaBest, a framework for the validation and verification of meta-models that includes two other languages for unit testing and specification-based testing of meta-models. A salient feature of our approach is that it fosters interaction with domain experts through the use, processing and creation of informal drawings constructed in editors like yEd or Dia. We assess the usefulness of the approach in the validation of a DSL for house blueprints, with the participation of 26 fourth-year computer science students.
    Work supported by the Spanish MINECO (TIN2011-24139 and TIN2014-52129-R), the R&D programme of the Madrid Region (S2013/ICE-3006), and the EU commission (FP7-ICT-2013-10, #611125).
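    As a rough illustration of the example-based validation idea only (not the paper's metaBest language or its model finder), the sketch below enumerates small instance models of a toy house-blueprint meta-model and searches for one that refutes a property the developer expects to hold. The names Room, Blueprint, every_room_has_door and find_counterexample are hypothetical.

        # Hypothetical sketch of example-based validation: a brute-force "model finder"
        # that looks for an instance model violating an expected property.
        from dataclasses import dataclass, field
        from itertools import product
        from typing import List

        @dataclass
        class Room:
            doors: int                     # number of doors attached to this room

        @dataclass
        class Blueprint:
            rooms: List[Room] = field(default_factory=list)

        def every_room_has_door(bp: Blueprint) -> bool:
            # Property the meta-model developer assumes: no room is sealed off.
            return all(room.doors >= 1 for room in bp.rooms)

        def find_counterexample(prop, max_rooms=3, max_doors=2):
            # Enumerate small blueprints within a bound; return one refuting the property.
            for n in range(1, max_rooms + 1):
                for doors in product(range(max_doors + 1), repeat=n):
                    candidate = Blueprint([Room(d) for d in doors])
                    if not prop(candidate):
                        return candidate
            return None

        print(find_counterexample(every_room_has_door))   # e.g. a blueprint with a door-less room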

    Example-driven meta-model development

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/s10270-013-0392-y
    The intensive use of models in model-driven engineering (MDE) raises the need to develop meta-models with different aims, such as the construction of textual and visual modelling languages and the specification of source and target ends of model-to-model transformations. While domain experts have the knowledge about the concepts of the domain, they usually lack the skills to build meta-models. Moreover, meta-models typically need to be tailored according to their future usage and specific implementation platform, which demands knowledge available only to engineers with great expertise in specific MDE platforms. These issues hinder a wider adoption of MDE by both domain experts and software engineers. To alleviate this situation, we propose an interactive, iterative approach to meta-model construction that enables the specification of example model fragments by domain experts, with the possibility of using informal drawing tools like Dia or yEd. These fragments can be annotated with hints about the intention or needs for certain elements. A meta-model is then automatically induced, which can be refactored in an interactive way and then compiled into an implementation meta-model using profiles and patterns for different platforms and purposes. Our approach includes the use of a virtual assistant, which provides suggestions for improving the meta-model based on well-known refactorings, and a validation mode that enables the validation of the meta-model by means of examples.
    This work has been funded by the Spanish Ministry of Economy and Competitiveness with project “Go Lite” (TIN2011-24139), and by the R&D programme of the Madrid Region with project “eMadrid” (S2009/TIC-1650).
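    A minimal sketch of the induction step described above, assuming example fragments have already been flattened to (type, attributes) pairs; the representation and the merging rule are illustrative, not the paper's algorithm.

        # Hypothetical sketch: induce a tentative meta-model (classes with typed attributes)
        # from example model fragments a domain expert might draw in yEd or Dia.
        from collections import defaultdict

        fragments = [
            ("Room", {"name": "kitchen", "area": 12}),
            ("Room", {"name": "bedroom", "area": 10, "hasWindow": True}),
            ("Door", {"width": 0.9}),
        ]

        def induce_metamodel(fragments):
            # Merge the attributes observed per type into one class description.
            classes = defaultdict(dict)
            for type_name, slots in fragments:
                for attr, value in slots.items():
                    classes[type_name][attr] = type(value).__name__
            return dict(classes)

        print(induce_metamodel(fragments))
        # {'Room': {'name': 'str', 'area': 'int', 'hasWindow': 'bool'}, 'Door': {'width': 'float'}}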

    Visual Affect Around the World: A Large-scale Multilingual Visual Sentiment Ontology

    Get PDF
    Every culture and language is unique. Our work expressly focuses on the uniqueness of culture and language in relation to human affect, specifically sentiment and emotion semantics, and how they manifest in social multimedia. We develop sets of sentiment- and emotion-polarized visual concepts by adapting semantic structures called adjective-noun pairs, originally introduced by Borth et al. (2013), but in a multilingual context. We propose a new language-dependent method for automatic discovery of these adjective-noun constructs. We show how this pipeline can be applied to a social multimedia platform for the creation of a large-scale multilingual visual sentiment concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our unified ontology is organized hierarchically by multilingual clusters of visually detectable nouns and subclusters of emotionally biased versions of these nouns. In addition, we present an image-based prediction task to show how generalizable language-specific models are in a multilingual context. A new, publicly available dataset of more than 15.6K sentiment-biased visual concepts across 12 languages, with language-specific detector banks and more than 7.36M images with their metadata, is also released.
    Comment: 11 pages, to appear at ACM MM'1
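    As a toy illustration of how adjective-noun pairs (ANPs) can be polarized by sentiment: the lexicon, scores and threshold below are invented for the example and are not part of the MVSO pipeline, which discovers constructs per language from social multimedia.

        # Hypothetical sketch: combine adjectives with visually detectable nouns and keep
        # only pairs whose adjective carries a strong sentiment score.
        adjective_sentiment = {"beautiful": 0.85, "gloomy": -0.6, "old": -0.1, "happy": 0.9}
        nouns = ["sky", "house", "dog"]

        def build_anps(adjectives, nouns, min_polarity=0.5):
            anps = []
            for adjective, score in adjectives.items():
                if abs(score) >= min_polarity:          # drop weakly polarized adjectives
                    anps.extend((f"{adjective} {noun}", score) for noun in nouns)
            return anps

        for anp, score in build_anps(adjective_sentiment, nouns):
            print(anp, score)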

    Drawing OWL 2 ontologies with Eddy the editor

    Get PDF
    In this paper we introduce Eddy, a new open-source tool for the graphical editing of OWL 2 ontologies. Eddy is specifically designed for creating ontologies in Graphol, a completely visual ontology language that is equivalent to OWL 2. Thus, in Eddy ontologies are easily drawn as diagrams rather than written as sets of formulas, as commonly happens in popular ontology design and engineering environments. This makes Eddy particularly suited for people who are more familiar with diagrammatic languages for conceptual modeling than with typical ontology formalisms, as is often required in non-academic and industrial contexts. Eddy provides intuitive functionalities for specifying Graphol diagrams, guarantees their syntactic correctness, and allows for exporting them in standard OWL 2 syntax. A user evaluation study we conducted shows that Eddy is perceived as an easy and intuitive tool for ontology specification.

    The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

    Full text link
    We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval.
    Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
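    The following sketch mimics executing a symbolic program over an object-based scene representation for a question like "How many red cubes are there?". NS-CL executes such programs over latent neural representations with learned concept embeddings, so the explicit attributes and operators here are purely illustrative.

        # Hypothetical sketch: run a small symbolic program (filter, filter, count)
        # over an object-based scene representation.
        scene = [
            {"shape": "cube",   "color": "red",  "size": "large"},
            {"shape": "sphere", "color": "red",  "size": "small"},
            {"shape": "cube",   "color": "blue", "size": "small"},
        ]

        # Program for "How many red cubes are there?"
        program = [("filter", "color", "red"), ("filter", "shape", "cube"), ("count",)]

        def execute(program, scene):
            result = scene
            for op, *args in program:
                if op == "filter":
                    attribute, value = args
                    result = [obj for obj in result if obj[attribute] == value]
                elif op == "count":
                    result = len(result)
            return result

        print(execute(program, scene))   # -> 1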

    An investigation into the validation of formalised cognitive dimensions

    Get PDF
    The cognitive dimensions framework is a conceptual framework aimed at characterising features of interactive systems that are strongly influential upon their effective use. As such, the framework facilitates the critical assessment and design of a wide variety of information artifacts. Although the framework has proved to be of considerable interest to researchers and practitioners, there has been little research examining how easily its dimensions can be applied consistently. The work reported in this paper addresses this problem by examining an approach to the systematic application of dimensions and assessing its success empirically. The findings demonstrate a relatively successful approach to validating the systematic application of some concepts found in the cognitive dimensions framework.
    • …