
    Ontology population for open-source intelligence: A GATE-based solution

    Open-Source INTelligence (OSINT) is intelligence based on publicly available sources such as news sites, blogs, and forums. The Web is the primary source of information, but once data are crawled they need to be interpreted and structured. Ontologies can play a crucial role in this process, but because of the vast number of documents available, automatic mechanisms are needed to populate them from the crawled text. This paper presents an approach for the automatic population of predefined ontologies with data extracted from text and discusses the design and realization of a pipeline based on the General Architecture for Text Engineering (GATE) system, which is of interest to both researchers and practitioners in the field. Experimental results, which are encouraging in terms of correctly extracted ontology instances, are also reported. Furthermore, the paper describes an alternative approach, with additional experiments, for one phase of the pipeline that requires predefined dictionaries of relevant entities. This variant reduces the manual workload required in that phase while still yielding promising results.
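
    The paper's pipeline is GATE-based; purely as a rough illustration of the final population step, the sketch below uses Python with the rdflib library to assert entity mentions, already extracted upstream, as instances of classes in a predefined ontology. The namespace, class names, and extracted pairs are hypothetical, not taken from the paper.

    ```python
    # Minimal sketch of the ontology-population step only: taking entity mentions
    # already extracted from crawled text and asserting them as individuals of
    # classes in a predefined ontology. The paper's actual pipeline is GATE-based;
    # the namespace, class names, and extracted pairs below are hypothetical.
    from rdflib import Graph, Namespace, RDF, RDFS, Literal

    OSINT = Namespace("http://example.org/osint#")  # hypothetical ontology namespace

    def populate(graph: Graph, extractions: list[tuple[str, str]]) -> None:
        """Add one individual per (mention, class_name) pair produced upstream."""
        for mention, class_name in extractions:
            individual = OSINT[mention.replace(" ", "_")]
            graph.add((individual, RDF.type, OSINT[class_name]))
            graph.add((individual, RDFS.label, Literal(mention)))

    g = Graph()
    g.bind("osint", OSINT)
    # Output of the upstream extraction phase: named entities with ontology classes.
    populate(g, [("Acme Corp", "Organization"), ("John Doe", "Person")])
    print(g.serialize(format="turtle"))
    ```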

    Holistic recommender systems for software engineering

    The knowledge possessed by developers is often not sufficient to overcome a programming problem. Short of talking to teammates, when available, developers often gather additional knowledge from development artifacts (e.g., project documentation) as well as online resources. The web has become an essential component of the modern developer’s daily life, providing a plethora of information from sources like forums, tutorials, Q&A websites, API documentation, and even video tutorials. Recommender Systems for Software Engineering (RSSEs) assist developers in navigating the information space, automatically suggest useful items, and reduce the time required to locate the needed information. Current RSSEs treat development artifacts as containers of homogeneous information in the form of pure text. However, text is a means to represent heterogeneous information provided by, for example, natural language, source code, interchange formats (e.g., XML, JSON), and stack traces. Interpreting this information from a purely textual point of view misses the intrinsic heterogeneity of the artifacts, leading to a reductionist approach. We propose the concept of Holistic Recommender Systems for Software Engineering (H-RSSE), i.e., RSSEs that go beyond the textual interpretation of the information contained in development artifacts. Our thesis is that modeling and aggregating information in a holistic fashion enables novel and advanced analyses of development artifacts. To validate our thesis we developed a framework to extract, model, and analyze the information contained in development artifacts in a reusable meta-information model. We show how RSSEs benefit from a meta-information model, since it enables customized and novel analyses built on top of our framework. The information can thus be reinterpreted from a holistic point of view, preserving its multi-dimensionality and opening the path towards the concept of holistic recommender systems for software engineering.
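
    To make the idea of a meta-information model concrete, here is a minimal sketch of decomposing a development artifact into typed fragments instead of treating it as one flat string, so that an analysis can target only the fragment kinds it understands. The class and field names are hypothetical and are not the thesis's actual framework.

    ```python
    # Sketch: a development artifact decomposed into typed fragments, so analyses
    # can be written per fragment kind (prose, code, stack traces, structured data)
    # rather than over one undifferentiated block of text.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class FragmentKind(Enum):
        NATURAL_LANGUAGE = auto()
        SOURCE_CODE = auto()
        STACK_TRACE = auto()
        STRUCTURED_DATA = auto()   # e.g. XML or JSON payloads

    @dataclass
    class Fragment:
        kind: FragmentKind
        text: str

    @dataclass
    class Artifact:
        source: str                         # e.g. a Q&A post URL or a doc file path
        fragments: list[Fragment] = field(default_factory=list)

        def of_kind(self, kind: FragmentKind) -> list[Fragment]:
            return [f for f in self.fragments if f.kind == kind]

    # A recommender analysis can now match stack traces or code snippets
    # separately from the surrounding prose.
    post = Artifact("https://example.org/question/42", [
        Fragment(FragmentKind.NATURAL_LANGUAGE, "The call below throws an NPE:"),
        Fragment(FragmentKind.SOURCE_CODE, "map.get(key).size();"),
        Fragment(FragmentKind.STACK_TRACE, "java.lang.NullPointerException\n\tat ..."),
    ])
    print(len(post.of_kind(FragmentKind.SOURCE_CODE)))
    ```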

    A comparison of traditional test blueprinting to assessment engineering in a large scale assessment context

    This dissertation investigates the plausibility of computing Assessment Engineering cognitive task model derived difficulty parameters through careful engineering design, and compares the task-model-derived difficulties with empirical Rasch model ‘b’ parameter estimates. In addition, it examines whether cognitive task model derived difficulties can replace the Rasch model ‘b’ parameter estimates for scoring examinees. The study uses real data comprising four assessments from a large-scale testing company. The results of the analysis indicate strong correlations between the task model and the empirical difficulty parameter estimates. While most of the empirical items satisfied the standard requirements of fit, there were several misfitting task model items; nevertheless, the task model provided adequate fit for most of the items. Furthermore, the proficiency scores for the empirical and the task model estimates matched each other well for all of the assessments, showing no differences between the empirical and task model scores. An examination of the standard error statistics likewise showed no differences between the empirical Rasch model and the cognitive task models. Assessment engineering is a new field, so very little research exists comparing assessment engineering cognitive task model derived difficulties to empirical Rasch model parameter estimates. Moreover, the effect of cognitive task model estimates on proficiency scores has not been investigated. This study showed that, through the assessment engineering cognitive task modelling design process, it is possible to generate item difficulty parameters a priori, without the use of any complex, data-hungry statistical models. For large-scale testing companies, this will significantly reduce the cost of pilot testing and make available hundreds of items that operate in a psychometrically similar manner. The design process produces difficulty parameters that behave similarly to the statistical difficulty parameters computed in traditional ways using the Rasch model.
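
    As a pointer to the comparison being described: the Rasch model gives the probability of a correct response as a function of examinee ability and item difficulty, and the study asks whether engineered, a priori difficulties track the empirically estimated ‘b’ values. The sketch below illustrates that comparison with made-up numbers; it is not the dissertation's data or analysis code.

    ```python
    # Sketch of the comparison: Rasch probability of a correct response given
    # ability theta and difficulty b, plus the correlation between empirical
    # 'b' estimates and task-model-derived difficulties. All numbers are
    # hypothetical, for illustration only.
    import numpy as np

    def rasch_prob(theta: float, b: float) -> float:
        """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # Hypothetical difficulties for the same five items.
    empirical_b  = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])   # Rasch 'b' estimates
    task_model_b = np.array([-1.0, -0.5, 0.3, 0.7, 1.6])   # engineered a priori

    # A strong correlation is the evidence that the engineered values track the
    # empirical ones closely enough to be considered for operational scoring.
    print(np.corrcoef(empirical_b, task_model_b)[0, 1])
    print(rasch_prob(theta=0.0, b=float(empirical_b[2])))
    ```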

    Model Transformation Languages with Modular Information Hiding

    Model transformations, together with models, form the principal artifacts in model-driven software development. Industrial practitioners report that transformations on larger models quickly become large and complex themselves. To alleviate the resulting maintenance effort, this thesis presents a modularity concept with explicit interfaces, complemented by software visualization and clustering techniques. All three approaches are tailored to the specific needs of the transformation domain.
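
    As a generic illustration of the modularity idea, the sketch below shows a transformation module that exposes one rule as its explicit interface while hiding its helper rules, so client transformations depend only on the interface. This is not the thesis's concrete language or notation; all names are invented.

    ```python
    # Sketch: a transformation module with an explicit interface (the exported
    # rule `transform`) and hidden helper rules, on a toy class-to-table mapping.
    from dataclasses import dataclass

    @dataclass
    class ClassModel:          # toy source metamodel element
        name: str
        attributes: list[str]

    @dataclass
    class Table:               # toy target metamodel element
        name: str
        columns: list[str]

    class Class2TableModule:
        """Only `transform` is part of the module's interface."""

        def transform(self, cls: ClassModel) -> Table:        # exported rule
            return Table(self._table_name(cls), self._columns(cls))

        def _table_name(self, cls: ClassModel) -> str:        # hidden helper rule
            return cls.name.lower() + "s"

        def _columns(self, cls: ClassModel) -> list[str]:     # hidden helper rule
            return ["id"] + cls.attributes

    print(Class2TableModule().transform(ClassModel("Person", ["name", "age"])))
    ```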

    Action semantics of unified modeling language

    The Unified Modeling Language (UML), as a visual and general-purpose modeling language, has been around for more than a decade, gaining increasingly wide application and becoming the de-facto industrial standard for modeling software systems. However, the dynamic semantics of UML behaviours are described only in natural language. Specification in natural language inevitably involves vagueness, hinders reasoning, and discourages mechanical language implementation. This semi-formality of UML causes wide concern among researchers, including us. A formal semantics of UML demands readability and extensibility because of its fast evolution and wide range of users. We therefore adopt Action Semantics (AS), created mainly by Peter Mosses, to formalize the dynamic semantics of UML, because AS satisfies these needs better than other frameworks. Instead of defining UML directly, we design an action language, called ALx, and use it as the intermediary between a typical executable UML and its action semantics. ALx is highly heterogeneous, combining features of object-oriented programming languages, object query languages, model description languages, and more complex behaviours such as state machines. Adopting AS to formalize such a heterogeneous language is in turn significant in exploring the adequacy and applicability of AS. To give assurance of the validity of the action semantics of ALx, a prototype ALx-to-Java translator is implemented, underpinned by our formal semantic description of the action language and using the Model Driven Approach (MDA). We argue that MDA is a feasible way of implementing this source-to-source language translator because the cornerstone of MDA, UML, is adequate to specify the static aspects of programming languages, and MDA provides executable transformation languages to model the mapping rules between languages. We also construct a translator using a commonly used conventional approach, in which a tool is employed to generate the lexical scanner and the parser, and the other components, including the type checker, symbol table constructor, intermediate representation producer, and code generator, are coded manually. We then compare the conventional approach with the MDA. The results show that MDA has advantages over the conventional method in terms of code quality but is inferior to the latter in terms of system performance.
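
    The conventional translator described above is a staged pipeline: lexical scanner, parser, type checker with symbol table, intermediate representation, and code generator. The toy sketch below shows only that overall shape for a single assignment statement; it is a generic illustration, not the ALx grammar or the thesis's translator.

    ```python
    # Toy source-to-source pipeline: scanner -> parser -> checker -> IR -> codegen,
    # translating a single assignment like "x = 1 + 2" into a Java-like statement.
    import re

    def scan(src: str) -> list[str]:
        # lexical scanner: split the source into tokens
        return re.findall(r"[A-Za-z_]\w*|\d+|[=+]", src)

    def parse(tokens: list[str]) -> dict:
        # parser: assignment ::= identifier '=' operand ('+' operand)*
        name, eq, *rest = tokens
        assert eq == "=", "expected '='"
        return {"assign": name, "operands": [t for t in rest if t != "+"]}

    def check(ast: dict, symbols: dict) -> None:
        # type checker / symbol table: every identifier operand must be declared
        for op in ast["operands"]:
            if not op.isdigit() and op not in symbols:
                raise NameError(f"undeclared identifier: {op}")

    def to_ir(ast: dict) -> list[tuple]:
        # intermediate representation: a flat list of simple instructions
        return [("add", ast["operands"]), ("store", ast["assign"])]

    def gen_java(ir: list[tuple]) -> str:
        # code generator: emit a Java-like assignment statement
        (_, operands), (_, target) = ir
        return f"int {target} = {' + '.join(operands)};"

    ast = parse(scan("x = 1 + 2"))
    check(ast, symbols={})
    print(gen_java(to_ir(ast)))   # -> int x = 1 + 2;
    ```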

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation, and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.