19 research outputs found

    DoMoRe – A recommender system for domain modeling

    Domain modeling is an important activity in the early phases of software projects to achieve a shared understanding of the problem domain among project participants. Domain models describe the concepts and relations of the respective application field using a modeling language and domain-specific terms. Creating these models requires detailed knowledge of the domain as well as expertise in model-driven development. This paper describes DoMoRe, a system for automated modeling recommendations that supports the domain modeling process. We describe an approach in which modeling benefits from formalized knowledge sources and information extraction from text. The system integrates a large network of semantically related terms built from natural language data sets with mediator-based knowledge base querying in a single recommender system to provide context-sensitive suggestions of model elements.
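The core idea of suggesting model elements from a network of semantically related terms can be sketched as follows. This is an illustrative toy, not DoMoRe's actual implementation: the term network, its relatedness scores, and the scoring function are all invented for this example.

```python
# Toy network of semantically related terms: each term maps to related
# terms with an invented relatedness score in [0, 1].
TERM_NETWORK = {
    "order":    {"customer": 0.9, "invoice": 0.8, "product": 0.7},
    "customer": {"order": 0.9, "address": 0.8},
    "invoice":  {"order": 0.8, "payment": 0.9},
}

def suggest_elements(model_terms, network, top_n=3):
    """Rank candidate terms by their total relatedness to terms already
    present in the model, excluding terms the model already contains."""
    scores = {}
    for term in model_terms:
        for related, weight in network.get(term, {}).items():
            if related not in model_terms:
                scores[related] = scores.get(related, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Context-sensitive suggestion: the current model contains two concepts.
print(suggest_elements({"order", "customer"}, TERM_NETWORK))
# → ['invoice', 'address', 'product']
```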

    Recommender systems in model-driven engineering: A systematic mapping review

    Recommender systems are information filtering systems used in many online applications, such as music and video streaming and e-commerce platforms. They are also increasingly being applied to facilitate software engineering activities. Following this trend, we are witnessing growing research interest in recommendation approaches that assist with modelling tasks and model-based development processes. In this paper, we report on a systematic mapping review (based on the analysis of 66 papers) that classifies the existing research work on recommender systems for model-driven engineering (MDE). This study aims to serve as a guide for tool builders and researchers in understanding the MDE tasks that might be subject to recommendations, the applicable recommendation techniques and evaluation methods, and the open challenges and opportunities in this field of research. This work has been funded by the European Union’s Horizon 2020 research and innovation programme under the Marie SkƂodowska-Curie Grant Agreement No. 813884 (Lowcomote [134]), by the Spanish Ministry of Science (projects MASSIVE, RTI2018-095255-B-I00, and FIT, PID2019-108965GB-I00) and by the R&D programme of Madrid (Project FORTE, P2018/TCS-431)

    Automating the synthesis of recommender systems for modelling languages

    We are witnessing an increasing interest in building recommender systems (RSs) for all sorts of software engineering activities. Modelling is no exception to this trend, as modelling environments are being enriched with RSs that help build models by providing recommendations based on previous solutions to similar problems in the same domain. However, building an RS from scratch requires considerable effort and specialized knowledge. To alleviate this problem, we propose an automated approach to the generation of RSs for modelling languages. Our approach is model-based, and we provide a domain-specific language called Droid to configure every aspect of the RS (such as the type and features of the recommended items, the recommendation method, and the evaluation metrics). The RS so configured can be deployed as a service, and we offer out-of-the-box integration of this service with the EMF tree editor. To assess the usefulness of our proposal, we present a case study on the integration of a generated RS with a modelling chatbot, and report on an offline experiment measuring the precision and completeness of the recommendations. This project has received funding from the EU Horizon 2020 research and innovation programme under the Marie SkƂodowska-Curie grant agreement No 813884, the Spanish Ministry of Science (RTI2018-095255-B-I00) and the R&D programme of Madrid (P2018/TCS-4314)
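The configuration aspects the abstract lists (item type and features, recommendation method, evaluation metrics) can be pictured as a structured configuration object. The sketch below is hypothetical: it does not use Droid's actual syntax, and every key, value, and the validation helper are invented for illustration.

```python
# Hypothetical RS configuration covering the three aspects named in the
# abstract. Keys and values are illustrative, not Droid's actual syntax.
rs_config = {
    "target_items": {
        "type": "Class",                      # metamodel element to recommend
        "features": ["name", "attributes"],   # item features the RS may use
    },
    "recommendation_method": "item-based collaborative filtering",
    "evaluation": {"metrics": ["precision", "recall"], "split": 0.8},
}

def validate(config):
    """Minimal sanity check: every required section must be present."""
    required = {"target_items", "recommendation_method", "evaluation"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing sections: {sorted(missing)}")
    return True

validate(rs_config)  # raises if a section is missing
```

A generator could walk such a structure to emit the service described in the paper; here it only illustrates the shape of the choices involved.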

    Chatbots for Modelling, Modelling of Chatbots

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Department of Computer Engineering. Defense date: 28-03-202

    Accountability Infrastructure: How to implement limits on platform optimization to protect population health

    Attention capitalism has generated design processes and product development decisions that prioritize platform growth over all other considerations. To the extent that limits have been placed on these incentives, interventions have primarily taken the form of content moderation. While moderation is important for what we call "acute harms," societal-scale harms -- such as negative effects on mental health and social trust -- require new forms of institutional transparency and scientific investigation, which we group under the term accountability infrastructure. This is not a new problem. In fact, the history of public health offers many conceptual lessons and implementation approaches for accountability infrastructure. After reviewing these insights, we reinterpret the societal harms generated by technology platforms through the lens of public health. To that end, we present a novel mechanism design framework and practical measurement methods for that framework. The proposed approach is iterative, built into the product design process, and applicable to both internally motivated (i.e., self-regulation by companies) and externally motivated (i.e., government regulation) interventions for a range of societal problems, including mental health. We aim to help shape a research agenda of principles for the design of mechanisms around problem areas on which there is broad consensus and a firm base of support. We offer constructive examples and discussion of potential implementation methods related to these topics, as well as several new data illustrations for potential effects of exposure to online content. Comment: 63 pages, 5 tables and 6 figures

    Succeeding metadata based annotation scheme and visual tips for the automatic assessment of video aesthetic quality in car commercials

    In this paper, we present a computational model capable of predicting the viewer perception of car advertisement videos by using a set of low-level video descriptors. Our research goal relies on the hypothesis that these descriptors can reflect the aesthetic value of the videos and, in turn, their viewers' perception. To that effect, and as a novel approach to this problem, we automatically annotate our video corpus, downloaded from YouTube, by applying an unsupervised clustering algorithm to the retrieved metadata linked to the viewers' assessments of the videos. In this regard, a regular k-means algorithm is applied as the partitioning method, with k ranging from 2 to 5 clusters modeling different satisfaction levels or classes. In addition, the available metadata is categorized into two different types based on the profile of the viewers of the videos: metadata based on explicit and implicit opinion, respectively. These two types of metadata are first tested individually and then combined, resulting in three different models or strategies that are thoroughly analyzed. Typical feature selection techniques are used over the implemented video descriptors as a pre-processing step in the classification of viewer perception, where several different classifiers have been considered as part of the experimental setup. Evaluation results show that the proposed video descriptors are clearly indicative of the subjective perception of viewers regardless of the implemented strategy and the number of classes considered. The strategy based on explicit opinion metadata clearly outperforms the implicit one in terms of classification accuracy. Finally, the combined approach slightly improves on the explicit one, achieving a top accuracy of 72.18% when distinguishing between 2 classes, and suggesting that better classification results could be obtained by using suitable metrics to model perception derived from all available metadata.
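The automatic annotation step (k-means over viewer metadata, with k satisfaction classes) can be sketched in miniature. This is a minimal stand-in under stated assumptions: a tiny 1-D k-means over a single invented metadata score (a like ratio) replaces the paper's k-means over the full metadata, and all data values are illustrative.

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k groups; returns one label per value."""
    # Seed centroids by sampling the sorted values at regular intervals.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Move each centroid to the mean of its assigned values.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels

# Invented like ratios for six videos: two clearly low, four clearly high.
like_ratios = [0.10, 0.15, 0.80, 0.85, 0.90, 0.95]
print(kmeans_1d(like_ratios, k=2))  # → [0, 0, 1, 1, 1, 1]
```

The resulting labels play the role of the satisfaction classes that the paper's classifiers are then trained to predict from the video descriptors.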

    Semantic Rule-based Approach for Supporting Personalised Adaptive E-Learning

    Instructional designers are under increasing pressure to enhance the pedagogical quality and technical richness of their learning content offerings, while the task of authoring for such complex educational frameworks is expensive and time-consuming. Personalisation and reusability of learning contents are two main factors which can be used to enhance the pedagogical impact of e-learning experiences while also optimising resources, such as the overall cost and time of designing materials for different e-learning systems. However, personalisation services require continuous fine-tuning of the different features that should be used, and e-learning systems need sufficient flexibility to accommodate these continual changes. The semantic modelling of adaptable learning components can strongly influence the personalisation of the learning experience and enables the reusability, adaptability and maintainability of these components. Through the discrete modelling of these components, the flexibility and extensibility of e-learning systems are improved, as learning contents can be separated from the adaptation logic; the learning content is then no longer specific to any given adaptation rule or instructional plan. This thesis proposes an innovative semantic rule-based approach to dynamically generate personalised learning content utilising reusable pieces of learning content. It describes an ontology-based engine that composes, at runtime, adapted learning experiences according to the learner's interaction with the system and the learner's characteristics. Additionally, enriching ontologies with semantic rules increases the reasoning power and helps to represent adaptation decisions. This novel approach aims to improve the flexibility, extensibility and reusability of systems, while offering a pedagogically effective and satisfactory learning experience for learners.
This thesis offers the theoretical models, design and implementation of an adaptive e-learning system in accordance with this approach. It also describes the evaluation of the developed personalised adaptive e-learning system (Rule-PAdel) from pedagogical and technical perspectives.
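The separation the thesis argues for, adaptation rules kept apart from the learning content they select, can be pictured in miniature. This is a toy sketch under stated assumptions: the thesis uses ontologies and semantic rules, whereas here plain Python predicates stand in for the rules, and the learner attributes and content variants are invented.

```python
# Adaptation decisions expressed as data, separate from the content itself.
# Each rule pairs a condition over the learner model with a content variant.
RULES = [
    (lambda learner: learner["level"] == "beginner", "video-introduction"),
    (lambda learner: learner["level"] == "advanced"
     and learner["prefers"] == "text", "technical-article"),
]

def select_content(learner, rules, default="standard-lesson"):
    """Return the first content variant whose rule matches the learner."""
    for condition, variant in rules:
        if condition(learner):
            return variant
    return default

print(select_content({"level": "beginner", "prefers": "video"}, RULES))
# → video-introduction
```

Because the rules only name content variants, the same content can be reused under different rule sets, which is the reusability argument the abstract makes.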

    Computational Methods for Medical and Cyber Security

    Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have grown rapidly in their development of solutions in various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architectures, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. It is therefore crucial to apply these new methods to life-critical missions and to measure the success of such less-traditional algorithms when used in these fields.

    Task-based Example Miner for Intelligent Tutoring Systems

    Intelligent tutoring systems (ITS) aim to provide customized resources or feedback on a subject (commonly known as a domain in ITS) to students in real time, emulating the behavior of an actual teacher in a classroom. This thesis designs an ITS based on an instructional strategy called example-based learning (EBL), which focuses primarily on students devoting their time and cognitive capacity to studying worked-out examples so that they can enhance their learning and apply it to similar graded problems or tasks. A task is a graded problem or question that an ITS assigns to students (e.g. task T1 in the C programming domain, defined as “Write an assignment instruction in C that adds 2 integers”). A worked-out example refers to a complete solution of a problem or question in the domain. Existing ITSs that use EBL to teach their domain, such as NavEx and PADS, suffer from several limitations: (1) the methods used to extract knowledge from given tasks and worked-out examples require highly trained experts and are not easily applicable or extendable to other problem domains (e.g. Math), either because they rely on manual knowledge extraction methods (such as Item Objective Consistency (IOC)) or on highly complex automated methods (such as syntax tree generation); and (2) the recommended worked-out examples are not customized for the assigned tasks and are therefore ineffective in improving student success rates.
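Limitation (2), examples not customized for the assigned task, suggests ranking worked-out examples by their similarity to the task. The sketch below is illustrative only: the keyword sets, example identifiers, and the choice of Jaccard similarity are assumptions, not the thesis's actual mining method.

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented worked-out examples, each described by a set of keywords.
examples = {
    "ex1": {"assignment", "integer", "addition"},
    "ex2": {"loop", "array", "sum"},
    "ex3": {"assignment", "float", "division"},
}

def recommend(task_keywords, examples, top_n=2):
    """Rank examples by keyword overlap with the task; return the top few."""
    ranked = sorted(examples,
                    key=lambda e: jaccard(task_keywords, examples[e]),
                    reverse=True)
    return ranked[:top_n]

# Task T1 from the abstract: an assignment instruction that adds 2 integers.
t1 = {"assignment", "integer", "addition"}
print(recommend(t1, examples))  # → ['ex1', 'ex3']
```

A task-based miner would derive the keyword sets automatically rather than by hand; here they are fixed to keep the ranking step visible.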

    Graphs behind data: A network-based approach to model different scenarios

    Nowadays, the amount and variety of scenarios that can benefit from techniques for extracting and managing knowledge from raw data have dramatically increased. As a result, the search for models capable of ensuring the representation and management of highly heterogeneous data is a hot topic in the data science literature. In this thesis, we propose a solution to address this issue. In particular, we believe that graphs, and more specifically complex networks, together with the concepts and approaches associated with them, can represent a solution to the problem mentioned above. In fact, we believe that they can be a unique and unifying model to uniformly represent and handle extremely heterogeneous data. Based on this premise, we show how the same concepts and approaches have the potential to address different open issues in different contexts.
    Virgili, Luc
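The unifying-model claim can be illustrated concretely: heterogeneous records become typed nodes in one graph, and a single network concept then applies uniformly across all of them. The node types, edges, and the choice of degree as the measure are invented for this sketch; the thesis works with complex networks in general.

```python
from collections import defaultdict

# Heterogeneous entities (a user, papers, a tag) as nodes of one graph.
edges = [
    ("user:alice", "paper:p1"),    # authorship
    ("user:alice", "tag:graphs"),  # declared interest
    ("paper:p1", "tag:graphs"),    # paper topic
    ("paper:p2", "tag:graphs"),
]

# Build an undirected adjacency structure.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# The same network measure (degree) applies to every node type alike.
degree = {node: len(neighbours) for node, neighbours in adjacency.items()}
print(max(degree, key=degree.get))  # → tag:graphs
```

Swapping degree for centrality, community detection, or link prediction keeps the same uniform representation, which is the point the abstract makes.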