13 research outputs found

    Model-Driven Engineering for Artificial Intelligence - A Systematic Literature Review

    Objective: This study aims to investigate the existing body of knowledge in the field of Model-Driven Engineering (MDE) in support of AI (MDE4AI), to sharpen future research and define the current state of the art. Method: We conducted a Systematic Literature Review (SLR), collecting papers from five major databases and resulting in 703 candidate studies, of which 15 were eventually retained as primary studies. Each primary study is evaluated and discussed with respect to the adoption of (1) MDE principles and practices and (2) the phases of AI development support, aligned with the stages of the CRISP-DM methodology. Results: The study's findings show that the pillar concepts of MDE (metamodel, concrete syntax and model transformation) are leveraged to define domain-specific languages (DSLs) that explicitly address AI concerns. Different MDE technologies are used, leveraging different language workbenches. The most prominent AI-related concerns are training and modeling of the AI algorithm, while little emphasis is given to the time-consuming preparation of the data sets. Early project phases that support interdisciplinary communication of requirements, such as the CRISP-DM "Business Understanding" phase, are rarely reflected. Conclusion: The study found that the use of MDE for AI is still in its early stages, and that no single tool or method is widely used. Additionally, current approaches tend to focus on specific stages of development rather than supporting the entire development process. As a result, the study suggests several research directions to further improve the use of MDE for AI and to guide future research in this area.

    No Unification Variable Left Behind: Fully Grounding Type Inference for the HDM System

    The Hindley-Damas-Milner (HDM) system provides polymorphism, a key feature of functional programming languages such as Haskell and OCaml. It does so through a type inference algorithm whose soundness and completeness have been well studied and proven both manually (on paper) and mechanically (in a proof assistant). Earlier research has focused on the problem of inferring the type of a top-level expression. Yet, in practice, we may also wish to infer the types of subexpressions, either for the sake of elaboration into an explicitly-typed target language, or for reporting those types back to the programmer. One key difference between these two problems is the treatment of underconstrained types: in the former, unification variables that do not affect the overall type need not be instantiated. In the latter, however, instantiating all unification variables is essential, because unification variables are internal to the algorithm and should not leak into the output. We present an algorithm for the HDM system that explicitly tracks the scope of all unification variables. In addition to solving the subexpression type reconstruction problem described above, it can be used as a basis for elaboration algorithms, including those that implement elaboration-based features such as type classes. The algorithm uses input and output contexts, as well as the novel concept of full contexts, which significantly simplifies the state-passing of traditional algorithms. The algorithm has been formalised and proven sound and complete using the Coq proof assistant.
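    The distinction between internal unification variables and user-visible types can be made concrete with a small sketch. The Python toy below (our own illustration, not the paper's Coq-formalised algorithm; the names TVar, Meta and zonk are hypothetical) fully grounds every type before reporting it: solved metavariables are substituted away, and unconstrained ones are replaced by fresh rigid type variables, so no unification variable leaks into the output. The occurs check is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:          # rigid type variable; may appear in reported types
    name: str

@dataclass(frozen=True)
class Meta:          # unification variable; internal to the algorithm
    id: int

@dataclass(frozen=True)
class Arrow:         # function type: arg -> res
    arg: "object"
    res: "object"

def walk(t, subst):
    """Chase substitution links until an unsolved Meta or a non-Meta type."""
    while isinstance(t, Meta) and t.id in subst:
        t = subst[t.id]
    return t

def unify(a, b, subst):
    """Extend subst (Meta.id -> type) so that a ~ b. No occurs check."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return
    if isinstance(a, Meta):
        subst[a.id] = b
    elif isinstance(b, Meta):
        subst[b.id] = a
    elif isinstance(a, Arrow) and isinstance(b, Arrow):
        unify(a.arg, b.arg, subst)
        unify(a.res, b.res, subst)
    else:
        raise TypeError(f"cannot unify {a} and {b}")

def zonk(t, subst, fresh):
    """Fully ground a type: no Meta survives in the result."""
    t = walk(t, subst)
    if isinstance(t, Meta):
        # Unconstrained metavariable: instantiate it to a fresh rigid
        # variable instead of leaking it into the output.
        v = fresh(t.id)
        subst[t.id] = v
        return v
    if isinstance(t, Arrow):
        return Arrow(zonk(t.arg, subst, fresh), zonk(t.res, subst, fresh))
    return t

s = {}
f, x = Meta(0), Meta(1)
unify(f, Arrow(x, TVar("Int")), s)           # constraint: f ~ (x -> Int)
print(zonk(f, s, lambda i: TVar(f"t{i}")))   # fully grounded: t1 -> Int
```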

    Learn-CIAM: a model-driven approach for the development of collaborative learning tools

    This paper introduces Learn-CIAM, a new model-based methodological approach for the design of flows and the semi-automatic generation of tools to support collaborative learning tasks. The main objective of this work is to help professors by establishing a series of steps for specifying their learning courses and obtaining collaborative tools to support certain learning activities (in particular, in-group editing, searching and modeling). This paper presents the complete methodological framework, how it is supported conceptually and technologically, and an application example. To guarantee the validity of the proposal, we also present validation processes with potential designers and users from different profiles, such as Education and Computer Science. The results seem to demonstrate a positive reception and acceptance, concluding that its application would facilitate the design of learning courses and the generation of collaborative learning tools for professionals of both profiles.

    Derivation of Change Sequences from State-Based File Differences for Delta-Based Model Consistency

    In view-based software development, multiple views may represent the same concept, so views can contain redundant or dependent information. Keeping these individual views synchronized is essential to avoid inconsistencies in the system. Approaches based on a Single Underlying Model (SUM) avoid inconsistencies by using the SUM as the central and only source of information, from which views are projected. To synchronize views with the SUM, delta-based consistency preservation is used in most cases. It relies on fine-grained change sequences, which must be provided by the individual views, to update the SUM incrementally. In real-world use cases, however, the functionality to provide these change sequences is rarely available; instead, only state-based changes are persisted. It is therefore desirable to also support views that provide only state-based changes in delta-based consistency preservation. This can be achieved by deriving the fine-grained change sequences from the state-based changes. In this thesis, we evaluate the quality of derived change sequences in the context of model consistency preservation. To derive such a sequence, matching elements of the compared models must be identified and their differences determined. We use two strategies to identify matching elements: one recognizes matching elements by their unique identifiers, the other uses a similarity metric based on the elements' features. As the basis for the evaluation, various test scenarios are created. For each test, an initial and a modified version of both a UML class diagram and Java code are provided. We use the different strategies to derive change sequences from the state-based changes of the UML view, propagate them to the SUM, and examine the results in both domains. The results show that the strategy based on unique identifiers yields the correct change sequence in almost all considered cases (97%). For the similarity-based strategy, we identify two recurring error patterns. To address these problems, we present an extended similarity-based strategy that reduces the frequency of these error patterns without significantly affecting execution time.
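    To make the derivation concrete, the following Python sketch (our own illustration, not the thesis's implementation) shows the identifier-based strategy: elements of the old and new model states are matched by their unique identifiers, and a fine-grained sequence of create, set-feature and delete changes is derived from the state-based difference.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    id: str                                   # unique identifier used for matching
    features: dict = field(default_factory=dict)

def derive_changes(old: dict, new: dict) -> list:
    """old and new map element id -> Element; returns an ordered change sequence."""
    changes = []
    for eid, elem in new.items():
        if eid not in old:                    # element exists only in the new state
            changes.append(("create", eid, dict(elem.features)))
        else:                                 # matched element: diff its features
            for feat, val in elem.features.items():
                if old[eid].features.get(feat) != val:
                    changes.append(("set", eid, feat, val))
    for eid in old:                           # element exists only in the old state
        if eid not in new:
            changes.append(("delete", eid))
    return changes

old = {"c1": Element("c1", {"name": "Customer"})}
new = {"c1": Element("c1", {"name": "Client"}),
       "c2": Element("c2", {"name": "Order"})}
print(derive_changes(old, new))
# [('set', 'c1', 'name', 'Client'), ('create', 'c2', {'name': 'Order'})]
```

    The similarity-based strategy would replace the identifier lookup with a feature-based similarity metric, which is needed when elements carry no stable identifiers but, as the results above indicate, is more error-prone.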

    The upper bound for the length of the shortest homing sequences

    Homing sequences are special input sequences used by various finite state machine based testing techniques. A shorter homing sequence is typically preferred, since it yields a shorter test sequence. Finding a shortest homing sequence is known to be an NP-hard problem. The upper bound on the length of shortest homing sequences has also been studied in the literature. A tight upper bound for the length of a shortest homing sequence for a finite state machine with n states is known to be n(n−1)/2. However, the known examples of finite state machines reaching this upper bound also use n−1 input symbols, i.e. the size of the input alphabet grows with the number of states. Is this upper bound reachable for a finite state machine with a constant number of inputs? In this work, we answer this question negatively through an experimental analysis. By exhaustively enumerating all finite state machines with two input symbols and two output symbols, we experimentally compute the upper bound for the length of the shortest homing sequence for finite state machines with 10 or fewer states. To make this computation feasible in practice, we apply several techniques to eliminate from our search those finite state machines that would not affect the result of the computation.
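    The following Python sketch (our own, far simpler than the exhaustive enumeration described above) illustrates the object being computed: a breadth-first search over the "current state uncertainty" of a Mealy machine that returns a shortest homing sequence, i.e. a shortest input word after which the observed outputs uniquely determine the final state.

```python
from collections import deque

def shortest_homing_sequence(states, inputs, delta, lam):
    """delta[s][x] is the next state, lam[s][x] the output symbol."""
    start = frozenset([frozenset(states)])    # one block: total uncertainty
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        blocks, seq = queue.popleft()
        if all(len(b) == 1 for b in blocks):  # outputs pin down the final state
            return seq
        for x in inputs:
            nxt = set()
            for b in blocks:
                by_out = {}                   # states split by the output they emit
                for s in b:
                    by_out.setdefault(lam[s][x], set()).add(delta[s][x])
                nxt.update(frozenset(t) for t in by_out.values())
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [x]))
    return None                               # no homing sequence exists

# A 3-state machine with two inputs and two outputs (a hypothetical example).
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 0}, 2: {'a': 0, 'b': 2}}
lam   = {0: {'a': 0, 'b': 0}, 1: {'a': 0, 'b': 1}, 2: {'a': 1, 'b': 0}}
print(shortest_homing_sequence([0, 1, 2], ['a', 'b'], delta, lam))  # ['a', 'a']
```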

    Pattern-based refactoring in model-driven engineering

    Model-Driven Engineering (MDE) is a software engineering paradigm that uses models as first-class concepts from which validation, code, tests, and documentation are derived. This paradigm involves various artifacts such as models, meta-models, and model transformation programs. In an industrial context, these artifacts are increasingly complex; in particular, their maintenance consumes considerable time and resources. To reduce the complexity of these artifacts and the cost of their maintenance, many researchers have studied refactoring them to improve their quality. In this thesis, we propose to study refactoring in MDE holistically, through its application to these different artifacts. First, we use specific design patterns, as a form of prior knowledge, applied to model transformations as a vehicle for refactoring. We begin with a phase that detects design pattern occurrences in different forms and at different levels of completeness. The detected occurrences form refactoring opportunities that are then exploited to reach more desirable and/or more complete forms of these design patterns. In the absence of prior knowledge such as design patterns, we propose an approach based on genetic programming to learn transformation rules capable of detecting refactoring opportunities and correcting them. As an alternative to prior knowledge, the approach learns refactoring rules from examples of pairs of artifacts before and after refactoring. We illustrate this approach on model refactoring.
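    As a rough intuition for the example-based part of the approach, the sketch below (a deliberately simplified caricature in Python, not the thesis's genetic programming setup; rules are plain functions and the mutate operator is left abstract) scores candidate refactoring rules by how many before/after example pairs they reproduce, and evolves the population by selection and mutation.

```python
import random

def fitness(rule, examples):
    """Fraction of before/after pairs the candidate rule reproduces exactly."""
    return sum(1 for before, after in examples if rule(before) == after) / len(examples)

def evolve(population, examples, mutate, generations=50):
    """Toy, mutation-only evolutionary loop over candidate refactoring rules."""
    for _ in range(generations):
        ranked = sorted(population, key=lambda r: fitness(r, examples), reverse=True)
        survivors = ranked[: len(ranked) // 2]            # selection
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring                # next generation
        if fitness(population[0], examples) == 1.0:       # best rule is perfect
            break
    return max(population, key=lambda r: fitness(r, examples))
```

    In the full approach, the candidates are rules over model artifacts rather than plain functions, which is where the real representational and search effort lies.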

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution for analyzing large volumes of data visually, identifying patterns and relations, and making decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. Thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is itself a result. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
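    The core model-driven idea can be sketched in a few lines of Python (a hypothetical toy, not the thesis's actual meta-model or generative pipeline): a dashboard model is an instance of a small meta-model, and a model-to-text transformation turns that instance into a functional artifact.

```python
from dataclasses import dataclass

@dataclass
class Visualization:            # meta-model concept: one chart on the dashboard
    kind: str                   # e.g. "bar", "line"
    data_source: str
    title: str

@dataclass
class Dashboard:                # meta-model concept: the dashboard itself
    name: str
    visualizations: list

def generate_html(d: Dashboard) -> str:
    """Model-to-text transformation: emit a skeleton page for a dashboard model."""
    charts = "\n".join(
        f'  <div class="chart" data-kind="{v.kind}" data-src="{v.data_source}">{v.title}</div>'
        for v in d.visualizations
    )
    return f"<h1>{d.name}</h1>\n{charts}"

model = Dashboard("PhD Programme KPIs", [
    Visualization("bar", "/api/enrolments", "Enrolments per year"),
    Visualization("line", "/api/publications", "Publications over time"),
])
print(generate_html(model))
```

    A software product line perspective then treats features such as chart kinds or layouts as configurable variability points over this meta-model.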

    Copyright Policies for Scientific Publications in Institutional Repositories: The Case of INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this will help resolve an adversity that researchers have faced: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is dominated mostly by large commercial publishers, and is subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. This movement has been gaining importance in Portugal since 2003, with the establishment of the first institutional repository at the national level. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and the peer-review process itself (preprint) and after it (postprint), and consequently increasing the visibility of the work carried out by a researcher and the respective institution. The present study, based on an analysis of the copyright policies of INESC TEC's most relevant scientific publications, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that a great deal of awareness-raising work remains to be done, not only with researchers but also with the institution and society as a whole. The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving of publications developed in the institutional context in the repository, serves as a starting point for a greater appreciation of INESC TEC's scientific production.