5 research outputs found

    Stress-Testing Remote Model Querying APIs for Relational and Graph-Based Stores

    Recent research in scalable model-driven engineering allows very large models to be stored and queried. Due to their size, rather than transferring such models over the network in their entirety, it is typically more efficient to access them remotely through networked services (e.g. model repositories, model indexes). Little attention has been paid so far to the nature of these services and whether they remain responsive as the number of concurrent clients increases. This paper extends a previous empirical study on the impact of certain key decisions on the scalability of concurrent model queries across two domains, using an Eclipse Connected Data Objects (CDO) model repository, four configurations of the Hawk model index, and a Neo4j-based configuration of the NeoEMF model store. The study evaluates the impact of the network protocol, the API design, the caching layer, the query language and the type of database, and analyses the reasons for their varying levels of performance. The design of the API was shown to make a bigger difference than the choice of network protocol (HTTP/TCP). Where available, the query-specific indexed and derived attributes in Hawk outperformed the comprehensive generic caching in CDO. Finally, the results illustrate the still ongoing evolution of graph databases: two tools using different versions of the same backend showed very different performance, with one slower than CDO and the other faster.
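
    As a rough illustration of the kind of workload such a study measures, the sketch below spawns a number of concurrent clients that each issue one remote query over HTTP and reports the latency distribution. The endpoint URL and query string are hypothetical placeholders, not the paper's actual harness or the APIs of the tools it compares.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Minimal sketch of a concurrent query stress test: N clients issue the same
// remote query over HTTP in parallel and we report median/max latency.
// The endpoint and query payload below are hypothetical placeholders.
public class QueryStressTest {
    public static void main(String[] args) throws Exception {
        int clients = args.length > 0 ? Integer.parseInt(args[0]) : 32;
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/query")) // hypothetical service
                .POST(HttpRequest.BodyPublishers.ofString(
                        "return Model.allInstances()->size();")) // hypothetical query
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(clients);
        List<Future<Long>> futures = IntStream.range(0, clients)
                .mapToObj(i -> pool.submit(() -> {
                    long start = System.nanoTime();
                    http.send(request, HttpResponse.BodyHandlers.ofString());
                    return (System.nanoTime() - start) / 1_000_000; // ms
                }))
                .collect(Collectors.toList());

        List<Long> latencies = new ArrayList<>();
        for (Future<Long> f : futures) latencies.add(f.get());
        latencies.sort(null);
        System.out.printf("clients=%d median=%dms max=%dms%n",
                clients, latencies.get(latencies.size() / 2),
                latencies.get(latencies.size() - 1));
        pool.shutdown();
    }
}
```

    Re-running such a harness with increasing client counts is what reveals whether the service degrades gracefully or collapses under contention.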

    Hawk solutions to the TTC 2018 Social Media Case

    The TTC 2018 Social Media Case required answering queries about social networks in which people write posts, comment on them, and friend or unfriend each other. NoSQL databases have been popular in the analysis of large social networks, and the Hawk heterogeneous model indexer can turn the models in the case into Neo4j NoSQL databases. This paper presents three solutions built on top of each other, each step reducing the amount of work required to update the query results.
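
    For flavour, the sketch below runs a hypothetical ranking query against a Neo4j database through its official Java driver. The connection details and the Post/Comment/COMMENTS schema are assumptions for illustration; they are not necessarily the labels Hawk generates for the case models, nor the case's exact scoring rules.

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Record;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;

// Sketch of querying a social-network graph stored in Neo4j via its Java
// driver. The URI, credentials, and Post/Comment/COMMENTS schema are assumed
// for illustration only.
public class SocialMediaQuery {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver(
                "bolt://localhost:7687", AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            // Top 3 posts, ranked by how many comments (direct or nested)
            // they received.
            Result result = session.run(
                "MATCH (p:Post)<-[:COMMENTS*]-(c:Comment) " +
                "RETURN p.id AS id, count(c) AS score " +
                "ORDER BY score DESC, p.timestamp DESC LIMIT 3");
            for (Record record : result.list()) {
                System.out.println(record.get("id").asLong()
                        + " -> " + record.get("score").asLong());
            }
        }
    }
}
```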

    Non-human Modelers: Challenges and Roadmap for Reusable Self-explanation

    Increasingly, software acts as a “non-human modeler” (NHM), managing a model according to high-level goals rather than a predefined script. To foster adoption, we argue that we should treat these NHMs as members of the development team. In our GrandMDE talk, we discussed the importance of three areas: effective communication (self-explanation and problem-oriented configuration), selection, and process integration. In this extended version of the talk, we expand on the self-explanation area, describing its background in more depth and outlining a research roadmap based on a basic case study.

    Reflecting on the past and the present with temporal graph-based models

    Self-adaptive systems (SAS) need to reflect on the current environment conditions and on their past and current behaviour to support decision making. Decisions may have different effects depending on the context. On the one hand, some adaptations may have run into difficulties. On the other hand, users or operators may want to know why the system evolved in a certain direction, or why it is showing a given behaviour or has made a particular decision, as that behaviour may be surprising or unexpected. We argue that answering emerging questions like these requires storing execution trace models in a way that allows travelling back and forth in time, qualifying the decision making against the available evidence. In this paper, we propose temporal graph databases as a useful representation for trace models to support self-explanation, interactive diagnosis or forensic analysis. We define a generic meta-model for structuring execution traces of SAS, and show how a sequence of traces can be turned into a temporal graph model. We present a first version of a query language for these temporal graphs through a case study, and outline potential applications for forensic analysis (after the system has finished, possibly abnormally), self-explanation, and interactive diagnosis at runtime.
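
    The time-travel capability this relies on can be illustrated with a minimal versioned-property structure (purely illustrative, not the paper's storage layer): every write is kept with its timestamp, so a query can read a node "as of" any past instant.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Minimal sketch of a temporal node: each property write is kept with its
// timestamp, so queries can read the value that was current at any instant.
// Illustrative data structure only, not the paper's actual meta-model.
public class TemporalNode {
    // property name -> (timestamp -> value written at that timestamp)
    private final Map<String, NavigableMap<Long, Object>> history = new HashMap<>();

    public void set(String property, long timestamp, Object value) {
        history.computeIfAbsent(property, k -> new TreeMap<>())
               .put(timestamp, value);
    }

    // Value of the property as of the given instant (latest write <= timestamp).
    public Object getAt(String property, long timestamp) {
        NavigableMap<Long, Object> versions = history.get(property);
        if (versions == null) return null;
        Map.Entry<Long, Object> entry = versions.floorEntry(timestamp);
        return entry == null ? null : entry.getValue();
    }

    public static void main(String[] args) {
        TemporalNode decision = new TemporalNode();
        decision.set("plannedAction", 10, "scaleUp");
        decision.set("plannedAction", 25, "scaleDown");
        // Travelling back in time: what had the system decided at each instant?
        System.out.println(decision.getAt("plannedAction", 12)); // scaleUp
        System.out.println(decision.getAt("plannedAction", 30)); // scaleDown
    }
}
```

    The sorted-map lookup (floorEntry) is the essential time-travel primitive: it resolves any query timestamp to the version that was live at that moment.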

    Temporal Models For History-Aware Explainability In Self-Adaptive Systems

    The complexity of real-world problems requires modern software systems to autonomously adapt and modify their behaviour at runtime to deal with unforeseen internal and external fluctuations and contexts. Consequently, these self-adaptive systems (SAS) can show unexpected and surprising behaviours that stakeholders may not understand or agree with. This may be exacerbated by the ubiquity and complexity of Artificial Intelligence (AI) techniques, which are often considered “black boxes” and are increasingly used by SAS. This thesis explores how synergies between model-driven engineering and runtime monitoring help to enable explanations based on SAS’ historical behaviour, with the objective of promoting transparency and understandability in these types of systems. Specifically, this PhD work has studied how runtime models extended with long-term memory can provide the abstraction, analysis and reasoning capabilities needed to support explanations when using AI-based SAS. For this purpose, this work argues that a system should i) offer access to and retrieval of historical data about past behaviour, ii) track over time the reasons for its decision making, and iii) be able to convey this knowledge to different stakeholders as part of explanations justifying its behaviour. Runtime models stored in temporal graph databases, which result in Temporal Models (TMs), are proposed for tracking the decision-making history of SAS to support explanations. The approach enables explainability for interactive diagnosis (i.e. during execution) and forensic analysis (i.e. after the fact) based on the trajectory of the SAS execution. Furthermore, in cases where resources are limited (e.g. storage capacity or response time), the proposed architecture also integrates a runtime monitoring technique, complex event processing (CEP). CEP makes it possible to detect and store only the matches to relevant event patterns, instead of keeping the entire history. The proposed architecture helps developers gain insights into SAS as they validate and improve their systems.
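
    The CEP idea can be illustrated with a toy pattern matcher in plain Java: scan the event stream for a pattern of interest and persist only the matches, not every event. The pattern ("three consecutive rising readings") and event type are hypothetical; a real deployment would use a CEP engine rather than this hand-rolled sketch.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy illustration of the CEP idea: match an event stream against a pattern
// ("three consecutive strictly rising readings") and store only the matches,
// rather than the entire event history. Pattern and data are hypothetical.
public class RisingTrendDetector {
    private final Deque<Double> window = new ArrayDeque<>(); // last 3 readings
    private final List<List<Double>> storedMatches = new ArrayList<>();

    public void onEvent(double reading) {
        window.addLast(reading);
        if (window.size() > 3) window.removeFirst();
        if (window.size() == 3 && isStrictlyRising()) {
            storedMatches.add(new ArrayList<>(window)); // persist the match only
        }
    }

    private boolean isStrictlyRising() {
        double prev = Double.NEGATIVE_INFINITY;
        for (double v : window) {
            if (v <= prev) return false;
            prev = v;
        }
        return true;
    }

    public static void main(String[] args) {
        RisingTrendDetector detector = new RisingTrendDetector();
        for (double r : new double[]{20.1, 19.8, 20.5, 21.2, 22.0, 21.7}) {
            detector.onEvent(r);
        }
        // Only the pattern matches are kept, not all six readings:
        System.out.println(detector.storedMatches);
        // [[19.8, 20.5, 21.2], [20.5, 21.2, 22.0]]
    }
}
```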