
    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support usage of such Items across the spectrum of business use rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and descriptions and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management thanks to loose typing and the adoption of a unified approach to Item management and usage.
    Comment: 10 pages; 6 figures. In Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015. arXiv admin note: text overlap with arXiv:1402.5764, arXiv:1402.575
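
    To make the description-driven idea concrete, the sketch below pairs each managed element (an "Item") with a separate, loosely typed description object carrying its meta-data, and records every use of the Item against it. This is only an illustration under assumed names (Item, ItemDescription, record_use); it is not the design of the CERN, health-informatics or business-process systems described in the paper.

        from dataclasses import dataclass, field
        from datetime import datetime, timezone
        from typing import Any, Dict, List


        @dataclass
        class ItemDescription:
            """Meta-data describing an Item independently of any single application."""
            name: str
            version: int
            properties: Dict[str, str] = field(default_factory=dict)  # loosely typed key/value pairs


        @dataclass
        class Item:
            """A data or process element managed uniformly through its description."""
            item_id: str
            description: ItemDescription
            payload: Any = None
            history: List[str] = field(default_factory=list)  # usage trace kept for traceability

            def record_use(self, actor: str, action: str) -> None:
                # Every use is appended to the Item's history, so the Item can be
                # traced across all applications that share its description.
                stamp = datetime.now(timezone.utc).isoformat()
                self.history.append(f"{stamp} {actor}: {action}")


        # One description can be re-used by many Items across the enterprise.
        reading_spec = ItemDescription(name="SensorReading", version=2,
                                       properties={"unit": "kPa", "source": "detector"})
        item = Item(item_id="run-42/reading-0001", description=reading_spec)
        item.record_use(actor="calibration-job", action="read")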

    Reflecting on the past and the present with temporal graph-based models

    Self-adaptive systems (SAS) need to reflect on the current environment conditions and on their own past and current behaviour to support decision making. Decisions may have different effects depending on the context. On the one hand, some adaptations may have run into difficulties. On the other hand, users or operators may want to know why the system evolved in a certain direction, or simply why it is showing a given behaviour or has made a particular decision, as that behaviour may be surprising or unexpected. We argue that answering the questions that emerge from situations like these requires storing execution trace models in a way that allows travelling back and forth in time, qualifying the decision making against the available evidence. In this paper, we propose temporal graph databases as a useful representation for trace models to support self-explanation, interactive diagnosis or forensic analysis. We define a generic meta-model for structuring execution traces of SAS and show how a sequence of traces can be turned into a temporal graph model. We present a first version of a query language for these temporal graphs through a case study, and outline potential applications for forensic analysis (after the system has terminated, possibly abnormally), self-explanation, and interactive diagnosis at runtime.
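
    The core mechanism, trace elements annotated with validity intervals so the graph can be queried "as of" an earlier instant, can be sketched briefly. The Python below is an assumption-laden illustration, not the meta-model or query language proposed in the paper; all names (TemporalTraceGraph, as_of, the id/kind tuples) are invented for the example.

        import math
        from dataclasses import dataclass
        from typing import Dict, List, Tuple


        @dataclass
        class TemporalNode:
            node_id: str
            kind: str                    # e.g. "Decision", "Observation"
            valid_from: float            # time the element first appeared in a trace
            valid_to: float = math.inf   # time it disappeared (inf = still present)


        @dataclass
        class TemporalEdge:
            src: str
            dst: str
            label: str                   # e.g. "basedOn", "triggered"
            valid_from: float
            valid_to: float = math.inf


        class TemporalTraceGraph:
            """Folds a sequence of trace snapshots into a single temporal graph."""

            def __init__(self) -> None:
                self.nodes: Dict[str, TemporalNode] = {}
                self.edges: List[TemporalEdge] = []

            def add_snapshot(self, t: float,
                             node_specs: List[Tuple[str, str]],
                             edge_specs: List[Tuple[str, str, str]]) -> None:
                # node_specs: (node_id, kind) pairs; edge_specs: (src, dst, label) triples.
                seen = {nid for nid, _ in node_specs}
                for nid, kind in node_specs:
                    self.nodes.setdefault(nid, TemporalNode(nid, kind, valid_from=t))
                for node in self.nodes.values():
                    # Elements missing from this snapshot get their validity interval closed.
                    if node.node_id not in seen and node.valid_to == math.inf:
                        node.valid_to = t
                # (Closing edges whose endpoints disappeared is analogous and omitted for brevity.)
                for src, dst, label in edge_specs:
                    if not any(e.src == src and e.dst == dst and e.label == label
                               and e.valid_to == math.inf for e in self.edges):
                        self.edges.append(TemporalEdge(src, dst, label, valid_from=t))

            def as_of(self, t: float) -> Tuple[List[TemporalNode], List[TemporalEdge]]:
                """Travel back in time: the trace graph as it existed at time t."""
                nodes = [n for n in self.nodes.values() if n.valid_from <= t < n.valid_to]
                edges = [e for e in self.edges if e.valid_from <= t < e.valid_to]
                return nodes, edges


        # Two snapshots: decision d1 appears at t=1 and is superseded by d2 at t=2.
        g = TemporalTraceGraph()
        g.add_snapshot(1.0, [("d1", "Decision"), ("o1", "Observation")], [("d1", "o1", "basedOn")])
        g.add_snapshot(2.0, [("d2", "Decision"), ("o1", "Observation")], [("d2", "o1", "basedOn")])
        past_nodes, past_edges = g.as_of(1.5)   # d1 and o1 are live at t=1.5, d2 is not

    In this sketch a query for time t simply filters elements whose validity interval contains t; a fuller query language would also traverse edges and compare states across different instants.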

    Configuration management for models: generic methods for model comparison and model co-evolution

    Get PDF
    It is an undeniable fact that software plays an important role in our lives. We use software to play our music, to check our e-mail, or even to help us drive our car, so the quality of software directly influences the quality of our lives. However, the traditional Software Engineering (SE) paradigm cannot cope with the increasing demands on the quantity and quality of the software being produced, and a new paradigm, Model Driven Software Engineering (MDSE), is quickly gaining ground. MDSE promises to solve some of the problems of traditional SE by raising the level of abstraction: it proposes the use of models and model transformations, instead of the textual program files of traditional SE, as the means of producing software. The models are usually graph-based and built using graphical notations, i.e. they are represented diagrammatically. The advantages of graphical models over text files are numerous; for example, it is usually easier to deduce the relations between different model elements in their diagrammatic form, which reduces the possibility of defects during the production of the software. Furthermore, formal model transformations can be used to produce different kinds of artifacts from models at all stages of software production, for example artifacts that serve as input for model checkers or simulation tools. This enables the checking or simulation of software products in the early phases of development, which further reduces the probability of defects in the final software product. However, methods and techniques to support MDSE are not yet mature. In particular, methods and techniques for model configuration management (MCM) are still in development, and no generic MCM system exists.
    In this thesis, I describe my research on methods and techniques to support generic model configuration management, focusing on model evolution and model co-evolution. The described methods and techniques are generic and suitable for a state-based approach to model configuration management. To support model evolution, I developed methods for the representation, calculation, and visualization of state-based model differences. Unlike in previously published research, where these three aspects of model differences are dealt with in isolation, in my research all three are integrated: the result of the model-difference calculation algorithm is in the format described by my work on difference representation, and the same format is the basis of my approach to difference visualization. Notably, the developed representation format for model differences is metamodel independent and therefore generic, i.e. it can be used to represent differences between any graph-based models.
    Model co-evolution refers to the problem of adapting models when their metamodels evolve. My solution has three steps. In the first step, a special metamodel, MMfMM, is introduced; unlike in traditional approaches, where metamodels are represented as instances of a metametamodel, in my approach metamodels are represented by models that are instances of the MMfMM. In the second step, since metamodels are represented by models, the previously defined methods and techniques for model evolution are reused to represent and calculate metamodel differences. In the final step, I define an algorithm that uses the calculated metamodel differences to adapt models conforming to the evolved metamodel. To validate my approaches to model evolution and model co-evolution, I developed a tool for comparing models and visualizing the resulting differences, and a tool for model co-evolution. Moreover, I developed a method for comparing model-comparison tools and used it to conduct a series of experiments comparing my tool to an industrial tool called EMFCompare. To validate my tool and approach to model co-evolution, I also specified and conducted several experiments. The results of all these experiments are presented in the thesis.
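
    A minimal sketch of a metamodel-independent, state-based difference may help: models are flattened to id-to-attribute maps, and a difference is simply the sets of added, removed, and changed elements. The representation format, calculation algorithm, tools, and MMfMM defined in the thesis are not reproduced here; the names below are assumptions for illustration only.

        from dataclasses import dataclass, field
        from typing import Any, Dict, Set

        # A model is treated generically as a graph flattened to: element id -> attribute map.
        Model = Dict[str, Dict[str, Any]]


        @dataclass
        class ModelDifference:
            """A state-based, metamodel-independent difference between two models."""
            added: Dict[str, Dict[str, Any]] = field(default_factory=dict)    # id -> attributes
            removed: Set[str] = field(default_factory=set)                    # ids only in the old model
            changed: Dict[str, Dict[str, Any]] = field(default_factory=dict)  # id -> updated attributes


        def diff_models(old: Model, new: Model) -> ModelDifference:
            """Calculate a state-based difference; element identity is the dictionary key."""
            d = ModelDifference()
            d.added = {eid: dict(new[eid]) for eid in new.keys() - old.keys()}
            d.removed = set(old.keys() - new.keys())
            for eid in old.keys() & new.keys():
                updates = {k: v for k, v in new[eid].items() if old[eid].get(k) != v}
                if updates:
                    d.changed[eid] = updates
            return d


        def apply_difference(model: Model, d: ModelDifference) -> Model:
            """Replay a difference on a copy of a model. Because metamodels can themselves
            be represented as models, the same machinery could be reused to replay
            metamodel differences during co-evolution."""
            result = {eid: dict(attrs) for eid, attrs in model.items() if eid not in d.removed}
            for eid, attrs in d.added.items():
                result[eid] = dict(attrs)
            for eid, updates in d.changed.items():
                result.setdefault(eid, {}).update(updates)
            return result


        # Example: a class is renamed and an attribute element is added.
        old_model = {"c1": {"type": "Class", "name": "Person"}}
        new_model = {"c1": {"type": "Class", "name": "Customer"},
                     "a1": {"type": "Attribute", "name": "id"}}
        delta = diff_models(old_model, new_model)
        assert apply_difference(old_model, delta) == new_model

    Because such a format refers only to element identities and attribute values, it does not depend on any particular metamodel, which is what allows the same machinery to be reused for metamodel differences during co-evolution.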