
    A Model-Based Approach to Impact Analysis Using Model Differencing

    Impact analysis is concerned with identifying the consequences of changes and is therefore an important activity for software evolution. In model-based software development, models are core artifacts, which are often used to generate essential parts of a software system. Changes to a model can thus substantially affect different artifacts of a software system. In this paper, we propose a model-based approach to impact analysis, in which explicit impact rules can be specified in a domain-specific language (DSL). These impact rules define the consequences of designated UML class diagram changes on software artifacts and the need for dependent activities such as data evolution. The UML class diagram changes are identified automatically using model differencing. The advantage of using explicit impact rules is that they enable the formalization of knowledge about a product. By explicitly defining this knowledge, it is possible to create a checklist with hints about development steps that are (potentially) necessary to manage the evolution. To validate the feasibility of our approach, we provide the results of a case study.
    Comment: 16 pages, 5 figures, In: Proceedings of the 8th International Workshop on Software Quality and Maintainability (SQM), ECEASST Journal, vol. 65 201
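
    The abstract does not show the rules' concrete syntax; the sketch below is only a minimal Python illustration of the idea, with hypothetical change types (AttributeAdded, AttributeRemoved) and follow-up hints that are assumptions, not the authors' DSL or case study.

        # Minimal sketch (not the authors' DSL): impact rules keyed by the kind of
        # class-diagram change reported by model differencing.
        from dataclasses import dataclass

        @dataclass
        class AttributeAdded:            # hypothetical change type from model differencing
            class_name: str
            attribute_name: str

        @dataclass
        class AttributeRemoved:          # hypothetical change type
            class_name: str
            attribute_name: str

        # Each rule maps a change type to the dependent activities it implies.
        IMPACT_RULES = {
            AttributeAdded: lambda c: [
                f"regenerate code for class {c.class_name}",
                f"add a database column for {c.class_name}.{c.attribute_name} (data evolution)",
            ],
            AttributeRemoved: lambda c: [
                f"regenerate code for class {c.class_name}",
                f"write a migration dropping {c.class_name}.{c.attribute_name} (data evolution)",
            ],
        }

        def checklist(changes):
            """Turn detected model changes into a checklist of development hints."""
            return [hint for change in changes
                    for hint in IMPACT_RULES[type(change)](change)]

        print(checklist([AttributeAdded("Customer", "email")]))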

    Model Matching Challenge: Benchmarks for Ecore and BPMN Diagrams

    In the last couple of years, Model-Driven Engineering (MDE) has gained a prominent role in the context of software engineering. In the MDE paradigm, models are considered first-level artifacts that are iteratively developed by teams of programmers over a period of time. Because of this, dedicated tools for the versioning and management of models are needed. A central functionality within this group of tools is model comparison and differencing. In two separate research projects, we identified a group of general matching problems for which state-of-the-art comparison algorithms deliver low-quality results. In this article, we present five edit operations that are the cause of these low-quality results. The reasons why the algorithms fail, as well as possible solutions, are also discussed. These examples can be used as benchmarks by model developers to assess the quality and applicability of a model comparison tool for a given model type.
    Comment: 7 pages, 7 figures
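
    For illustration only, here is a naive name-based matcher of the kind such benchmarks expose, sketched in Python; the dictionary-shaped model elements and the rename scenario are assumptions, not the article's Ecore or BPMN benchmarks.

        # Sketch of a purely name-based matcher; the element structure is an assumption.
        def match_by_name(left_elements, right_elements):
            """Pair elements with identical names; report left-side elements without a partner."""
            right_by_name = {e["name"]: e for e in right_elements}
            matches, unmatched = [], []
            for e in left_elements:
                partner = right_by_name.get(e["name"])
                if partner is not None:
                    matches.append((e, partner))
                else:
                    unmatched.append(e)
            return matches, unmatched

        # A single rename edit operation between two model versions already defeats it:
        v1 = [{"id": 1, "name": "Order"}, {"id": 2, "name": "Customer"}]
        v2 = [{"id": 1, "name": "PurchaseOrder"}, {"id": 2, "name": "Customer"}]
        matches, unmatched = match_by_name(v1, v2)
        # The renamed element is reported as deleted-and-added instead of matched.
        print(matches, unmatched)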

    Methods for Interpreting and Understanding Deep Neural Networks

    This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make the most efficient use of these techniques on real data. It also discusses a number of practical applications.
    Comment: 14 pages, 10 figures
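
    As one flavour of such techniques, a gradient-based sensitivity map can be computed in a few lines; the sketch below assumes PyTorch and uses a stand-in two-layer network and a random input rather than anything from the tutorial.

        # Sketch of gradient-based sensitivity analysis for a single input.
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
        model.eval()

        x = torch.randn(1, 784, requires_grad=True)   # placeholder input
        score = model(x)[0].max()                      # score of the top-scoring class
        score.backward()

        # Squared input gradients indicate how sensitive the prediction is to each feature.
        sensitivity = x.grad.pow(2).squeeze()
        print(sensitivity.topk(5).indices)             # five most influential input dimensions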

    A Plea for Transcendental Recourse in Religious Epistemology

    It is beyond question that anyone who speaks of transcendentality in a theological context thinks first of Karl Rahner. It is, after all, common sense that the idea of a transcendental theology goes back to him. Yet Rahner himself makes no claim to originality in this regard, since he is convinced that, in adopting a philosophical terminology with his typical unconcern, he is merely giving expression to a state of affairs that "[...] had always been present in theology [and is now only; K.M.] grasped reflexively and given its own name" (Rahner 2002, 1332-1337). Nevertheless, a historical reminiscence is instructive in this case: no less a figure than Kant speaks of "Transc. Theologie" (Kant: Opus Postumum, AA 22, 63) as the culmination of transcendental philosophy.

    Learning with Algebraic Invariances, and the Invariant Kernel Trick

    When solving data analysis problems, it is important to integrate prior knowledge and/or structural invariances. This paper contributes a novel framework for incorporating algebraic invariance structure into kernels. In particular, we show that algebraic properties such as sign symmetries in the data, phase independence, and scaling can be included easily by essentially performing the kernel trick twice. We demonstrate the usefulness of our theory in simulations on selected applications such as sign-invariant spectral clustering and underdetermined ICA.
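
    The paper's construction of performing the kernel trick twice is not reproduced here; the following is merely a generic Python sketch of a sign-invariant kernel obtained by averaging a base RBF kernel over sign flips, which conveys the kind of invariance meant.

        # Generic symmetrization sketch (an assumption, not the paper's exact method):
        # averaging an RBF kernel over global sign flips of both arguments.
        import numpy as np

        def rbf(x, y, gamma=1.0):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def sign_invariant_kernel(x, y, gamma=1.0):
            """k(x, y) is unchanged when x or y is replaced by -x or -y."""
            return 0.25 * sum(rbf(sx * x, sy * y, gamma)
                              for sx in (+1.0, -1.0) for sy in (+1.0, -1.0))

        x, y = np.array([1.0, -2.0]), np.array([0.5, 2.0])
        print(np.isclose(sign_invariant_kernel(x, y), sign_invariant_kernel(-x, y)))  # True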