Electronic Communications of the EASST (European Association of Software Science and Technology)
    887 research outputs found

    SOS-Supported Graph Transformation

    In this paper, we propose a simplicity-oriented approach for model-to-model transformations of graphical languages. Key to this simplicity is decomposing the rule system into two rule sub-systems that separate purpose-specific aspects (transformation and computation), and specifying these rule systems as a graphical language. For the transformational aspect, we use a compiler-like generation approach, while taking Plotkin’s Structural Operational Semantics (SOS) as inspiration for the computational aspect. We define these rule systems as inference rules for pattern-based transformations of typed, hierarchical graphs. Using typed graphs allows patterns to easily distinguish between the elements of the source graph. The resulting rule system (named SOS-Supported Graph Transformation, or SOS-GT) supports a well-structured and intuitive specification of complex model-to-model transformations adequate for a variety of use cases. We illustrate these rules with an example of transforming the WebStory language (WSL, an educational tool) to a Kripke Transition System (KTS) suitable for model checking, and give an overview of further applications at the end of the paper.
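
    The paper defines its rule systems as inference rules over typed, hierarchical graphs. As a rough, hypothetical illustration of pattern-based transformation on typed graphs (a sketch of the general idea, not the SOS-GT formalism itself), the Python snippet below matches WSL-like "Screen" nodes and "click" edges and emits KTS-like states and transitions; all type labels and the graph encoding are assumptions made for this example.

        # Minimal sketch of a pattern-based transformation over a typed graph.
        # The node types ("Screen", "State") and the WSL/KTS shapes are hypothetical
        # simplifications, not the SOS-GT rule format from the paper.
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Node:
            ident: str
            ntype: str                    # type label used by patterns

        @dataclass
        class TypedGraph:
            nodes: set = field(default_factory=set)
            edges: set = field(default_factory=set)   # (src_id, label, dst_id)

        def rule_screen_to_state(wsl: TypedGraph, kts: TypedGraph):
            """Rule: every WSL 'Screen' node becomes a KTS state."""
            for n in wsl.nodes:
                if n.ntype == "Screen":
                    kts.nodes.add(Node(n.ident, "State"))

        def rule_click_to_transition(wsl: TypedGraph, kts: TypedGraph):
            """Rule: a 'click' edge between screens becomes a KTS transition."""
            for (src, label, dst) in wsl.edges:
                if label == "click":
                    kts.edges.add((src, "click", dst))

        def transform(wsl: TypedGraph) -> TypedGraph:
            kts = TypedGraph()
            for rule in (rule_screen_to_state, rule_click_to_transition):
                rule(wsl, kts)            # apply each rule to the whole source graph
            return kts

        if __name__ == "__main__":
            wsl = TypedGraph(nodes={Node("start", "Screen"), Node("end", "Screen")},
                             edges={("start", "click", "end")})
            print(transform(wsl))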

    Software Engineering meets Artificial Intelligence

    With the increasing use of AI in classic software systems, two previously rather separate worlds are moving closer and closer together: the established discipline of software engineering and the world of AI. On the one hand, there are the data scientists, who try to extract as many insights as possible from the data using various tools, a lot of freedom, and creativity. On the other hand, there are the software engineers, who have learned over years and decades to deliver the highest quality software possible and to manage release statuses. When developing software systems that include AI components, these worlds collide. This article shows which aspects come into play here, which problems can occur, and what solutions to these problems might look like. Beyond that, software engineering itself can benefit from the use of AI methods. Thus, we will also look at the emerging research area of AI for software engineering.

    Preface


    Validating Behavioral Requirements, Conditions, and Rules of Autonomous Systems with Scenario-Based Testing

    Assuring the safety of autonomous vehicles is increasingly approached using scenario-based testing, where relevant driving situations are used to support the argument that an autonomous vehicle behaves correctly. Many recent works focus on the specification, variation, generation, and execution of individual scenarios. However, it is still an open question whether operational design domains, which describe the environmental conditions under which the system under test has to function, can be assessed with scenario-based testing. In this paper, we present open challenges and resulting research questions in the field of assuring the safety of autonomous vehicles. We have developed a toolchain that enables us to conduct scenario-based testing experiments based on scenario classification with temporal logic and driving data obtained from the CARLA simulator. We discuss the toolchain and present first results using analysis metrics such as class coverage and distribution.
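
    The paper's toolchain classifies CARLA driving data with temporal logic and reports metrics such as class coverage. As a loose, hypothetical illustration of that kind of metric (not the paper's toolchain), the sketch below classifies recorded traces with simple temporal predicates and computes class coverage as the fraction of scenario classes hit by at least one trace; the trace format, predicates, and class names are assumptions made for this example.

        # Toy scenario classification over recorded traces; the predicates and the
        # trace format (list of per-timestep dicts) are hypothetical stand-ins for
        # the temporal-logic classification used in the paper.
        def eventually(trace, pred):
            """F pred: the predicate holds at some time step."""
            return any(pred(step) for step in trace)

        def always(trace, pred):
            """G pred: the predicate holds at every time step."""
            return all(pred(step) for step in trace)

        SCENARIO_CLASSES = {
            "hard_braking": lambda t: eventually(t, lambda s: s["accel"] < -4.0),
            "lead_vehicle": lambda t: eventually(t, lambda s: s["gap_m"] < 30.0),
            "free_driving": lambda t: always(t, lambda s: s["gap_m"] >= 100.0),
        }

        def classify(trace):
            return {name for name, check in SCENARIO_CLASSES.items() if check(trace)}

        def class_coverage(traces):
            """Fraction of scenario classes observed in at least one trace."""
            covered = set().union(*(classify(t) for t in traces)) if traces else set()
            return len(covered) / len(SCENARIO_CLASSES)

        if __name__ == "__main__":
            t1 = [{"accel": -1.0, "gap_m": 120.0}, {"accel": -5.0, "gap_m": 25.0}]
            t2 = [{"accel": 0.0, "gap_m": 150.0}, {"accel": 0.2, "gap_m": 140.0}]
            print(classify(t1), classify(t2), class_coverage([t1, t2]))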

    Binary Decision Diagrams and Composite Classifiers for Analysis of Imbalanced Medical Datasets

    Imbalanced datasets pose significant challenges in the development of accurate and robust classification models. In this research, we propose an approach that uses Binary Decision Diagrams (BDDs) to conduct pre-checks and suggest appropriate resampling techniques for imbalanced datasets, with medical data collections as our application domain. BDDs provide an efficient representation of the decision boundaries, enabling interpretability and providing valuable insights. In our experiments, we evaluate the proposed approach on various real-world imbalanced medical datasets, including cerebral stroke, diabetes, and sepsis datasets. Overall, our research contributes to the field of imbalanced medical dataset analysis by presenting a novel approach that uses BDDs and composite classifiers in a low-code/no-code environment. The results highlight the potential of our method to assist healthcare professionals in making informed decisions and improving patient outcomes when working with imbalanced medical datasets.
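
    The BDD-based pre-check itself does not fit into a short snippet, but the surrounding workflow (inspect class imbalance, resample if needed, then train a composite classifier) can be sketched with scikit-learn. The imbalance threshold, the oversampling strategy, and the choice of estimators below are illustrative assumptions, not the configuration used in the paper.

        # Sketch: imbalance pre-check, simple minority oversampling, and a composite
        # (voting) classifier. Threshold values and model choices are illustrative.
        import numpy as np
        from sklearn.utils import resample
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import VotingClassifier

        def imbalance_ratio(y):
            counts = np.bincount(y)
            return counts.max() / counts.min()

        def oversample_minority(X, y, random_state=0):
            minority = np.argmin(np.bincount(y))
            X_min, y_min = X[y == minority], y[y == minority]
            n_extra = (y != minority).sum() - (y == minority).sum()
            X_up, y_up = resample(X_min, y_min, replace=True,
                                  n_samples=n_extra, random_state=random_state)
            return np.vstack([X, X_up]), np.concatenate([y, y_up])

        def fit_composite(X, y):
            if imbalance_ratio(y) > 3.0:      # pre-check: resample if heavily skewed
                X, y = oversample_minority(X, y)
            clf = VotingClassifier(
                estimators=[("lr", LogisticRegression(max_iter=1000)),
                            ("dt", DecisionTreeClassifier(max_depth=5))],
                voting="soft")
            return clf.fit(X, y)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(200, 4))
            y = np.array([0] * 180 + [1] * 20)   # synthetic 9:1 imbalance
            model = fit_composite(X, y)
            print(model.predict(X[:5]))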

    Towards Code-centric Code Generators

    This paper presents a novel approach to code generation. While common code generator approaches lack support for code evolution and maintenance tasks such as refactoring, the presented Code-centric Generator (CCG) approach attempts to overcome these issues. Instead of mixing generator abstractions and actual code snippets, CCG provides a layer between the generator and prototypical target code. The new layer makes it possible to map code generator operations directly onto code AST subtrees, and generates the resulting generators based on these mappings and the prototypical target implementation.
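
    To make the idea of mapping generator operations onto AST subtrees concrete, the hypothetical sketch below parses a prototypical target snippet with Python's ast module and derives a small generator by rewriting a placeholder subtree. The placeholder convention and the getter prototype are assumptions made for this illustration, not elements of the CCG approach itself.

        # Sketch: derive a generator from a prototypical code snippet by mapping a
        # generator operation onto an AST subtree. The "__NAME__" placeholder
        # convention is an assumption made for this illustration.
        import ast

        PROTOTYPE = """
        def get___NAME__(self):
            return self.__NAME__
        """

        class FillPlaceholder(ast.NodeTransformer):
            """Replace placeholder identifiers/attributes with a concrete name."""
            def __init__(self, name):
                self.name = name
            def visit_FunctionDef(self, node):
                node.name = node.name.replace("__NAME__", self.name)
                self.generic_visit(node)
                return node
            def visit_Attribute(self, node):
                node.attr = node.attr.replace("__NAME__", self.name)
                return node

        def generate_getter(field_name: str) -> str:
            import textwrap
            tree = ast.parse(textwrap.dedent(PROTOTYPE))
            tree = FillPlaceholder(field_name).visit(tree)
            return ast.unparse(tree)          # Python 3.9+

        if __name__ == "__main__":
            print(generate_getter("age"))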

    Tool Support for System-Theoretic Process Analysis

    Hazard analysis techniques such as System-Theoretic Process Analysis (STPA) are used to guarantee the safety of safety-critical systems. Our goal is to improve the tool support for STPA. The preliminary result is the PASTA Visual Studio Code (VSCode) extension, which provides verification checks and diagrams. PASTA uses elkjs to lay out the diagrams and Sprotty to draw them. We evaluate PASTA by recreating the ROLFER analysis. In the future, we plan to further evaluate whether PASTA improves upon existing tools and to add more features such as reevaluation suggestions, model checking, and support for other risk analysis techniques.

    Lazy Merging: From a Potential of Universes to a Universe of Potentials

    Current collaboration workflows force participants to resolve conflicts eagerly, despite having insufficient knowledge and not being aware of their collaborators’ intentions. This is a major cause of bad decisions, because it can disregard opinions within the team and cover up disagreements. In our concept of lazy merging, we propose to aggregate conflicts as variant potentials. Variant potentials preserve concurrent changes and present the different options to the participants. They can be further merged and edited without restrictions and behave robustly even in complex collaboration scenarios. We use lattice theory to prove important properties and to show the correctness and robustness of the collaboration protocol. With lazy merging, conflicts can be resolved deliberately, once all opinions within the team have been explored and discussed. This facilitates alignment among team members and prepares them to arrive at the best possible decision that considers the knowledge of the whole team.
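
    The paper establishes these properties formally with lattice theory. As a loose illustration of why a variant potential merges robustly, the sketch below models a potential as the set of concurrent candidate values, so that merging is a join (set union) and is therefore associative, commutative, and idempotent regardless of the order in which collaborators synchronize; this data model is a simplification for illustration, not the paper's collaboration protocol.

        # Toy model of "variant potentials": each field keeps the *set* of concurrent
        # candidate values instead of forcing an eager conflict resolution.
        # Merging is set union, i.e. a join in the powerset lattice, so it is
        # associative, commutative, and idempotent.
        from functools import reduce

        def potential(value):
            """Lift a single value into a variant potential."""
            return frozenset({value})

        def merge(p, q):
            """Lazy merge: keep all concurrent variants (lattice join)."""
            return p | q

        def resolve(p, choice):
            """Deliberate, explicit resolution back to a single variant."""
            assert choice in p
            return potential(choice)

        if __name__ == "__main__":
            alice = potential("title: Lazy Merging")
            bob   = potential("title: Deferred Merging")
            carol = potential("title: Lazy Merging")

            # Any merge order yields the same potential (order-insensitive).
            left  = reduce(merge, [alice, bob, carol])
            right = reduce(merge, [carol, alice, bob])
            assert left == right
            print(sorted(left))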

    Introduction to Symbolic Execution of Neural Networks - Towards Faithful and Explainable Surrogate Models

    Neural networks are inherently opaque machine learning models and suffer from uncontrollable errors that are often hard to find during testing. Yet no other model can attain their performance in current ML tasks. Thus, methods are needed to explain, understand, or even gain trust in neural networks and their decisions. However, many existing explainability methods are abstractions of the true model and thus do not provide reliable guarantees. For safety-critical tasks, both rigorous explanations and state-of-the-art predictive performance are required. For neural networks with piece-wise linear activation functions (like ReLU), it is possible to distill the network into a surrogate model that is both interpretable and faithful using decompositional rule extraction. We present a simple-to-follow introduction to this topic, building on a well-known technique from traditional program verification: symbolic execution. This is done in two steps: First, we reformulate a neural network into an intermediate imperative program that consists of only if-then-else branches, assignments, and linear arithmetic. Then, we apply symbolic execution to this program to achieve the decomposition. Finally, we reintroduce a decision-tree-like data structure called Typed Affine Decision Structure (TADS) that is specifically designed to efficiently represent the symbolic execution of neural networks. Further, we extend TADS to cover partial symbolic execution settings, which mitigates the path explosion problem that is a common bottleneck in practice. The paper contains many examples and illustrations generated with our tool.
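
    To make the two steps tangible, the sketch below symbolically executes a tiny one-hidden-layer ReLU network: each hidden neuron induces a branch on a linear condition, and every branch combination yields an affine output expression under a conjunction of linear path constraints, which is the essence of a TADS-style decomposition. The weights are arbitrary illustrative values and no infeasible-path pruning is performed, so this is a simplification of the technique described in the paper.

        # Sketch: symbolic execution of a one-hidden-layer ReLU network.
        # An affine form is (coeffs, const) over inputs x1..xn. Each hidden neuron
        # branches on "pre-activation >= 0", so every path yields an affine output
        # under a conjunction of linear constraints (a TADS-like decomposition).
        from itertools import product

        def affine(coeffs, const):
            return (tuple(coeffs), const)

        def add(a, b):
            (ca, ka), (cb, kb) = a, b
            return affine([x + y for x, y in zip(ca, cb)], ka + kb)

        def scaled(a, s):
            coeffs, k = a
            return affine([s * c for c in coeffs], s * k)

        # Tiny network: 2 inputs, 2 hidden ReLU neurons, 1 linear output.
        W1 = [[1.0, -1.0], [0.5, 2.0]]      # hidden pre-activation weights
        b1 = [0.0, -1.0]
        W2 = [1.0, -3.0]                    # output weights
        b2 = 0.5

        def symbolic_paths():
            pre = [affine(W1[j], b1[j]) for j in range(2)]
            for pattern in product([True, False], repeat=2):   # active / inactive
                constraints = [(pre[j], ">= 0" if on else "< 0")
                               for j, on in enumerate(pattern)]
                out = affine([0.0, 0.0], b2)
                for j, on in enumerate(pattern):
                    if on:                                      # ReLU passes pre[j]
                        out = add(out, scaled(pre[j], W2[j]))
                yield pattern, constraints, out

        if __name__ == "__main__":
            for pattern, constraints, out in symbolic_paths():
                print(pattern, constraints, "->", out)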

    Low-Code/No-Code Artificial Intelligence Platforms for the Health Informatics Domain

    In the contemporary health informatics space, Artificial Intelligence (AI) has become a necessity for the extraction of actionable knowledge in a timely manner. Low-Code/No-Code (LCNC) AI platforms enable domain experts to leverage the value that AI has to offer by lowering the technical skills overhead. We develop domain-specific, service-oriented platforms in the context of two subdomains of health informatics. In this work, we address the core principles and architectures of these platforms, whose functionality we are constantly extending. Our work conforms to best practices with respect to the integration and interoperability of external services and provides process orchestration in an LCNC model-driven fashion. We chose the CINCO product DIME and a bespoke tool developed in CINCO Cloud to serve as the underlying infrastructure for our LCNC platforms, which address the requirements of our two application domains: public health and biomedical research. In the context of public health, we provide an environment for building AI-driven web applications for the automated evaluation of Web-based Health Information (WBHI). With respect to biomedical research, we provide an AI-driven workflow environment for the computational analysis of highly-plexed tissue images. We extended both underlying application stacks to support the various AI service functionalities needed to address the requirements of the two application domains. The two case studies presented outline the methodology of developing these platforms through co-design with experts in the respective domains. Moving forward, we anticipate that we will increasingly re-use components, which will reduce the development overhead of extending our existing platforms or developing new applications in similar domains.

    858 full texts · 887 metadata records · updated in the last 30 days
    Electronic Communications of the EASST (European Association of Software Science and Technology) is based in Germany.