
    Parasol: Efficient Parallel Synthesis of Large Model Spaces

    Formal analysis is an invaluable tool for software engineers, yet state-of-the-art formal analysis techniques suffer from well-known limitations in scalability. In particular, some software design domains, such as tradeoff analysis and security analysis, require systematic exploration of potentially huge model spaces, which further exacerbates the problem. Despite this urgent challenge, few techniques exist to support the systematic exploration of large model spaces. This paper introduces Parasol, an approach and accompanying tool suite that improves the scalability of large-scale formal model space exploration. Parasol presents a novel parallel model space synthesis approach, backed by unsupervised learning to automatically derive domain knowledge that guides a balanced partitioning of the model space. This allows Parasol to synthesize the models in each partition in parallel, significantly reducing synthesis time and making large-scale systematic model space exploration of real-world systems more tractable. Our empirical results corroborate that Parasol substantially reduces synthesis time, by a factor of 4.6 on average, compared to state-of-the-art model space synthesis techniques that rely on incremental and parallel constraint solving as well as competing, non-learning-based partitioning methods.
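    The partition-then-solve-in-parallel idea can be sketched as follows. This is a toy illustration, not Parasol's implementation: the feature function, the balanced round-robin partitioning, and the stand-in "synthesize" step are all invented for the example.

```python
# Illustrative sketch of partitioned, parallel model space synthesis.
# Candidate models are 3-bit option vectors; "synthesis" is a stand-in filter.
from concurrent.futures import ThreadPoolExecutor

def feature(model):
    # Toy learned feature: number of enabled options in a candidate model.
    return sum(model)

def partition(models, k):
    # Balanced partitioning: sort by the feature, then deal models
    # round-robin into k buckets so each worker gets a similar load.
    buckets = [[] for _ in range(k)]
    for i, m in enumerate(sorted(models, key=feature)):
        buckets[i % k].append(m)
    return buckets

def synthesize(bucket):
    # Stand-in for running a constraint solver on one partition:
    # keep models with an even number of enabled options.
    return [m for m in bucket if feature(m) % 2 == 0]

def parallel_synthesis(models, k=4):
    with ThreadPoolExecutor(max_workers=k) as pool:
        results = pool.map(synthesize, partition(models, k))
    return [m for chunk in results for m in chunk]

# The full 3-bit model space has 8 candidates; 4 have an even option count.
space = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
kept = parallel_synthesis(space)
```

    Because each bucket is independent, the per-partition solver calls can run concurrently; the real system's gain depends on how evenly the learned partitioning balances solver effort.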

    EvoAlloy: An Evolutionary Approach For Analyzing Alloy Specifications

    Using mathematical notations and logical reasoning, formal methods precisely define a program's specifications, from which we can instantiate valid instances of a system. With these techniques, we can perform a variety of analysis tasks to verify system dependability and rigorously prove the correctness of system properties. While well-designed automated verification tools exist, including lightweight ones, they still lack strong adoption in practice. The essence of the problem is that, when applied to large real-world applications, they do not scale, owing to the expense of a thorough verification process. In this thesis, I present a new approach and demonstrate how to relax the completeness guarantee without much loss, since soundness is maintained. I have extended a widely applied lightweight analysis, Alloy, with a genetic algorithm. Our new tool, EvoAlloy, works at the level of the finite relations generated by Kodkod and evolves chromosomes based on feedback, including failed constraints. Through a feasibility study, I show that my approach can successfully find solutions to a set of specifications beyond the scope where the traditional Alloy Analyzer fails. While EvoAlloy takes longer to solve small problems, the scalability provided by its genetic extension shows its potential to handle larger specifications. My future vision is that when specifications are small I can maintain both soundness and completeness, and when that fails, EvoAlloy can switch to its genetic algorithm. Adviser: Hamid Bagher
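    The genetic search described above can be sketched in miniature. This is a hedged illustration, not EvoAlloy itself: bit-vectors stand in for Kodkod's finite relations, and the three constraints are invented placeholders for a specification's conditions.

```python
# Toy genetic algorithm: fitness counts satisfied constraints, and the
# population evolves via selection, one-point crossover, and mutation.
import random

CONSTRAINTS = [
    lambda v: v[0] == 1,      # placeholder: "some root atom exists"
    lambda v: v[1] != v[2],   # placeholder: "two relations are disjoint"
    lambda v: sum(v) >= 3,    # placeholder: a cardinality bound
]

def fitness(v):
    return sum(c(v) for c in CONSTRAINTS)

def evolve(length=5, pop_size=20, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(CONSTRAINTS):
            return pop[0]                 # all constraints satisfied
        parents = pop[: pop_size // 2]    # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]     # one-point crossover
            child[rng.randrange(length)] ^= 1  # flip one random bit
            children.append(child)
        pop = parents + children
    return None  # no satisfying instance found within the budget

solution = evolve()
```

    The key difference from a complete analysis is visible here: the search may return None even when a solution exists, which is the relaxed completeness guarantee the thesis trades for scalability.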

    Reducing Run-Time Adaptation Space via Analysis of Possible Utility Bounds

    Self-adaptive systems often employ dynamic programming or similar techniques to select optimal adaptations at run-time. These techniques suffer from the "curse of dimensionality," increasing the cost of run-time adaptation decisions. We propose a novel approach that improves upon state-of-the-art proactive self-adaptation techniques by reducing the number of possible adaptations that need to be considered for each run-time adaptation decision. The approach, realized in a tool called Thallium, employs a combination of automated formal modeling techniques to (i) analyze a structural model of the system showing which configurations are reachable from other configurations and (ii) compute the utility that can be generated by the optimal adaptation over a bounded horizon in both the best- and worst-case scenarios. It then constructs triangular possibility values from those optimized bounds to automatically compare adjacent adaptations for each configuration, keeping only the alternatives with the best range of potential results. The experimental results corroborate Thallium's ability to significantly reduce the number of states that need to be considered for each adaptation decision, freeing up vital resources at run-time.
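    The pruning idea can be illustrated with a simplified interval-dominance check rather than full triangular possibility values: an adaptation is discarded when even its best case cannot beat some other adaptation's guaranteed worst case. The adaptation names and utility bounds below are made up for the example.

```python
# Illustrative pruning of the adaptation space using best/worst utility bounds
# (a simplification of Thallium's triangular-possibility comparison).

def prune(adaptations):
    """adaptations: dict name -> (worst_case_utility, best_case_utility)."""
    # Guaranteed utility of the safest alternative.
    best_worst = max(w for w, _ in adaptations.values())
    # Keep an alternative only if its best case can still beat that guarantee.
    return {n: (w, b) for n, (w, b) in adaptations.items() if b >= best_worst}

bounds = {
    "scale_out": (5.0, 9.0),
    "scale_up": (4.0, 12.0),
    "do_nothing": (1.0, 3.0),  # best case below scale_out's worst: pruned
}
kept = prune(bounds)
```

    The run-time decision procedure then only has to optimize over the surviving alternatives, which is where the reported reduction in considered states comes from.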

    Platinum: Reusing Constraint Solutions in Bounded Analysis of Relational Logic

    Alloy is a lightweight specification language based on relational logic, with an analysis engine that relies on SAT solvers to automate bounded verification of specifications. In spite of its strengths, the reliance of the Alloy Analyzer on computationally heavy solvers means that it can take a significant amount of time to verify software properties, even within limited bounds. This challenge is exacerbated by the ever-evolving nature of complex software systems. This paper presents PLATINUM, a technique for efficient analysis of evolving Alloy specifications that recognizes opportunities for constraint reduction and reuse of previously identified constraint solutions. The insight behind PLATINUM is that formula constraints recur often during the analysis of a single specification and across its revisions, and constraint solutions can be reused over sequences of analyses performed on evolving specifications. Our empirical results show that PLATINUM substantially reduces (by 66.4% on average) the analysis time required on specifications extracted from real-world software systems.
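    The reuse insight amounts to memoizing solver results under a canonical key, so a revised specification only pays for the constraints that actually changed. The sketch below is a minimal stand-in, not PLATINUM's algorithm: its "canonical form" is mere whitespace normalization and its "solver" is a dummy predicate.

```python
# Minimal sketch of constraint-solution reuse across specification revisions.

class CachingAnalyzer:
    def __init__(self, solver):
        self.solver = solver
        self.cache = {}
        self.solver_calls = 0

    def canonical(self, constraint):
        # Toy canonicalization: collapse whitespace so trivially reformatted
        # constraints map to the same cache key.
        return " ".join(constraint.split())

    def solve(self, constraint):
        key = self.canonical(constraint)
        if key not in self.cache:
            self.solver_calls += 1          # only pay for unseen constraints
            self.cache[key] = self.solver(key)
        return self.cache[key]

# Dummy "solver": pretend satisfiability depends on the text length.
analyzer = CachingAnalyzer(solver=lambda c: len(c) % 2 == 0)
r1 = analyzer.solve("all x : Node | some x.edges")
r2 = analyzer.solve("all  x :  Node | some x.edges")  # revised, same meaning
```

    In the second call the cache hit avoids a solver invocation entirely; the real technique's savings come from recognizing recurring constraints at a much deeper, semantic level than this whitespace example.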

    A knowledge-driven AutoML architecture

    This paper proposes a knowledge-driven AutoML architecture for pipeline and deep feature synthesis. The main goal is to render the AutoML process explainable and to leverage domain knowledge in the synthesis of pipelines and features. The architecture explores several novel ideas: first, the construction of pipelines and deep features is approached in a unified way. Next, synthesis is driven by a shared knowledge system, interactively queried as to which pipeline operations to use or features to compute. Lastly, the synthesis process makes decisions at runtime using partial solutions and the results of their application to the data. Two experiments are conducted to demonstrate the functionality of a naïve implementation of the proposed architecture and to discuss its advantages, trade-offs, and future potential for AutoML.
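    The query-the-knowledge-system loop might look like the following sketch, in which each step asks a rule base for the next operation given the partial pipeline built so far. The rules and operation names are invented for illustration and do not come from the paper.

```python
# Illustrative knowledge-driven pipeline synthesis loop.

KNOWLEDGE = [
    # (condition on the partial pipeline, operation to suggest)
    (lambda p: "impute" not in p, "impute"),
    (lambda p: "impute" in p and "scale" not in p, "scale"),
    (lambda p: "scale" in p and "model" not in p, "model"),
]

def next_operation(partial):
    # Interactively query the shared knowledge system for the next step.
    for condition, op in KNOWLEDGE:
        if condition(partial):
            return op
    return None  # nothing left to suggest: the pipeline is complete

def synthesize_pipeline():
    pipeline = []
    while (op := next_operation(pipeline)) is not None:
        pipeline.append(op)  # decision taken at runtime on a partial solution
    return pipeline
```

    Because every operation is chosen by an explicit, inspectable rule, the resulting pipeline comes with a built-in explanation of why each step was added, which is the explainability goal the paper emphasizes.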

    Qualitative Metrics for Development Technologies Evaluation in Database-driven Information Systems

    Database-driven information systems are nowadays widespread in various application domains. The technologies employed to manage the database itself are generally built on the same data model, the relational model, and their evolution is mainly related to their physical performance, without significant changes in their conceptual foundation. By contrast, user interface development technologies are changing at an increasing pace, which causes significant problems related to learning costs and compatibility with existing systems. This paper provides a set of qualitative metrics which can be used to evaluate and compare different technologies, based on the most important conceptual objectives of database-driven information systems.

    -ilities Tradespace and Affordability Project – Phase 3

    One of the key elements of the SERC's research strategy is transforming the practice of systems engineering and associated management practices: "SE and Management Transformation (SEMT)." The Grand Challenge goal for SEMT is to transform the DoD community's current systems engineering and management methods, processes, and tools (MPTs) and practices away from sequential, single-stovepipe-system, hardware-first, document-driven, point-solution, acquisition-oriented approaches, and toward concurrent, portfolio- and enterprise-oriented, hardware-software-human engineered, model-driven, set-based, full-life-cycle approaches. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) under Contract H98230-08-D-0171 (Task Order 0031, RT 046).

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication), and the language of the document. After a description of the scope of the overview, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification, and comments.