
    Community structure in industrial SAT instances

    Modern SAT solvers have made remarkable progress on solving industrial instances. It is believed that most of these successful techniques exploit the underlying structure of industrial instances. Recently, there have been some attempts to analyze the structure of industrial SAT instances in terms of complex networks, with the aim of explaining the success of SAT solving techniques, and possibly improving them. In this paper, we study the community structure, or modularity, of industrial SAT instances. In a graph with clear community structure, or high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community. Representing SAT instances as graphs, we show that most application benchmarks are characterized by a high modularity. On the contrary, random SAT instances are closer to the classical Erdős-Rényi random graph model, where no structure can be observed. We also analyze how this structure evolves under the execution of a CDCL SAT solver, and observe that new clauses learned by the solver during the search contribute to destroying the original structure of the formula. Motivated by this observation, we finally present an application that exploits the community structure to detect relevant learned clauses, and show that detecting these clauses improves the performance of several SAT solvers on industrial SAT formulas, especially on satisfiable instances.
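    As a concrete illustration of the measurement described above, the sketch below builds a variable interaction graph (variables as nodes, an edge whenever two variables co-occur in a clause) and scores its modularity with networkx. This is a minimal reconstruction under assumptions, not the authors' tooling; the greedy community detection used here is one possible choice among several.

        import itertools
        import networkx as nx
        from networkx.algorithms import community

        def vig_modularity(clauses):
            """clauses: iterable of clauses given as lists of non-zero DIMACS literals."""
            g = nx.Graph()
            for clause in clauses:
                variables = {abs(lit) for lit in clause}
                g.add_nodes_from(variables)
                # link every pair of variables that co-occur in this clause
                g.add_edges_from(itertools.combinations(variables, 2))
            # detect communities greedily, then score the resulting partition
            parts = community.greedy_modularity_communities(g)
            return community.modularity(g, parts)

        # toy formula with two loosely coupled groups of variables
        cnf = [[1, 2], [2, 3], [1, -3], [4, 5], [5, 6], [-4, 6], [3, -4]]
        print(vig_modularity(cnf))  # higher values indicate clearer community structure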

    Analysis of Feature Models Using Alloy: A Survey

    Feature Models (FMs) are a mechanism to model variability among a family of closely related software products, i.e. a software product line (SPL). Analysis of FMs using formal methods can reveal defects in the specification, such as inconsistencies that cause the product line to have no valid products. A popular framework used in research for FM analysis is Alloy, a lightweight formal modeling notation equipped with an efficient model finder. Several works in the literature have proposed different strategies to encode and analyze FMs using Alloy. However, there is little discussion of the relative merits of each proposal, making it difficult to select the most suitable encoding for a specific analysis need. In this paper, we describe and compare those strategies according to various criteria, such as the expressivity of the FM notation or the efficiency of the analysis. This survey is the first comparative study of research targeted towards using Alloy for FM analysis. It aims to identify best practices in the use of Alloy, as part of a framework for the automated extraction and analysis of rich FMs from natural language requirement specifications.
    Comment: In Proceedings FMSPLE 2016, arXiv:1603.0857
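    Alloy expresses such encodings in its own relational notation and hands them to a model finder. Purely as a language-neutral illustration of the core question these encodings answer (does the product line have any valid product, or is it void?), here is a brute-force Python sketch over a hypothetical four-feature model; the feature names and constraints are invented for the example.

        from itertools import product

        FEATURES = ["car", "engine", "electric", "gas"]

        def valid(sel):
            """Constraints of a toy FM: the root is mandatory, 'engine' is a
            mandatory child of 'car', and 'electric'/'gas' form an XOR group."""
            return (
                sel["car"]
                and sel["engine"] == sel["car"]             # mandatory child
                and (sel["electric"] != sel["gas"])         # exactly one alternative
                and (not sel["electric"] or sel["engine"])  # children imply parent
                and (not sel["gas"] or sel["engine"])
            )

        products = [dict(zip(FEATURES, bits))
                    for bits in product([False, True], repeat=len(FEATURES))]
        valid_products = [p for p in products if valid(p)]
        print(f"{len(valid_products)} valid products; void line: {not valid_products}")

    A real Alloy-based analysis replaces the exhaustive enumeration with a constraint solver, which is what makes it scale to large models.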

    ASlib: A Benchmark Library for Algorithm Selection

    The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields, and it unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to express a wide variety of different scenarios. Demonstrating the breadth and power of our platform, we describe a set of example experiments that build and evaluate algorithm selection models through a common interface. The results demonstrate the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.
    Comment: Accepted to be published in Artificial Intelligence Journal
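    ASlib scenarios pair per-instance feature vectors with per-algorithm performance data; the sketch below shows the per-instance selection idea itself on invented toy data (it does not parse the ASlib format). A 1-nearest-neighbour selector is one of the simplest possible models.

        import math

        # toy training data: feature vector -> runtime of each algorithm in seconds
        TRAIN = [
            ((0.2, 0.9), {"solverA": 3.1, "solverB": 40.0}),
            ((0.8, 0.1), {"solverA": 55.0, "solverB": 2.5}),
            ((0.3, 0.7), {"solverA": 4.0, "solverB": 35.0}),
        ]

        def select(features):
            """Pick the algorithm that was fastest on the most similar training instance."""
            _, runtimes = min(TRAIN, key=lambda row: math.dist(row[0], features))
            return min(runtimes, key=runtimes.get)

        print(select((0.75, 0.2)))  # -> solverB on this toy scenario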

    Using synchronous Boolean networks to model several phenomena of collective behavior

    In this paper, we propose an approach for modeling and analysis of a number of phenomena of collective behavior. By collectives we mean multi-agent systems that transition from one state to another at discrete moments of time. The behavior of a member of a collective (an agent) is called conforming if the opinion of this agent at the current moment conforms to the opinion of some other agents at the previous moment. We presume that at each moment of time every agent makes a decision by choosing from the set {0,1}, where a 1-decision corresponds to action and a 0-decision corresponds to inaction. In our approach we model collective behavior with synchronous Boolean networks. We presume that a network can contain agents that act at every moment of time; such agents are called instigators. There can also be agents that never act; such agents are called loyalists. Agents that are neither instigators nor loyalists are called simple agents. We study two combinatorial problems. The first is to find a disposition of instigators that, within several time moments, transforms a network from a state where a majority of simple agents are inactive to a state with a majority of active agents. The second is to find a disposition of loyalists that returns the network to a state with a majority of inactive agents. Similar problems are studied for networks in which simple agents demonstrate the opposite of conforming behavior, which we call anticonforming. We obtained several theoretical results regarding the behavior of collectives of agents with conforming or anticonforming behavior. In computational experiments we solved the described problems for randomly generated networks with several hundred vertices, reducing the corresponding combinatorial problems to the Boolean satisfiability problem (SAT) and using modern SAT solvers to solve the resulting instances.
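    To make the dynamics concrete, here is a small Python sketch of one synchronous update step. The conforming rule used (a simple agent acts if at least one neighbour acted at the previous moment) is illustrative; the paper considers families of such update functions, and the ring topology below is invented for the example.

        def step(state, neighbours, instigators, loyalists):
            """One synchronous update of all agents; state maps agent -> 0 or 1."""
            new = {}
            for agent, nbrs in neighbours.items():
                if agent in instigators:
                    new[agent] = 1    # instigators always act
                elif agent in loyalists:
                    new[agent] = 0    # loyalists never act
                else:
                    # illustrative conforming rule: act if some neighbour acted
                    new[agent] = 1 if any(state[n] for n in nbrs) else 0
            return new

        neighbours = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # ring of 5
        state = {i: 0 for i in range(5)}                                # all inactive
        for t in range(4):
            state = step(state, neighbours, instigators={0}, loyalists=set())
        print(state)  # the single instigator has activated the whole ring

    The paper's combinatorial problems ask, in effect, which placements of instigators (or loyalists) drive such a network past a majority threshold, which is what gets reduced to SAT.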

    Computational Complexity for Physicists

    These lecture notes are an informal introduction to the theory of computational complexity and its links to quantum computing and statistical mechanics.
    Comment: references updated, reprint available from http://itp.nat.uni-magdeburg.de/~mertens/papers/complexity.shtm

    Community Structure in Industrial SAT Instances

    Modern SAT solvers have made remarkable progress on solving industrial instances. Most of the techniques have been developed after an intensive experimental process. It is believed that these techniques exploit the underlying structure of industrial instances. However, there are few works trying to exactly characterize the main features of this structure. The research community on complex networks has developed techniques of analysis and algorithms to study real-world graphs that can be used by the SAT community. Recently, there have been some attempts to analyze the structure of industrial SAT instances in terms of complex networks, with the aim of explaining the success of SAT solving techniques, and possibly improving them. In this paper, inspired by the results on complex networks, we study the community structure, or modularity, of industrial SAT instances. In a graph with clear community structure, or high modularity, we can find a partition of its nodes into communities such that most edges connect nodes of the same community. In our analysis, we represent SAT instances as graphs, and we show that most application benchmarks are characterized by a high modularity. On the contrary, random SAT instances are closer to the classical Erdős-Rényi random graph model, where no structure can be observed. We also analyze how this structure evolves under the execution of a CDCL SAT solver. In particular, we use the community structure to detect that new clauses learned by the solver during the search contribute to destroying the original structure of the formula. That is, learned clauses tend to contain variables of distinct communities.
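    The closing observation, that learned clauses cut across communities, is easy to quantify once a variable-to-community map is available (for instance from the modularity sketch earlier in this listing). The following minimal sketch, with an invented partition and invented clauses, measures the fraction of clauses that span more than one community.

        def community_spread(clauses, comm_of):
            """Fraction of clauses whose variables belong to more than one community."""
            spanning = sum(
                1 for clause in clauses
                if len({comm_of[abs(lit)] for lit in clause}) > 1
            )
            return spanning / len(clauses)

        comm_of = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
        original = [[1, 2], [2, -3], [4, 5], [-5, 6]]
        learned  = [[1, -5], [3, 4, -6], [2, 6]]
        print(community_spread(original, comm_of))  # 0.0: clauses stay inside a community
        print(community_spread(learned,  comm_of))  # 1.0: learned clauses cut across them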

    Automated generation of computationally hard feature models using evolutionary algorithms

    A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size maximizing aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to know the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
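    ETHOM's actual chromosome encoding and genetic operators are defined in the paper; the sketch below only illustrates the general idea it describes: evolve fixed-size inputs that maximise the time a tool takes to analyse them. The `analyse` function is a hypothetical stand-in for the tool under test, and the bitstring encoding is invented for the example.

        import random
        import time

        SIZE, POP, GENS = 32, 10, 20

        def analyse(model):
            """Hypothetical stand-in for an FM analysis operation."""
            time.sleep(sum(model) / 10_000)  # pretend denser models are harder

        def fitness(model):
            start = time.perf_counter()
            analyse(model)
            return time.perf_counter() - start  # execution time, to be maximised

        def mutate(model):
            child = model[:]
            child[random.randrange(SIZE)] ^= 1  # flip one gene, keeping size fixed
            return child

        pop = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            # keep the slowest-to-analyse half, refill with mutated copies
            pop = pop[:POP // 2] + [mutate(p) for p in pop[:POP // 2]]
        pop.sort(key=fitness, reverse=True)
        print("hardest model found:", pop[0])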