
    An Enhanced Features Extractor for a Portfolio of Constraint Solvers

    Recent research has shown that a single arbitrarily efficient solver can be significantly outperformed by a portfolio of solvers that may be slower on average. Solver selection is usually done by means of (un)supervised learning techniques that exploit features extracted from the problem specification. In this paper we present a useful and flexible framework that is able to extract an extensive set of features from a Constraint (Satisfaction/Optimization) Problem defined in possibly different modeling languages: MiniZinc, FlatZinc or XCSP. We also report empirical results showing that the performance obtained using these features is effective and competitive with state-of-the-art CSP portfolio techniques.
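    As a concrete illustration of the portfolio idea described in this abstract, the sketch below selects a solver for a new instance with a nearest-neighbour classifier over extracted features. The toy feature vectors, solver names and the use of scikit-learn are assumptions for illustration, not the paper's actual extractor or selection algorithm.

```python
# Minimal sketch of feature-based solver selection for a CSP portfolio,
# assuming features have already been extracted from the problem
# specification (e.g. from a MiniZinc/FlatZinc/XCSP model).
# Feature names, solver names and data are illustrative assumptions.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [n_variables, n_constraints, mean_domain_size] for one instance.
train_features = [
    [120, 340, 8.0],
    [900, 150, 2.0],
    [400, 400, 5.5],
]
# Label: which portfolio solver was fastest on that training instance.
best_solver = ["gecode", "chuffed", "gecode"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(train_features, best_solver)

# For a new instance, extract its features and run the predicted solver.
new_instance = [[450, 380, 6.0]]
print(model.predict(new_instance))  # e.g. ['gecode']
```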

    Fast counting with tensor networks

    We introduce tensor network contraction algorithms for counting satisfying assignments of constraint satisfaction problems (#CSPs). We represent each arbitrary #CSP formula as a tensor network, whose full contraction yields the number of satisfying assignments of that formula, and use graph-theoretical methods to determine favorable orders of contraction. We employ our heuristics for the solution of #P-hard counting Boolean satisfiability (#SAT) problems, namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they outperform state-of-the-art solvers by a significant margin. Comment: v2: added results for monotone #1-in-3SAT; published version
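    To make the counting-by-contraction idea concrete, the following sketch encodes each clause of a tiny CNF formula as a 0/1 tensor and contracts the network over shared variable indices, so the result is the number of satisfying assignments. The toy formula and the single einsum call (rather than the paper's heuristically ordered pairwise contractions) are illustrative assumptions.

```python
# Each clause becomes a 0/1 indicator tensor indexed by its variables;
# contracting all tensors over shared variable indices sums the product
# of clause indicators over all assignments, i.e. the model count.
import numpy as np

def clause_tensor(signs):
    """0/1 tensor for a disjunction; signs[i] is True for a positive literal."""
    t = np.zeros((2,) * len(signs))
    for idx in np.ndindex(*t.shape):
        if any((bit == 1) == sign for bit, sign in zip(idx, signs)):
            t[idx] = 1.0
    return t

# Formula over x0, x1, x2: (x0 or x1) and (not x0 or x2)
c1 = clause_tensor([True, True])    # indexed by (x0, x1)
c2 = clause_tensor([False, True])   # indexed by (x0, x2)

# Contract over the shared index x0 and sum out x1, x2:
count = np.einsum("ab,ac->", c1, c2)
print(int(count))  # 4 satisfying assignments out of 8
```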

    Engineering Crowdsourced Stream Processing Systems

    A crowdsourced stream processing system (CSP) is a system that incorporates crowdsourced tasks in the processing of a data stream. This can be seen as enabling crowdsourcing work to be applied to a sample of large-scale data at high speed, or equivalently, enabling stream processing to employ human intelligence. It also leads to a substantial expansion of the capabilities of data processing systems. Engineering a CSP system requires the combination of human and machine computation elements. From a general systems theory perspective, this means taking into account inherited as well as emerging properties from both these elements. In this paper, we position CSP systems within a broader taxonomy, outline a series of design principles and evaluation metrics, present an extensible framework for their design, and describe several design patterns. We showcase the capabilities of CSP systems by performing a case study that applies our proposed framework to the design and analysis of a real system (AIDR) that classifies social media messages during time-critical crisis events. Results show that compared to a pure stream processing system, AIDR can achieve higher data classification accuracy, while compared to a pure crowdsourcing solution, the system makes better use of human workers by requiring much less manual effort.
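    One common pattern behind such hybrid systems is to let a machine classifier handle confident cases and escalate only uncertain items to human workers. The sketch below shows that routing pattern; the threshold, the stub classifier and the plain queue are illustrative assumptions, not AIDR's actual components.

```python
# Hybrid human/machine routing for a data stream: the machine labels
# each item, and only low-confidence items go to a crowd task queue.
from collections import deque

CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off for escalating to humans
crowd_queue = deque()        # stand-in for a real crowdsourcing backend

def machine_classify(message):
    """Stub classifier: returns (label, confidence)."""
    return ("relevant", 0.9) if "flood" in message else ("unknown", 0.4)

def process(message):
    label, confidence = machine_classify(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                 # machine handles confident cases
    crowd_queue.append(message)      # humans handle the uncertain rest
    return "pending-crowd"

for msg in ["flood reported downtown", "weird noise outside"]:
    print(msg, "->", process(msg))
print("queued for crowd:", list(crowd_queue))
```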

    Constraint Propagation in Presence of Arrays

    We describe the use of array expressions as constraints, which constitutes a substantial generalisation of the "element" constraint. Constraint propagation for array constraints is studied theoretically, and for a set of domain reduction rules we prove the local consistency they enforce, namely arc consistency. An efficient algorithm is described that encapsulates the rule set and so inherits the capability to enforce arc consistency from the rules. Comment: 10 pages. Accepted at the 6th Annual Workshop of the ERCIM Working Group on Constraints, 200
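    For readers unfamiliar with the baseline being generalised, the sketch below shows domain reduction for the classic "element" constraint z = a[x] over a constant array: indices whose entry lies outside z's domain are pruned, z is restricted to the entries still reachable, and iterating such rules to a fixpoint is what enforces arc consistency. The concrete domains are illustrative assumptions.

```python
# Fixpoint iteration of two domain reduction rules for z = a[x]:
#   - drop index i from D(x) if a[i] is not in D(z)
#   - restrict D(z) to { a[i] : i in D(x) }
def propagate_element(a, dom_x, dom_z):
    changed = True
    while changed:
        new_x = {i for i in dom_x if a[i] in dom_z}
        new_z = {a[i] for i in new_x} & dom_z
        changed = (new_x, new_z) != (dom_x, dom_z)
        dom_x, dom_z = new_x, new_z
    return dom_x, dom_z

a = [4, 7, 4, 9]
print(propagate_element(a, dom_x={0, 1, 2, 3}, dom_z={4, 9}))
# -> ({0, 2, 3}, {4, 9}): index 1 is pruned because a[1] = 7 cannot equal z
```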

    Developing a labelled object-relational constraint database architecture for the projection operator

    Current relational databases have been developed to improve the handling of stored data; however, some types of information must be analysed for which no suitable tools are available. These new types of data can be represented and treated as constraints, allowing a set of data to be represented through equations, inequations and Boolean combinations of both. To this end, constraint databases were defined and some prototypes were developed. Since there are aspects that can be improved, we propose a new architecture called labelled object-relational constraint database (LORCDB). This provides more expressiveness, since the database is adapted to support more types of data, instead of the data having to be adapted to the database. In this paper, the projection operator of SQL is extended so that it works with linear and polynomial constraints and with the variables of constraints. To optimize query evaluation efficiency, strategies and algorithms have been used to obtain an efficient query plan. Most work on constraint databases uses spatiotemporal data as case studies. This paper, however, proposes model-based diagnosis as a case study, since it is a promising research area and permits more complicated queries than spatiotemporal examples. Our architecture permits queries over constraints to be defined over different sets of variables by using symbolic substitution and elimination of variables. Ministerio de Ciencia y Tecnología DPI2006-15476-C02-0
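    To illustrate how a projection operator over constraint data can work, the sketch below projects a conjunction of linear inequalities onto a subset of variables by variable elimination (here Fourier-Motzkin). The constraint representation and the example system are illustrative assumptions, not the LORCDB implementation.

```python
# Fourier-Motzkin elimination: constraints are (coeffs, const) pairs
# meaning sum(coeffs[v] * v) <= const; eliminating a variable combines
# each lower-bounding row with each upper-bounding row.
from itertools import product

def eliminate(constraints, var):
    lower = [c for c in constraints if c[0].get(var, 0) < 0]  # lower bounds on var
    upper = [c for c in constraints if c[0].get(var, 0) > 0]  # upper bounds on var
    rest = [c for c in constraints if c[0].get(var, 0) == 0]
    for (lo, lb), (up, ub) in product(lower, upper):
        # Scale so the coefficients of `var` cancel, then add the rows.
        s, t = up[var], -lo[var]
        combined = {v: s * lo.get(v, 0) + t * up.get(v, 0)
                    for v in set(lo) | set(up) if v != var}
        rest.append((combined, s * lb + t * ub))
    return rest

# System: x - y <= 0, y <= 3, -x <= 0 ; project out y.
system = [({"x": 1, "y": -1}, 0), ({"y": 1}, 3), ({"x": -1}, 0)]
print(eliminate(system, "y"))
# -> [({'x': -1}, 0), ({'x': 1}, 3)], i.e. 0 <= x <= 3
```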

    When Interval Analysis Helps Inter-Block Backtracking

    Inter-block backtracking (IBB) computes all the solutions of sparse systems of non-linear equations over the reals. This algorithm, introduced in 1998 by Bliek et al., handles a system of equations previously decomposed into a set of (small) k × k sub-systems, called blocks. Partial solutions are computed in the different blocks and combined to obtain the set of global solutions. When solutions inside blocks are computed with interval-based techniques, IBB can be viewed as a new interval-based algorithm for solving decomposed equation systems. Previous implementations used Ilog Solver and its IlcInterval library; the fact that this interval-based solver was more or less a black box implied several strong limitations. The new results described in this paper come from the integration of IBB with the interval-based library developed by the second author. This new library allows IBB to become reliable (no solution is lost) while still gaining several orders of magnitude w.r.t. solving the whole system. We compare several variants of IBB on a sample of benchmarks, which allows us to better understand its behavior. The main conclusion is that the use of an interval Newton operator inside blocks has the most positive impact on the robustness and performance of IBB. This modifies the influence of other features, such as intelligent backtracking and filtering strategies.
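    Since the interval Newton operator is singled out as the decisive ingredient, the sketch below shows its one-dimensional form: N(X) = m - f(m)/F'(X), intersected with X, contracts an interval enclosing a root without discarding any solution. The example function and the plain-tuple interval arithmetic are illustrative assumptions; a real implementation works on k × k blocks with a full interval library and outward rounding.

```python
# One 1-D interval Newton contraction step on X = (lo, hi).
# Note: plain floats without outward rounding, so this sketch is not
# rigorous the way a real interval library is.
def newton_contract(f, df_interval, x):
    lo, hi = x
    m = 0.5 * (lo + hi)
    dlo, dhi = df_interval(x)           # derivative enclosure; must exclude 0
    quotients = [m - f(m) / d for d in (dlo, dhi)]
    nlo, nhi = min(quotients), max(quotients)
    return max(lo, nlo), min(hi, nhi)   # intersect with the original box

f = lambda v: v * v - 2.0                  # root: sqrt(2)
df = lambda x: (2.0 * x[0], 2.0 * x[1])    # derivative over the interval

box = (1.0, 2.0)
for _ in range(4):
    box = newton_contract(f, df, box)
print(box)  # a tight enclosure of sqrt(2) ~ 1.41421
```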

    Biased landscapes for random Constraint Satisfaction Problems

    The typical complexity of Constraint Satisfaction Problems (CSPs) can be investigated by means of random ensembles of instances. The latter exhibit many threshold phenomena besides their satisfiability phase transition, in particular a clustering or dynamic phase transition (related to the tree reconstruction problem) at which their typical solutions shatter into disconnected components. In this paper we study the evolution of this phenomenon under a bias that breaks the uniformity among solutions of one CSP instance, concentrating on the bicoloring of k-uniform random hypergraphs. We show that for small k the clustering transition can be delayed in this way to a higher density of constraints, and that this strategy has a positive impact on the performance of simulated annealing algorithms. We characterize the modest gain that can be expected in the large k limit from the simple implementation of the biasing idea studied here. This paper also contains a contribution of a more methodological nature, consisting of a review and extension of the methods to determine numerically the discontinuous dynamic transition threshold. Comment: 32 pages, 16 figures
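    As a rough illustration of the algorithmic side of this abstract, the sketch below runs simulated annealing on the bicoloring of a random k-uniform hypergraph (every hyperedge must contain both colors), with a soft penalty on nearly monochromatic edges standing in for the paper's bias among solutions. The penalty form, parameters and cooling schedule are all assumptions, not the paper's actual biased measure.

```python
# Simulated annealing for hypergraph bicoloring: the energy counts
# monochromatic edges (hard violations) plus a small assumed bias
# against nearly monochromatic ones.
import math
import random

random.seed(0)
n, k, m = 60, 4, 120                     # vertices, edge size, number of edges
edges = [random.sample(range(n), k) for _ in range(m)]

def energy(colors, bias=0.1):
    e = 0.0
    for edge in edges:
        ones = sum(colors[v] for v in edge)
        if ones in (0, k):               # monochromatic: hard violation
            e += 1.0
        elif ones in (1, k - 1):         # nearly monochromatic: soft bias
            e += bias
    return e

colors = [random.randint(0, 1) for _ in range(n)]
t = 2.0
while t > 0.01:
    v = random.randrange(n)
    old = energy(colors)
    colors[v] ^= 1                       # propose flipping one vertex's color
    delta = energy(colors) - old
    if delta > 0 and random.random() > math.exp(-delta / t):
        colors[v] ^= 1                   # reject: undo the flip
    t *= 0.999
print("monochromatic edges left:",
      sum(sum(colors[v] for v in e) in (0, k) for e in edges))
```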