
    Applications of Finite Model Theory: Optimisation Problems, Hybrid Modal Logics and Games.

    There is an interesting relationship between two seemingly distinct fields: logic from the field of Model Theory, which deals with the truth of statements about discrete structures, and Computational Complexity, which deals with the classification of problems by how much of a particular computer resource is required to compute a solution. This relationship is known as Descriptive Complexity, and it is the primary application of the tools of Model Theory when they are restricted to the finite; this restriction is commonly called Finite Model Theory. In this thesis, we investigate the extension of the results of Descriptive Complexity from classes of decision problems to classes of optimisation problems. When dealing with decision problems, the natural mapping from true and false in logic to the yes and no instances of a problem is used, but when dealing with optimisation problems, other features of a logic need to be used. We investigate what these features are and provide results in the form of logical frameworks that can be used to describe optimisation problems in particular classes, building on the existing research in this area. Another application of Finite Model Theory that this thesis investigates is the relative expressiveness of various fragments of an extension of modal logic called hybrid modal logic. This is achieved by taking the Ehrenfeucht-Fraïssé game from Model Theory and modifying it so that it can be applied to hybrid modal logic. Then, by developing winning strategies for the players in the game, results are obtained that show strict hierarchies of expressiveness for fragments of hybrid modal logic, generated by varying the quantifier depth and the number of proposition and nominal symbols available.
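    The expressiveness results rest on model-comparison games. Below is a minimal Python sketch, with made-up toy models, of the k-round modal Ehrenfeucht-Fraïssé (bisimulation-style) game on two pointed Kripke models; the hybrid-logic moves for nominals and binders that the thesis adds are not modelled, so this only illustrates how Duplicator surviving k rounds witnesses agreement up to modal depth k.

        # Sketch only: the standard modal EF game, without the extra moves
        # for nominals that hybrid modal logic requires.
        def duplicator_wins(model_a, model_b, w_a, w_b, k):
            """Each model is (successors, valuation): successors maps a world to a set
            of worlds, valuation maps a world to the set of propositions true there."""
            succ_a, val_a = model_a
            succ_b, val_b = model_b
            if val_a[w_a] != val_b[w_b]:      # current worlds must agree on propositions
                return False
            if k == 0:
                return True
            # Spoiler moves along an edge in either model; Duplicator must answer in
            # the other model so that she survives the remaining k - 1 rounds.
            for v_a in succ_a[w_a]:
                if not any(duplicator_wins(model_a, model_b, v_a, v_b, k - 1)
                           for v_b in succ_b[w_b]):
                    return False
            for v_b in succ_b[w_b]:
                if not any(duplicator_wins(model_a, model_b, v_a, v_b, k - 1)
                           for v_a in succ_a[w_a]):
                    return False
            return True

        # Two toy models that agree up to modal depth 1 but differ at depth 2,
        # mirroring how varying the quantifier depth separates fragments.
        m1 = ({0: {1}, 1: {2}, 2: set()}, {0: set(), 1: set(), 2: {"p"}})
        m2 = ({0: {1}, 1: set()}, {0: set(), 1: set()})
        print(duplicator_wins(m1, m2, 0, 0, 1))   # True
        print(duplicator_wins(m1, m2, 0, 0, 2))   # False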

    Global Optimization for Value Function Approximation

    Existing value function approximation methods have been successfully used in many applications, but they often lack useful a priori error bounds. We propose a new approximate bilinear programming formulation of value function approximation, which employs global optimization. The formulation provides strong a priori guarantees on both robust and expected policy loss by minimizing specific norms of the Bellman residual. Solving a bilinear program optimally is NP-hard, but this is unavoidable because Bellman-residual minimization is itself NP-hard. We describe and analyze both optimal and approximate algorithms for solving bilinear programs. The analysis shows that this algorithm offers a convergent generalization of approximate policy iteration. We also briefly analyze the behavior of bilinear programming algorithms under incomplete samples. Finally, we demonstrate that the proposed approach can consistently minimize the Bellman residual on simple benchmark problems.
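    To make the objective concrete, the toy sketch below (with assumed MDP data, not the paper's bilinear program) computes the Bellman residual of a linearly approximated value function for a fixed policy; the paper's formulation minimizes norms of this residual jointly over the weights and the policy.

        import numpy as np

        # Toy illustration only: a 3-state MDP under a fixed policy, with made-up
        # transitions, rewards and features. The paper's bilinear program also
        # optimizes over the policy; here the policy is held fixed.
        gamma = 0.9
        P = np.array([[0.8, 0.2, 0.0],
                      [0.1, 0.6, 0.3],
                      [0.0, 0.0, 1.0]])        # transition matrix under the policy
        r = np.array([1.0, 0.0, 5.0])          # expected one-step rewards
        Phi = np.array([[1.0, 0.0],
                        [1.0, 1.0],
                        [1.0, 2.0]])           # feature matrix: 3 states, 2 features

        def bellman_residual(w):
            """T v - v for the linear approximation v = Phi @ w."""
            v = Phi @ w
            return r + gamma * (P @ v) - v

        # The quantity bounded and minimized in the paper is a norm of this residual,
        # e.g. its max-norm for the robust policy-loss guarantee.
        w = np.zeros(2)
        print(np.max(np.abs(bellman_residual(w))))   # 5.0 for the zero weight vector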

    On Optimization Modulo Theories, MaxSMT and Sorting Networks

    Optimization Modulo Theories (OMT) is an extension of SMT which allows for finding models that optimize given objectives. (Partial weighted) MaxSMT -- or equivalently OMT with Pseudo-Boolean objective functions, OMT+PB -- is a very relevant strict subcase of OMT. We classify existing approaches for MaxSMT or OMT+PB into two groups: MaxSAT-based approaches exploit the efficiency of state-of-the-art MaxSAT solvers, but they are special-purpose and not always applicable; OMT-based approaches are general-purpose, but they suffer from intrinsic inefficiencies on MaxSMT/OMT+PB problems. We identify a major source of such inefficiencies, and we address it by enhancing OMT with bidirectional sorting networks. We implemented this idea on top of the OptiMathSAT OMT solver. We ran an extensive empirical evaluation on a variety of problems, comparing MaxSAT-based and OMT-based techniques, with and without sorting networks, implemented on top of OptiMathSAT and νZ. The results support the effectiveness of this idea and provide interesting insights about the different approaches.
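    For concreteness, the sketch below states a small partial weighted MaxSMT instance through the z3py Optimize API (the νZ engine compared against in the evaluation); it assumes the z3-solver package is installed and says nothing about the sorting-network encoding the paper adds inside the solver.

        from z3 import Int, Optimize, sat

        x, y = Int('x'), Int('y')
        opt = Optimize()

        # Hard constraints: must hold in any model.
        opt.add(x + y <= 10, x >= 0, y >= 0)

        # Soft constraints with weights: the solver maximizes the total weight of
        # the satisfied ones, i.e. the Pseudo-Boolean objective over their truth.
        opt.add_soft(x >= 8, weight=3)
        opt.add_soft(y >= 8, weight=3)
        opt.add_soft(x + y >= 5, weight=1)

        if opt.check() == sat:
            m = opt.model()
            # An optimum satisfies one weight-3 constraint plus the weight-1 one.
            print(m[x], m[y])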

    A nonmonotone GRASP

    A greedy randomized adaptive search procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative improvement until a locally optimal solution is found. During this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that uses a new “nonmonotone” strategy to explore the neighborhood of the current solution. We formally state the convergence of the nonmonotone local search to a locally optimal solution and illustrate the effectiveness of the resulting Nonmonotone GRASP on three classical hard combinatorial optimization problems: the maximum cut problem (MAX-CUT), the weighted maximum satisfiability problem (MAX-SAT), and the quadratic assignment problem (QAP).
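    The two phases are easy to see in code. Below is a minimal sketch of a standard (monotone) GRASP for MAX-CUT, with an assumed toy graph and a simplified randomized construction; the paper's contribution is precisely to relax the “accept only improving moves” rule in the local search, which this sketch does not reproduce.

        import random

        def cut_value(graph, side):
            """Total weight of edges whose endpoints lie on different sides."""
            return sum(w for (u, v), w in graph.items() if side[u] != side[v])

        def construct(nodes, graph, alpha=0.5):
            """Construction phase (simplified): place each node greedily on the side
            that gains more cut weight, randomized by alpha instead of a full RCL."""
            side = {}
            for u in random.sample(nodes, len(nodes)):
                gain0 = sum(w for (a, b), w in graph.items()
                            if u in (a, b) and side.get(b if a == u else a) == 1)
                gain1 = sum(w for (a, b), w in graph.items()
                            if u in (a, b) and side.get(b if a == u else a) == 0)
                greedy = 0 if gain0 >= gain1 else 1
                side[u] = greedy if random.random() > alpha else random.choice([0, 1])
            return side

        def local_search(nodes, graph, side):
            """Local search phase: flip single vertices while the cut value improves."""
            best, improved = cut_value(graph, side), True
            while improved:
                improved = False
                for u in nodes:
                    side[u] ^= 1                      # tentatively flip u
                    val = cut_value(graph, side)
                    if val > best:
                        best, improved = val, True    # keep the improving flip
                    else:
                        side[u] ^= 1                  # undo it
            return side, best

        def grasp(nodes, graph, iterations=100, alpha=0.5):
            """Multistart loop: keep the best local optimum found over all restarts."""
            best_side, best_val = None, float("-inf")
            for _ in range(iterations):
                side, val = local_search(nodes, graph, construct(nodes, graph, alpha))
                if val > best_val:
                    best_side, best_val = side, val
            return best_side, best_val

        # Toy instance: a 4-cycle with unit weights (maximum cut value is 4).
        graph = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1}
        print(grasp([0, 1, 2, 3], graph)[1])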