
    Branch-and-Prune Search Strategies for Numerical Constraint Solving

    When solving numerical constraints such as nonlinear equations and inequalities, solvers often exploit pruning techniques, which, at pruning steps, remove redundant value combinations from the domains of variables. To find the complete solution set, most of these solvers alternate the pruning steps with branching steps, which split each problem into subproblems. This forms the so-called branch-and-prune framework, well known among the approaches for solving numerical constraints. The basic branch-and-prune search strategy, which uses domain bisection at the branching steps, is called the bisection search. In general, the bisection search works well when (i) the solutions are isolated, but it can be improved further when (ii) there are continuums of solutions (as often occurs when inequalities are involved). In this paper, we propose a new branch-and-prune search strategy, along with several variants, that not only yields better branching decisions in the latter case but also works as well as the bisection search in the former case. These new search algorithms enable us to employ various pruning techniques in the construction of inner and outer approximations of the solution set. Our experiments show that these algorithms often speed up the solving process by an order of magnitude or more when solving problems with continuums of solutions, while matching the performance of the bisection search when the solutions are isolated. Comment: 43 pages, 11 figures
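
The alternation of pruning and branching steps described above can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it encloses the roots of the hypothetical constraint x^2 - 2 = 0, pruning any sub-interval whose interval evaluation excludes zero and bisecting the rest.

```python
# A minimal branch-and-prune sketch with bisection branching (illustrative
# only; the constraint x^2 - 2 = 0 and all tolerances are hypothetical).

def f_range(lo, hi):
    """Enclosure of x^2 - 2 over [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return sq_lo - 2.0, max(lo * lo, hi * hi) - 2.0

def branch_and_prune(lo, hi, eps=1e-9):
    queue, solutions = [(lo, hi)], []
    while queue:
        a, b = queue.pop()
        flo, fhi = f_range(a, b)
        if flo > 0.0 or fhi < 0.0:   # pruning step: box cannot contain a root
            continue
        if b - a < eps:              # narrow enough: report the enclosure
            solutions.append((a, b))
            continue
        mid = (a + b) / 2.0          # branching step: bisect the domain
        queue.extend([(a, mid), (mid, b)])
    return solutions

boxes = branch_and_prune(-2.0, 2.0)  # enclosures cluster around x = ±sqrt(2)
```

Replacing the unconditional bisection with a smarter branching rule is exactly the degree of freedom the paper's strategies exploit.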

    Performance guarantees for model-based Approximate Dynamic Programming in continuous spaces

    We study both the value function and Q-function formulations of the Linear Programming approach to Approximate Dynamic Programming. The approach is model-based and optimizes over a restricted function space to approximate the value function or Q-function. Working in the discrete-time, continuous-space setting, we provide guarantees for the fitting error and the online performance of the policy. In particular, the online performance guarantee is obtained by analyzing an iterated version of the greedy policy, and the fitting error guarantee by analyzing an iterated version of the Bellman inequality. These guarantees complement the existing bounds that appear in the literature. The Q-function formulation offers benefits, for example, in decentralized controller design; however, it can lead to computationally demanding optimization problems. To alleviate this drawback, we provide a condition that simplifies the formulation, resulting in improved computational times. Comment: 18 pages, 5 figures, journal paper
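
The LP characterization underlying this line of work can be illustrated on a toy problem. The sketch below is not the paper's method: it uses a hypothetical 2-state, 2-action discrete MDP, an unrestricted function class, and brute-force vertex enumeration in place of a real LP solver, to show that minimizing sum_s V(s) subject to the Bellman inequalities V(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) V(s') recovers the optimal value function.

```python
# Toy LP characterization of the optimal value function (hypothetical MDP).

GAMMA = 0.9
# P[s][a] = distribution over next states, R[s][a] = immediate reward.
P = [[[0.5, 0.5], [0.0, 1.0]],
     [[1.0, 0.0], [0.3, 0.7]]]
R = [[1.0, 0.0],
     [0.0, 2.0]]

def constraints():
    """Each (s, a) gives a row:  V[s] - gamma * P(.|s,a) . V  >=  R[s][a]."""
    rows = []
    for s in range(2):
        for a in range(2):
            coeff = [-GAMMA * P[s][a][0], -GAMMA * P[s][a][1]]
            coeff[s] += 1.0
            rows.append((coeff, R[s][a]))
    return rows

def lp_by_vertex_enumeration():
    """Minimize V[0] + V[1] over the Bellman-inequality polytope (2 vars)."""
    rows, best = constraints(), None
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (a1, b1), (a2, b2) = rows[i], rows[j]
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < 1e-12:
                continue  # parallel constraints: no vertex here
            v0 = (b1 * a2[1] - b2 * a1[1]) / det
            v1 = (a1[0] * b2 - a2[0] * b1) / det
            ok = all(c[0] * v0 + c[1] * v1 >= b - 1e-9 for c, b in rows)
            if ok and (best is None or v0 + v1 < best[0] + best[1]):
                best = (v0, v1)
    return best

def value_iteration(tol=1e-10):
    V = [0.0, 0.0]
    while True:
        newV = [max(R[s][a] + GAMMA * sum(p * v for p, v in zip(P[s][a], V))
                    for a in range(2)) for s in range(2)]
        if max(abs(x - y) for x, y in zip(newV, V)) < tol:
            return newV
        V = newV
```

The LP optimum coincides with the value-iteration fixed point; the paper's setting replaces the unrestricted V with a restricted function space, which is where the fitting-error analysis enters.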

    Optimal control in Markov decision processes via distributed optimization

    Optimal control synthesis in stochastic systems with respect to quantitative temporal logic constraints can be formulated as linear programming problems. However, centralized synthesis algorithms do not scale to many practical systems. To tackle this issue, we propose a decomposition-based distributed synthesis algorithm. By decomposing a large-scale stochastic system modeled as a Markov decision process into a collection of interacting sub-systems, the original control problem is formulated as a linear programming problem with a sparse constraint matrix, which can be solved through distributed optimization methods. Additionally, we propose a decomposition algorithm which automatically exploits, if one exists, the modular structure in a given large-scale system. We illustrate the proposed methods through robotic motion planning examples. Comment: 8 pages, 5 figures, submitted to CDC 2015 conference

    Solving Factored MDPs with Hybrid State and Action Variables

    Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solution. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.

    Open quantum systems are harder to track than open classical systems

    For a Markovian open quantum system it is possible, by continuously monitoring the environment, to know the stochastically evolving pure state of the system without altering the master equation. In general, even for a system with a finite Hilbert space dimension D, the pure state trajectory will explore an infinite number of points in Hilbert space, meaning that the dimension K of the classical memory required for the tracking is infinite. However, Karasik and Wiseman [Phys. Rev. Lett., 106(2):020406, 2011] showed that tracking of a qubit (D = 2) is always possible with a bit (K = 2), and gave a heuristic argument implying that a finite K should be sufficient for any D, although beyond D = 2 it would be necessary to have K > D. Our paper is concerned with rigorously investigating the relationship between D and K_min, the smallest feasible K. We confirm the long-standing conjecture of Karasik and Wiseman that, for generic systems with D > 2, K_min > D, by a computational proof (via Hilbert Nullstellensatz certificates of infeasibility). That is, beyond D = 2, D-dimensional open quantum systems are provably harder to track than D-dimensional open classical systems. Moreover, we develop, and better justify, a new heuristic to guide our expectation of K_min as a function of D, taking into account the number L of Lindblad operators as well as symmetries in the problem. The use of invariant subspace and Wigner symmetries makes it tractable to conduct a numerical search, using the method of polynomial homotopy continuation, to find finite physically realizable ensembles (as they are known) in D = 3. The results of this search support our heuristic. We thus have confidence in the most interesting feature of our heuristic: in the absence of symmetries, K_min ~ D^2, implying a quadratic gap between the classical and quantum tracking problems. Comment: 35 pages, 3 figures, accepted in Quantum Journal, minor changes

    A Combined Approach for Constraints over Finite Domains and Arrays

    Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named FDCC, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that FDCC solves more formulas than any portfolio combination of the two solvers taken in isolation, while keeping the overhead reasonable.
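
The local consistency filtering half of such a combination can be sketched for a single array access constraint a[i] = v over finite domains (the classic "element" constraint). This is only an illustration of the CP side, not FDCC itself, which additionally runs a congruence-closure engine and exchanges deductions with it; all names and domains below are hypothetical.

```python
# Minimal element-constraint filtering sketch: prune the domains of i, v, and
# the array cells for the constraint a[i] = v until a fixpoint is reached.

def filter_element(index_dom, value_dom, elem_doms):
    changed = True
    while changed:
        changed = False
        # i can only point at cells whose domain intersects the domain of v.
        new_index = {i for i in index_dom if elem_doms[i] & value_dom}
        # v can only take values supported by some still-possible cell.
        if new_index:
            new_value = value_dom & set().union(*(elem_doms[i] for i in new_index))
        else:
            new_value = set()
        if new_index != index_dom or new_value != value_dom:
            index_dom, value_dom, changed = new_index, new_value, True
        # If i is fixed, the chosen cell must agree with v.
        if len(index_dom) == 1:
            i = next(iter(index_dom))
            narrowed = elem_doms[i] & value_dom
            if narrowed != elem_doms[i]:
                elem_doms[i], changed = narrowed, True
    return index_dom, value_dom, elem_doms

# a = [a0, a1, a2] with the domains below, constraint a[i] = v:
i_dom, v_dom, cells = filter_element({0, 1, 2}, {5, 7}, [{1, 2}, {5}, {3, 7}])
```

Here the filter removes index 0 (its cell cannot equal v); in FDCC, deductions like this are shared with the symbolic solver and vice versa.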

    A New Distributed DC-Programming Method and its Applications

    We propose a novel decomposition framework for the distributed optimization of Difference Convex (DC)-type nonseparable sum-utility functions subject to coupling convex constraints. A major contribution of the paper is to develop for the first time a class of (inexact) best-response-like algorithms with provable convergence, where a suitably convexified version of the original DC program is iteratively solved. The main feature of the proposed successive convex approximation method is its decomposability structure across the users, which leads naturally to distributed algorithms in the primal and/or dual domain. The proposed framework is applicable to a variety of multiuser DC problems in different areas, ranging from signal processing to communications and networking. As a case study, in the second part of the paper we focus on two examples, namely: i) a novel resource allocation problem in the emerging area of cooperative physical layer security; and ii) the renowned sum-rate maximization of MIMO Cognitive Radio networks. Our contribution in this context is to devise a class of easy-to-implement distributed algorithms with provable convergence to stationary solutions of such problems. Numerical results show that the proposed distributed schemes reach performance close to (and sometimes better than) that of centralized methods. Comment: submitted to IEEE Transactions on Signal Processing
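
The convexification step at the heart of such best-response schemes can be shown in one dimension. The sketch below is illustrative only: for a DC objective f(x) = g(x) - h(x) with g, h convex, each iteration linearizes the concave part -h around the current iterate and minimizes the resulting convex surrogate. The toy objective (g = x^4, h = 2x^2) is hypothetical and unconstrained, unlike the paper's multiuser, constrained setting.

```python
# Successive convex approximation for the hypothetical DC objective
# f(x) = x^4 - 2x^2 (g = x^4 convex, h = 2x^2 convex).

def sca_step(x_k):
    """Minimize the surrogate  x^4 - h(x_k) - h'(x_k) (x - x_k)  in closed form.

    With h(x) = 2x^2, h'(x_k) = 4 x_k, so the surrogate's stationarity
    condition 4x^3 = 4 x_k gives x = cube_root(x_k).
    """
    return abs(x_k) ** (1.0 / 3.0) * (1 if x_k >= 0 else -1)

def solve_dc(x0, iters=60):
    x = x0
    for _ in range(iters):
        x = sca_step(x)
    return x

x_star = solve_dc(0.5)  # iterates climb toward the stationary point x = 1
```

Each surrogate upper-bounds f and is tight at x_k, which is what yields monotone descent and convergence to a stationary point in this class of methods.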

    Monte-Carlo optimizations for resource allocation problems in stochastic network systems

    Real-world distributed systems and networks are often unreliable and subject to random failures of their components. Such stochastic behavior adversely affects the complexity of optimization tasks performed routinely upon such systems, in particular various resource allocation tasks. In this work we investigate and develop Monte Carlo solutions for a class of two-stage optimization problems in stochastic networks, in which the expected value of resource allocations before and after stochastic failures needs to be optimized. The difficulty with these problems is that their exact solutions are exponential in the number of unreliable network components; thus, exact methods do not scale up well to the large networks often seen in practice. We first prove that Monte Carlo optimization methods can overcome the exponential bottleneck of exact methods. Next we support our theoretical findings with resource allocation experiments and show a very good scale-up potential of the new methods to large stochastic networks. Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI2003)
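
The exponential bottleneck and its Monte Carlo remedy can be seen in a toy model. The sketch below is illustrative, not the paper's formulation: links fail independently, the second-stage value of an allocation is the capacity that survives, and the sample-average estimate replaces the 2^n-term exact expectation. All numbers are hypothetical.

```python
# Exact (exponential) vs. Monte Carlo (sampled) evaluation of the expected
# second-stage value of an allocation under random link failures.
import itertools
import random

FAIL_P = [0.1, 0.3, 0.2, 0.25]   # failure probability of each link
CAPS   = [4.0, 2.0, 3.0, 1.0]    # capacity carried by each link

def scenario_value(alloc, up):
    """Second-stage value: allocated capacity on links that survived."""
    return sum(a * c for a, c, u in zip(alloc, CAPS, up) if u)

def exact_expected(alloc):
    """Enumerate all 2^n failure scenarios (exponential in n)."""
    total = 0.0
    for up in itertools.product([0, 1], repeat=len(FAIL_P)):
        prob = 1.0
        for p, u in zip(FAIL_P, up):
            prob *= (1 - p) if u else p
        total += prob * scenario_value(alloc, up)
    return total

def monte_carlo_expected(alloc, samples=100_000, seed=0):
    """Sample-average estimate: cost grows with samples, not with 2^n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        up = [rng.random() >= p for p in FAIL_P]
        total += scenario_value(alloc, up)
    return total / samples
```

Wrapping the estimator inside an outer search over candidate allocations gives the kind of Monte Carlo optimization the paper analyzes.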

    A Comparison of Logic Programming Approaches for Representation and Solving of Constraint Satisfaction Problems

    Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there are definite programs and constraint logic programs that compute a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are approaches based on stable model semantics, abduction, and first-order logic model generation that compute solutions as models of some theory. This paper compares these different approaches from the point of view of knowledge representation (how declarative are the programs) and from the point of view of performance (how good are they at solving typical problems). Comment: 9 pages, 3 figures, submitted to NMR 2000, April 9-11, Breckenridge, Colorado

    Propagation by Selective Initialization and Its Application to Numerical Constraint Satisfaction Problems

    Numerical analysis has no satisfactory method for the more realistic optimization models. However, with constraint programming one can compute a cover for the solution set to arbitrarily close approximation. Because the use of constraint propagation for composite arithmetic expressions is computationally expensive, consistency is computed with interval arithmetic. In this paper we present theorems that support selective initialization, a simple modification of constraint propagation that allows composite arithmetic expressions to be handled efficiently.
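
The kind of interval-arithmetic consistency referred to can be sketched for a single constraint. The snippet below is a generic hull-narrowing illustration, not the paper's selective initialization (which concerns how propagation is started, not the narrowing rule itself); the constraint x + y = 10 and the bounds are hypothetical.

```python
# Hull narrowing for the hypothetical constraint x + y = 10: each variable's
# interval is narrowed by projecting the constraint through interval
# arithmetic (x = 10 - y, y = 10 - x) until a fixpoint is reached.

def narrow_sum(x, y, total):
    """Narrow closed intervals (lo, hi) for x and y under x + y = total."""
    while True:
        new_x = (max(x[0], total - y[1]), min(x[1], total - y[0]))
        new_y = (max(y[0], total - new_x[1]), min(y[1], total - new_x[0]))
        if (new_x, new_y) == (x, y):
            return x, y
        x, y = new_x, new_y

x_dom, y_dom = narrow_sum((0.0, 8.0), (0.0, 8.0), 10.0)  # both narrow to [2, 8]
```

For composite expressions, this projection is applied per primitive constraint, which is exactly where the propagation cost the paper targets arises.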