    On optimal heuristic randomized semidecision procedures, with application to proof complexity

    The existence of a (p-)optimal propositional proof system is a major open question in (proof) complexity; many people conjecture that such systems do not exist. Krajicek and Pudlak (1989) show that this question is equivalent to the existence of an algorithm that is optimal on all propositional tautologies. Monroe (2009) recently gave a conjecture implying that such an algorithm does not exist. We show that in the presence of errors such optimal algorithms do exist. The concept is motivated by the notion of heuristic algorithms: we allow the algorithm to claim a small number of false "theorems" (according to any samplable distribution on non-tautologies) and to err with bounded probability on other inputs. Our result can also be viewed as the existence of an optimal proof system in a class of proof systems obtained by generalizing automatizable proof systems.
    Comment: 11 pages, accepted to STACS 201

    Information in propositional proofs and algorithmic proof search

    We study from the proof complexity perspective the (informal) proof search problem: Is there an optimal way to search for propositional proofs? We note that for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists w.r.t. all proof systems iff a p-optimal proof system exists. To characterize precisely the time proof search algorithms need for individual formulas we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system P we attach the {\bf information-efficiency function} i_P(\tau), assigning to a tautology a natural number, and we show that:
    - i_P(\tau) characterizes the time any P-proof search algorithm has to use on \tau, and for a fixed P there is such an information-optimal algorithm,
    - a proof system is information-efficiency optimal iff it is p-optimal,
    - for non-automatizable systems P there are formulas \tau with short proofs but having large information measure i_P(\tau).
    We isolate and motivate the problem to establish {\em unconditional} super-logarithmic lower bounds for i_P(\tau) where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search.
    Comment: Preliminary version February 202

    Euclidean algorithms are Gaussian

    This study provides new results about the probabilistic behaviour of a class of Euclidean algorithms: the asymptotic distribution of a whole class of cost parameters associated to these algorithms is normal. For the cost corresponding to the number of steps, Hensley has already proved a Local Limit Theorem; we give a new proof and extend his result to other Euclidean algorithms and to a large class of digit costs, obtaining a faster, optimal, rate of convergence. The paper is based on the dynamical systems methodology, and the main tool is the transfer operator. In particular, we use recent results of Dolgopyat.
    Comment: fourth revised version - 2 figures - the strict convexity condition used has been clarified
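    As a concrete handle on the cost parameters studied here, the sketch below (plain Python, with our own naming) counts the number of division steps — the cost for which Hensley's Local Limit Theorem holds — and tabulates its empirical mean and spread over a box of inputs. The transfer-operator analysis itself is, of course, not reproduced.

```python
import statistics

def euclid_steps(u, v):
    """Number of division steps the standard Euclidean algorithm takes on (u, v)."""
    steps = 0
    while v:
        u, v = v, u % v
        steps += 1
    return steps

# Worst case grows along Fibonacci pairs; (13, 8) needs 5 divisions.
assert euclid_steps(13, 8) == 5

# Empirical distribution of the step count over a box of inputs: for large
# bounds it is asymptotically Gaussian (the regime of the Local Limit Theorem).
counts = [euclid_steps(u, v) for u in range(2, 400) for v in range(1, u)]
mu = statistics.mean(counts)
sigma = statistics.stdev(counts)
print(f"mean steps = {mu:.2f}, stddev = {sigma:.2f}")
```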

    Optimal Sparse Decision Trees

    Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. The problem that has plagued decision tree algorithms since their inception is their lack of optimality, or lack of guarantees of closeness to optimality: decision tree algorithms are often greedy or myopic, and sometimes produce unquestionably suboptimal models. Hardness of decision tree optimization is both a theoretical and practical obstacle, and even careful mathematical programming approaches have not been able to solve these problems efficiently. This work introduces the first practical algorithm for optimal decision trees for binary variables. The algorithm is a co-design of analytical bounds that reduce the search space and modern systems techniques, including data structures and a custom bit-vector library. Our experiments highlight advantages in scalability, speed, and proof of optimality.
    Comment: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada
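    To make the optimality question concrete: for binary features, a provably optimal tree of bounded depth can in principle be found by exhaustive enumeration, which is what the toy below does at depth 2 on an XOR dataset (where no single root split improves accuracy on its own). This brute force is exponential in general; the paper's contribution is the analytical bounds and systems engineering that make optimal search practical, none of which is reflected here.

```python
from itertools import product

# Tiny binary dataset: label = x0 XOR x1 (no single split reduces the error).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def leaf_errors(rows):
    """Misclassifications when a leaf predicts the majority label of its rows."""
    labels = [y[i] for i in rows]
    return min(labels.count(0), labels.count(1))

def split(rows, f):
    """Partition row indices by the value of binary feature f."""
    left = [i for i in rows if X[i][f] == 0]
    right = [i for i in rows if X[i][f] == 1]
    return left, right

# Enumerate every depth-2 tree: a root feature and one feature per child.
n_features = len(X[0])
best = None  # (errors, root_feature, left_feature, right_feature)
for root, fl, fr in product(range(n_features), repeat=3):
    left, right = split(range(len(X)), root)
    ll, lr = split(left, fl)
    rl, rr = split(right, fr)
    errs = sum(leaf_errors(r) for r in (ll, lr, rl, rr) if r)
    if best is None or errs < best[0]:
        best = (errs, root, fl, fr)

print("optimal depth-2 tree errors:", best[0])
```

    An optimal depth-2 tree classifies XOR perfectly (zero errors), while any single split leaves half the points misclassified — exactly the myopia the abstract describes.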

    Tensor Product Approach to Quantum Control

    In this proof-of-concept paper we show that the tensor product approach is efficient for the control of large quantum systems, such as Heisenberg spin wires, which are essential for emerging quantum computing technologies. We compute optimal control sequences using the GRAPE method, applying the recently developed tAMEn algorithm to calculate the evolution of quantum states represented in the tensor train format in order to reduce storage. Using tensor product algorithms we can overcome the curse of dimensionality and compute the optimal control pulse for a 41-spin system on a single workstation with fully controlled accuracy and huge savings of computational time and memory. The use of tensor product algorithms opens new approaches for the development of quantum computers with 50 to 100 qubits.
    Comment: To appear in Proc. IMSE 201
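    A minimal illustration of GRAPE-style pulse optimization: a single qubit in pure Python with finite-difference gradients. All names and parameter values here are ours, and nothing of the tensor-train/tAMEn machinery that makes the 41-spin case tractable appears; this only shows the shape of the problem GRAPE solves (piecewise-constant controls, gradient ascent on a fidelity).

```python
import math
import random

def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def step_unitary(a, b, dt):
    """exp(-i (a*sigma_x + b*sigma_z) dt) via the Pauli identity."""
    w = math.sqrt(a * a + b * b)
    if w == 0.0:
        return [[1, 0], [0, 1]]
    c, s = math.cos(w * dt), math.sin(w * dt) / w
    # H = a*sx + b*sz  ->  U = cos(w dt) I - i (sin(w dt)/w) H
    return [[c - 1j * s * b, -1j * s * a],
            [-1j * s * a, c + 1j * s * b]]

def fidelity(u, b=1.0, dt=0.3):
    """|<1| U_N ... U_1 |0>|^2 for piecewise-constant controls u on sigma_x."""
    U = [[1, 0], [0, 1]]
    for a in u:
        U = matmul(step_unitary(a, b, dt), U)
    return abs(U[1][0]) ** 2

random.seed(1)
u = [random.uniform(-0.5, 0.5) for _ in range(10)]  # initial random pulse
init = fidelity(u)

eps, lr = 1e-4, 0.5
for _ in range(300):  # finite-difference gradient ascent on the fidelity
    grad = []
    for k in range(len(u)):
        up = u[:]
        up[k] += eps
        dn = u[:]
        dn[k] -= eps
        grad.append((fidelity(up) - fidelity(dn)) / (2 * eps))
    u = [uk + lr * g for uk, g in zip(u, grad)]

print(f"fidelity: {init:.4f} -> {fidelity(u):.4f}")
```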

    Near-Optimal Sensor Scheduling for Batch State Estimation: Complexity, Algorithms, and Limits

    In this paper, we focus on batch state estimation for linear systems. This problem is important in applications such as environmental field estimation, robotic navigation, and target tracking. Its difficulty lies in the fact that limited operational resources among the sensors, e.g., shared communication bandwidth or battery power, constrain the number of sensors that can be active at each measurement step. As a result, sensor scheduling algorithms must be employed. Notwithstanding, current sensor scheduling algorithms for batch state estimation scale poorly with the system size and the time horizon. In addition, current sensor scheduling algorithms for Kalman filtering, although they scale better, provide no performance guarantees or approximation bounds for the minimization of the batch state estimation error. In this paper, one of our main contributions is an algorithm that enjoys both the estimation accuracy of the batch state scheduling algorithms and the low time complexity of the Kalman filtering scheduling algorithms. In particular: 1) our algorithm is near-optimal: it achieves a solution within a multiplicative factor 1/2 of the optimal solution, and this factor is close to the best approximation factor 1/e one can achieve in polynomial time for this problem; 2) our algorithm has (polynomial) time complexity that is not only lower than that of the current algorithms for batch state estimation; it is also lower than, or similar to, that of the current algorithms for Kalman filtering. We achieve these results by proving two properties of our batch state estimation error metric, which quantifies the squared error of the minimum variance linear estimator of the batch state vector: a) it is supermodular in the choice of the sensors; b) it has a sparsity pattern (it involves matrices that are block tri-diagonal) that facilitates its evaluation at each sensor set.
    Comment: Correction of typos in proofs
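    A toy version of greedy sensor selection under a cardinality budget, assuming (unlike the paper) independent scalar states so the bookkeeping stays elementary. The paper's supermodularity result is what justifies this greedy pattern on the actual correlated batch-state error metric; all names and numbers below are ours.

```python
# Independent scalar states with prior variances prior[i]; activating sensor i
# (measurement noise variance r) reduces state i's variance to 1/(1/prior + 1/r).
# Greedily pick k sensors to minimize the total posterior variance.

def posterior(var, r):
    """Posterior variance after fusing one measurement with noise variance r."""
    return 1.0 / (1.0 / var + 1.0 / r)

def greedy_schedule(prior, r, k):
    var = list(prior)
    chosen = []
    for _ in range(k):
        # Pick the unchosen sensor giving the largest variance reduction.
        best = max((i for i in range(len(var)) if i not in chosen),
                   key=lambda i: var[i] - posterior(var[i], r))
        chosen.append(best)
        var[best] = posterior(var[best], r)
    return chosen, sum(var)

chosen, total = greedy_schedule([4.0, 1.0, 9.0, 0.25], r=1.0, k=2)
print(chosen, round(total, 3))  # picks the two most uncertain states first
```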

    Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems

    Randomized and deterministic algorithms for the problem of LQR optimal control via static output feedback (SOF) for discrete-time systems are suggested in this chapter. The randomized algorithm is based on a recently introduced randomized optimization method named the Ray-Shooting Method, which efficiently solves the global minimization problem of continuous functions over compact, non-convex, unconnected regions. The randomized algorithm presented here has a proof of convergence in probability to the global optimum. The suggested deterministic algorithm is based on the gradient method and thus can only be proved to converge to a local optimum. A comparison between the algorithms is provided, as well as the performance of the hybrid algorithm.
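    The Ray-Shooting Method itself is not reproduced here, but the flavor of randomized global minimization over a compact, non-convex region — where gradient descent would stall in a local optimum — can be sketched with plain uniform random search (the function names and the test objective below are entirely ours):

```python
import random

def random_search(f, lo, hi, n_samples, seed=0):
    """Keep the best of n uniform samples over [lo, hi]: a crude stand-in for
    randomized global optimization over a compact, possibly non-convex region."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        v = f(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val

# Non-convex objective: global minimum 0 at x = -1, local minimum 0.5 at x = 2.
f = lambda x: min((x + 1) ** 2, (x - 2) ** 2 + 0.5)
x_star, v_star = random_search(f, -3.0, 3.0, 5000)
print(f"x ~ {x_star:.3f}, f ~ {v_star:.5f}")
```

    A gradient method started near x = 2 converges to the local optimum at 0.5; sampling the whole region finds the global one, which is the qualitative advantage the randomized algorithm claims (with a convergence-in-probability proof).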

    Stochastic filtering via L2 projection on mixture manifolds with computer algorithms and numerical examples

    We examine some differential geometric approaches to finding approximate solutions to the continuous-time nonlinear filtering problem. Our primary focus is a new projection method for the optimal filter infinite-dimensional Stochastic Partial Differential Equation (SPDE), based on the direct L2 metric and on a family of normal mixtures. We compare this method to earlier projection methods based on the Hellinger distance/Fisher metric and exponential families, and we compare the L2 mixture projection filter with a particle method with the same number of parameters, using the Levy metric. We prove that for a simple choice of the mixture manifold the L2 mixture projection filter coincides with a Galerkin method, whereas for more general mixture manifolds the equivalence does not hold and the L2 mixture filter is more general. We study particular systems that may illustrate the advantages of this new filter over other algorithms when comparing outputs with the optimal filter. We finally consider a specific software design that is suited for a numerically efficient implementation of this filter and provide numerical examples.
    Comment: Updated and expanded version published in the Journal reference below. Preprint updates: January 2016 (v3) added projection of the Zakai Equation and the difference with the projection of Kushner-Stratonovich (Section 4.1). August 2014 (v2) added the Galerkin equivalence proof (Section 5) to the March 2013 (v1) version.
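    To see why L2 projection onto a linearly parametrized family can coincide with a Galerkin method, consider the following sketch for the deterministic (Fokker-Planck) part of the filtering equation only; the basis functions \varphi_i and coordinates c_i are our notation, not the paper's mixture parametrization:

```latex
% Approximate the density in the span of fixed basis functions:
p_t(x) \;\approx\; \sum_{i=1}^{n} c_i(t)\,\varphi_i(x),
% and impose the L^2 (Galerkin) orthogonality condition on the residual:
\Big\langle \partial_t p_t - \mathcal{L}^* p_t,\; \varphi_j \Big\rangle_{L^2} = 0,
\qquad j = 1,\dots,n,
% which yields a finite system of ODEs for the coordinates c(t):
M\,\dot c(t) = A\,c(t), \qquad
M_{jk} = \langle \varphi_k, \varphi_j \rangle_{L^2}, \quad
A_{jk} = \langle \mathcal{L}^* \varphi_k, \varphi_j \rangle_{L^2}.
```

    For a curved (general) mixture manifold the approximating family is no longer a linear span, the tangent space varies with the point, and this identification breaks down — which is the sense in which the L2 mixture filter is strictly more general than a Galerkin scheme.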